Stories

A new story, along with its moments, can be entered into the database via a properly formatted CSV file. Note that this process does not import any new audio, transcript, metric, or media data into the database or S3; it only uploads Story and Moment objects along with their descriptions and (met_start, met_end) intervals. The data associated with a story is loaded by the front end based on this interval. These scripts require that the config file first be filled out with the correct API server credentials.
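
Conceptually, the upload amounts to reading the API credentials from the config file and creating Story and Moment records through the API. The sketch below illustrates that flow only; the config file name and format, the /stories and /moments endpoint paths, the field names, and the sample values are all assumptions, not the actual storyUpload.py implementation.

    # Minimal sketch of the upload flow; NOT the actual storyUpload.py.
    # Endpoint paths, config keys, and field names below are assumptions.
    import json
    import requests

    with open("config.json") as f:              # hypothetical config file name/format
        config = json.load(f)

    api = config["api_url"]                     # API server base URL
    auth = (config["username"], config["password"])

    # Create the Story object; only its title and description are sent.
    resp = requests.post(f"{api}/stories",
                         json={"title": "Splashdown", "description": "..."},
                         auth=auth)
    resp.raise_for_status()
    story_id = resp.json()["id"]

    # Create a Moment with its (met_start, met_end) interval; the front end
    # later loads the matching audio/transcript data for that interval.
    moment = {"title": "Recovery", "met_start": 705900, "met_end": 706200,
              "story_id": story_id, "channel_id": 14}
    requests.post(f"{api}/moments", json=moment, auth=auth).raise_for_status()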

Google Sheet template for generating a properly formatted CSV: https://docs.google.com/spreadsheets/d/1hrYsR-P5IhunwZ96UPANXj_p9zK4tNCvi0nWuXpQOT8/edit?usp=sharing

Data requirements

The CSV file containing a story's data must follow these rules:

  • The name of the file (minus the .csv extension) is the name of the story (Splashdown.csv -> "Splashdown" story)
  • Each row in the CSV file represents a moment
  • The CSV file must contain the columns "Title", "met_start", "met_end", "Transcript Files", and "Details"
  • One row must hold the description of the story: set the "Title" field in that row to "Description" and put the story description in the "Details" field. The rest of the fields in that row can be left blank.
  • The "Transcript Files" field must be given a transcript file name from the corresponding moment, in the format <mission>_<recorder>_<channel>_<start MET>. This is used only to determine the channel ID from the file name.

There is an example available in the source repository under src/examples/Splashdown.csv.
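As a rough illustration of how those rules can be checked, the sketch below validates the required columns and pulls the channel token out of a "Transcript Files" entry. The parsing details and any file names implied here are assumptions for illustration; storyUpload.py itself may do this differently.

    # Illustrative check of the CSV rules above; not the actual storyUpload.py code.
    import csv
    import sys

    REQUIRED = {"Title", "met_start", "met_end", "Transcript Files", "Details"}

    def load_story(path):
        with open(path, newline="") as f:
            reader = csv.DictReader(f)
            missing = REQUIRED - set(reader.fieldnames or [])
            if missing:
                sys.exit("Missing columns: " + ", ".join(sorted(missing)))
            rows = list(reader)

        description, moments = None, []
        for row in rows:
            if row["Title"] == "Description":
                description = row["Details"]          # the story description row
            else:
                # <mission>_<recorder>_<channel>_<start MET>; the channel part is
                # what determines the channel ID.
                channel = row["Transcript Files"].split("_")[2]
                moments.append({"title": row["Title"],
                                "met_start": row["met_start"],
                                "met_end": row["met_end"],
                                "channel": channel})
        return description, moments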

Running

To run, copy the .csv file to the same directory as storyUpload.py and run

 python3 storyUpload.py <story name>.csv  
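
For example, with the sample file from src/examples copied alongside the script:

 python3 storyUpload.py Splashdown.csv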

If there are errors, the script will inform you and not upload any moments or stories.
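
That all-or-nothing behavior is easiest to picture as a two-pass pattern: collect every problem first, then upload only when none were found. The sketch below is illustrative only; the exact checks and messages in storyUpload.py may differ.

    # Sketch of the all-or-nothing pattern: gather every problem, then upload
    # only if the list is empty (the field checks here are illustrative).
    def find_errors(rows):
        errors = []
        for i, row in enumerate(rows, start=1):
            if row.get("Title") == "Description":
                continue                               # description row may be sparse
            if not row.get("met_start") or not row.get("met_end"):
                errors.append(f"row {i}: missing met_start/met_end")
            if not row.get("Transcript Files"):
                errors.append(f"row {i}: missing Transcript Files entry")
        return errors

    # errors = find_errors(rows)
    # if errors:
    #     print("\n".join(errors))     # report and stop; nothing is uploaded
    # else:
    #     ...                          # proceed to create the Story and Moments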
