In this post I will walk through the steps we use to get from our transcripts to our final corpus (as of 01/15/2013). The process is still messy, but with this documentation anyone should be able to replicate our output (on a Mac).
Step 1. Download and unzip this folder where you would like to do your work.
Step 2. Install TreeTagger within ProjectFolder/TreeTagger (look inside the folder you just unzipped).
Step 3. Make sure that you have updated, complete versions of PHP and Python installed.
Step 4. Update TranscriptToSrt.py and SrtGatherer.py with your YouTube client id, secret, and developer key.
Step 5. Save your plain-text transcripts in ProjectFolder/transcripts (one file per video).
Step 6. Update MainInput.txt with your information.
Step 7. Log in to your YouTube account.
Step 8. Open Terminal and navigate to ProjectFolder.
Step 9. Run MainBatchMaker.py by typing: python MainBatchMaker.py
Step 10. Run MainProcessor by typing: ./MainProcessor
And you’re done! You should now have fully tagged files in ProjectFolder/Processing/Tagged and closed-caption files in ProjectFolder/Processing/SRT. And next time you’ll only need to do steps 5–10!
A few hints in case you run into trouble:
You may need to install some additional Python libraries; the error messages will tell you which ones are missing.
If you hit an encoding error with some of the Spanish characters, you may need to edit srtitem.py. See my comment on Stack Overflow.
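The usual culprit is subtitle bytes being decoded with the wrong codec. A minimal sketch of the kind of fix involved — the exact line to change in srtitem.py depends on your version of the library, and the helper below is only illustrative:

```python
# Accented Spanish characters arrive as UTF-8 bytes; decoding them with the
# default codec raises UnicodeDecodeError. The fix is to decode as UTF-8
# explicitly, with a Latin-1 fallback so decoding never crashes outright.
raw = b"Escuch\xc3\xa9 la canci\xc3\xb3n"   # bytes as read from an .srt file

def safe_decode(data):
    """Try UTF-8 first, fall back to Latin-1 (which accepts any byte)."""
    try:
        return data.decode("utf-8")
    except UnicodeDecodeError:
        return data.decode("latin-1")

text = safe_decode(raw)   # "Escuché la canción"
```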
If the scripts successfully download some srt files from YouTube but not others, it is probably a timing issue with YouTube’s API. I am currently working on a built-in work-around, but for now, just wait a few minutes, run MainProcessor again, and cross your fingers.
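One possible shape for that work-around is a simple retry loop with a pause between attempts. This is only a sketch: `fetch` stands in for whatever function actually requests an .srt file from YouTube, and the real scripts may be structured quite differently.

```python
import time

def fetch_with_retry(fetch, attempts=3, delay=60):
    """Call fetch(); on failure, wait `delay` seconds and try again.

    `fetch` is a hypothetical zero-argument function that downloads one
    .srt file and raises an exception when YouTube isn't ready yet.
    """
    for attempt in range(attempts):
        try:
            return fetch()
        except Exception:
            if attempt == attempts - 1:
                raise              # out of retries: give up
            time.sleep(delay)
```

With a wrapper like this, a file that fails on the first pass gets a couple more chances automatically instead of requiring a manual re-run.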
Finally, these scripts are not very efficient yet: a run with around 30 videos (roughly 100,000 words) takes about two hours on my MacBook Pro. Sorry about that. We will work on optimizing the scripts as time permits.
Please contact me with any questions or suggestions!