During the Oct 23 Townhall, dmulvany wrote in the comments of its YouTube stream:
“Deaf-blind people would benefit from seeing video descriptions as text, rather than as audio.”
In his reply (English subtitles, 61:16 – 63:04), Dean Jansen asked for these accessibility issues to be explained further in the forum. So here goes, tentatively (Dana, please correct me where I go wrong):
If you are deaf-blind, you can't hear the audio description of the visual parts of a video. But if this description is given as text, you can read it with a screen reader and a braille bar.
Now, back when Amara was Universal Subtitles and we were free to use it to time-code anything we wanted for a video, I sometimes scripted audio descriptions in one of the subtitle tracks.
Then I fed these description scripts into the free version of ispeech.org to get text-to-speech audio files of the right length, and used the Universal Subtitles time codes to place these description files over the original audio.
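The placement step above boils down to turning each cue's subtitle time code into an offset in the audio. A minimal sketch of that step (the cue texts and time codes are invented for illustration, not taken from any actual track):

```python
# Hypothetical sketch: parse SRT-style time codes from a description
# script so each text-to-speech clip can be overlaid on the original
# audio at the right offset.

import re

TIMECODE = re.compile(r"(\d{2}):(\d{2}):(\d{2})[,.](\d{3})")

def timecode_to_ms(tc: str) -> int:
    """Convert an 'HH:MM:SS,mmm' time code to milliseconds."""
    h, m, s, ms = map(int, TIMECODE.match(tc).groups())
    return ((h * 60 + m) * 60 + s) * 1000 + ms

# Each description cue: (start time code, text fed to the TTS engine)
cues = [
    ("00:00:05,000", "A wide shot of the conference room."),
    ("00:01:12,500", "The speaker points at a slide."),
]

offsets_ms = [timecode_to_ms(start) for start, _ in cues]
print(offsets_ms)  # [5000, 72500]
```

With these offsets, any audio editor or mixing script can drop each synthesized clip at the right point in the original soundtrack.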
And almost invariably, when I listened to the result, I thought “Hell, if I were blind, I'd much rather have the whole thing as text I could skim through with a screen reader at a speed I decide, or read with a braille bar.” So I provided the descriptions as .txt as well: see e.g. the account of such an attempt in http://bit.ly/audiodescription.
But actually, in HTML5 video accessibility and the WebVTT file format - Audio Described video (March 25, 2011), Silvia Pfeiffer explained how the WebVTT (Web Video Text Tracks) file format could in future carry text video descriptions in a track that the browser reads out (either as audio or on a braille bar), that pauses the video for the time needed to read each description, and that can be switched on and off as easily as CC subs.
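To make the idea concrete, here is a small sketch of what such a descriptions track contains: timed cues of plain text that an assistive browser could speak or send to a braille device. The cue texts and helper names below are my own invention for illustration, not from Pfeiffer's post:

```python
# Sketch: build a WebVTT "descriptions" file from (start, end, text)
# cues, with times given in milliseconds.

def vtt_timestamp(ms: int) -> str:
    """Format milliseconds as a WebVTT 'HH:MM:SS.mmm' timestamp."""
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, frac = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d}.{frac:03d}"

def build_descriptions_vtt(cues):
    """Serialize cues as a WebVTT file body."""
    lines = ["WEBVTT", ""]
    for start_ms, end_ms, text in cues:
        lines.append(f"{vtt_timestamp(start_ms)} --> {vtt_timestamp(end_ms)}")
        lines.append(text)
        lines.append("")
    return "\n".join(lines)

vtt = build_descriptions_vtt([
    (5000, 9000, "A wide shot of the conference room."),
    (72500, 76000, "The speaker points at a slide."),
])
print(vtt)
```

In HTML5, such a file is attached to a video as `<track kind="descriptions" src="descriptions.vtt">`, which is exactly the on/off toggling Pfeiffer describes.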
Is this what you meant in your question, Dana? Do you know how far advanced this WebVTT project is now?
Anyway, even if this is not yet ready, it would be great if the Amara developers again allowed us to use subtitle tracks to script audio descriptions, with different time-coding than the subs for deaf people – among other interesting uses of independently time-coded subtitle tracks that we had before. And not just to change the timing of translations, as described by Dean Jansen during the Townhall.