Enter text to search the manually created transcripts from the Edge Conference in London, 9 February 2013.
Click on a result to view video.
A timed transcript file in SRT format for each video is stored in the tracks folder. SRT files look like this (each item is called a cue):

1
00:00:00,000 --> 00:00:05,550

2
00:00:05,550 --> 00:00:06,410
JAKE ARCHIBALD: Good morning, everyone.

3
00:00:06,410 --> 00:00:11,140
Welcome to the offline part of the day.

...
The videos object defined in js/videos.js has data for each video, updated via XHR using the YouTube Data API.
An array of cues (captions) is added for each video by parsing the corresponding SRT file in the tracks folder. Each cue has a start time, text, and the YouTube ID of the video it belongs to.
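The parsing step above can be sketched as follows. This is a minimal sketch, not the app's actual code: parseSrt is a hypothetical helper name, and the cue shape (start, text, videoId) follows the description above.

```javascript
// Sketch of an SRT parser. Assumes cues are separated by blank lines and
// each cue is: an index line, a timing line, then zero or more text lines.
// parseSrt() is a hypothetical helper, not the app's actual implementation.
function parseSrt(srtText, videoId) {
  var cues = [];
  var blocks = srtText.trim().split(/\r?\n\r?\n/);
  blocks.forEach(function (block) {
    var lines = block.split(/\r?\n/);
    if (lines.length < 2) return;
    // Timing line looks like: 00:00:05,550 --> 00:00:06,410
    var match = lines[1].match(/^(\d{2}:\d{2}:\d{2},\d{3}) --> /);
    if (!match) return;
    cues.push({
      start: match[1],                 // cue start time (SRT timestamp)
      text: lines.slice(2).join(' '),  // remaining lines are the cue text
      videoId: videoId                 // YouTube ID this cue belongs to
    });
  });
  return cues;
}
```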
When text is entered in the query input element, the cues are searched and matches are displayed.
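The search over the cue array could be sketched like this, assuming a simple case-insensitive substring match (the app's actual matching logic may differ, and searchCues is a hypothetical helper name):

```javascript
// Sketch: return the cues whose text contains the query, ignoring case.
// searchCues() is a hypothetical helper name.
function searchCues(cues, query) {
  var q = query.toLowerCase();
  return cues.filter(function (cue) {
    return cue.text.toLowerCase().indexOf(q) !== -1;
  });
}
```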
When a result is clicked, the src is set for the embedded YouTube player, with a start value corresponding to the start time of the cue.
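This step can be sketched as below. The helper names are hypothetical, but the embed URL's start parameter (whole seconds) is part of the standard YouTube embedded-player API:

```javascript
// Convert an SRT timestamp (hh:mm:ss,mmm) to whole seconds, then build an
// embed URL that starts playback at that point. Helper names are
// hypothetical; the YouTube embed `start` parameter itself is standard.
function srtTimeToSeconds(srtTime) {
  var parts = srtTime.split(':');  // ['hh', 'mm', 'ss,mmm']
  var seconds = parseFloat(parts[2].replace(',', '.'));
  return Math.floor(parts[0] * 3600 + parts[1] * 60 + seconds);
}

function embedSrc(videoId, srtTime) {
  return 'https://www.youtube.com/embed/' + videoId +
         '?start=' + srtTimeToSeconds(srtTime) + '&autoplay=1';
}
```

In the click handler, the player's src attribute would then be set to something like embedSrc(cue.videoId, cue.start).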
YouTube caption files are available from URLs like this: video.google.com/timedtext?v=Oic22dQMRXQ&lang=en&format=srt. (Both VTT and SRT formats are available.)
This app is based on samdutton.net/trackSearch, which uses the track element to parse VTT files, storing and retrieving cue data via a Web SQL database.
Source code is available from github.com/samdutton/tracksearch.
For more information about the track element, see Getting started with the track element on HTML5 Rocks.