Video scribing / animating on a shoestring (#uoltech #edtech #lrnchat) - with thanks to @llewellynk & team

I have been reading quite a few books lately on making your ideas heard, having an impact on your audience and so forth (“Made to Stick” and “The Secret Language of Leadership” stand out). I am also learning a lot from the cool links shared by folks in my Twitter community – such as the work done by RSA Animate on a few TED talks, as well as the Common Craft videos. What’s even better, I work in a team that reads a lot, talks things over and is not afraid to try and innovate.

This is how @llewellynk and I started talking about creating a report with a difference at the end of the Leeds Building Capacity project that Karen has been working on. Karen loved the RSA Animate stuff and I had been getting more and more into the visual aspect of presenting information – with a prime example of stick people at work in this interpreter training resource on public speaking, which dispenses with written text on slides in favour of relevant drawings.

Our project

What to do then? Despite the cool animations available online from various sources, there was nothing teaching us how to do it. So we got in touch with colleagues in Design and were fortunate to be recommended a brilliant artist, Misung (I don’t think anyone, having seen my stick people above, would dispute that we needed someone who could actually draw). Once the team was in place – Karen (author and narrator), Misung (animator), Vanessa (consultant) and me (editor) – we worked a lot on getting the storyboard right.

I also had some fun with a few techniques while Karen was getting the script ready. Here is my time lapse video that I mentioned in a previous post:

[Embedded video: time lapse of the drawing]

In the end we settled on a combination of filming Misung drawing, speeding up the footage many, many times so that it matched the narration, and then adding effects and layers in Adobe After Effects. Here is the result:

[Embedded video: the finished animation]
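
If you want to experiment with the fast-forwarding step yourself, a free command-line tool such as ffmpeg can handle it; here is a minimal sketch driven from Python (the file names and the 20x speed factor are placeholders of mine, not what we actually used):

    import subprocess

    # Speed the raw drawing footage up ~20x and drop its audio track,
    # so the narration can be laid over the fast-forwarded drawing later.
    # File names and the speed factor are illustrative placeholders.
    factor = 20
    subprocess.run([
        "ffmpeg", "-i", "drawing_raw.mp4",
        "-filter:v", f"setpts=PTS/{factor}",  # compress the video timeline
        "-an",                                # discard the camera audio
        "drawing_fast.mp4",
    ], check=True)

The higher the factor, the shorter the clip, so expect a bit of trial and error before the drawing lands on the right beats of the narration.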

Do you want to know how we did it, how much time it took, what kind of hardware and software you need, and what to watch out for so that you make the most of your time? All the answers are on YouTube in the video description, in the Show More area under the video. (EDIT: Karen’s just seen that the iPad YouTube client cuts off the text in the video description and also messes up the links… typical… we’ll make the resources available on the project website, too, shortly; EDIT 2: the iPad is now displaying everything correctly, except for video annotations… confused? me too… 😉)

YouTube greatness

Now for the second part: we wanted to make the video more accessible, so we took advantage of YouTube’s built-in subtitling functionality. This is where we were super impressed and felt like sending the groovy YouTube folks a virtual hug.

YouTube subtitles screen

  1. First of all, for the fun of it, I went with the Machine Transcription. It was impressive that such a functionality existed at all. The result wasn’t too bad, but it would have needed quite a bit of editing, and we already had the narration script handy, so we decided to upload our own script.
  2. I then uploaded our script, and the result was a whole bunch of gibberish and random symbols. Someone with a weaker heart might have been tempted to yell, but I just looked at the extension of the original file, sighed, nodded, and proceeded to re-save the original .docx as .doc.
  3. When I imported the new .doc, I got all the words in. Success! What I didn’t expect to get – and nothing prepared me for it – was near-perfect synchronisation between the file we’d just put in and the audio narration. Obviously YouTube is bringing in its speech-to-text engine when processing folks’ own subtitles, too.
    • Even more amazing: our video includes 3 shorter videos from another event. What do the YouTube subtitles do? They stop until our own Karen resumes her speech, then pick up again at exactly the right moment. Brilliant!
  4. Finally, I noticed two errors with the synchronisation. I worked out that one was caused by a missing full stop in the original transcript; the other one… I just couldn’t work out. The easiest way I could think of to reach perfection quickly was:
    • download the transcript from YouTube and edit that, rather than the original .doc. Why? Because the downloaded transcript comes with timecodes (see the sample below), and once you edit it and re-upload it, the update is almost instant (processing the .doc takes longer, presumably because of the speech-to-text engine).
YouTube subtitle file
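
For reference, the caption file you download from YouTube pairs each snippet of text with a start and end timecode, roughly along these lines (the timings and wording here are illustrative placeholders, not our actual transcript):

    0:00:04.200,0:00:09.800
    First caption line goes here

    0:00:09.800,0:00:14.100
    Second caption line goes here

Because the timecodes are already in place, you only need to tweak the text next to them and re-upload; the synchronisation stays intact.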

Lessons

In terms of lessons at the end of the project, I have personally learnt a lot about the process and the implementation of such a product, so I can now see quite a few alternatives. It always helps if you actually do things 🙂

I am now pondering whether to use the same technique for future videos (and live with the inconsistent contrast and occasionally fuzzy footage unless I get better cameras and lighting), or to trade the human element brought by having the artist’s hand visible for a perfect image (which could be achieved in a number of ways, including smartpens and graphics tablets – I can see advantages and disadvantages to both, but personally I would go with the smartpens). Hmmm… questions, questions…

In any case, it’s been great fun working on this project with this team, and I can put you in touch with the relevant members if needed – just let me know.

Hoping the list of steps and project components we put together on YouTube is of help, I’m looking forward to seeing some of your creations.