Adobe Connect captioning

Lecture capture closed captioning with Adobe Connect & Dragon NaturallySpeaking voice recognition

It has been a (very) long time since my last blog post — equivalent to my disappearing into the world of fun of my new role as Lecturer in Translation Studies. Most of my time nowadays goes into making sure the face-to-face time I have with my students is meaningful and relevant, and I don’t get as much playtime as I used to… 😉

However, old habits die hard and I am still using quite a lot of technology in my teaching. Together with our local screen translation expert and professional theatre captioner Alina (Twitter: @gr82tweet), and friends in the University of Leeds Equality Team, I have been exploring the best way to make the face-to-face experience of our deaf and hard of hearing students as meaningful and engaging as everyone else’s. You are likely to hear quite a bit more about this as we hopefully start sharing what we have done and found at conferences, but in the meantime I thought I would share a fresh demo comparison of captioning/subtitling options available for one lecture capture set-up.

From what I have seen, lecture capture tools have only recently started to incorporate some sort of functionality to enable the captioning of live sessions. So far, pretty much all the tools that have this functionality have it for live subtitling only, and I have yet to experiment with one which supports adding subtitles after the live session itself (deep down I am hopeful that the Matterhorn platform supports this and that it is what the transLectures project uses to reach their excellent goal, but there may be another way which I am very keen to find out about).

Anyway, to keep this short: we have been using Adobe Acrobat Connect Pro this year to record a good number of our lectures, as well as to enable student interaction and in-lecture note-taking/captioning. The captions and interaction are recorded alongside the live lecture, and the resulting learning resources have been highly appreciated by our overseas students. BTW, I am aware that not many people consider Adobe Acrobat Connect Pro a lecture capture tool because “it was not designed for education”, but as far as I am concerned it is so full of excellent industry functionality that it is superior to pretty much all the other tools out there.

The story now: back in September we experimented with an Adobe Connect captioning plug-in (available here), and the results were not too great because it kept losing synchronisation with the recorded presentation and it was not supported on tablets, either. Since then we have therefore used the chat pod to host the captions instead, and it has been working well. This also has the advantage that the chat entries persist in the recording, as opposed to the captions disappearing when the user skips around in the video, as well as the much bigger advantage that the chat entries are searchable and act as bookmarks for the entire lecture recording.

Moreover, Alina organised a branch of the project which looked at using voice recognition for captioning to increase productivity, but our results at the time of using Dragon NaturallySpeaking to type directly into the chat pod were rather mixed.

Today I tested the captioning plug-in again as part of an experiment with Bee Communications (many thanks, Beth, for nudging me to test it again) and, although some of the initial problems persist (namely that the plug-in is not supported on tablets, and that the captions are not searchable), its behaviour in the recordings was much, much better. Its font-resizing functionality is superior to the chat pod’s, and is especially useful because, in a recording, the chat pod font size cannot be changed at all.

The captioning plug-in also appeared to me to work faster with voice recognition systems — I was using Dragon NaturallySpeaking again — so if you do not have session participants wanting to use tablets, it does look like the plug-in is now competent enough to do the job. However, be warned that if you want to correct any of the text Dragon NaturallySpeaking puts in, you need to dictate the caption again, because Dragon simply overwrites the whole caption with the correction. That applies to both the captioning plug-in and the chat pod, so it’s even-stevens there.

Having said all that, I realise most people would rather see things for themselves than take anyone’s word for it, so I did a quick recording of an Adobe Connect session in which I configure the plug-in, and then use voice recognition to type into the plug-in and into an Adobe Connect chat pod for comparison. Looking forward to hearing what you think about it!