A catalogue of errors. Lessons for students from professional audio description of live performances
While metrics have been developed for assessing quality in subtitling, for example the NER model (Romero-Fresco and Pérez, 2015) and the FAR model (Pedersen, 2017), ways of evaluating quality in many AD genres have yet to be definitively addressed. Marzà Ibañez (2010) has put together a set of evaluation criteria for teaching relevance in the AD of film. However, there is nothing comparable for the AD of live events. In order to address this deficit, this paper draws on feedback given at the dry runs for 15 live productions in the UK in order to provide both qualitative and quantitative data on common types of error that threaten to undermine quality for the AD user. The corpus comprises feedback notes for 16 audio described performances of 15 productions that took place in London, UK, between February 2011 and September 2016. The limitations of this opportunity sample lie in the diversity and duration of the content, which ranged from Pomona (2014, dir. Ned Bennett), hailed in its publicity as “a dystopian thriller” by Alistair McDowall and lasting 1 hour 40 minutes, to a 3 hour 35 minute production of Shakespeare’s Hamlet (2011, dir. Tim Van Someren). There was also diversity in the number of performers, from Pomona with just three to Jesus Christ Superstar with a cast of 27. It should be noted that errors identified at the dry run are unlikely to survive the evaluation process and persist through to the audio described performance itself. They are used here to illustrate for AD students the ease with which errors can creep into an AD script even in the hands of professionals. The error type with the lowest frequency was microphone technique (m = 1), whereas the most common error type was found to be omission (m = 4.47). Interestingly, duration was not shown to correlate with any particular type of error. The results will be discussed within the context of ADLAB PRO, which is establishing an online curriculum for training describers.