Media Accessibility is the research area concerned with "access to media and non-media objects, services and environments through media solutions, for any person who cannot or would not be able to, either partially or completely, access them in their original form" (Greco 2019, p. 18; Greco 2016).
MAP adopts a holistic approach in which media accessibility does not tackle sensorial barriers exclusively. This is why a variety of access services will be considered in this platform. When the original language of audiovisual content cannot be accessed by the audience, various access services can be provided: lip-synch dubbing is a form of revoicing in which the original dialogue is replaced by a target-language version synchronised with the lip movements of the original characters. In voice-over, by contrast, there is no lip synchrony and the original is generally audible underneath, especially at the beginning and end of each unit (Franco, Matamala and Orero 2010). A third modality is what has been called off-screen dubbing or narration, in which the original is not heard underneath and there is no lip synchronisation because the characters are off-screen. This is usually the case with documentaries in many countries, where the original off-screen narrator is replaced by a target-language narrator.
Another way of providing access to content in a foreign language is through subtitles, written text on screen that renders the words of the original dialogue in the target language. These interlingual subtitles are usually placed at the bottom of the screen and are received simultaneously with the original soundtrack. In some live performances, though, they are placed at the top of the proscenium and are therefore called surtitles. Location and format may vary across technological solutions. When subtitles render not only the words of the original but also provide information about other audio elements (sound effects, mood, music, character identification, etc.), they are usually called subtitles for the deaf and hard of hearing (or captions). They can be either in the same language as the original or in a different one. When they are provided for live content, they are known as live subtitles, which may be produced through different techniques, such as respeaking or velotyping.
Conversely, audio description provides access to the visuals through the translation of images into words that are delivered aurally together with the original soundtrack. Audio description usually describes characters, locations, actions and other relevant information that cannot be accessed by those relying exclusively on the audio track (Fryer 2016). When the visuals include subtitles, these are normally voiced in what is called audio subtitling. Audio subtitling can be combined with audio description or offered independently, as is the case in subtitling countries, where all foreign-language content is subtitled.
Media interpreting is another solution that makes content available to those who cannot access the original language; it can be delivered through either spoken-language or sign language interpreting.
New technological developments may give rise to new access services, and existing access services may be referred to with diverging terminology across languages and cultures. Different approaches also need to be considered. Whereas most access services are currently added as an afterthought, once the audiovisual content is finished and without any communication between translators and the creative team, accessible filmmaking incorporates linguistic and sensorial access as an integral part of the production, through collaboration between translators and directors (Romero Fresco 2013). It is foreseen that the Internet of Things will bring about new solutions, such as immersive media, where hyper-personalisation will become a reality.
Against this ever-changing background, MAP aims to be as comprehensive as possible and cover all access services and approaches to media accessibility.