LEAD-ME Winter Training School Madrid 2021 Programme
Media Accessibility Training: Sign Language and Subtitling for the Deaf and Hard-of-hearing
15-17 November 2021 - Online
- Day 1: Monday 15th November
- Day 2: Tuesday 16th November
- Day 3: Wednesday 17th November
- Speaker Bios
- Abstracts
Monday 15th November
Time (CET) | Session |
---|---|
09:00 - 09:15 | Winter Training School Madrid 2021 Welcome, Juan Pedro Rica Peromingo |
09:15 - 10:15 | Lecture - Esmeralda Azkarate-Gaztelu, Theatre surtitling for the deaf and hard-of-hearing in Spain |
10:15 - 10:30 | Coffee Break |
10:30 - 11:30 | Keynote - Pilar Orero, Sign Language Personalisation: Towards Diversity and Integration |
11:30 - 11:45 | Coffee Break |
11:45 - 13:30 | Workshop - Pablo Romero-Fresco, Accessible Filmmaking and Creative Media Accessibility |
13:30 - 15:00 | Lunch Break |
15:00 - 17:00 | Workshop - Josep Blat, Víctor Ubieto and Pablo L. García, Virtual Signers generation within SignON |
Tuesday 16th November
Time (CET) | Session |
---|---|
09:15 - 10:15 | Lecture - Josep Quer, Deaf signers as linguistically heterogeneous groups |
10:15 - 10:30 | Coffee Break |
10:30 - 11:30 | Lecture - Anna Matamala, Involving users in media accessibility research: ethics and knowledge transfer |
11:30 - 12:30 | Lecture - Ana Laura Rodríguez Redondo, d/Deaf people’s conceptualization of sound and music: implications for learning Subtitling for the Deaf and Hard of Hearing (SDH) |
12:30 - 12:45 | Coffee Break |
12:45 - 13:30 | Lecture - Ana Tamayo Masero, Sign language interpreting, translation and live translation: different processes, different products |
13:30 - 15:00 | Lunch Break |
15:00 - 17:00 | Workshop - Verónica Arnáiz Uzquiza, The 5 Ws (and many Hs) of subtitling for the deaf and hard-of-hearing (SDH) |
Wednesday 17th November
Time (CET) | Session |
---|---|
09:15 - 10:15 | Lecture - Raúl Rodríguez Gutiérrez, The Unknown Facts behind Sign Language Interpreting |
10:15 - 10:30 | Coffee Break |
10:30 - 11:30 | Lecture - Laura Feyto, Automatic Subtitle Generation System with Artificial Intelligence Assistance in TVE |
11:30 - 12:30 | Lecture - Agnieszka Szarkowska, How to render multilingualism and linguistic diversity in subtitling for the deaf and the hard of hearing? |
12:30 - 12:45 | Coffee Break |
12:45 - 13:30 | Lecture - Tomás Costal Criado, The state of videogame subtitling: from cued transcripts to SDH |
13:30 - 15:00 | Lunch Break |
15:00 - 17:00 | Workshop - Josélia Neves, Sound in film: can you see me? |
17:00 - 17:15 | Announcement of the next Summer Training School 2022, Chris Hughes |
Speaker Bios
Anna Matamala (Universidad Autónoma de Barcelona, Spain)
Anna Matamala, BA in Translation (UAB) and PhD in Applied Linguistics (UPF), is an associate professor at Universitat Autònoma de Barcelona. Currently leading TransMedia Catalonia, she has participated in and led projects on audiovisual translation and media accessibility. She has taken an active role in the organisation of scientific events (M4ALL, ARSAD) and has published in journals such as Meta, The Translator, Perspectives, Babel and Translation Studies. She is currently involved in standardisation work.
Pilar Orero (Universidad Autónoma de Barcelona, Spain)
Pilar Orero, PhD (UMIST, UK), works at Universitat Autònoma de Barcelona (Spain) in the TransMedia Catalonia Lab. She has written and edited many books, over 100 academic papers and almost the same number of book chapters, all on Audiovisual Translation and Media Accessibility (http://gent.uab.cat/pilarorero). She has led and participated in numerous EU-funded research projects focusing on media accessibility. She works in standardisation and participates in ITU, CEN, ANEC and UNE. She is at present participating in the drafting of ISO 20071-24 on Sign Language representation for screens. She is the chair of the Media Accessibility COST Action LEAD-ME.
Josep Quer (ICREA-Universidad Pompeu Fabra, Spain)
Josep Quer has been ICREA Research Professor at the Department of Translation and Language Sciences of Pompeu Fabra University since 2009, where he leads the Catalan Sign Language Lab (LSCLab). His research focuses on different aspects of the syntax and semantics of sign languages, as well as on Romance languages and Greek. He has led the projects SignGram Blueprint and SIGN-HUB and was co-editor of Sign Language & Linguistics (2007-2020). He is currently a fellow at the Netherlands Institute for Advanced Study in the Humanities and Social Sciences (2021/22) in the theme group project “Accessible Tools for Language Assessment at Schools (ATLAS)”.
Esmeralda Azkarate-Gaztelu (Freelance translator, Spain)
Esmeralda Azkarate-Gaztelu is a freelance audiovisual and theatre translator and audio describer. She started working in accessibility for Discovery Channel when it began offering AD in Spain. Since then she has worked on hundreds of feature films, TV shows and documentaries for the main cinema, VOD and TV channels, and on over 1,500 accessible shows throughout Spain, providing services such as audio description, touch tours and surtitling for some of the most prominent theatres and institutions in Spain (Centro Dramático Nacional, Teatro Español, Teatros del Canal, Teatro La Latina, Festival Internacional de Teatro Clásico de Almagro, etc.).
Ana Laura Rodríguez Redondo (Universidad Complutense de Madrid, Spain)
Dr Ana Laura Rodríguez Redondo is an Associate Professor in the English Studies Department at the Universidad Complutense de Madrid. She is a cognitive linguist with a twofold line of research: English diachronic linguistics and visual symbolic means of communication. Within the synchronic study of language, her research has focused on sign languages and on the study of gestures in discourse analysis. She has organised several conferences on sign language and its teaching and learning, and has published studies in the field. She is currently a member of Dr A. Barcelona’s project Empirical study of the role of conceptual metonymy in grammar, discourse and Sign language: the development of a metonymy database, where she is the researcher in charge of the sign language analysis.
Agnieszka Szarkowska (University of Warsaw, Poland)
Agnieszka Szarkowska is a University Professor at the Institute of Applied Linguistics, University of Warsaw, and Head of the AVT Lab research group. She is a researcher, academic teacher, ex-translator and translator trainer. Her research projects include eye-tracking studies on subtitling, audio description, multilingualism in subtitling for the deaf and the hard of hearing, and respeaking. She currently leads an international research team working on the project “Watching Viewers Watch Subtitled Videos”. Agnieszka is also the Vice-President of the European Association for Studies in Screen Translation (ESIST), a member of the European Society for Translation Studies (EST) and an honorary member of the Polish Audiovisual Translators Association (STAW).
Pablo Romero Fresco (Universidad de Vigo, Spain)
Pablo Romero Fresco is a Ramón y Cajal researcher at Universidade de Vigo (Spain) and Honorary Professor of Translation and Filmmaking at the University of Roehampton (London, UK). He is the author of the books Subtitling through Speech Recognition: Respeaking (Routledge), Accessible Filmmaking: Integrating translation and accessibility into the filmmaking process (Routledge) and Creativity in Media Accessibility (Routledge, forthcoming). He is on the editorial board of the Journal of Audiovisual Translation (JAT) and is the leader of the international research group GALMA (Galician Observatory for Media Access), for which he is currently coordinating several international projects on media accessibility and accessible filmmaking. Pablo is also a filmmaker. His first short documentary, Joining the Dots (2012), was used by Netflix as well as by film schools around Europe to raise awareness about audio description. He has just released his first feature-length documentary, Where Memory Ends (2021), which has been featured in El País and other leading Spanish media outlets.
Verónica Arnáiz-Uzquiza (Universidad de Valladolid, Spain)
Verónica Arnáiz-Uzquiza, PhD, is a lecturer at the Faculty of Translation and Interpreting at the University of Valladolid (UVa, Spain) and teaches on the MA in Translation in Multilingual Digital Environments (UVa) and the MA in Literary and Audiovisual Translation at Universitat Pompeu Fabra (UPF, Spain). Her research interests include audiovisual translation, accessibility and SDH, and T&I training. She has participated in many nationally and internationally funded projects in these fields and has published a number of papers and reviews in various journals and volumes. She coordinates the research group Intersemiótica, Traducción y Nuevas Tecnologías and is currently leading the EU-funded project FOIL (Online Training for Language Industries).
Raúl Rodríguez Gutiérrez (Universidad Complutense de Madrid, Spain)
A. Raúl Rodríguez studied English Studies at Universidad de La Laguna (Canary Islands) and holds a Master's degree in Specialized Translation from Universidad de Cataluña, with a Master's dissertation on audiovisual translation. He is a second-year PhD student researching live subtitling. He is a member of the INNOVA project at the Universidad Complutense de Madrid, directed by Juan Pedro Rica Peromingo, his thesis supervisor. Raúl is a sign language interpreter and collaborates with different artists in the area of music, interpreting concerts and songs in order to make sign language visible in the artistic field.
Josélia Neves (Hamad bin Khalifa University, Qatar)
Josélia Neves is Full Professor at Hamad bin Khalifa University, in Qatar, where she teaches on the MA in Audiovisual Translation. She has a PhD in Translation Studies, with a dissertation on Subtitling for the Deaf and the Hard of Hearing (SDH). As a strong advocate for inclusion and accessibility, she has developed her career around action research projects that bring together teaching, research and community development. Her areas of expertise are Subtitling and SDH, Audio description, Audio-tactile Transcreation, Accessible websites, and Intersensory translation. She has developed work with partners from the media, the arts, cultural heritage and museums, tourism, education, analogue and digital publishing.
Víctor Ubieto (Universidad Pompeu Fabra, Spain)
Víctor Ubieto is an assistant researcher in the Interactive Technologies Group at Universitat Pompeu Fabra, Barcelona. He is particularly interested in understanding and replicating reality in the virtual world. He is currently working on a European project, where he is in charge of synthesising a virtual character that supports sign language animations, as well as of human motion capture to create realistic animations. He holds a Bachelor's degree in Audiovisual Systems Engineering and a Master's degree in Computer Vision.
Pablo L. García (Universidad Pompeu Fabra, Spain)
Pablo Luis García, BS in Computer Engineering (UPF) and Master in Video Games: Design, Creation and Programming (UPF), is a research assistant at UPF's Interactive Technologies Group (GTI). His passion for computer graphics led him to work for three years in the lighting design industry, developing solutions for interactive and procedural light art. He is now focused on crossing the language barrier between Deaf sign language users, hard-of-hearing people and hearing people through the SignON project.
Laura Feyto (Radio Televisión Española, RTVE, Spain)
Laura Feyto has worked at TVE since 1989. She has a technical background and has worked in different areas of the company: she started in the MCR (Master Control Room), for which she was responsible for 7 years, then headed the Advertising and Ingest Unit, and since 2013 she has managed the Accessibility Services Unit. The Accessibility Services department comprises a group of 40 professionals and several external suppliers. In 2020, 42,592 hours were broadcast with Spanish subtitles, in addition to 499 hours with English subtitles, 4,555 hours with audio description and 2,627 hours with sign language. The Unit collaborates with interactive services partners by providing them with subtitles (in the future it will also provide audio descriptions). With a strong commitment to public service, the management to which the Unit belongs promotes multiple projects to improve the quality of the accessibility services offered by TVE.
Tomás Costal Criado (Universidad Nacional de Educación a Distancia, UNED, Spain)
Tomás Costal works as technical secretary of UNED Pontevedra, where he also teaches syntax, literature and English language courses. He is also an audiovisual and technical translator who researches accessibility services, with an emphasis on subtitling, dubbing, and audiodescription.
Ana Tamayo Masero (Universidad del País Vasco)
Ana Tamayo has worked at the University of the Basque Country (UPV/EHU) since 2016. She obtained her BA and MA at Universitat Jaume I (Castellón, Spain), where she defended her PhD on captioning for d/Deaf and hard-of-hearing children in 2015. She is currently a member of the research group TRALIMA/ITZULIK (UPV/EHU) and collaborates with two other groups (TRAMA, UJI, and GALMA, UVigo). Her research interests focus on audiovisual translation and accessibility in different modalities. She is especially interested in contributing to research on accessible filmmaking, creativity, captioning and sign language.
Chris Hughes (Salford University, UK)
Dr Chris Hughes is a Lecturer in the School of Computer Science at Salford University, UK. His research focuses on developing computer science solutions to promote inclusivity and diversity throughout the broadcast industry, aiming to ensure that broadcast experiences are inclusive across different languages and address the needs of people with hearing loss, low vision or learning difficulties, as well as older audiences. He was a partner in the H2020 Immersive Accessibility (ImAc) project.
Abstracts
Sign Language Personalisation: Towards Diversity and Integration
Technology is opening new and fascinating opportunities in all areas of media accessibility, and Sign Language is no exception, from production to consumption and reception in the many situations of education, work or leisure. Sign Language is bound to be massively impacted by the adaptation of existing language technologies and by the second generation of sensors, which can capture movement and convert it into text, and vice versa. For years now, academics and practitioners have been looking at quality in Sign Language with a view to meeting end-user needs and expectations. In 2020 the World Federation of the Deaf (WFD), jointly with the World Association of Sign Language Interpreters (WASLI), issued a communication discouraging the use of avatars for Sign Language, the main reason being that "a word-to-sign exact translation is not possible because any translation needs to consider the context and the cultural norms." Avatars are one of the many changes Sign Language will enjoy in the near future. Remote work practices are already a reality, and for years now the shift from analogue to digital has opened the possibility of enjoying different sign languages simultaneously (Orero et al. 2014). Nowadays most Sign Language components may be altered to achieve greater interaction with the audience, the venue requirements, the media genres, or the different age groups. This personalisation was already studied in 2009 during the DTV4ALL project, later improved in the HBB4ALL (2015) project and recently in ImAc (2019) and Content4All (2019). These projects show the interest of the EU in funding research on Sign Language, and at present three funded projects are working on translation, not only from sign into text (and vice versa) but also between different Sign Languages.
Technology now allows for the delivery and user personalisation of the many traits that affect Sign Language reception, heightening its enjoyment and understanding.
Theatre surtitling for the deaf and hard-of-hearing in Spain
Subtitling for the deaf and hard-of-hearing can be found more often than ever in audiovisual productions everywhere. But what happens with theatre and the performing arts? What is the difference between surtitling and surtitling for the deaf and hard-of-hearing? Can accessibility experts work alongside or within a theatre company to develop custom-made surtitling solutions?
We will try to answer these questions by covering different approaches, from the most common practices to ground-breaking alternatives, with real examples of surtitled shows in Spain.
Involving users in media accessibility research: ethics and knowledge transfer
Users are central in media accessibility research. Opinions and preferences are gathered through qualitative methods. Users take part in experiments using a wide variety of tools and techniques. This presentation will consider two main aspects related to user involvement in media accessibility research: how to deal with ethical aspects and how to transfer scientific knowledge back to the users. A series of good practices from European and national projects will be used to show how to approach societal involvement in research and knowledge transfer to society.
Deaf signers as linguistically heterogeneous groups
Deaf signers are often treated as members of uniform communities in terms of language use and competence. The reality is quite the contrary: there is great diversity across the linguistic profiles of Deaf individuals, stemming from factors such as the age of onset of deafness, the language(s) present in the family environment during the first years of life, the type of schooling received (mainstream, bilingual-bicultural, special), competence in the spoken (written) language, sociolinguistic identity as a language user, or the use of hearing technologies. In this talk we will review these factors and their consequences in order to reflect on choices about accessibility in the media.
d/Deaf people's conceptualization of sound and music: implications for learning Subtitling for the Deaf and Hard of Hearing (SDH)
The goal of this contribution is to explore the conceptualization of sound in the Spanish D/deaf community from a lexicographical, onomasiological approach, and the implications of that conceptualization for the SDH practices developed by students. Members of Deaf culture do not view their hearing loss as a communication disorder but as a feature that is part of their identity (CSNE 2016a; b; Ladd 2003; Marshall and Helsen 2003; Centero de Arce 2016). This includes their view of sound and music as a “weaving” reality that is more significant for D/deaf people than hearing people might think (Neves 2007: 123). As has been shown (Sandberg 1954; Gouge 1990; Darrow 1993; Sacks 1990), D/deaf people do not literally hear sound or listen to music, but perceive sounds through vibrations (Marshall and Helsen 2003; Tranchant et al. 2017). The development of vibrotactile platforms for musical experience for the D/deaf (Baijal et al. 2012) and the interest in musical instruction for this community (Darrow 1993; 2006) show the importance that sound, and the emotion of sound and music (Schmitz et al. 2020), have in the life of the D/deaf. In the field of SDH, the question of how to translate sound and music into a linguistic expression is also a key topic that, although recorded in all subtitling norms, has not been tackled in sufficient depth (Neves 2010). While the question of subtitling music and sounds has been approached from different standpoints, such as the reception of emotional information (Aleksandrowicz 2020), semiotics (Neves 2010) or corpus linguistics (Nascimento 2017), D/deaf people’s conceptualization of sound remains insufficiently considered. Exploring the D/deaf community’s conceptualization of sound and music may, at least, raise some questions about the way written expression is used in SDH. Taking the different LSE expressions recorded in the official dictionary of the Confederation of Spanish Deaf People as a starting point, I present a cognitive cultural linguistic analysis that shows some of the ways sound is, and may be, conceptualized. This conceptualization is then compared with the expressions used by SDH students to translate sounds and music. The results open up questions for debate that may help students gain a deeper knowledge of the recipients of their future work, such as: is it necessary to tag a sound as “exhale” when a character in a film exhales, even though the sound produced by this action does not seem relevant enough for the LSE community to codify it?
How to render multilingualism and linguistic diversity in subtitling for the deaf and the hard of hearing?
In recent years, more and more film directors incorporate multilingualism and linguistic diversity into their films, be it by film characters using different languages or speaking with an accent. When rendered in subtitling for the deaf and hard of hearing, however, multilingualism may become neutralised, washed out or simply reduced to a label. What is more, the addition of a descriptive label – or lack thereof – may carry important political and identity implications. In my talk, I will examine different ways in which multilingualism can be rendered in subtitling for the deaf and hard of hearing and discuss how they may affect viewers’ immersion in a multilingual story world.
The Unknown Facts behind Sign Language Interpreting
Sign Language is the primary means of communication among deaf people. Sign language interpreters are part of a long-standing profession that has provided interpreting services to Deaf, hard-of-hearing, late-deafened and deafblind people. The evident lack of budget for the profession, the health problems caused by the physical effort, and the job instability in Spain create an uncertain environment for future generations of interpreters. Furthermore, society seems to lack awareness of the real mission of an interpreter. Sign language interpreting not only involves compensation that undervalues the enormous physical and psychological effort required, but also plays an underappreciated and even overlooked role in high schools, universities and many civil services.
Automatic Subtitle Generation System with Artificial Intelligence Assistance in TVE
In order to comply with legal obligations, it has become increasingly common to include automatically generated subtitling in the signal distribution process. Although there are encoders that allow for the automatic generation of good-quality subtitles, it is necessary to use fully cloud-based systems that allow subtitle quality to be increased through machine learning techniques. The system implemented in RTVE (Radio Televisión Española, the largest public broadcaster in Spain) addresses the need to simultaneously subtitle 11 regional news programs. The impossibility of bringing together 11 stenotypists or 11 re-speakers simultaneously to produce manual subtitles for these live news programs led RTVE to use the proposed cloud-based live subtitling system.
This is a clear use case that demonstrates how two key features of good automatic subtitling (high accuracy and low latency) can serve the needs of all audiences, in particular those of deaf and hard-of-hearing people, improving latency compared to re-speaking while maintaining the same levels of accuracy as stenotyping. In the case of RTVE, a system architecture has been implemented and successfully tested that collects the audio from all 11 channels, located in 11 different broadcast centers, transcribes it in the cloud, encodes the subtitles into the DVB-SUB Transport Stream (TS) and sends them directly to the RTVE headend.
The subtitles contain punctuation marks (commas, full stops and question marks, among others) based on both the speaker's tone and the context, which increases their quality.
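The abstract describes this pipeline only at a high level. As a purely illustrative aid, and not RTVE's actual implementation, the sketch below shows one way that timed, word-level output from a cloud speech recogniser might be grouped into subtitle cues before being encoded for broadcast; the data structures and the character and duration limits are assumptions made for the example.

```python
# Minimal sketch (not RTVE's system): turn word-level ASR results into timed
# subtitle cues. The AsrWord input and the cue-length limits are illustrative
# assumptions, not values taken from the talk.
from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class AsrWord:
    text: str
    start: float  # seconds from programme start
    end: float

@dataclass
class SubtitleCue:
    start: float
    end: float
    text: str

def words_to_cues(words: Iterable[AsrWord],
                  max_chars: int = 37,      # roughly one subtitle line
                  max_duration: float = 6.0) -> List[SubtitleCue]:
    """Group recognised words into cues, closing a cue when it would exceed
    the character or duration limit, or at a sentence-final punctuation mark."""
    cues: List[SubtitleCue] = []
    current: List[AsrWord] = []
    for w in words:
        tentative = " ".join(x.text for x in current + [w])
        too_long = len(tentative) > max_chars
        too_slow = bool(current) and (w.end - current[0].start) > max_duration
        if current and (too_long or too_slow):
            cues.append(SubtitleCue(current[0].start, current[-1].end,
                                    " ".join(x.text for x in current)))
            current = []
        current.append(w)
        if w.text.endswith((".", "?", "!")):  # punctuation supplied by the ASR model
            cues.append(SubtitleCue(current[0].start, current[-1].end,
                                    " ".join(x.text for x in current)))
            current = []
    if current:
        cues.append(SubtitleCue(current[0].start, current[-1].end,
                                " ".join(x.text for x in current)))
    return cues

if __name__ == "__main__":
    demo = [AsrWord("Buenas", 0.0, 0.4), AsrWord("tardes.", 0.4, 0.9),
            AsrWord("Estas", 1.0, 1.3), AsrWord("son", 1.3, 1.5),
            AsrWord("las", 1.5, 1.6), AsrWord("noticias.", 1.6, 2.2)]
    for cue in words_to_cues(demo):
        print(f"{cue.start:6.2f} --> {cue.end:6.2f}  {cue.text}")
```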
The state of videogame subtitling: from cued transcripts to SDH
Digital entertainment products still differ considerably from audiovisual platforms in the way subtitles are presented to the gamer and spectator. From colourful text-on-screen in tutorials, to dialogue transcripts in action-packed sections of the game, to cutscenes that are often cinematographic in quality, subtitles remain a rather fluid concept in a great many productions, not just indie titles but triple-A ones as well.
This presentation will endeavour to provide a panorama of the state of subtitling in videogames, pinpoint the commendable efforts being made to offer a variety of choices to hard-core players and aficionados alike, and determine which established practices might be considered less advisable. A contrast will also be drawn between the more established subtitling norms for audiovisual productions and the main issues involved in transferring such norms to a rather different medium.
Sign language interpreting, translation and live translation: different processes, different products
Sign language interpreting (SLI), sign language translation (SLT) and sign language live translation (SLLT) have often been overlooked in both theoretical and more practical approaches within audiovisual translation studies. They are three different processes of making audiovisual content accessible through sign language that lead to different outcomes which, although similar in many ways, cannot be regarded as equal. In this presentation, I will address these concepts and how their different processes lead to different products. I will also discuss how creativity can be implemented in these three forms of translation and accessibility to facilitate engagement, raise awareness of a linguistic minority and bring together different audiences.
Virtual Signers generation within SignON
The SignON project aims to develop translation to and from Sign Languages (SLs). Within the project, UPF-GTI is in charge of developing Virtual Signing: SL generation through animated 3D synthetic characters, known as Virtual Signers or Signing Avatars.
This work involves:
- personalisation of the character, involving the users
- interactive capturing of signs and their representation
- animating and rendering characters from abstract descriptions of signs (see the sketch below)

UPF-GTI will present both the rationale behind this work and the proof of concept under development, around which workshop-style work could be carried out.
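Purely as a workshop-style illustration, and not the actual SignON/UPF-GTI pipeline, the sketch below shows one generic way an abstract description of a sign, expressed as a handful of keyframe poses, could be turned into an animation by interpolating between keyframes at a fixed frame rate. The joint names, poses and timings are made up for the example.

```python
# Toy illustration only: a common way to animate a virtual signer is to describe
# each sign as a sequence of keyframe poses and interpolate between them at
# render time. Generic keyframe interpolation; not the SignON pipeline.
from dataclasses import dataclass
from typing import Dict, List

Pose = Dict[str, float]  # joint name -> rotation angle in degrees (simplified)

@dataclass
class Keyframe:
    time: float  # seconds from the start of the sign
    pose: Pose

def interpolate(a: Pose, b: Pose, t: float) -> Pose:
    """Linear interpolation between two poses, with t in [0, 1]."""
    return {joint: a[joint] + (b[joint] - a[joint]) * t for joint in a}

def sample_animation(keyframes: List[Keyframe], fps: int = 25) -> List[Pose]:
    """Sample the keyframed sign at a fixed frame rate."""
    frames: List[Pose] = []
    duration = keyframes[-1].time
    for i in range(int(duration * fps) + 1):
        t = i / fps
        # find the pair of keyframes surrounding time t and blend between them
        for k0, k1 in zip(keyframes, keyframes[1:]):
            if k0.time <= t <= k1.time:
                local_t = (t - k0.time) / (k1.time - k0.time)
                frames.append(interpolate(k0.pose, k1.pose, local_t))
                break
    return frames

if __name__ == "__main__":
    # A made-up two-keyframe "sign": the right arm raises over half a second.
    sign = [Keyframe(0.0, {"r_shoulder": 10.0, "r_elbow": 0.0}),
            Keyframe(0.5, {"r_shoulder": 40.0, "r_elbow": 90.0})]
    for frame in sample_animation(sign, fps=10):
        print(frame)
```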
Accessible Filmmaking and Creative Media Accessibility
Accessible filmmaking is the consideration of audiovisual translation and media accessibility as part of the filmmaking process. Unlike the current prevailing model, in which translation and accessibility are dealt with as add-ons at the distribution stage, accessible filmmaking factors them in from inception, which means that foreign, hearing-impaired and visually impaired audiences are taken into account as the film is being made. This does not necessarily mean that films are always going to be altered by adopting this approach, but it does require collaboration between translators/media access experts and filmmakers. Based on the work carried out by, amongst others, the researchers in the UVigo research group GALMA, the aim of the first part of this workshop is to outline how this collaboration is currently materialising when it comes to subtitling and how it can evolve further in terms of research, training and professional practice.
The collaboration between filmmakers and translators/media access experts envisaged within accessible filmmaking is leading to increasingly creative examples of media accessibility. These are practices that not only attempt to provide access for the users of a film or a play, but also seek to become an artistic contribution in their own right, often enhancing user experience in a creative or imaginative way. This may also be referred to as alternative media accessibility, in so far as it stands in opposition to most AVT and media accessibility guidelines (at least in their current state), which encourage professionals to focus on viewers’ comprehension (rather than on their engagement) and to provide them with the information that is missing due to their impairment or lack of knowledge of the language used in the film. The second part of the workshop presents creative media accessibility as a new reality and illustrates it with examples of current professional practice, with a focus on subtitling. Situated at the crossroads between creation and translation, these practices point to a new way of approaching audiovisual translation and, more importantly, filmmaking itself.
The 5 Ws (and many Hs) of subtitling for the deaf and hard-of-hearing (SDH)
Long neglected by the media, but later considered the main way to grant access to audiovisual content, SDH practice moves back and forth between Audiovisual Translation and Media Accessibility. In contrast to earlier (or not…) examples, where the one-(standard)subtitle-fits-all strategy was adopted to meet the needs of specific audiences (normally just the D/deaf and hard-of-hearing), more recent initiatives have moved from tailored SDH to easy-to-read subtitles aimed at the intended (or not…) viewers.
This hands-on workshop will provide a quick introduction to the specificities of SDH practice today, addressing the 5 Ws (and many Hs) of its practice: What is SDH? When was it born? Who is it addressed to? Why is it necessary? Where is it displayed? What does it look like? How is it produced? How is it perceived? How is it studied? How has it changed… but, mainly, how is it changing? And then, again, WHO is it addressed to?
Sound in film: can you see me?
When producing enriched subtitles for the benefit of viewers with hearing impairment, subtitlers need to be able to 'read' the soundscape to determine which sounds require rendering. This may be determined by establishing each sound's narrative value and its correlation with visual information.
This workshop will provide subtitlers with valuable strategies for identifying and decoding sound in film, and for rendering sounds visually in relevant and economical ways.