American Sign Language (ASL) interpreters play a vital role in ensuring full communication access for Deaf and Hard of Hearing students in classroom settings at UC Santa Barbara. Interpreters are trained professionals who facilitate communication between individuals who use spoken English and those who use ASL. In the classroom, interpreters are typically positioned so that the student has a clear line of sight to both the instructor and the interpreter, allowing the student to watch the lecture and the interpretation simultaneously. The interpreter listens to the spoken content in real time and conveys it accurately and faithfully into ASL, including not only the words but also the tone, intent, and emotional nuance of the speaker.
Interpreters at UCSB are coordinated through the Disabled Students Program (DSP), which ensures that qualified professionals are assigned based on course schedules, subject matter, and student preferences. They may interpret lectures, labs, discussions, field trips, office hours, and campus events. In interactive classes, interpreters also voice the Deaf or Hard of Hearing student’s signed contributions into spoken English so that instructors and peers can understand and respond, creating a fully reciprocal communication environment. The process requires considerable skill and preparation; interpreters often review course materials, terminology, and readings beforehand to ensure that specialized vocabulary is rendered accurately during class.
Instructors play an important role in supporting effective interpretation. DSP recommends that faculty provide lecture notes or slides in advance when possible, face the class while speaking, and avoid talking while writing on the board. These practices help the interpreter maintain clarity and allow the student to engage fully without missing visual cues. The interpreter does not participate in class discussions but serves as a neutral communication conduit, ensuring that all participants have equal opportunity to exchange ideas.
Through this coordinated approach, UCSB’s DSP ensures that Deaf and Hard of Hearing students can access classroom instruction, participate in dialogue, and demonstrate their knowledge on an equal footing with their hearing peers. The presence of professional ASL interpreters underscores UCSB’s commitment to accessibility, inclusion, and the success of every student in its academic community.
Assistive Listening Devices
There are several types of Assistive Listening Devices, and students will often avail themselves of more than one style, depending on personal preference as well as the specific class environment (e.g., a lecture hall vs. a small class section):
Roger On:
The Phonak Roger On is an advanced wireless microphone system designed as an assistive listening device to significantly improve speech understanding in noisy environments and over distance for people with hearing loss. Using proprietary Roger digital wireless technology, the versatile device can be worn by a speaker to capture the desired speech signal, suppress background noise, and wirelessly stream the clear audio directly to compatible hearing aids or cochlear implants, essentially bridging the gap between the speaker and the listener for a much richer communication experience.
Comtek:
A Comtek Assistive Listening Device is a personal wireless system utilizing FM radio frequency technology, designed to overcome the listening challenges of distance, background noise, and poor room acoustics for individuals with hearing loss. It consists of a microphone/transmitter worn by the speaker to capture their voice directly, and a portable receiver worn by the listener. This configuration creates a clear, direct audio link, effectively improving the speech-to-noise ratio by bringing the speaker's voice right to the listener's ear through headphones, earbuds, or a neckloop that transmits the signal wirelessly to telecoil-equipped hearing aids or cochlear implants. This direct sound delivery significantly enhances speech clarity in difficult environments like classrooms, meetings, and large public venues. Please use the following link to find learning spaces that are compatible with Comtek devices.
Listen:
A "Listen" Assistive Listening Device (ALD) provided by Listen Technologies, refers to a wireless system used in larger lecture halls and auditoriums to deliver clear, direct audio to individuals with hearing loss. These systems, which utilize Radio Frequency (RF), work by taking the main audio source—such as a speaker's microphone or the venue's sound system feed—and broadcasting it directly to a personal receiver or a listener's smartphone via an app. By bypassing the room's acoustics and amplifying the signal straight to the user's headphones, neckloop, or hearing device, a Listen ALD minimizes the effects of distance, background noise, and reverberation, ensuring a high-quality, personalized listening experience. Please use the following link to find learning spaces that are compatible with Listen devices.
Wireless Microphones:
A wireless microphone system is an audio technology that converts sound into a radio signal, enabling a user to transmit audio without a physical cable connection to the sound system or recorder. The core components include a microphone (like a handheld or discreet lavalier) to capture the sound, a battery-powered transmitter to convert the audio into a radio frequency (RF) signal, and a receiver that connects directly to the host device via a USB port.
Automated Captions and Transcriptions
Automated captions provided by the online resources listed below are generated by AI. Although they are not quite accurate enough to be ADA-compliant, their accuracy is rapidly approaching that of professional human transcribers. Automated captions are an integral component of universal design: they can assist with real-time comprehension for disabled and nondisabled users alike. Some of these tools are available to you as faculty for the purpose of remediating inaccessible course content. We also encourage you to explore the Library's faculty resources for course captioning. Please note that students may request the use of non-AI captioning or transcription services.
Jamworks:
Jamworks is an accessibility and productivity platform that provides live and recorded captioning, lecture transcription, and study tools specifically geared toward students. Using AI-based speech recognition, Jamworks transcribes lectures in real time and allows users to highlight key moments, create summaries, and review notes afterward. The platform integrates playback controls and timestamped notes so students can revisit exact moments in a lecture where key concepts were discussed. Although automated, Jamworks emphasizes educational usability—helping students, including those who are Deaf or Hard of Hearing, to engage actively with course content and improve retention through accurate, interactive transcripts.
Otter.ai:
Otter.ai is an AI-powered transcription tool designed for real-time captioning, note-taking, and meeting transcription. It can capture spoken content from in-person discussions, Zoom meetings, lectures, or uploaded audio files, converting it into searchable, editable text. Users can view live captions as people speak, and the software automatically identifies speakers, timestamps text, and integrates with common platforms like Zoom and Microsoft Teams. While automated, Otter.ai achieves a high degree of accuracy in many academic and professional settings and allows users to revise transcripts, share them collaboratively, and generate summaries for quick reference.
YouTube Automated Captioning:
YouTube’s automated captioning feature uses speech recognition technology to generate captions for uploaded videos, providing a quick and accessible way to make content more inclusive. Once a video is uploaded, YouTube’s system automatically detects the spoken language and produces captions in real time or after processing. While this feature offers convenience and immediate accessibility, the accuracy of automated captions can vary depending on factors such as audio clarity, background noise, accents, and specialized terminology, particularly terms that sound alike. Content creators can review and edit the captions directly within YouTube Studio to ensure they meet accessibility standards and accurately reflect the spoken dialogue; it is always best practice to review automated captions for accuracy and clarity.
Zoom:
The meeting host may turn on Zoom's built-in automated captions and enable them for all attendees. Attendees may toggle the captions on or off. The live captions may be viewed within Zoom's video window, like subtitles, or the entire transcript may be viewed from the beginning along the edge of the window, as with the chat feature. To learn more about this feature and how to activate it, visit our Closed Captions in Zoom page.
Communication Access Realtime Translation (CART)
CART is a service that provides instant, verbatim text of spoken communication, allowing Deaf and Hard of Hearing individuals to follow along in real time. During a class, meeting, or event, a trained CART provider listens to everything that is said (often remotely) and uses specialized software and a stenotype machine to convert speech into text nearly instantaneously. This text appears on a laptop, tablet, or projected screen, giving the user full access to all spoken content—including lectures, discussions, and audience questions—just as they occur.
CART is distinct from closed captioning or automated transcription because it is performed by a live professional who ensures accuracy, context, and correct spelling of technical terms or proper names. At UCSB, CART can be provided in classrooms, campus events, or online settings. It is especially valuable for students who prefer reading text over using sign language, or who need visual reinforcement of spoken information. By providing immediate, precise, and complete access to speech, CART supports full participation and equity for Deaf and Hard of Hearing students in academic life.