Do you ever sink into the sofa at the end of the day, put on a go-to series, and instantly switch subtitles on?
If so, you’re in plenty of company.
A Vox survey found that 57 percent of viewers say they always watch TV with subtitles because they feel like they “can’t understand” what’s being said without them.
Edward Vega, a video producer at Vox, has said it’s not necessarily a sign that everyone’s hearing is getting worse. Instead, the main reasons trace back to how sound for TV and film is captured and mixed today compared with years ago.
In a YouTube explainer, Austin Olivia Kendrick, a dialogue editor at Pace Pictures, walked Vox through why dialogue has become so hard to hear.

“I basically perform audio surgery on actors’ words,” Kendrick said, explaining her job. “I get asked this question all the time.”
For anyone looking for a single, simple explanation, though, it’s not that tidy.
As Kendrick frames it, the real answer is “layered and complex,” but most paths lead back to evolving technology and industry choices.
Microphones used to be “big, bulky and temperamental,” and crews often had to get inventive just to hide them. Audio gear has improved enormously since then, and the way sound is recorded has changed dramatically along with it.
Older mics were typically wired, and audio was stored on physical media that would be processed and then transferred. Much of the sound was captured onto a single track, which meant performers had to be positioned carefully and deliver lines in a way that ensured the mic picked them up clearly.
Now, microphones are smaller, higher quality, and wireless. Just as importantly, actors don’t have to share a single setup—each person can have their own mic.

That should be an advantage, letting performances feel more natural instead of “performed” for a mic. But it also means actors don’t need to project the way they once did to be heard in the recording.
The result is that dialogue can be delivered more softly—or with more realism—than older styles of TV and film.
“Naturalism isn’t always the best for intelligibility, though,” Vega pointed out.
Historically, when dialogue wasn’t clear enough, productions could bring actors back to re-record lines through ADR (automated dialogue replacement). While ADR is still used, Kendrick notes it’s expensive, and producers often try to avoid it.
Instead, dialogue editors work intensively to clean up and improve what was captured on set, enhancing clarity without needing actors to return for another recording session.
After dialogue editing, the project goes to a mixer, whose job includes making sure speech has room to cut through the rest of the soundtrack—music, background noise, and effects.
“That is a big challenge, carving out those frequencies and that space amongst every other element of the mix for the dialogue to be able to punch through and not be muddied up by any other sounds that exist,” Kendrick said.
Even with all that work, the final result doesn’t always translate into dialogue that’s easy to follow at home.
Kendrick also pointed to a trend toward “wall to wall, loud sound,” which can make spoken words harder to pick out when the soundtrack rarely quiets down.
And simply turning the volume up isn’t a real fix either. She explained that boosting levels can cause distortion, and it can also throw the overall balance off so that other sounds become uncomfortably loud.
“You need that contrast in volume in order to give your ear a sense of scale,” she explained.
How you’re watching matters too. The slim speakers built into modern flat-screen TVs often can’t replicate the fuller sound you’d get with dedicated speaker systems or cinema-grade setups, making it even easier for dialogue to get lost.
Put together, it’s easy to see why so many people rely on subtitles—and why it may not be changing any time soon. For a lot of viewers, turning captions on has simply become part of watching.