Things to Know About Broadcast Captioning

Broadcast closed captioning can be extremely helpful in at least three different situations:

  1. It has been a great boon to hearing-impaired television viewers.
  2. It can also be helpful in noisy environments. For example, a TV in a noisy airport terminal can display closed captioning and still be usable.
  3. Some people use captions to learn English or learn to read. For one good video on this, see:

Closed captioning is embedded in the television signal and becomes visible when you use a special decoder, either as a separate appliance or built into a television set. The decoder lets viewers see captions, usually at the bottom of the screen, that tell them what is being said or heard on many TV shows. Since 1993, television sets with screens of 13 inches or more that are sold in the United States must have built-in decoders, under the Television Decoder Circuitry Act. Set-top decoders are available, too, for older TV sets.

The captions are hidden in the line 21 data area found in the vertical blanking interval of the television signal. The blanking interval is the part of the television signal that tells the electron gun to shoot back up to the upper left corner of the screen to begin painting the next frame. Line 21 is the line in the vertical blanking interval that has been assigned to captioning (as well as time and V-chip information). Each frame of video can transmit two characters of captioning information (or special commands that control color, pop-ups, and the like).
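The arithmetic above implies a hard ceiling on how fast captions can flow. As a rough sketch, assuming two characters per frame and the standard NTSC frame rate (the average-word-length figure is our own illustrative assumption, not from this document):

```python
# Rough throughput estimate for Line 21 captioning, based on the
# description above: two caption characters hidden in each video frame.

FRAMES_PER_SECOND = 29.97   # NTSC frame rate
CHARS_PER_FRAME = 2         # caption characters carried in line 21

chars_per_second = FRAMES_PER_SECOND * CHARS_PER_FRAME

# Assuming an average English word of ~5 letters plus a space
# (an illustrative figure), the channel's maximum sustained rate is:
AVG_CHARS_PER_WORD = 6
words_per_minute = chars_per_second * 60 / AVG_CHARS_PER_WORD

print(f"{chars_per_second:.0f} characters/second")   # ~60
print(f"~{words_per_minute:.0f} words/minute ceiling")
```

Even allowing that some of those two-character slots are spent on control commands rather than text, the channel comfortably outpaces ordinary speech, which is why real-time captions can keep up with live programming.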

Many shows and commercials carry captions nowadays. Older programs made before captioning became widespread often have captions added for their reruns. Captioned programs are marked in TV listings by “CC”. If you have a set with a decoder built in, you can turn on the captioning. Check your set’s manual for the instructions.

Live shows are captioned in real time by a stenographer or voice writer. That is, during a live broadcast of a special event or of a news program, captions appear just a few seconds behind the action to show what is being said. A stenographer listens to the broadcast and writes the words as near verbatim as possible on his or her steno machine. The steno machine is connected to a computer running software that translates the stenography into English. The computer is connected to a network's encoder via a modem over a landline or via an Internet connection. This process adds the captions to the television signal, and it is why there is usually a delay of a few seconds between the spoken words and their appearance on the screen. The stenographers or voice writers are often captioning at a rate in excess of 250 words a minute, depending on the programming.
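The translation step in that pipeline is essentially a dictionary lookup: each chord ("stroke") from the steno machine is matched against the captioner's personal dictionary. A minimal sketch, with a hypothetical toy dictionary (real computer-aided transcription software uses personal dictionaries with tens of thousands of entries):

```python
# Toy sketch of the steno-to-English translation step described above.
# The strokes and dictionary entries here are illustrative only.
STENO_DICTIONARY = {
    "HE": "he",
    "HAS": "has",
    "TKPWOPB": "gone",
}

def translate(strokes):
    """Translate a sequence of steno strokes into English words.
    Strokes with no dictionary entry are left visible in brackets,
    so the captioner can spot and correct them later."""
    words = []
    for stroke in strokes:
        words.append(STENO_DICTIONARY.get(stroke, f"[{stroke}]"))
    return " ".join(words)

print(translate(["HE", "HAS", "TKPWOPB"]))  # -> "he has gone"
```

In a real system, the translated text would then stream to the broadcaster's encoder, which inserts it into the television signal as described above.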

Other shows carry captions that get added after the show is produced. This is called "offline" captioning. This process is used for anything other than live programming: sitcoms, series, and other repeating shows, for example. Offline typists use scripts or video and listen to a show's soundtrack so they can add words that explain sound effects. On a game show, for example, when there is no dialogue but there is laughter, the caption will say "Audience laughing."

Sources for the above:

It is important to note that broadcast captioning also includes Internet broadcasts. The rapidly increasing number of professionally produced videos on the Internet requires full, accurate captioning for inclusion, for literacy, for learning languages, for making transcripts available, and for search.

The term "broadcast captioning" is also sometimes applied to captioning in sports stadiums, where a screen shows all announcements and some replays. The CCAC has some good photo examples, if you want to inquire.

One of our members suggested this link as a useful document to review, i.e., to help establish captioning and webcasting policies: The New York State Information Technology Policy Best Practice Guideline for Webcasting Open Meetings, No. NYS-G07-002.