The Fundamentals of Television

In this report on television I will discuss television signals, the components that make up a television, and how a television produces the picture and sound for the final output.

The sound carrier is at the upper end of the channel spectrum. Frequency modulation is used to impress the sound on the carrier. The maximum frequency deviation is 25 kHz, considerably less than the 75-kHz deviation permitted in conventional FM stereo broadcasting. As a result, a TV sound signal occupies less bandwidth in the spectrum than a standard FM broadcast station.
Stereo sound is available in TV, and the multiplexing method used to transmit the two channels of sound information is virtually identical to that used in FM stereo broadcasting. The picture information is transmitted on a separate carrier located 4.5 MHz lower in frequency than the sound carrier. The video signal derived from a camera is used to amplitude-modulate the picture carrier. Different modulation methods are used for the sound and picture information so that there is less interference between the two signals. The full upper sideband of the picture information is transmitted, but only a portion of the lower sideband is transmitted; the rest is suppressed to conserve spectrum space. The color information in a picture is transmitted by frequency division multiplexing techniques.
Two color signals derived from the camera are used to modulate a subcarrier that, in turn, modulates the picture carrier along with the main video information. The color subcarrier uses double-sideband suppressed-carrier AM. The video signal can contain frequency components up to 4.2 MHz; if both sidebands were transmitted in full, the picture signal would therefore occupy 8.4 MHz. Vestigial sideband transmission reduces this excessive bandwidth.
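The bandwidth arithmetic behind vestigial sideband transmission can be sketched as follows; the 1.25-MHz lower-sideband vestige is the standard NTSC figure, an assumption beyond what the text states:

```python
# Bandwidth arithmetic for NTSC vestigial-sideband transmission.
# Values from the text: video extends to 4.2 MHz; a channel is 6 MHz wide.
VIDEO_BW_MHZ = 4.2

# Full double-sideband AM would need twice the video bandwidth:
dsb_bw = 2 * VIDEO_BW_MHZ          # 8.4 MHz -- too wide for a 6-MHz channel

# Vestigial sideband keeps the full upper sideband plus only a vestige of
# the lower sideband (1.25 MHz is the standard NTSC picture-carrier
# offset, assumed here rather than taken from the text):
LOWER_VESTIGE_MHZ = 1.25
vsb_bw = VIDEO_BW_MHZ + LOWER_VESTIGE_MHZ   # about 5.45 MHz

print(f"DSB would occupy {dsb_bw} MHz; VSB occupies about {vsb_bw} MHz")
```

The 5.45-MHz result leaves room inside the 6-MHz channel for the sound carrier and guard space.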
Because a TV signal occupies so much bandwidth, it must be transmitted in a very high frequency portion of the spectrum. TV signals are assigned to frequencies in the VHF and UHF ranges. United States TV stations use frequencies between 54 and 806 MHz, a portion of the spectrum divided into sixty-eight 6-MHz channels. Channels 2 through 6 occupy the range from 54 to 88 MHz, and channels 7 through 13 occupy 174 to 216 MHz. The remaining channels, 14 through 69, occupy the space between 470 and 806 MHz. The video signal is most often generated by a TV camera, a very sophisticated electronic device that incorporates lenses and light-sensitive transducers to convert the scene or object to be viewed into an electrical signal that can be used to modulate a carrier.
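As a rough sketch of how channel numbers map to carrier frequencies, the snippet below covers only the low-VHF band; the specific channel edges and the 1.25-MHz picture-carrier offset are standard NTSC figures assumed here, not spelled out in the text:

```python
# Sketch of the NTSC low-VHF channel plan (channels 2-6). The lower channel
# edges and 1.25-MHz picture-carrier offset are assumed standard values.
LOW_VHF_EDGES_MHZ = {2: 54, 3: 60, 4: 66, 5: 76, 6: 82}

def carriers(channel):
    """Return (picture, sound) carrier frequencies in MHz for a low-VHF channel."""
    edge = LOW_VHF_EDGES_MHZ[channel]
    picture = edge + 1.25        # picture carrier sits 1.25 MHz above the channel edge
    sound = picture + 4.5        # sound carrier is 4.5 MHz above the picture carrier
    return picture, sound

print(carriers(2))   # -> (55.25, 59.75)
```

Note that each channel is 6 MHz wide, so the sound carrier at 4.5 MHz + 1.25 MHz = 5.75 MHz above the lower edge sits 0.25 MHz below the upper edge.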
To do this, the light in the scene to be transmitted is collected and focused by a lens onto a light-sensitive imaging device. Both vacuum tube and semiconductor devices are used to convert the light information in the scene into an electrical signal. The scene is divided into smaller segments that can be transmitted serially over a period of time. It is the job of the camera to subdivide the scene in an orderly manner so that an acceptable signal is developed. This process, called scanning, divides the rectangular scene into individual lines. The standard TV scene has an aspect ratio of 4:3; that is, the scene is four units wide for every three units of height.
To create a picture, the scene is subdivided into many fine horizontal lines called scan lines. Each line represents a very narrow portion of light variations in the scene. The greater the number of scan lines, the higher the resolution and the greater the detail that can be observed. United States TV standards call for the scene to be divided into a maximum of 525 horizontal lines. The task of the TV camera is to convert the scene into an electrical signal.
The camera accomplishes this by transmitting a voltage of 1 V for black and 0 V for white. As a simple example, assume a scene divided into 15 scan lines numbered 0 through 14. The scene is focused on the light-sensitive area of a vidicon tube or CCD imaging device, which scans the scene one line at a time, transmitting the light variations along each line as voltage levels. Where the white background is being scanned, a 0-V signal occurs; where a black picture element is encountered, a 1-V level is transmitted.
The electrical signals derived from each scan line are referred to as the video signal. They are transmitted serially, one after the other, until the entire scene has been sent. Since the scene contains colors, there are different levels of light along each scan line. This information is transmitted as different shades of gray between black and white, represented by voltage levels between the 0- and 1-V extremes that stand for white and black.
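A minimal sketch of this serial transmission, assuming 8-bit pixel values (255 = white) and the text's 0-V-white / 1-V-black convention; the tiny two-line scene is invented for illustration:

```python
# Sketch: map each picture element to the text's voltage convention
# (0 V = white, 1 V = black) and emit the scene serially, line after line.
# Pixel values are assumed 8-bit (0 = black, 255 = white).
def scene_to_video(scene):
    """scene: list of scan lines, each a list of 0-255 pixel values."""
    signal = []
    for line in scene:
        signal.extend(1.0 - p / 255 for p in line)  # serial voltage levels
    return signal

scene = [[255, 255, 0, 255],   # white, white, black, white
         [255, 0, 128, 255]]   # second scan line with a mid-gray element
volts = scene_to_video(scene)
print(volts)
```

The mid-gray pixel comes out near 0.5 V, illustrating how intermediate brightness levels land between the two extremes.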
The resulting signal is known as the brightness, or luminance, signal and is usually designated by the letter Y. Resolution in a video system is measured in terms of the number of lines defined within the bounds of the picture. For example, the horizontal resolution is given as the maximum number of alternating black and white vertical lines that can be distinguished. Assume closely spaced vertical black and white lines of the same width; when such lines are scanned, they are converted into a square wave. One cycle, or period, of this wave is the time for one black and one white line. The video signal described so far contains the luminance information, which is a black-and-white version of the scene.
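A back-of-envelope sketch of the square-wave reasoning: since one cycle covers one black and one white line, a given video bandwidth limits how many alternating lines can be resolved. The 53.5-µs active line time used below is an assumed NTSC figure (the 63.5-µs line period minus horizontal blanking), not given in the text:

```python
# Back-of-envelope: how many alternating vertical lines a given video
# bandwidth can resolve. One square-wave cycle = one black + one white line.
ACTIVE_LINE_S = 53.5e-6   # assumed active (visible) line time, seconds

def max_resolvable_lines(bandwidth_hz, active_line_s=ACTIVE_LINE_S):
    cycles_per_line = bandwidth_hz * active_line_s   # cycles that fit in one scan line
    return int(2 * cycles_per_line)                  # 2 lines per cycle

print(max_resolvable_lines(4.2e6))   # roughly 449 lines at the 4.2-MHz limit
```

This is why the 4.2-MHz video bandwidth quoted earlier caps the achievable horizontal detail.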
Color detail is added by dividing the light in each scan line into three separate signals, each representing one of the three basic colors: red, green, or blue. Light in any scene can be divided into its three basic color components by passing it through red, green, and blue filters. This is done in a color TV camera, which is really three cameras in one. The lens focuses the scene on three separate light-sensitive devices, such as vidicon tubes or CCD imaging devices, by way of a series of mirrors and beam splitters. The red light in the scene passes through the red filter, the green through the green filter, and the blue through the blue filter.
The result is the generation of three simultaneous signals, R, G, and B, by the light-sensitive imaging devices during the scanning process. These signals also contain the basic brightness, or luminance, information. If the color signals are mixed in the correct proportions, the result is the standard black-and-white video, or luminance Y, signal. The Y signal is generated by scaling each color signal with a tapped voltage divider and adding the signals together: it is made up of 30 percent red, 59 percent green, and 11 percent blue.
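The weighted sum can be sketched directly from the percentages in the text:

```python
# Luminance from the three color signals, using the weights given in the text.
def luminance(r, g, b):
    """r, g, b: normalized 0.0-1.0 color signal levels."""
    return 0.30 * r + 0.59 * g + 0.11 * b

white = luminance(1.0, 1.0, 1.0)   # pure white: weights sum to 1.0
red_only = luminance(1.0, 0.0, 0.0)
print(white, red_only)
```

The heavy green weighting reflects the eye's greater sensitivity to green light; pure red alone contributes only 30 percent of full brightness.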
The resulting Y signal is what a black-and-white TV set sees. The color signals must also be transmitted, along with the luminance information, within the same bandwidth allotted to the TV signal. This is done by a frequency division multiplexing technique. Instead of all three color signals being transmitted, they are combined into two signals referred to as the I and Q signals. I is made up of 60 percent red, −28 percent green, and −32 percent blue. Q is made up of 21 percent red, −52 percent green, and 31 percent blue. The I and Q signals are referred to as the chrominance signals.
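The I and Q combinations can be sketched the same way (taking I's green term as negative, per the standard NTSC matrix):

```python
# Chrominance signals from the color signals, using the weights in the text.
def chrominance(r, g, b):
    """Return (I, Q) for normalized 0.0-1.0 color signal levels."""
    i = 0.60 * r - 0.28 * g - 0.32 * b
    q = 0.21 * r - 0.52 * g + 0.31 * b
    return i, q

i_white, q_white = chrominance(1.0, 1.0, 1.0)   # pure white carries no color
i_red, q_red = chrominance(1.0, 0.0, 0.0)
print(i_white, q_white, i_red, q_red)
```

Note that the weights in each row sum to zero, so a colorless (white or gray) scene produces I = Q = 0 and only the Y signal remains.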
To transmit them, they are phase-encoded. The I and Q signals are fed to balanced modulators along with 3.58-MHz subcarrier signals that are 90 degrees out of phase. The output of each balanced modulator is a double-sideband suppressed-carrier AM signal. The resulting two signals are added to the Y signal to create the composite video signal, which then modulates the picture carrier. The result is the NTSC composite video signal.
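A numerical sketch of this quadrature encoding: I rides on a cosine and Q on a sine of the same 3.58-MHz subcarrier, and because the modulators are balanced, no carrier term appears in the sum:

```python
import math

# Sketch of the quadrature subcarrier modulation: I and Q modulate two
# copies of the 3.58-MHz subcarrier that are 90 degrees out of phase.
F_SC = 3.58e6   # subcarrier frequency, Hz (rounded value from the text)

def chroma_sample(i, q, t):
    """Composite chrominance at time t; the carrier itself is suppressed."""
    return (i * math.cos(2 * math.pi * F_SC * t)
            + q * math.sin(2 * math.pi * F_SC * t))

# At t = 0 the sine term vanishes, so only the I component appears:
print(chroma_sample(0.5, 0.3, 0.0))   # -> 0.5
```

A quarter subcarrier cycle later the roles swap and only the Q component appears, which is what lets the receiver separate the two signals.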
This signal and its sidebands fit within the 6-MHz TV signal bandwidth. The I and Q color signals are also called the R − Y and B − Y signals, because the combination of the three color signals produces the effect of subtracting Y from the R or B signal. The phase of these signals with respect to the original 3.58-MHz subcarrier determines the color to be seen. In many TV sets an extra phase shift of 57 degrees is inserted to ensure that maximum color detail is seen. The I and Q signals remain 90 degrees apart, but their positions are shifted by 57 degrees.
The reason for this extra phase shift is that the eye is most sensitive to colors near orange; if the I signal is adjusted to the orange phase position, better detail will be seen. Because of the frequency of the subcarrier, the sidebands produced during its amplitude modulation occur in clusters that are interleaved between the sidebands produced by the luminance video modulation. The 3.58-MHz subcarrier itself is suppressed by the balanced modulators and therefore is not transmitted; only the filtered upper and lower sidebands of the color signals are transmitted.
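The interleaving works because the subcarrier is an odd multiple (455) of half the horizontal line rate, so the chroma sideband clusters fall midway between the luminance harmonics. The exact figures below are standard NTSC values, an assumption beyond the text's rounded 3.58 MHz:

```python
# Exact NTSC relationship (assumed standard values, not from the text):
# the line rate is 4.5 MHz / 286, and the color subcarrier is 455 times
# half that line rate -- an odd multiple, which interleaves the spectra.
line_rate_hz = 4.5e6 / 286            # about 15,734.27 Hz
subcarrier_hz = 455 * line_rate_hz / 2
print(round(subcarrier_hz))           # -> 3579545
```

Because 455 is odd, every chroma cluster lands halfway between two luminance clusters instead of on top of one.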
To demodulate these double-sideband suppressed-carrier AM signals, the carrier must be reinserted at the receiver. A 3.58-MHz oscillator in the receiver generates the subcarrier for the balanced modulator-demodulator circuits. For the color signals to be accurately recovered, the subcarrier at the receiver must have exactly the same phase as the subcarrier at the transmitter. To ensure this, a sample of the 3.58-MHz subcarrier developed at the transmitter is added to the composite video signal: 8 to 12 cycles of the subcarrier are gated out and added to the horizontal blanking pulse, just after the horizontal sync pulse.
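The synchronous demodulation can be sketched numerically: multiplying the chrominance by in-phase and quadrature copies of the regenerated subcarrier and averaging over whole cycles recovers I and Q. This is a pure-Python illustration of the principle, not receiver circuitry:

```python
import math

# Numerical sketch of synchronous (product) demodulation of the quadrature
# chrominance signal, assuming the local subcarrier is perfectly phase-locked.
F_SC = 3.58e6

def demodulate(i, q, cycles=100, samples_per_cycle=64):
    n = cycles * samples_per_cycle
    dt = 1.0 / (F_SC * samples_per_cycle)
    i_acc = q_acc = 0.0
    for k in range(n):
        t = k * dt
        chroma = (i * math.cos(2 * math.pi * F_SC * t)
                  + q * math.sin(2 * math.pi * F_SC * t))
        i_acc += chroma * math.cos(2 * math.pi * F_SC * t)  # in-phase reference
        q_acc += chroma * math.sin(2 * math.pi * F_SC * t)  # quadrature reference
    # Averaging cos^2 (or sin^2) over whole cycles gives 1/2, so scale by 2:
    return 2 * i_acc / n, 2 * q_acc / n

print(demodulate(0.5, -0.2))   # recovers approximately (0.5, -0.2)
```

A phase error in the local subcarrier would mix I into Q and vice versa, which is exactly why the transmitted burst reference is needed.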
The receiver uses this signal to phase-synchronize its internally generated subcarrier before it is used in the demodulation process. In a TV transmitter, sweep and sync circuits create the scanning signals for the vidicons or CCDs and generate the sync pulses that are transmitted along with the video and color signals. The sync signals, luminance Y, and the color signals are added to form the final video signal that modulates the carrier. Low-level AM is used. The final AM signal is amplified by very high power linear amplifiers and sent to the antenna via a diplexer.
At the same time, the sound signal frequency-modulates a carrier that is amplified by class C amplifiers and fed to the same antenna …