
I understand that receivers are often used with mixers, which take the sum and difference of two signals to make an output signal. How does a radio signal of, say, 102.5 MHz know to tell the speakers to output a person singing, a guitar with distortion, and drums all at the same time in the range of 20-20,000 Hz? In the same vein, when a TV receives a signal for a channel at 150 MHz, how does it know which pixels to light up on the screen? This may be an obvious question and I'm just not making a connection, but any further explanation would be appreciated.

JYelton
Josh
  • Radios do not go about "outputting a signal of 102.5 MHz". An (FM) receiver may have an _input_ signal of nominal bandwidth 200 kHz centered at 102.5 MHz, and it then extracts from this the audio signal of nominal bandwidth 20 kHz to send to the speakers. It does not _tell_ the loudspeaker to produce the sounds of a person singing plus a guitar twanging and a drum banging. You are anthropomorphizing the issue here. – Dilip Sarwate Jun 20 '13 at 02:40
  • First you must consider the nature of that sound coming from the speaker "all at the same time"; once you understand that what you hear is power distributed across a *range of frequencies* (e.g., about 50 to 20,000 Hz), it should become clearer that what a mixer does is shift these *ranges* around. – JustJeff Jun 20 '13 at 04:06
  • @DilipSarwate Thanks, I was getting some of my terms mixed up. I edited the question. – Josh Jun 20 '13 at 11:40

1 Answer


First you have to decide how to represent your signal. Sound is easy: a microphone gives you a voltage proportional to sound pressure. If we send this signal (with amplification) to a speaker, we get a reproduction of the sound. But how do we represent this signal as a radio signal?

The answer is modulation. Perhaps the easiest modulation to understand conceptually is single-sideband modulation (SSB), using the upper sideband. Say your audio signal consists of frequencies from 20 Hz to 20,000 Hz. To modulate it onto a radio signal, you just shift all the frequencies up. So, maybe you decide to shift them all up by 100 MHz. Now your audio signal is represented by radio waves instead of sound waves, at frequencies from 100,000,020 Hz to 100,020,000 Hz.
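This frequency shifting is exactly what a mixer's multiplication does. Here's a toy numerical sketch in pure Python (frequencies scaled way down so they're easy to sample; the 100 Hz "audio" tone and 1 kHz "carrier" are made-up illustration values, not real radio frequencies): multiplying two cosines produces the sum and difference frequencies, and an SSB transmitter would then filter away one of the two.

```python
import cmath
import math

def dft_bin(x, k):
    """Magnitude of the k-th DFT bin of a real signal x (a crude spectrum probe)."""
    N = len(x)
    return abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)))

N = 8000  # one second at a toy 8 kHz sample rate, so bin k = k Hz
audio = [math.cos(2 * math.pi * 100 * n / N) for n in range(N)]     # 100 Hz "audio" tone
carrier = [math.cos(2 * math.pi * 1000 * n / N) for n in range(N)]  # 1 kHz "carrier"
mixed = [a * c for a, c in zip(audio, carrier)]                     # mixing = multiplication

# cos(A)*cos(B) = 1/2 cos(A-B) + 1/2 cos(A+B): the 100 Hz tone reappears
# at 900 Hz and 1100 Hz, and is gone from its original frequency.
print(round(dft_bin(mixed, 100)))   # → 0
print(round(dft_bin(mixed, 900)))   # → 2000
print(round(dft_bin(mixed, 1100)))  # → 2000
```

The whole 20 Hz-20 kHz audio band gets shifted the same way, each frequency component independently; that's why the singer, guitar, and drums all survive the trip.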

The receiver does the opposite. If the dial is set to 100 MHz, then it will receive the radio waves as a voltage from an antenna. It will shift all these frequencies down by 100 MHz, and then the output voltage is your audio signal.
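The receiver's mix-down can be sketched the same way (again in scaled-down pure Python with made-up frequencies, not a real receiver design): multiplying the incoming passband signal by a local oscillator at the dial frequency produces the wanted difference frequency plus an unwanted sum frequency, and a low-pass filter after the mixer keeps only the audio-band part.

```python
import cmath
import math

def dft_bin(x, k):
    """Magnitude of the k-th DFT bin of a real signal x."""
    N = len(x)
    return abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)))

N = 8000  # one second at a toy 8 kHz sample rate, so bin k = k Hz
# Incoming "RF": the audio tone already shifted up to 1100 Hz (100 Hz above a 1 kHz carrier).
rf = [math.cos(2 * math.pi * 1100 * n / N) for n in range(N)]
lo = [math.cos(2 * math.pi * 1000 * n / N) for n in range(N)]  # local oscillator at the "dial" frequency
mixed = [r * l for r, l in zip(rf, lo)]  # difference: 100 Hz (wanted), sum: 2100 Hz (unwanted)

# Crude low-pass filter: an 8-sample circular moving average. It passes the
# 100 Hz audio almost untouched and strongly attenuates the 2100 Hz image.
L = 8
filtered = [sum(mixed[(n - i) % N] for i in range(L)) / L for n in range(N)]

print(dft_bin(filtered, 100) > 1900)   # → True  (audio survives)
print(dft_bin(filtered, 2100) < 200)   # → True  (sum frequency is knocked down)
```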

SSB is easy conceptually, but it's not the easiest to implement electrically. Easier to implement would be amplitude modulation (AM). Frequency modulation (FM) is used for FM radio.
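For comparison, AM is just \(s(t) = (1 + m\,a(t))\cos(2\pi f_c t)\): the audio rides on the carrier's amplitude. A quick spectral check (same toy pure-Python setup as above, with an assumed modulation depth of 0.5) shows the characteristic AM spectrum of a carrier flanked by two sidebands:

```python
import cmath
import math

def dft_bin(x, k):
    """Magnitude of the k-th DFT bin of a real signal x."""
    N = len(x)
    return abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)))

N = 8000  # one second at a toy 8 kHz sample rate, so bin k = k Hz
m = 0.5   # modulation depth (an assumed illustration value)
audio = [math.cos(2 * math.pi * 100 * n / N) for n in range(N)]       # 100 Hz tone
am = [(1 + m * audio[n]) * math.cos(2 * math.pi * 1000 * n / N)       # 1 kHz carrier
      for n in range(N)]

# AM spectrum: the carrier itself at 1000 Hz plus two sidebands at 900 and 1100 Hz.
print(round(dft_bin(am, 1000)))  # → 4000 (carrier)
print(round(dft_bin(am, 900)))   # → 1000 (lower sideband)
print(round(dft_bin(am, 1100)))  # → 1000 (upper sideband)
```

Note that AM spends most of its power on the carrier and sends the same information twice (once per sideband), which is part of why SSB is more power- and bandwidth-efficient.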

The signal being transmitted, the audio signal in our example so far, is called the baseband signal. If you want to send video, your job becomes more complex, because you have to somehow represent video as a time-varying signal. In the recently-past era of analog TV, this was done by breaking each frame into a raster, and having the signal represent the intensity at each point along the raster scan. Then you add to that some information to allow the receiver to synchronize its raster scan to the transmitter's, and maybe more information to allow for color, and maybe yet more information to have sound at the same time. There are several standards that define the process in detail, which may vary by region.
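The raster idea can be sketched as a round trip: flatten a frame into a 1-D signal with a sync level marking the start of each line, then rebuild the frame by splitting at the sync pulses. This is a deliberately simplified toy, not NTSC or PAL; the frame, the pixel values, and the `SYNC` level are all made up for illustration (real systems use timed sync pulses below the black level, not a magic sample value).

```python
# Hypothetical 3x4 "frame" of pixel intensities (0.0 = black, 1.0 = white).
frame = [
    [0.0, 0.5, 0.5, 0.0],
    [1.0, 0.0, 0.0, 1.0],
    [0.0, 1.0, 1.0, 0.0],
]
SYNC = -1.0  # a level outside the valid pixel range marks the start of each line

def serialize(frame):
    """Flatten a frame into a 1-D baseband signal, one sync pulse per line."""
    signal = []
    for line in frame:
        signal.append(SYNC)
        signal.extend(line)
    return signal

def deserialize(signal):
    """Rebuild the frame by splitting the baseband signal at the sync pulses."""
    lines = []
    current = None
    for sample in signal:
        if sample == SYNC:
            current = []
            lines.append(current)
        else:
            current.append(sample)
    return lines

print(deserialize(serialize(frame)) == frame)  # → True
```

The sync pulses are what let the receiver know which intensity value belongs to which pixel, answering the "how does it know what pixels to light up" part of the question.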

Having accomplished the task of representing your video as a baseband signal, you can then select a modulation to translate it up to radio frequency. The receiver reverses the process.

These days, most things are becoming digital. Here, the baseband signal isn't an analog signal, but rather a stream of bits. One can then use any number of digital modulation methods to send these bits as an RF signal.
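As one concrete example of digital modulation (a minimal sketch, not how any particular broadcast standard does it), BPSK maps each bit to a +1 or -1 symbol that flips the phase of the carrier, and the receiver recovers the bit by correlating each symbol period against the carrier and taking the sign:

```python
import math

SAMPLES_PER_BIT = 16  # assumed illustration values
CYCLES_PER_BIT = 2

def bpsk_modulate(bits):
    """Map each bit to a +/-1 symbol and multiply by a carrier (phase flips)."""
    signal = []
    for b in bits:
        symbol = 1.0 if b else -1.0
        for n in range(SAMPLES_PER_BIT):
            signal.append(symbol * math.cos(2 * math.pi * CYCLES_PER_BIT * n / SAMPLES_PER_BIT))
    return signal

def bpsk_demodulate(signal):
    """Correlate each bit period with the carrier; the sign of the result is the bit."""
    bits = []
    for i in range(0, len(signal), SAMPLES_PER_BIT):
        corr = sum(signal[i + n] * math.cos(2 * math.pi * CYCLES_PER_BIT * n / SAMPLES_PER_BIT)
                   for n in range(SAMPLES_PER_BIT))
        bits.append(1 if corr > 0 else 0)
    return bits

bits = [1, 0, 1, 1, 0, 0, 1, 0]
print(bpsk_demodulate(bpsk_modulate(bits)) == bits)  # → True
```

Real systems layer error correction, more bits per symbol (QPSK, QAM), and synchronization on top of this, but the core idea is the same: the baseband bitstream is carried by systematic changes to an RF carrier.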

Phil Frost