
Everyone seems to have different definitions everywhere I look.

According to my lecturer:

\$ R_{bit} = \frac{bits}{time} \$

\$ R_{baud} = \frac{data}{time} \$

According to manufacturers:

\$ R_{bit} = \frac{data}{time} \$

\$ R_{baud} = \frac{bits}{time} \$

Which is the correct one and why? Feel free to give the origins of why it is defined as such too.

Related question: link.

dim
Psi

  • If it's just zeros and ones, baud is bits per second. – User323693 Jan 25 '17 at 12:44
  • Nobody will ever care again about this distinction once you leave college. The only rational thing to do is to stick with whatever your lecturer says it is. – Jan 25 '17 at 12:51
  • Possible duplicate of [Difference between Hz and bps](http://electronics.stackexchange.com/questions/56265/difference-between-hz-and-bps) (The question is not an exact duplicate, but the answers answer this question) – The Photon Jan 25 '17 at 14:08
  • A bit can be a symbol. Baud is symbols per second – Voltage Spike Jan 25 '17 at 17:10

3 Answers


Baud rate is the rate of individual bit times or slots for symbols. Not all slots necessarily carry data bits, and in some protocols, a slot can carry multiple bits. Imagine, for example, four voltage levels used to indicate two bits at a time.

Bit rate is the rate at which the actual data bits get transferred. This can be less than the baud rate because some bit time slots are used for protocol overhead. It can also be more than the baud rate in advanced protocols that carry more than one bit per symbol.

For example, consider the common RS-232 protocol. Let's say we're using 9600 baud, 8 data bits, one stop bit, and no parity bit. One transmitted "character" is then a start bit, followed by the 8 data bits, followed by a stop bit.

Since the baud rate is 9600 symbols per second (and here each symbol carries one bit), each time slot is 1/9600 seconds ≈ 104 µs long. The character consists of a start bit, 8 data bits, and a stop bit, for a total of 10 bit time slots. The whole character therefore takes about 1.04 ms to transmit.

However, only 8 actual data bits are transmitted during this time. The effective bit rate is therefore (8 bits)/(1.04 ms) = 7680 bits/second.

If this were a different protocol that, for example, used four voltage levels to indicate two bits at a time with the baud rate held the same, then there would be 16 bits transferred each character. That would make the bit rate 15,360 bits/second, actually higher than the baud rate.
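The arithmetic above can be checked with a short Python sketch, using the 9600-baud, 8-N-1 framing from the example plus the hypothetical four-level variant:

```python
# RS-232-style framing: start bit + 8 data bits + stop bit = 10 slots/character.
baud = 9600                      # symbol slots per second
data_bits = 8
frame_slots = 1 + data_bits + 1  # 10 slots per character

char_time = frame_slots / baud   # ~1.04 ms per character
bit_rate = data_bits / char_time # only the data bits count: 7680 bits/s
print(round(bit_rate))           # 7680

# Hypothetical 4-level variant: each slot carries 2 bits, so the 8 data
# slots carry 16 bits while the baud rate stays at 9600.
bit_rate_4level = (2 * data_bits) / char_time
print(round(bit_rate_4level))    # 15360
```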

Olin Lathrop
  • It should also be noted that the bit rate can also be higher than the baud rate if the symbol encoding used allows for multiple bits per symbol. This isn't possible on a simple binary link like RS-232 but is common on systems using more complex encoding schemes. – Andrew Jan 25 '17 at 13:10
  • @Andrew: Yes, good point. – Olin Lathrop Jan 25 '17 at 13:40
  • Whoever downvoted this, I am stumped as to what you think is wrong. – Olin Lathrop Jan 25 '17 at 13:41
  • It wasn't me, however I believe that start/stop bits account for the difference between raw bit rate and data rate, not for the difference between bit rate and baud rate (which are exactly the same for RS-232). – Dmitry Grigoryev Jan 25 '17 at 15:35
  • My recollection is that people didn't start noticing the distinction until modems with built-in compression started appearing. – Barmar Jan 25 '17 at 16:32
  • No, the baud rate is the number of symbols per second. In your example, bit rate = baud rate. When a symbol can carry more than one bit then the baud rate < bit rate. For example, 16-QAM carries sixteen bits per symbol. – Paul Elliott Jan 25 '17 at 16:38
  • @OlinLathrop The baud rate is almost always much *less* than the bit rate. While RS232 is common, it is nowhere near as common anymore as DSL, Ethernet, and many other protocols that have baud rates much lower than their bit rates. RS232 is the outlier because it is ancient. – David Schwartz Jan 25 '17 at 17:58
  • @Makyen: OK, I've added that to the answer. – Olin Lathrop Jan 25 '17 at 18:44
  • @PaulElliott: I think each symbol is a one-of-16 choice, isn't it? That would imply four one-of-two choices (bits) per symbol, not 16. – supercat Jan 25 '17 at 22:04
  • @DavidSchwartz Plenty of modern protocols work exactly the same as RS232. Only with modern hardware the 8-bit-per-word UARTs have been replaced with N-bits-per-word (where N is typically > 16) SERDES. A SERDES however is merely a more generalised UART. HDMI, SATA and USB are not ancient. – slebetman Jan 26 '17 at 06:15
  • A modern case in point is the ubiquitous gigabit Ethernet (1000BASE-T). Contrary to what its name suggests, it does not transfer bits at 10^9 per second, although 8-bit bytes can cross your network at close to 125M per second. At the wire (physical) level it is transferring data on all four twisted pairs in parallel, and it encodes on each pair more than one bit per clock period using multiple voltage levels. If you think this is complicated, 10GBASE-T will blow your mind! – nigel222 Jan 26 '17 at 11:55
  • @Paul Elliott In N-something modulation types, the N is the number of different symbols. To get the number of bits per symbol, you need to take \$log_2(N)\$. – AndrejaKo Jan 26 '17 at 12:56
  • I was wrong. 16-QAM encodes four bits per symbol. – Paul Elliott Feb 06 '17 at 06:24

The line bit rate is the number of bits per second being moved.

The data bit rate is the number of information bits being moved per second.

The baud rate is the number of symbols per second (the baud is named after Émile Baudot).

The line rate and information rate can be different due to line coding.

An example of line coding is QAM; QAM64 encodes 6 bits per symbol (\$64 = 2^6\$), so the baud rate would be \$\frac{\text{line bit rate}}{6}\$.

As a (very contrived) example we might see something like this:

Base rate = 64000 bits per second - this is the data rate

Line coded using standard framing on a 32-bit basis, adding 1 framing bit per word: this adds 2,000 framing bits per second, so the line rate is now 66,000 bits per second.

Now we perform QAM16 (encodes 4 bits per symbol), so the baud rate (or symbol rate) = 16.5kBaud
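The chain above can be written out as a small Python sketch (the numbers are the contrived ones from this answer):

```python
# Contrived example: data rate -> line rate (framing added) -> baud rate (QAM16).
data_rate = 64_000                        # information bits per second
framing_bits = data_rate // 32            # 1 framing bit per 32-bit word = 2000
line_rate = data_rate + framing_bits      # 66_000 line bits per second

bits_per_symbol = 4                       # QAM16: 16 = 2**4, so 4 bits/symbol
baud_rate = line_rate / bits_per_symbol   # 16_500 symbols per second
print(line_rate, baud_rate)               # 66000 16500.0
```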

Another way the line bit rate and data rate may differ is where we need to stuff bits into the bitstream, as in SDLC.

The SDLC framing symbol is 01111110 (0x7E) and is used for both the start and end of a frame; clearly we don't want the data fields to contain the frame symbol and erroneously flag a start or end of frame, which would render the link useless.

To prevent this, if a sequence of five '1' bits is detected within the payload section of the frame (which the transmit source knows about), a zero is inserted into the bit stream to prevent a premature end-of-frame symbol. The overhead on the channel is not deterministic, incidentally.
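A minimal sketch of that zero-insertion rule in Python (the function name `stuff_bits` is just for illustration):

```python
# SDLC/HDLC-style zero insertion ("bit stuffing"): after five consecutive
# 1 bits in the payload, the transmitter inserts a 0, so the flag pattern
# 01111110 can never appear inside the data.

def stuff_bits(bits):
    out = []
    ones = 0
    for b in bits:
        out.append(b)
        if b == 1:
            ones += 1
            if ones == 5:
                out.append(0)   # inserted zero breaks up the run of ones
                ones = 0
        else:
            ones = 0
    return out

payload = [0, 1, 1, 1, 1, 1, 1, 0]   # would otherwise look like a flag
print(stuff_bits(payload))           # [0, 1, 1, 1, 1, 1, 0, 1, 0]
```

Note the overhead depends on the payload contents: a run-heavy payload grows more than a sparse one, which is why the channel overhead is not deterministic.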

Peter Smith
  • And when can line bit rate and baud rate be different? – chtenb Jan 25 '17 at 14:36
  • @ChieltenBrinke: When error correction is used, extra bits are transmitted which don't actually carry additional information, only provide error checking for existing data. Also there is some overhead in the protocol being used, which is necessary but doesn't add additional information bits. – loneboat Jan 25 '17 at 15:16
  • According to this post, that only explains the difference between data rate and line bit rate. By reading this post however, I cannot deduce the difference between *baud* rate and line bit rate. – chtenb Jan 25 '17 at 15:59
  • This topic was discussed pretty heavily when 9600 bps modems first became widely available in the 1980s. I'm surprised no one has gone rooting around in the 20-to-30-year-old archives of [comp.dcom.modems](https://groups.google.com/forum/#!searchin/comp.dcom.modems/(subject$3Abit$20OR$20subject$3Abits)$20AND$20subject$3Abaud%7Csort:relevance) . – shoover Jan 25 '17 at 16:24
  • @ThomasHollis This should be the accepted answer. – tcrosley Jan 25 '17 at 16:42

Baud rate refers to the number of "slots" per second. With most forms of serial communication the data in each slot is a one or a zero. But one could, eg, transmit a voltage indicating a value between zero and three, for four (vs two) possible values per slot. With four values per slot one could transmit data twice as fast as with regular "binary" mode data.
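The "values per slot" idea generalizes: a slot with \$2^k\$ distinguishable levels carries \$k = log_2(levels)\$ bits, which is exactly what doubles the rate in the four-level case. A quick Python illustration:

```python
import math

# Each slot (symbol) with L distinguishable levels carries log2(L) bits,
# so going from 2 levels to 4 levels doubles the data carried per slot.
for levels in (2, 4, 8, 16):
    print(levels, "levels ->", math.log2(levels), "bits per slot")
```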

This sort of encoding was used in the early days of telegraph (when all sorts of weird strategies were tried), but is hardly ever done anymore for communications of any distance. However, multi-level encoding is still sometimes done inside computer integrated circuits, to reduce the number of wires required.

Hot Licks
  • Multi-level coding is extremely common in data communications. For example 1000BASE-T (Gigabit Ethernet) uses PAM-5 modulation. – Paul Elliott Jan 25 '17 at 16:41
  • This ignores the hundreds of other standards using QAM over long distance (WiFi, QAM TV, others) and other protocols which don't carry a 1:1 bits/symbol rate (USB, Firewire, SATA, Ethernet, HD Radio, Digital Cellular standards (3G/4G/CDMA), etc...). Satellite uses PSK and QAM extensively, undersea cables use STM which adds error correction symbols. – Mitch Jan 25 '17 at 21:08
  • I guess I hadn't been aware that the scheme had survived, outside of an RF environment where the whole bit-rate thing gets muddled. – Hot Licks Jan 26 '17 at 00:02