14

This might be obvious but I don't understand why RS-232 needs a stop bit. I understand that the start bit is necessary to notify the other end about the beginning of a transmission.

Let's say we are communicating at 9600 bps. We go from high to low, so that the receiver will know something is coming. The receiver also knows that we are at 9600 bps and that it will receive 7 bits of data in total.

So, after receiving 7 bits, the transmission will end. Since we can determine the end of the transmission just by calculation, why do we need a stop bit as well?
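
For concreteness, here is a minimal bit-banged transmit sketch for the frame described above (the `set_tx_pin()` and `delay_us()` helpers are hypothetical stand-ins for whatever the hardware actually provides), showing where the start and stop bits sit around the 7 data bits:

```c
#include <stdint.h>

/* Hypothetical board-support helpers (stand-ins for real hardware access):
 *   set_tx_pin(level) - drive the TX line high (1) or low (0)
 *   delay_us(us)      - busy-wait for the given number of microseconds
 */
void set_tx_pin(int level);
void delay_us(uint32_t us);

#define BAUD    9600u
#define BIT_US  (1000000u / BAUD)   /* ~104 us per bit at 9600 bps */

/* Send one 7-bit character, LSB first: start(0), d0..d6, stop(1). */
void uart_send_char(uint8_t c)
{
    set_tx_pin(0);                  /* start bit: the high-to-low edge */
    delay_us(BIT_US);

    for (int i = 0; i < 7; i++) {
        set_tx_pin((c >> i) & 1);   /* data bits, LSB first */
        delay_us(BIT_US);
    }

    set_tx_pin(1);                  /* stop bit: return the line high */
    delay_us(BIT_US);               /* line then idles high until the next start bit */
}
```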

Utku
  • If you have another byte coming.. how do you plan on separating them if the last bit in the first byte is also low? (Or is it high.. I never remember) – Trevor_G Oct 02 '17 at 19:36
  • @Trevor Start bit + 7 bits = First byte is over. The first bit of the next byte might be high or low. So, how does the stop bit make a difference here? – Utku Oct 02 '17 at 19:39
  • Because the clocks are resynchronized on the first edge of the start bit. If the last bit of the first byte is the same level as the start bit, you get no edge to synch on. – Trevor_G Oct 02 '17 at 19:40
  • Stop followed by start ensures you get a "START" edge. – Trevor_G Oct 02 '17 at 19:41
  • It is a state machine that centre-samples the data 8 cycles of a 16× clock after the leading edge of the stop-to-start transition. The last bit, parity, should also be used for integrity. – Tony Stewart EE75 Oct 02 '17 at 20:26
  • Which leads you to the question... why do they give you the option of sending 2 stop bits? https://electronics.stackexchange.com/questions/29945/one-or-two-uart-stop-bits – Trevor_G Oct 02 '17 at 20:50
  • Why do cars need brakes? – Voltage Spike Oct 02 '17 at 22:59
  • @Trevor Two stop bits were sometimes needed to give the receiver a bit more time (literally!) to process the incoming byte. – TripeHound Oct 03 '17 at 00:14

3 Answers

27

The thing to remember is that RS-232 is an asynchronous protocol: there is no clock signal associated with it.

[Figure: timing diagram of an 8-bit frame showing receiver sampling points and the clock-error margin]

Figure 1. Receiver sampling points. Source: Sangoma.

The start bit is used to trigger the read cycle in the receiver. The receiver synchronises itself on the start bit's falling edge and then waits 1.5 bit periods before sampling the first data bit; thereafter the bits are sampled at one-bit intervals. Because each sample lands mid-bit, even with a 5% clock error the receiver should still be within the bit timing for the last bit.
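
As a sketch of that receive cycle (hypothetical `read_rx_pin()`, `wait_for_falling_edge()` and `delay_us()` helpers; 7 data bits, LSB first, as in the question), including the stop-bit check that UARTs report as a framing error:

```c
#include <stdint.h>

/* Hypothetical helpers, matching the question's 9600 bps example:
 *   read_rx_pin()           - current RX line level (0 or 1)
 *   wait_for_falling_edge() - block until RX goes high-to-low
 *   delay_us(us)            - busy-wait for the given microseconds
 */
int  read_rx_pin(void);
void wait_for_falling_edge(void);
void delay_us(uint32_t us);

#define BIT_US  104u                    /* one bit period at 9600 bps */

/* Receive one 7-bit character; returns -1 on a framing error
 * (stop bit sampled low), otherwise the received character. */
int uart_recv_char(void)
{
    uint8_t c = 0;

    wait_for_falling_edge();            /* start bit resynchronises the receiver */
    delay_us(BIT_US + BIT_US / 2);      /* 1.5 bit periods: centre of first data bit */

    for (int i = 0; i < 7; i++) {
        c |= (uint8_t)(read_rx_pin() << i);  /* sample each bit at its centre */
        delay_us(BIT_US);               /* step one bit period to the next centre */
    }

    if (read_rx_pin() == 0)             /* now mid-stop-bit: the line must be high */
        return -1;                      /* low here = framing error */

    return c;
}
```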

Since the start bit (shown low in Figure 1) is identified by a falling edge, it must be preceded by a high level, and this is what the stop bit ensures. The alternative would be two start bits and no stop bit, but that wouldn't change the total message length.

The linked article has some other points worth noting.

Transistor
  • +1 I was just going to put my comments into a similar answer. You saved me the time, thanks :) – Trevor_G Oct 02 '17 at 19:50
  • Correct me if I'm wrong (memory not what it was), but most UARTs also give you a framing error if the stop bit is not detected when expected, which gives you a little more error detection. – Trevor_G Oct 02 '17 at 19:52
  • @Trevor: It's called a "framing" error because what it really means is that the edge that the receiver thought was the leading edge of a start bit was really some other data bit -- i.e., the receiver had "framed" incorrectly on the data stream (misidentified the byte boundaries). This can happen when the UART is connected to a data stream that's already in progress, or is suddenly reset in the middle of a data stream. – Dave Tweed Oct 02 '17 at 20:02
  • @DaveTweed yes I know, but how does it know that if it does not look for the stop bit. I mean that's the only bit following that should be in a known state. – Trevor_G Oct 02 '17 at 20:04
  • @Trevor: Yes, it is indeed looking for the stop bit. I just wanted to point out that it isn't just a bit error in that bit, but an indicator of a different problem, which is how it gets its name. – Dave Tweed Oct 02 '17 at 20:06
  • @DaveTweed ah.. right gotchya – Trevor_G Oct 02 '17 at 20:06
  • The RS-232 standard actually does allow for clock lines, but wikipedia correctly describes them as 'seldom-used' -- even when sync _protocols_ were used, as I describe in my answer, they usually used their own clocking not 232. – dave_thompson_085 Oct 03 '17 at 03:12
5

RS-232 doesn't require it; some RS-232 devices do. In particular, serial/RS-232 interfaces on computers usually use a UART (Universal Asynchronous Receiver/Transmitter), which supports only asynchronous transmission.

Back in its heyday, RS-232 was commonly used for networking protocols like 'bisync' (BSC), SNA/SDLC, X.25/LAPB, and DECnet/HDLC, which used synchronous transmission of a 'frame' or 'block', typically up to several hundred octets, sent continuously (no start or stop bits) from a beginning marker to an ending marker. The latter three used bit stuffing (transparent to software at either end) partly to ensure enough transitions to maintain bit-level synchronization regardless of the data. Both UART (async-only) and USART (sync and async) chips were available, but the former were cheaper and more commonly used.
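
As a sketch of that HDLC-style bit stuffing (the `emit_bit()` serialiser is a hypothetical stand-in): the transmitter inserts a 0 after any run of five consecutive 1s, so the payload can never mimic the 01111110 frame marker and the line never goes long without a transition:

```c
#include <stdint.h>

/* Hypothetical serialiser: puts one bit on the line. */
void emit_bit(int bit);

/* HDLC-style bit stuffing: after five consecutive 1s, insert a 0.
 * The receiver deletes any 0 that follows five 1s, so the stuffing
 * is invisible to the software at either end. */
void stuff_and_emit(const uint8_t *data, int nbits)
{
    int ones = 0;                           /* length of the current run of 1s */

    for (int i = 0; i < nbits; i++) {
        int bit = (data[i / 8] >> (i % 8)) & 1;   /* LSB-first within each octet */
        emit_bit(bit);

        if (bit == 1) {
            if (++ones == 5) {              /* five 1s in a row... */
                emit_bit(0);                /* ...stuff a 0 the receiver will strip */
                ones = 0;
            }
        } else {
            ones = 0;                       /* a 0 resets the run */
        }
    }
}
```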

By the 1990s most if not all synchronous uses of RS-232 had been superseded by local Ethernet (and later Ethernet-emulating 802.11) or Token Ring (now mostly forgotten, but then a serious competitor) and by remote T-1, ISDN, or Frame Relay, while some connections that were naturally or conventionally async (such as cheapish dot-matrix printers) remained, so computer designers used a cheaper async-only serial interface (or, in recent years, none at all).

0

How accurately are you producing 9600 baud? Or, more to the point, when you get up to 115.2 kbaud, how accurately are you doing that? And is your transmitting equipment capable of any other frequency? Because if it is, the receiver needs to handle that too.
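
A rough worked bound, assuming the mid-bit sampling described in the first answer above: with 1 start bit, 7 data bits, and 1 stop bit, the stop bit is sampled about 8.5 bit periods after the start edge, and that sample must stay within half a bit period of the bit centre, so the two clocks must agree to within

$$ \left| \frac{f_{\mathrm{tx}}}{f_{\mathrm{rx}}} - 1 \right| < \frac{0.5}{8.5} \approx 5.9\% $$

(about 5% for the 8-data-bit frame of Figure 1 above, where the stop bit lands 9.5 periods out).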

A stop bit mitigates the problem. The receiver knows that, counting from the falling edge of the start bit, it gets 1 start bit and 7 bits of data (in your example), and the 9th bit must always be high. If it isn't, the transmitter must not be transmitting at the right frequency, and we know we don't have a good connection.

Clearly that's not perfect (if the last few bits happen to be high anyway, an error can slip through), but it's a start. Error detection is never perfect; it's always a case of trying to catch the most common problems, most of the time. Your protocol can help with this by ensuring any "ping" messages used to establish the session include (or ideally start with) a zero byte.

Graham