
I've been reading a lot about USB and FireWire recently and stumbled upon the fact that, although USB 2.0 has a signaling rate of 480 Mbit/s (which would be 60 MB/s), it rarely reaches that transfer rate in practice. Some sort of overhead is often mentioned, but I really don't understand what that means, and there doesn't seem to be a comprehensible explanation for someone who isn't very tech-savvy like me.

I've also read that FireWire has overhead, but it doesn't seem to be as severe as USB's. Why is that?

EDIT: I guess I'm talking about Mbit. Also, so you could say that, because extra data is sent as well, such as overhead bits for error checking, the data I actually want to transfer ends up moving far below the theoretical rate?

Boehmi

1 Answer


USB 2.0 is quite inefficient compared to USB 3.0.

There are link commands and other transactions meant to keep the link healthy and operational, and these carry no payload data.

There are ACK packets for nearly every received data packet, and because the bus is half-duplex (only one side can transmit at a time), those acknowledgements interrupt the data transfer.
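To get a feel for how much bus time that handshaking and packet framing eat up, here is a rough back-of-the-envelope sketch in Python. The field sizes come from the USB 2.0 high-speed packet format (SYNC, PID, CRC); inter-packet gaps and bus turnaround times are ignored, so the real efficiency is somewhat lower than what it prints.

```python
# Rough framing overhead of one USB 2.0 high-speed bulk transaction.
# Sizes are bytes on the wire; inter-packet gaps and turnaround time ignored.

SYNC = 4  # 32-bit sync pattern at the start of every high-speed packet
EOP = 1   # end-of-packet signaling, roughly one byte of bus time

token_packet = SYNC + 1 + 2 + EOP   # PID + address/endpoint/CRC5
data_framing = SYNC + 1 + 2 + EOP   # PID + CRC16, excluding the payload itself
ack_packet = SYNC + 1 + EOP         # handshake packet is just a PID

payload = 512                       # max bulk packet size at high speed
overhead = token_packet + data_framing + ack_packet

print(f"protocol bytes per {payload}-byte transaction: {overhead}")
print(f"payload efficiency: {payload / (payload + overhead):.1%}")  # roughly 96%
```

Framing alone only costs a few percent; most of the remaining bandwidth is lost to the encoding and scheduling described below.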

There is error-detection information (CRC fields) in every packet, line-coding overhead (bit stuffing in USB 2.0, 8b/10b encoding in USB 3.0) and other physical-layer fields that carry no payload.
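The line-coding part is easy to put rough numbers on. A small sketch, using worst-case figures rather than typical traffic (random data triggers far fewer stuff bits):

```python
# Line-coding overhead alone, independent of any protocol headers.

# USB 2.0: NRZI with bit stuffing - after six consecutive 1 bits a 0 is inserted.
# Worst case (data that is all 1s) adds one stuff bit for every six data bits.
usb2_bit_stuffing_worst = 1 / 6
print(f"USB 2.0 bit stuffing, worst case: {usb2_bit_stuffing_worst:.1%}")  # ~16.7%

# USB 3.0 SuperSpeed: 8b/10b encoding puts 10 bits on the wire per data byte,
# so 5 Gbit/s of line rate carries at most 4 Gbit/s of actual data bits.
usb3_8b10b = 2 / 10
print(f"USB 3.0 8b/10b encoding overhead: {usb3_8b10b:.1%}")  # 20%
```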

The bus uses a broadcast architecture for the downstream channel: every device receives all packets from the host and is supposed to process only those addressed to it.

Furthermore, the USB 2.0 bus is polled: the host queries all devices cyclically for new data, and devices cannot interrupt the host on their own. USB 3.0 addresses this with the NRDY and ERDY packet types.
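Putting the framing and the scheduling together: the USB 2.0 specification itself allows a single bulk endpoint at most 13 packets of 512 bytes per 125 µs microframe. A quick sketch of what that best case works out to (real host controllers, devices and drivers rarely sustain even this):

```python
# Best-case bulk throughput permitted by the USB 2.0 high-speed schedule.

microframe_s = 125e-6        # the host schedules traffic in 125 microsecond slots
packets_per_microframe = 13  # max 512-byte bulk packets per microframe (per the spec)
bytes_per_packet = 512

best_case = packets_per_microframe * bytes_per_packet / microframe_s
print(f"max bulk throughput: {best_case / 1e6:.1f} MB/s")  # ~53.2 MB/s
print(f"raw signaling rate:  {480e6 / 8 / 1e6:.1f} MB/s")  # 60 MB/s
# Polling other devices, NAKs while a device is busy, and host/driver latency
# all push the real-world number further below this.
```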

The good thing is that the specification is freely available at usb.org, unlike the C spec, for example. Chapter 3 of the USB 3.0 specification gives quite a nice answer to "How is 3.0 better than 2.0?" and "What is happening under the hood, at the physical layer?"

Vorac