16

As a software developer, parallel processing has usually meant faster processing to me: tasks don't have to wait for other tasks to finish if they don't depend on them.

So how come the serial connection is (arguably) seen as the future, while the parallel one is seen as a thing of the past?


M. A. Kishawy
  • You are assuming that the development that made serial communications faster couldn't also have occurred on a parallel interface, while overlooking the major disadvantage of parallel: the parallel lines themselves. –  Aug 23 '16 at 14:12
  • So who said it is faster? The fact that the parallel-port data transfer in old computers is slower than the serial port does not imply that conclusion. "Serial connection is the future" is arguable; right now it is developing faster because the hardware cost is much lower than for a parallel one (fewer signals means fewer wires, fewer transceivers, less logic). – Eugene Sh. Aug 23 '16 at 14:14
  • @JonRB Are you saying parallel communication can be faster, but it was abandoned due to its physical size/look? Or am I missing your point here? – M. A. Kishawy Aug 23 '16 at 14:15
  • The DRAM in your PC sits in a socket with a 64-bit parallel bus. – Turbo J Aug 23 '16 at 14:16
  • The shift is from "parallel on a single clock" to "multiple serial links". Answer pending. – pjc50 Aug 23 '16 at 14:16
  • Doesn't deserve the downvotes, I would say... – Eugene Sh. Aug 23 '16 at 14:22
  • It's not a question of speed; there are other design considerations, like the number of conductors and transceivers. It just happens that the industry likes serial better than parallel. You could arguably take two SATA ports, run them side by side, and call it a parallel bus; it wouldn't be faster, but it would have more throughput. – Voltage Spike Aug 23 '16 at 16:57
  • While I'm not 100% convinced that this question fully qualifies for ee.se, I will state for clarity that the big "problem" with parallel communication, traditionally, has been timing. Ensuring that all bits are received during the same data clock cycle requires lowering the data clock speed enough that the maximum anticipated path-deviation error is "covered." By my understanding, that's why (as @pjc50 said) the current trend is towards using one or more serial links for maximum per-wire transmission speeds. – Robherc KV5ROB Aug 23 '16 at 20:57

4 Answers

22

The problem is that, as speed increases, keeping the signals on a parallel bus clean and in sync at the target becomes harder and harder.

With serial, "all you have to do" is extract the clock, and with it the data. You can help by creating lots of transitions: 8b/10b, bi-phase, or Manchester encoding; there are lots of schemes (yes, this means you are adding even more bits). Absolutely, one serial interface has to run N times faster than an N-wide parallel bus in order to be "faster", but we passed that point a long time ago.
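
To illustrate the self-clocking idea, here is a minimal sketch of Manchester encoding in Python (the function names are mine, for illustration only): every bit is guaranteed a mid-bit transition, which is what lets the receiver recover the clock from the data itself.

```python
# Minimal sketch of Manchester encoding (IEEE 802.3 convention):
# each data bit is sent as two half-bit symbols, guaranteeing a
# mid-bit transition the receiver can lock onto for clock recovery.

def manchester_encode(bits):
    """0 -> high-then-low (1,0); 1 -> low-then-high (0,1)."""
    out = []
    for b in bits:
        out += [0, 1] if b else [1, 0]
    return out

def manchester_decode(symbols):
    """Pair up half-bit symbols and map each transition back to a bit."""
    bits = []
    for i in range(0, len(symbols), 2):
        first, second = symbols[i], symbols[i + 1]
        bits.append(1 if (first, second) == (0, 1) else 0)
    return bits

data = [1, 0, 0, 1, 1, 1, 0]
line = manchester_encode(data)
assert manchester_decode(line) == data
print(line)  # every bit produces a transition, even for runs of 0s or 1s
```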

Interestingly, we now have parallel serial buses: your PCIe, your Ethernet (40GigE is 4 × 10 Gb lanes, 100GigE is 10 × 10 Gb lanes, and the new thing coming is 4 × 25 Gb lanes). Each of these lanes is an independent serial interface taking advantage of the "serial speed", but the overall data stream is split up and load-balanced across the separate serial interfaces, then recombined as needed on the other side.
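
As a toy model of that load balancing (a sketch of the general idea, not the actual PCIe or Ethernet framing; the names are hypothetical), striping a byte stream across N lanes and merging it back can look like this:

```python
# Toy model of lane striping: split one data stream across N independent
# serial lanes round-robin, then interleave them back together at the
# receiver. Real links (PCIe, 40GigE) add framing, scrambling and
# per-lane alignment markers on top of this basic idea.

def stripe(data: bytes, lanes: int):
    """Distribute bytes round-robin across `lanes` serial streams."""
    return [data[i::lanes] for i in range(lanes)]

def merge(streams):
    """Re-interleave the per-lane streams back into one stream."""
    out = bytearray()
    for group in zip(*streams):           # assumes equal-length lanes
        out.extend(group)
    return bytes(out)

payload = b"0123456789AB"                 # length divisible by lane count
lanes = stripe(payload, 4)                # 4 lanes, like a x4 PCIe link
print(lanes)                              # [b'048', b'159', b'26A', b'37B']
assert merge(lanes) == payload
```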

Obviously one serial interface can go no faster than one bit lane of a parallel bus, all other things held constant. The key is that as speed increases, keeping the bits parallel across routing, cables, and connectors, and meeting setup and hold times at the far end, is the problem; you can easily run N times faster using one serial interface. Then there is the real estate, from pins to PC board to connectors. Recently there has been a movement, instead of stepping up from 10 Gb Ethernet to 40 Gb as 4 × 10 Gb lanes, to 25 Gb per lane: one 25 Gb pair, or two 25 Gb pairs for 50 Gb, rather than four 10 Gb pairs, costing roughly half the copper or fiber in the cables and elsewhere. That marginal cost in server farms was enough to abandon the traditional industry-standards path, whip up a new standard on the side, and roll it out in a hurry.
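
A quick pair count makes the saving concrete (illustrative numbers, counting differential pairs in one direction only):

```python
# Pair count comparison for 40/50 Gb links (one direction only):
# 4 x 10G lanes vs 2 x 25G lanes. Half the pairs for a quarter more
# bandwidth, which is roughly the copper/fiber saving described above.

legacy = {"lanes": 4, "gbps_per_lane": 10}
newer  = {"lanes": 2, "gbps_per_lane": 25}

for name, link in (("4x10G", legacy), ("2x25G", newer)):
    total = link["lanes"] * link["gbps_per_lane"]
    print(f"{name}: {link['lanes']} pairs, {total} Gb/s total")
# 4x10G: 4 pairs, 40 Gb/s total
# 2x25G: 2 pairs, 50 Gb/s total
```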

PCIe likewise started with one or more serial lanes with the data load-balanced across them, and it still uses serial lanes with the data load-balanced and rejoined; each generation increases the speed per serial interface rather than adding more and more serial pairs.

SATA is the serial version of PATA, which is a direct descendant of IDE. It is not that serial was faster, just that it is far easier to sync up with and extract a serial stream than it is to keep N parallel bits in sync from one end to the other. And it remains easier to transmit and extract even when the serial stream is, per bit lane, 16, 32, 64, or more times faster than the parallel bus.

old_timer
  • +1 for this; you should emphasize the statement **"The problem is, as speed increases, keeping the signals on a parallel bus clean and in sync at the target"**. – Antonio Aug 23 '16 at 15:36
17

Why is the serial connection faster than the parallel connection?

You're making wrong assumptions. Take any serial connection. Now place 10 of them in parallel and call that the parallel version. Which one is faster?

So how come the serial connection is considered the future while the parallel one a thing of the past?

Says who?

Parallel connections are still everywhere. For a fast, short-distance connection, parallel beats serial any time; take, for example, the interface between a CPU and its RAM.

For long distances, most connections are serial because the cost of multiple wires is higher. But in optical fiber we can send signals at different wavelengths through the same fiber; you could call that parallel.

Bimpelrekkie
  • I'd add that another disadvantage of long parallel buses is crosstalk/noise between the lines. – m.Alin Aug 23 '16 at 14:34
  • Correct; that could be improved by using shielding, but that makes the connection more expensive. – Bimpelrekkie Aug 23 '16 at 14:38
  • Yep, but I was mostly thinking of buses on a PCB, rather than in a cable. – m.Alin Aug 23 '16 at 14:46
  • Shielding on a PCB is also done: you can put ground lines between the data lines and/or add ground layers. This takes more board space and increases cost, so it holds for PCBs too. – Bimpelrekkie Aug 23 '16 at 14:48
  • Also, at high enough speeds, skew between the different bits of a parallel signal becomes an issue: in practice you don't achieve quite the speed gain you would expect. –  Aug 23 '16 at 15:35
  • @BrianDrummond ...which is why multiple serial links *usually* achieve higher data throughput "per pin" than a "true parallel" link, in all but the shortest/most protected links. – Robherc KV5ROB Aug 23 '16 at 21:00
16

The shift is from "parallel on a single clock" to "multiple serial links", such as PCIe, where a card may have 1 to 16 "lanes".

There are two factors involved: skew and size.

Adding more conductors makes the cable, its connectors, and the receptacles on each device larger and more expensive. Look at how large things like Centronics printer cables and 40-pin SCSI cables were! You're not going to see a phone with a Centronics connector on it. So as devices get smaller, there's pressure for smaller interfaces with fewer wires.

Devices have also gotten faster, with better signal processing, so much higher bitrates are now possible. However, this exposes a disadvantage of traditional parallel links: skew.

Skew is the difference in arrival times between signals in a group. The traditional parallel link has a single clock for all signals. It assumes that the clock and signals all arrive at roughly the same time. As the signals get faster, the tiny differences in arrival times become more important. This means that a wide parallel connection is limited in speed: you have to go slowly enough that all bits arrive within the same time window and are not overwritten by the next bit coming along.
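
A back-of-the-envelope calculation shows how that window caps the clock (the numbers below are made up for illustration, not taken from any particular standard):

```python
# Illustrative skew budget: on a single-clock parallel bus, every bit
# must land inside the same clock window, so the usable bit period is
# bounded below by the worst-case skew plus setup/hold margins.

skew = 2e-9          # 2 ns worst-case arrival spread across the lines
setup_hold = 1e-9    # 1 ns combined setup + hold requirement

min_bit_period = skew + setup_hold
max_clock = 1 / min_bit_period
print(f"max parallel clock ~ {max_clock/1e6:.0f} MHz")   # ~333 MHz

# A single serial lane with clock recovery has no inter-line skew,
# so only jitter and the transceiver limit its rate.
```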

pjc50
  • This. It's the number of pins. These serial buses are converted back to parallel into a register, so parallel would be natural, but you'd never have a cell phone that fits in your pocket without serial ports. System-on-a-chip, etc. – Analog Arsonist Aug 24 '16 at 02:10
3

Wikipedia has an article listing x86 CPU sockets by year. Check it out: as CPUs became faster, their sockets grew from a mere 40 pins in the 1970s to over 1500 pins now. This cannot be explained by word width alone: if a 40-pin socket was sufficient for a 16-bit CPU, surely a 64-bit CPU of similar design would fit in a 160-pin socket.

The truth is, the important performance gains came from faster RAM access, which is achieved by adding more and more RAM data lines. It's not uncommon to see CPUs with quad-channel DDR memory interfaces (256 data lines in parallel), and nobody is planning to replace those with serial interfaces any time soon.
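
A rough peak-bandwidth estimate shows what those parallel lines buy (illustrative figures, assuming a quad-channel DDR4-2400 interface):

```python
# Rough peak-bandwidth estimate for a quad-channel DDR interface:
# channels * bus width * transfer rate, ignoring protocol overhead.

channels = 4                 # quad-channel: 4 x 64 = 256 data lines
bus_width_bits = 64          # per channel
transfers_per_sec = 2400e6   # DDR4-2400: 2400 MT/s

peak_bytes = channels * (bus_width_bits // 8) * transfers_per_sec
print(f"peak ~ {peak_bytes/1e9:.1f} GB/s")   # ~76.8 GB/s
```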

That being said, such massively parallel connections are very limited in transmission distance and connector usability, so they are not used to connect off-board peripherals or external devices. That is where serial connections really shine.

Dmitry Grigoryev
  • Are those RAM connections truly parallel with a single clock, or are they just multiple serial lanes (like the PCIe links mentioned in other answers)? – Marki555 Aug 23 '16 at 15:45
  • @Marki555 At most there may be one clock signal per 64 bits of a RAM module. In reality I think they are all clocked with the same FSB clock signal. – Dmitry Grigoryev Aug 23 '16 at 16:23
  • They all have a single clock. The main difference is that all the connection traces are on a single board, so only the motherboard designer is responsible for ensuring equal lengths and impedances for all traces. The RAM boards themselves are pretty small, so it isn't that hard either. All other devices depend on both the motherboard maker and the device maker, and one error in one device could break the whole system. – Ronan Paixão Aug 23 '16 at 18:53
  • ...and DDR interfaces have an elaborate system for calibrating adjustable delays to make this work. – pjc50 Aug 23 '16 at 19:49