4

Is a phase-locked loop (PLL) compulsory for decoding Manchester-encoded data? Is the PLL used so that Manchester encoding can support different data rates?

Useful links: http://www.electronicspoint.com/manchester-decoder-t68939.html, http://www.erg.abdn.ac.uk/~gorry/course/phy-pages/dpll.html

Note: As I gather, the mid-bit transitions in Manchester-encoded data are used for clock synchronization at the receiver. So for decoding, one can detect these mid-bit transitions and then sample 3T/4 later to decode correctly.
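For instance, something along these lines is what I have in mind (a rough, untested sketch; read_pin(), micros() and the bit time are placeholders, and it assumes the decoder is already synchronized to mid-bit edges, e.g. by a preamble):

```c
/* Rough sketch: decode by re-syncing on each mid-bit transition and then
   blanking for 3T/4 so any bit-boundary edge (at T/2) is ignored.
   read_pin() and micros() are hypothetical helpers, BIT_TIME_US is assumed. */

#include <stdint.h>
#include <stdbool.h>

#define BIT_TIME_US  100u                        /* assumed bit period T */
#define BLANK_US     (3u * BIT_TIME_US / 4u)     /* 3T/4 blanking window */

extern bool     read_pin(void);      /* hypothetical: raw input level    */
extern uint32_t micros(void);        /* hypothetical: free-running us    */

/* Block until the input changes, then return the new level.            */
static bool wait_edge(void)
{
    bool last = read_pin();
    while (read_pin() == last) { /* spin */ }
    return !last;
}

/* Decode one bit: the direction of the mid-bit edge gives the data
   (rising = 1 with the convention assumed here), then blank for 3T/4.  */
bool manchester_get_bit(void)
{
    bool level_after_edge = wait_edge();          /* mid-bit transition  */
    uint32_t t0 = micros();
    while ((uint32_t)(micros() - t0) < BLANK_US) { /* skip boundary edge */ }
    return level_after_edge;
}
```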

Another question: Can I implement an oversampling Manchester decoder?

radagast
  • 1,794
  • 3
  • 17
  • 28
Mike George
  • 65
  • 1
  • 1
  • 4

2 Answers

6

No, you don't need a PLL to decode Manchester; that's only one way to do it. In fact, a PLL doesn't decode anything by itself; it only provides a clock with which you can reliably sample the Manchester half-bits. If the bit rate of the Manchester stream can vary, then something like a PLL that can adjust to the incoming frequency may be useful.

I have done several Manchester decoders and none of them used a PLL. The first time I did this, my approach was to measure the time between edges by capturing a timer, and then decode the bit stream from there. That worked fine, but in subsequent projects I used a different scheme that allowed a higher Manchester bit rate relative to the instruction rate. In these projects I simply sampled the incoming stream at regular intervals. The periodic interrupt counts how many successive samples the input is high or low and passes that count to the next level of decoding logic. That level classifies each run as long, short, or invalid, which is then decoded up the protocol chain, usually ending in fully received and validated packets.
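Something along these lines captures the idea (a minimal, untested sketch; the 8x rate, read_sliced_input() and handle_run() are assumptions, not actual project code):

```c
/* Sketch of the oversampling scheme: a periodic tick (8x the bit rate
   here, an assumption) counts how long the input stays at one level,
   then classifies each run as SHORT (half bit), LONG (full bit) or
   INVALID for the next layer of decoding. */

#include <stdint.h>
#include <stdbool.h>

#define SAMPLES_PER_BIT  8u              /* oversampling factor (assumed) */

typedef enum { RUN_SHORT, RUN_LONG, RUN_INVALID } run_t;

extern bool read_sliced_input(void);          /* hypothetical             */
extern void handle_run(run_t run, bool lvl);  /* hypothetical next layer  */

static run_t classify(uint8_t n)
{
    /* Half-bit runs are nominally 4 samples, full-bit runs 8; allow
       +/-1 sample of jitter either way. */
    if (n >= 3u && n <= 5u)  return RUN_SHORT;
    if (n >= 7u && n <= 9u)  return RUN_LONG;
    return RUN_INVALID;
}

/* Called from the periodic sample interrupt. */
void sample_tick(void)
{
    static bool    last  = false;
    static uint8_t count = 0u;

    bool now = read_sliced_input();
    if (now == last && count < 255u) {
        count++;
    } else {
        handle_run(classify(count), last);    /* pass the run up the chain */
        last  = now;
        count = 1u;
    }
}
```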

Since Manchester is usually used because data needs to be transmitted across some analog medium (it's a bit silly to use Manchester between two digital chips on the same board, for example), the raw input signal is often analog. Above I mentioned that I now usually sample the Manchester signal at some multiple (like 8-12) of the expected bit rate. This is usually done with an A/D. By doing this you eliminate the need for analog data slicers.

Digital data slicers can easily be quite a bit better than analog ones of reasonable complexity. All you need externally is a low-pass filter on the signal to prevent aliasing at the fast sample rate. Since the Manchester signal is being sampled around 8-12 times faster than the bit rate, such a filter won't cut into the real signal much at all. Usually two poles of R-C are good enough.

My digital data slicers work by keeping the last two bit times of samples in memory. For example, if the Manchester data is being sampled at 8x the bit rate, this means the last 16 samples are kept in a rolling buffer. The reason for two whole bit times is that this is the minimum time spanning two full successive levels of opposite polarity (imagine a 101010... pattern). The data slicer computes the average of the max and min values in the buffer and uses that as the high/low comparison threshold.
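A minimal sketch of such a slicer (illustrative only; the 16-sample window and adc_read() are assumptions):

```c
/* Sketch of the digital data slicer: keep the last two bit times of A/D
   samples (16 at 8x oversampling, assumed), take (max + min) / 2 of that
   window as the threshold, and compare the newest sample against it.
   adc_read() is a hypothetical 12-bit A/D helper. */

#include <stdint.h>
#include <stdbool.h>

#define SLICER_WINDOW  16u               /* 2 bit times * 8 samples/bit   */

extern uint16_t adc_read(void);          /* hypothetical A/D reading      */

static uint16_t window[SLICER_WINDOW];   /* rolling buffer of samples     */
static uint8_t  head;

/* Feed one new A/D sample; returns the sliced (digital) level.          */
bool slicer_step(void)
{
    uint16_t sample = adc_read();
    window[head] = sample;
    head = (uint8_t)((head + 1u) % SLICER_WINDOW);

    uint16_t lo = window[0], hi = window[0];
    for (uint8_t i = 1u; i < SLICER_WINDOW; i++) {
        if (window[i] < lo) lo = window[i];
        if (window[i] > hi) hi = window[i];
    }
    uint16_t threshold = (uint16_t)((lo + hi) / 2u);  /* (max+min)/2      */
    return sample > threshold;
}
```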

Another trick is to do a little low-pass filtering on the string of A/D samples before data slicing. This is one of the few cases where a box filter is actually a good answer, as opposed to the usual knee-jerk reaction of those who didn't pay attention in signal processing class. The convolution window width is simply the number of samples in a half-bit. Think of the case where the input is a perfectly clean digital signal. This signal will always have levels lasting either 1/2 bit or 1 bit time. The box filter ("moving average" for the knee-jerkers) will turn the edges into ramps lasting 1/2 bit time each. A signal with a sequence of short levels therefore becomes a triangle wave. A sequence of long levels becomes a trapezoid with ramps lasting 1/2 bit time and solid levels between them also 1/2 bit time long. Note that data slicing this signal at the average of its max and min values yields the same resulting stream as doing it on the unfiltered input.
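Such a half-bit box filter is only a few lines; here is an untested sketch (the 4-sample window assumes 8x oversampling), whose output would be fed to the slicer in place of the raw samples:

```c
/* Sketch of the half-bit box filter (moving average): window width equals
   the number of samples in a half-bit (4 at 8x oversampling, assumed).   */

#include <stdint.h>

#define BOX_LEN  4u                      /* samples per half bit           */

/* Returns the running average of the last BOX_LEN samples.               */
uint16_t box_filter(uint16_t sample)
{
    static uint16_t buf[BOX_LEN];        /* last BOX_LEN raw samples       */
    static uint8_t  idx;
    static uint32_t sum;                 /* running sum of the buffer      */

    sum -= buf[idx];                     /* drop the oldest sample         */
    buf[idx] = sample;
    sum += sample;                       /* add the newest sample          */
    idx = (uint8_t)((idx + 1u) % BOX_LEN);

    return (uint16_t)(sum / BOX_LEN);
}
```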

So why filter? Because you get better noise immunity. As described above, a perfect signal isn't affected by this filter. However, a noisy signal is. The effect that random noise added to the input samples has on the 1s-and-0s stream out of the data slicer is smaller with the filtering. I have implemented this algorithm on a dsPIC sampling at 9x the bit rate with a 12-bit A/D fed directly from an analog RF receiver. This system was able to decode valid packets from RF transmissions that I could barely see on a scope looking at the same signal going into the A/D. "Valid" packet means that no Manchester violations were found, the bit stream decoded, and a 20-bit CRC checksum test passed. This stuff really works.

Olin Lathrop
  • 310,974
  • 36
  • 428
  • 915
  • Using a timer to count time between edges is equivalent to counting samples with same value at a fixed rate (except your interrupt rate is that many times less, at the expense of occupying that resource). – apalopohapa Nov 05 '12 at 16:15
  • The ADC part was very interesting. IIRC, some of these devices can trigger after doing a few conversions at a fixed rate. – apalopohapa Nov 05 '12 at 17:19
  • @apalop: There is a significant difference between sampling and capturing edge times. I've done both, and the edge-time code was more complicated and eventually took more cycles even though the other method got more interrupts. Edge timing also assumes an external hardware data slicer. – Olin Lathrop Nov 05 '12 at 18:10
  • Capturing edge times: newbittime(tmr); reset_tmr(). Polling at fixed rate: if(curpin==oldpin) t++, else newbittime(t), t=0. Where is the significant difference? – apalopohapa Nov 05 '12 at 18:33
  • (not really resetting timer unless clock advantage is big, in reality an elapsed time calculation is preferable (and allows timer to be shared), but still the code to read # of cycles with same pin value very similar IMO). – apalopohapa Nov 05 '12 at 18:41
  • 2
    My point is that "time between edges" is the same as "count of samples with same value". The timer is your counter, and is incremented automatically for you, without the need for an interrupt for every count, just when your input changes so you can store the "counts" and reset your "counter". – apalopohapa Nov 05 '12 at 18:55
  • @apalop: Both methods eventually get you the same information, but they each dictate somewhat different firmware architecture and make different demands on the hardware resources. I have written both, and the lowest levels of the firmware are quite different. What's the point of your argument though? You can't quantify how different different is, and what would it matter if you could? – Olin Lathrop Nov 05 '12 at 19:43
2

It depends on your maximum frequency. If you have a system clock frequency that is, say, an order of magnitude faster, there is absolutely no need for a PLL dedicated to decoding pretty much any protocol. Doing this in an FPGA would be easy, and incidentally I have done this to decode linear timecode (frame-rate adaptive).

By the way, you wouldn't have to sample it per se, because all you need is the time between transitions. If decoding it with a microcontroller, hook the signal up to a pin with interrupt-on-change, record the time inside the ISR, and use this information for decoding (so there is no need for polling).
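Something like this rough sketch illustrates the idea (timer_now() and decode_interval() are placeholder names, not a specific vendor API):

```c
/* Sketch of the interrupt-on-change approach: the ISR records the elapsed
   time since the previous edge and hands it to the decoder, so no polling
   loop is needed and the timer never has to be reset. */

#include <stdint.h>

extern uint32_t timer_now(void);               /* free-running timer count */
extern void     decode_interval(uint32_t dt);  /* classify half/full bit   */

static uint32_t last_edge;

/* Pin-change interrupt service routine. */
void pin_change_isr(void)
{
    uint32_t now = timer_now();
    decode_interval(now - last_edge);   /* elapsed time between edges      */
    last_edge = now;
}
```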

apalopohapa
  • 8,419
  • 2
  • 29
  • 39
  • Will the PLL provide any protection against jitter? – Mike George Nov 05 '12 at 08:35
  • @Mike if the data rate is slow with respect to your system clock, jitter and glitches can be compensated for without a PLL. One advantage of Manchester is that you can measure its data clock from the data itself. Just capture enough bit times that can be classified as half-bit/full-bit, and take the average. Then you can set thresholds that will help you ignore glitches and overcome jitter. – apalopohapa Nov 05 '12 at 16:39