You don't want parity. You want more reliable communication.
The reason parity is little used is that it's expensive in terms of data throughput. It starts by making each frame almost 10% longer: start bit + 8 data bits + parity + stop bit = 11 bits instead of 10. But it's far worse than that. If you have a way to tell whether you received the data correctly, you're obliged to do something with that information. Simply ignoring an erroneous byte won't do; the transmitter has to send it again, so it needs to know whether the byte arrived intact. You'll have to send an acknowledge (ACK/NAK) after each byte, and the transmitter can't send the next byte before it has received the ACK.
If you use the ASCII codes for ACK/NAK, that's another 11-bit frame coming back for every byte sent, so the throughput is halved again. We already lost 10%, so we're now down from 80% payload efficiency (8 data bits per 10-bit frame) to about 36% (8 data bits per 22 bits on the wire). And that's the reason why nobody is really fond of parity.
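To make the cost concrete, here's a minimal sketch of the transmitter side of such a per-byte handshake. `uart_putc()` and `uart_getc()` are hypothetical blocking primitives, not any particular driver's API; the UART itself is assumed to be set up for 8 data bits plus even parity.

```c
#include <stdint.h>

#define ASCII_ACK 0x06
#define ASCII_NAK 0x15

extern void    uart_putc(uint8_t c);   /* hypothetical: blocking transmit of one frame */
extern uint8_t uart_getc(void);        /* hypothetical: blocking receive of one frame  */

/* Each data byte costs one 11-bit frame out plus one 11-bit ACK/NAK
 * frame back, which is where the ~36% payload efficiency comes from. */
void send_byte_reliable(uint8_t data)
{
    do {
        uart_putc(data);
    } while (uart_getc() != ASCII_ACK);   /* resend on NAK (or anything else) */
}
```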
Notes:
1. You don't need to acknowledge the receipt of an acknowledge; the Hamming distance between the ASCII codes for ACK and NAK is 3 (4 if you include even parity), so an error in the reception of ACK/NAK can not only be detected, but also corrected (see the sketch after these notes).
2. Many UARTs can work with data lengths down to 5 bits, and it's possible to switch to 5 bits just for sending the ACK, but that's mere window-dressing and it only complicates the communication.
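As a sketch of note 1 (the function names are made up for illustration): because ACK (0x06) and NAK (0x15) differ in 3 bits, a single-bit error in the reply can be corrected simply by picking whichever codeword is nearer to what was received.

```c
#include <stdint.h>

/* Count differing bits between two bytes. */
static int hamming_distance(uint8_t a, uint8_t b)
{
    uint8_t x = a ^ b;
    int d = 0;
    while (x) {
        d += x & 1;
        x >>= 1;
    }
    return d;
}

/* Returns 1 if the received byte is closer to ACK (0x06) than to
 * NAK (0x15), 0 otherwise; a single flipped bit still lands on the
 * right side because the two codes are 3 bits apart. */
int reply_is_ack(uint8_t received)
{
    return hamming_distance(received, 0x06) < hamming_distance(received, 0x15);
}
```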
A better solution is a CRC at the end of each block. CRCs are much better than parity bits at catching multiple errors (though they still can't correct them). The improved efficiency only comes with longer blocks; if a block consists of only 2 bytes there's no point in adding an 8-bit CRC. Another disadvantage is that you still have to acknowledge correct reception, so that's probably not it either.
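For what it's worth, an 8-bit CRC is only a few lines of code. This is a generic bit-by-bit sketch using the polynomial 0x07 (x^8 + x^2 + x + 1) purely as an example; use whatever polynomial both ends agree on.

```c
#include <stdint.h>
#include <stddef.h>

/* Bit-by-bit CRC-8, MSB first, initial value 0x00, polynomial 0x07.
 * Append the result to the block; the receiver recomputes it over the
 * received data and compares. */
uint8_t crc8(const uint8_t *data, size_t len)
{
    uint8_t crc = 0x00;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                               : (uint8_t)(crc << 1);
    }
    return crc;
}
```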
How about self-correcting codes? Hamming codes add little overhead and allow you to correct 1 erroneous bit yourself, so there's no longer a need for acknowledging. Like CRCs, Hamming codes are more efficient on longer blocks; the number of additional bits follows from
\$N + H < 2^H\$
where N = number of data bits and H = number of Hamming bits. So to correct 1 bit in an 8-bit communication you need to add 4 Hamming bits; a fifth Hamming bit is only required from 12 data bits upward. This is the most efficient way of error detection/correction for short messages (a few bytes), though it requires some juggling with your data: the Hamming bits have to be inserted at specific positions between your data bits.
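As a sketch of that juggling (a generic illustration, not tied to any particular UART setup): for 8 data bits the 4 check bits go at the power-of-two positions 1, 2, 4 and 8 of a 12-bit codeword, and the data bits fill the remaining positions.

```c
#include <stdint.h>

/* Hamming(12,8): 8 data bits protected by 4 check bits placed at the
 * power-of-two positions 1, 2, 4, 8 of a 12-bit codeword (1-based). */

uint16_t hamming12_encode(uint8_t data)
{
    uint16_t code = 0;
    int pos = 1;
    for (int i = 0; i < 8; i++) {          /* spread data bits over non-check positions */
        while ((pos & (pos - 1)) == 0)     /* skip positions 1, 2, 4, 8 */
            pos++;
        if (data & (1 << i))
            code |= (uint16_t)(1u << (pos - 1));
        pos++;
    }
    for (int p = 1; p <= 8; p <<= 1) {     /* even parity over each check bit's group */
        int parity = 0;
        for (int j = 1; j <= 12; j++)
            if (j & p)
                parity ^= (code >> (j - 1)) & 1;
        if (parity)
            code |= (uint16_t)(1u << (p - 1));
    }
    return code;
}

uint8_t hamming12_decode(uint16_t code)
{
    int syndrome = 0;
    for (int p = 1; p <= 8; p <<= 1) {     /* recompute the parity checks */
        int parity = 0;
        for (int j = 1; j <= 12; j++)
            if (j & p)
                parity ^= (code >> (j - 1)) & 1;
        if (parity)
            syndrome |= p;
    }
    if (syndrome)                          /* non-zero syndrome = position of the single flipped bit */
        code ^= (uint16_t)(1u << (syndrome - 1));

    uint8_t data = 0;
    int pos = 1;
    for (int i = 0; i < 8; i++) {          /* pick the data bits back out */
        while ((pos & (pos - 1)) == 0)
            pos++;
        if (code & (1u << (pos - 1)))
            data |= (uint8_t)(1 << i);
        pos++;
    }
    return data;
}
```

Note that, like any single-error-correcting Hamming code, this miscorrects when two bits are flipped in the same codeword.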
Now, before you add Hamming error correction codes it's worth looking into your setup. You can expect errors on a 100 m line running between heavy machinery, but you shouldn't have errors on a 2 cm line. If it picks up noise it may be too high impedance. Are the drivers push-pull? If so, they should be able to give you fast edges, unless your "cable" is capacitive, which it won't be at this short distance. Are there high-current traces running parallel to the data lines? They could induce noise. Do you really need this high a speed, and do the clocks on both sides match closely enough? Slowing down to 57600 bits per second may solve the problem.