
I am new to FEC (Forward Error Correction). Pointers highly appreciated.

I am experimenting with software that has a configurable "error protection" level. Results improve as the setting is raised to 40%, but there is still not enough margin. The software uses Reed-Solomon internally.

Experimental data suggests that 'subjective performance' should be good somewhere in the 60% to 200% range. Speed is not an issue; we can afford to add extra bits to raise the transmission success rate.
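
For concreteness, here is a minimal sketch of the kind of parameterization I mean, using the Python `reedsolo` library (my choice purely for illustration; the block size and the overhead values are just the protection levels discussed above):

```python
from reedsolo import RSCodec

data = bytes(range(50))                   # 50 data bytes per block (arbitrary)

for overhead in (0.4, 0.6, 1.0, 2.0):     # 40% .. 200% redundancy
    nsym = int(len(data) * overhead)      # number of parity (check) bytes
    rsc = RSCodec(nsym)                   # RS codec with nsym check symbols
    encoded = rsc.encode(data)
    # An RS code with nsym parity symbols corrects up to nsym // 2
    # unknown byte errors per codeword.
    print(f"{int(overhead * 100):>3}% overhead: {len(data)} data bytes -> "
          f"{len(encoded)} bytes on the wire, corrects {nsym // 2} byte errors")
```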

a) Does a Reed-Solomon code scale well as the number of redundant bits grows large? I believe many FEC schemes use about 30% extra bits, and we need much more. (A quick scaling calculation follows question c below.)

b) Should we try the newer Turbo and low-density parity-check (LDPC) codes? Which one performs better at high redundancy rates (40% to 200% redundant bits added)?

c) Where can I get experimental FEC software with a highly configurable redundancy rate (40% to 200%)?
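
Regarding (a), this is the back-of-the-envelope scaling I have in mind, assuming an idealized RS(n, k) code in which up to t = ⌊(n − k) / 2⌋ symbol errors per codeword are correctable (the k = 100 below is just an arbitrary example size):

```python
# How RS correction capability scales with redundancy (idealized).
# An RS(n, k) code corrects t = (n - k) // 2 symbol errors, so the
# correctable fraction of the transmitted codeword is t / n.
k = 100                                   # data symbols (arbitrary)
for overhead in (0.3, 0.4, 1.0, 2.0):     # 30% .. 200% redundancy
    n = int(k * (1 + overhead))           # total symbols on the wire
    t = (n - k) // 2
    print(f"{int(overhead * 100):>3}% overhead: corrects {t}/{n} "
          f"symbols ({100 * t / n:.0f}% of the codeword)")
```

So the correctable fraction does keep growing with overhead, approaching half the codeword, though whether that is the best use of the extra bits compared to Turbo/LDPC is exactly my question (b).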

The channel cannot be fully modelled, since some of the error sources are human/machine operating tolerances, etc. The evaluation will rely heavily on 'real-life' subjective tests.

  • The channel is 'primitive two-way'. The receiver can send ACK/NAK, and the sender will resend on NAK or NO REPLY. The success rate needs to be fairly high, but not 100%, thanks to the ACK/NAK. – EEd Dec 22 '12 at 13:23
  • Reed-Solomon FEC is most useful when you do not want to, or cannot, resend data, as when a satellite transmits TV to receivers in thousands of homes: it cannot deal with a protocol at all, and certainly not in real time. Signal latency affects protocols and can seriously slow communications, as with MNP. Programs like Kermit send a large block of data, receive an ACK and an error-packet report, then resend the packets that were in error, which greatly speeds transfer rates. Reed-Solomon FEC was created for magnetic tape data, since rewinding and replaying takes time. I like Richman's answer below. – Optionparty Dec 22 '12 at 16:07
  • It is very important to know what kind of errors are expected to occur. Different kinds of errors require different algorithms. – Al Kepp Dec 22 '12 at 16:56
  • possible duplicate of [Determining parity or FEC (Forward Error Correction) requirements from percent error](http://electronics.stackexchange.com/questions/38437/determining-parity-or-fec-forward-error-correction-requirements-from-percent-e) – Dave Tweed Dec 22 '12 at 18:38
  • Traditional analysis uses 'clever' channel-model shortcuts, e.g. wireless/CD-ROM channels have burst errors. In this case we are experimenting: please assume NO KNOWLEDGE of the channel and take no clever shortcuts. The characteristic is a VERY HIGH error rate at random bit positions. We put in a 40% 'protection level' and are still not comfortable in actual tests. Each transmission is 300 bits; we can allocate n bits for data and (300 − n) for check bits. If needed, more check bits than data bits is acceptable. The question is choosing a coding scheme and finding software to implement and test the idea (see the sizing sketch after these comments). – EEd Dec 22 '12 at 19:12
  • At present, the rate of whole packets arriving without error is too low. Resending does not help, as the resent packet will most likely also have errors. We aim to add enough FEC bits to raise the packet success rate to, say, 70% to 90%; then retries will work. We aim to configure the packet transmit time in the range of 0.3 to 1 second. The retry turnaround time is 0.3 to 1 second and is not a major issue. – EEd Dec 22 '12 at 19:22
  • Turbo encoding has the interesting concept of orthogonality, using two codes to increase recoverability instead of asymptotically approaching the ideal by infinitely increasing the code length. It is worth researching. – placeholder Dec 22 '12 at 19:25
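
To make the 300-bit packet sizing in the comments concrete, a rough sketch (all my own assumptions: a shortened Reed-Solomon code over GF(2^6), i.e. 6-bit symbols, so 50 symbols per packet, and arbitrary trial values of k):

```python
# Carving a fixed 300-bit packet into k data symbols and (50 - k)
# parity symbols of a shortened RS code over GF(2^6) (n <= 63 allowed).
BITS_PER_SYMBOL = 6
PACKET_BITS = 300
n = PACKET_BITS // BITS_PER_SYMBOL        # 50 symbols on the wire

for k in (36, 25, 16):                    # trial data-symbol counts
    t = (n - k) // 2                      # correctable symbol errors
    data_bits = k * BITS_PER_SYMBOL
    overhead = (PACKET_BITS - data_bits) / data_bits
    print(f"k={k}: {data_bits} data bits, {overhead:.0%} overhead, "
          f"corrects {t}/{n} symbols")

# With ACK/NAK retries, a per-packet success rate p costs about 1/p
# transmissions on average, e.g. p = 0.8 -> 1.25 sends per packet.
```

Note that with random bit errors each flipped bit may land in a different 6-bit symbol, so the t symbol-error budget is the pessimistic per-packet bound.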

0 Answers