0

I need to design a flight controller for a UAV. I searched the internet and found that FPGAs are mostly used for communication (encryption, decryption, etc.) and feature extraction. It seems to me that FPGAs with hard ARM cores, like the Zynq series, could be a very nice fit. The problem is I see nobody is using it. Is there any specific disadvantage to it?

Alireza
  • 1
    The average flight control algorithms don't seem to be complex enough to warrant the added cost of an FPGA – PlasmaHH Sep 04 '17 at 09:57
  • 1
    In general, the algorithms required for flight control run at low sample rates (100s or 1000s of Hz), and any reasonably high-powered MCU or DSP chip is capable of running them without hardware acceleration. An FPGA might be appropriate if machine vision (i.e., HD video at a sample rate of 148.5 MHz) is being incorporated into the flight controls. – Dave Tweed Sep 04 '17 at 10:55
  • 2
    To use an FPGA for aerospace/defense applications you need a device which is "certified" for this use case. Those are quite pricey. – andrsmllr Sep 04 '17 at 11:14
  • 3
    Refer to: https://electronics.stackexchange.com/questions/97277/when-can-fpgas-be-used-and-microcontrollers-dsps-not/97307#97307 ; it's more a question of "given that it's more expensive and harder to develop for, what advantages would it have?" – pjc50 Sep 04 '17 at 12:35

3 Answers

3

An FPGA is a set of programmable logic gates configured to excel at one thing, and one thing only.
A CPU is a set of fixed logic gates made to be used for a wide variety of purposes.

If you decide to use an FPGA for something, you had better be using it often; otherwise you're carrying idle logic. This is why a CPU is often more cost-effective:
when you're not doing thing A, it can do thing B. With an FPGA, the hardware for thing A is useless while you're doing thing B. On the other hand, the FPGA can do thing A and thing B at the same time!

There are certainly use cases for an FPGA on a flight controller: performing checksums on the data interfaces, digital filters, and signal processing. These are small tasks that can be offloaded to the FPGA, since they are expensive on the CPU.
Yet flying is still a relatively slow process with little data, so CPUs can keep up; compare that to high-speed video processing.

Also, development for an FPGA is more expensive. The parts are pricey, their high pin counts make the boards expensive (more than 2 layers), and the design software is expensive. Overall, development takes more time, and time is money.

Jeroen3
  • 2
    With due respect, you need to reconsider your characterization of FPGAs. Various microprocessor cores are available for FPGAs, and folding one or more cores into a design is a widely-used option. I've seen designs which used 3 micro cores set up for majority voting for radiation tolerance. – WhatRoughBeast Sep 04 '17 at 13:52
  • @WhatRoughBeast That is a valid statement. An FPGA could very well be programmed as the logic for a CPU. But it's not ideal. – Jeroen3 Sep 04 '17 at 13:56
  • There are FPGAs which contain hard core(s), such as 32-bit ARM, which is more efficient (silicon usage, power and speed) than programming a soft core. OP mentions Zynq which has this feature. – Spehro Pefhany Sep 04 '17 at 14:04
  • @Jeroen3 - Depends on what you mean by ideal. A triple-redundant CPU, for instance, is an ideal use of an FPGA, in the sense that you just can't do it any other (economically feasible) way. – WhatRoughBeast Sep 04 '17 at 14:22
1

This question is basically about the nature of compilation, hardware vs. software, and the speed of light (not kidding!).

Compilation (as in, what gcc does) is exploiting constants to improve the performance of your program.

  • The CPU type is a constant, so the program can be compiled to machine code instead of being interpreted on a virtual machine.
  • Variable types are constant, so the proper instructions can be used (integer or float, etc.).
  • Some loops have constant lengths, so they can be unrolled.
  • Functions can be inlined, like memcpy(), in which case, if the length is constant, it becomes a simple MOV. Or, if an inlined function contains an if() whose condition is constant, the branch can be eliminated.
  • etc.
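
As a small illustration (my own C sketch, not from the original answer; read_u32 is a made-up name): because the copy length below is a compile-time constant, an optimizing compiler will typically reduce the memcpy() to a single load and store, i.e. effectively a MOV, with no function call at all.

#include <string.h>
#include <stdint.h>

/* The copy length is a compile-time constant (4 bytes), so an
   optimizing compiler emits one 32-bit load/store instead of a
   library call. */
uint32_t read_u32(const uint8_t *p)
{
    uint32_t v;
    memcpy(&v, p, sizeof v);
    return v;
}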

However, cpus are designed to be flexible, to be good at anything. This means a generic instruction set. Thus, the amount of constants we can use to remove unnecessary operations and speed things up is limited.

For example, if you need some kind of CRC/checksum, which comes down to simple bit-twiddling, rotates, etc., then you can design a specialized logic circuit that will do only that, and very fast.
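
For instance, here is a plain bitwise CRC-8 in C (a sketch of my own; the polynomial 0x07 is assumed for illustration). A CPU must grind through these shifts and XORs sequentially, bit by bit, while an FPGA can flatten the whole inner loop into one block of combinational logic and digest a byte per clock:

#include <stdint.h>
#include <stddef.h>

/* Bitwise CRC-8, polynomial 0x07 (assumed for illustration). */
uint8_t crc8(const uint8_t *data, size_t len)
{
    uint8_t crc = 0;
    while (len--) {
        crc ^= *data++;
        for (int i = 0; i < 8; i++)   /* one bit at a time on a CPU */
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                               : (uint8_t)(crc << 1);
    }
    return crc;
}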

The reason for this is the speed of light (i.e., causality).

When a CPU executes an instruction stream, each instruction that will require the result of a previous instruction must wait for it to finish.

  • c = a + b
  • d = c + e

The second one can't run before the value of "c" is known. Another example would be convolution:

int32_t sum = 0;
for (int i = 0; i < 16; i++)
    sum += input[i + shift] * impulse[i];

The cpu must:

1. compute the array index [i+shift]
2. fetch input[i+shift] from memory
3. fetch impulse[i] from memory
4. multiply
5. add
6. increment i
7. test for the bounds on i
8. loop again or exit

Some of these can be parallelized on a superscalar cpu, but not all.

On an FPGA, you would have:

  • A counter for i, which is instantiated and will increment on each cycle.
  • An adder for [i+shift]
  • Two memory banks (one for the impulse response and one for the signal), each able to read one word per clock
  • A hard MAC unit, able to do one MAC per clock.

Everything runs in parallel, because there is a bit of hardware dedicated to each task. Also, this would be pipelined, with registers.

  • counter increments and feeds memories with new value
  • memory needs 1 cycle to respond, so it is currently outputting the values from cycle (i-1) into the MAC unit
  • MAC unit takes 1 cycle, so it is outputting the result of cycle (i-2)

To respect causality and instruction dependencies, the CPU needs a complex instruction scheduler. On the FPGA side, this is handled by pipelining: each small unit processes one piece of information per clock, then passes it down the chain to the next unit. We have used the fact that the program is a constant to compile even further, all the way down to dedicated hardware.
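
To make that concrete, here is a toy C model of the pipeline above (my own sketch, not part of the original answer; TAPS, SHIFT and the register names are invented for illustration). Updating the registers in reverse stage order mimics them all being clocked at the same instant:

#include <stdint.h>

#define TAPS  16
#define SHIFT 4                       /* hypothetical shift value */

static int32_t input[TAPS + SHIFT];   /* signal memory bank */
static int32_t impulse[TAPS];         /* impulse-response memory bank */

static int     idx_q;                 /* stage 1 register: the counter */
static int32_t in_q, imp_q;           /* stage 2 registers: memory outputs */
static int64_t acc;                   /* stage 3 register: MAC accumulator */

/* One clock edge. Each stage consumes what the previous stage
   produced on the LAST cycle, just like hardware registers. */
static void clock_tick(void)
{
    acc += (int64_t)in_q * imp_q;     /* stage 3: MAC */
    if (idx_q < TAPS) {
        in_q  = input[idx_q + SHIFT]; /* stage 2: memory reads */
        imp_q = impulse[idx_q];
        idx_q++;                      /* stage 1: counter */
    } else {
        in_q = imp_q = 0;             /* pipeline drains */
    }
}
/* After TAPS + 1 calls, acc holds the full convolution sum. */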

Of course, this is much more complicated to program: on the CPU side you just write a for loop, whereas on the FPGA side you have to think about signal paths, pipeline delays, etc. So although FPGAs offer truly insane performance, if you can use anything else, like a GPU, that will usually be the better choice in practice.

For example, the latest Xilinx flagship has:

  • over 10,000 MAC units, for more than 20,000 integer GMAC/s in total, which is on par with a top-of-the-line GPU, give or take a 50% margin
  • however, its cost is ludicrous, whereas a GPU costs comparatively little
  • but the FPGA has about 2500 RAM blocks running at... I didn't check, but let's say 300 MHz... that's 1.7 terabytes per second of memory bandwidth (actually double that, since the RAMs are dual-port), while a modern desktop PC has something like 5 GB/s
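
(A sanity check on that figure, assuming 18-bit ports: 2500 blocks × 300 MHz × 2.25 bytes/word ≈ 1.7 × 10^12 bytes/s, i.e. about 1.7 TB/s, before counting the second port.)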

In the end, the reason FPGAs excel at cutting-edge parallel processing is the truly insane bandwidth: everything is connected together with actual wires, and every processing unit that needs data can have its own local memory instead of one memory shared by everything, as in a PC. They also use much less power than a GPU/CPU.

Applications for these are strictly in the "insane" realm, like acquiring multichannel RF signals from very fast ADCs and doing DSP on them (synthetic aperture radar, cellphone base stations), or ludicrous speeds (100GbE routing; the chip has several such ports, plus several dozen 33 Gbps transceivers, etc.). So you don't hear about them often, but you talk to one every time you use your mobile phone, and your internet packets probably run through a few as well.

On the opposite end of the scale, you have much smaller and very cheap FPGAs with little DSP capability, but they are very good at I/O, because they're hardware.

For example, if you want to do protocol translation between various serial interfaces at up to 100-200 Mbps, or instantiate 50 UARTs, or pack a ton of glue logic into one chip to shrink your board while using little power, then those would be good candidates. The same goes for CPLDs.

In a project I'm looking at, I plan to use one of those to grab a bunch of parallel signals, serialize them, send them through cheap optical fiber, and deserialize them at the other end, with error correction. The signals are too fast for a micro, but fine for a cheap flash FPGA.

Now, back to your question: yes, you could do it in an FPGA, but since you need neither the extra processing power nor the energy efficiency, it would not be a cost-effective solution at all. A modern micro will do the job just fine.

Basically, the only reason to use an FPGA in your drone would be if it were a missile with onboard radar/IR that needs processing and pattern recognition done fast.

bobflux
-2

Software engineers tend to think of FPGAs as just faster processors, but that is a very narrow view of the advantages of using an FPGA.

Processing speed is secondary to reliability in safety cases where highly deterministic behaviour is required.

FPGAs are inherently parallel and deterministic (when designed correctly) and do not suffer degradation of service with increased load; processors do, and must be specifically designed to cope with the maximum load.

FPGAs are designed with a hardware description language (HDL) that is not tied to a specific manufacturer, so designs are not device-specific and are therefore not susceptible to becoming obsolete.

A processor becomes obsolete within months or years, and its code will need to be rewritten or heavily modified to run on an alternative device, whereas an FPGA's HDL is universally (in software terms) portable to other FPGAs or to an ASIC.

Military projects can take decades, and device obsolescence is a significant enough hurdle to warrant a strategy where obsolescence is not a factor.

The problem is I see nobody is using it.

There's a difference between:

  • not seeing FPGAs being used
  • FPGAs not being used
sirnails
  • 4
    This does not answer the question, it's just advertisement for FPGA. A CPU is obviously also deterministic, and your HDL design _will_ be tied into a specific manufacturer when you start using the specific IP blocks for that target - exactly in the same way as your portable C code is not tied into a manufacturer until you start to use their custom integrated blocks. – pipe Sep 04 '17 at 11:41
  • 2
    `Software engineers think that FPGAs are just faster processors` - This software engineer does not agree with your characterization of him ;) Please refrain from such generalizations in the future. Much said in this answer is [not even wrong](https://en.wikipedia.org/wiki/Not_even_wrong) ... – Morten Jensen Sep 04 '17 at 14:33