14

Almost everyone will now say the blessing:

performance!

Okay, C does allow you to write fast, lean code. But there are other languages that can do so, after all! And the optimising power of modern compilers is impressive. Does C have some advantage that no other language has? Or is there simply no need for more flexible instruments in the domain?

ChrisF
vines
  • FWIW, Arduino can be controlled with C#: http://www.arduino.cc/playground/Interfacing/Csharp – FrustratedWithFormsDesigner Jun 16 '11 at 15:58
  • @Frustrated: Yes, but that is one example, and most people building devices are not using Arduino. – Ed S. Jun 16 '11 at 16:00
  • Related: http://stackoverflow.com/questions/1601893/why-are-c-c-and-lisp-so-prevalent-in-embedded-devices-and-robots – Steve S Jun 16 '11 at 21:34
  • See also: http://stackoverflow.com/questions/812717/is-there-any-reason-to-use-c-instead-of-c-for-embedded-development/815197#815197 – Steve Melnikoff Jun 17 '11 at 09:57
  • Also related: http://stackoverflow.com/questions/1223710/we-have-to-use-c-for-performance-reasons – Steve Melnikoff Jun 17 '11 at 09:58
  • Don't underestimate the power of inertia and sloth. There were a LOT of "programmers" who screamed bloody murder about the mandatory strong type checking in PASCAL, then ate their broccoli and discovered it tasted pretty good in C++. (There are also some interesting anecdotes about guys forced to use Ada who previously screamed bloody murder about "bondage and discipline languages", who stopped screaming when they realized that the compiler was finding BUGS that would have eaten them alive during testing.) – John R. Strohm Jul 17 '14 at 20:42
  • @JohnR.Strohm: The problem is that Pascal's type checking was *too* strong. For example, arrays of different sizes are completely different and incompatible types. – dan04 Jul 18 '14 at 00:09
  • @dan04, in the vast majority of cases, that wasn't actually a problem. A 6DOF simulation group at Texas Instruments Defense Systems and Electronics Group did a little experiment in about 1988. Up until then, they'd done all their simulations in FORTRAN. They tried writing one in PASCAL, to see how bad it would hurt. They discovered that PASCAL gave them a small performance hit, but the increase in reliability and ease of debugging MORE than made up for it. Bluntly, they found that PASCAL's strong type checking was a GOOD thing. (And yes, they were doing arrays.) – John R. Strohm Jul 18 '14 at 11:38

8 Answers

41

Almost everyone will now say the blessing:

performance!

That's part of it; deterministic resource use is important on devices with limited resources to begin with, but there are other reasons.

  1. Direct access to low-level hardware APIs.
  2. You can find a C compiler for the vast majority of these devices. That is not true of any high-level language in my experience.
  3. C (the runtime and your generated executable) is "small". You don't have to load a bunch of stuff into the system to get the code running.
  4. The hardware API/driver(s) will likely be written in C or C++.
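Point 1 is concrete: in C, a peripheral register is just a volatile access to a fixed address. A minimal runnable sketch, with a plain variable standing in for the real memory-mapped register (all names, and the address in the comment, are hypothetical):

```c
#include <stdint.h>

/* On real hardware, GPIO_ODR would be a fixed memory-mapped address
 * from the chip's datasheet, e.g. (*(volatile uint32_t *)0x40020014).
 * Here a plain variable stands in so the sketch runs anywhere. */
static uint32_t fake_odr;                           /* stand-in register */
#define GPIO_ODR (*(volatile uint32_t *)&fake_odr)  /* register "window" */

void led_on(unsigned pin)  { GPIO_ODR |=  (1u << pin); }  /* set bit   */
void led_off(unsigned pin) { GPIO_ODR &= ~(1u << pin); }  /* clear bit */
```

The `volatile` qualifier is what makes this work: it tells the compiler every read and write must actually happen, which is exactly the contract hardware registers need.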
Ed S.
  • +1 Availability of compilers. Back when we were writing everything in assembly, the first gen compilers were a god-send. – Christopher Bibbs Jun 16 '11 at 16:18
  • +1, I think deterministic resource use is among the most important reasons. You don't have lots of memory to do all kind of fancy garbage collection in a dishwasher. – user281377 Jun 16 '11 at 16:31
  • +1 also for "deterministic resource use". On many embedded systems, this requirement even precludes the use of dynamic memory allocation. Many other languages rely heavily on dynamic memory allocation (even many beneficial aspects of C++ need dynamic memory). – Michael Burr Jun 16 '11 at 20:53
  • I'd add one more bullet point, which turns out to be a social rather than technical reason - I think that embedded software developers tend to be much more conservative and resistant to change than other developers. Depending on your point of view, this could be a good or a bad thing. – Michael Burr Jun 16 '11 at 20:55
  • Speaking as a systems guy, I am leery of large abstractions. They're great, until they stop working or do something funny, in which case it can be a huge headache to debug. That's not something I want in a low level system. – Ed S. Jun 16 '11 at 21:12
  • Correctness is always #1 on the list of requirements for a production program. Performance is #2. Far too often, programmers are not only willing but downright foaming-at-the-mouth eager to abandon enforcement of correctness in the name of performance. – John R. Strohm Jul 21 '14 at 05:27
  • @JohnR.Strohm: Well, I don't know if it's that they're willing to *abandon* correctness altogether. I think it's more a case of making the code harder to understand via "optimization" and just getting it wrong. The thing is, you don't always have a ton of options in the systems world, and most people in systems know C (and, thus, believe they can use it correctly), so C is an obvious choice. – Ed S. Jul 21 '14 at 23:17
  • You forgot "The operating system will most likely be written in C and will expose only a C API". – James Anderson Aug 20 '14 at 01:21
  • @JamesAnderson: Many languages can interop with C. – Ed S. Aug 20 '14 at 04:55
18

C was designed to model a CPU, because it was created to make Unix portable across platforms instead of rewriting it in assembly language for each one.

This means that C works well as a programming language for programs that need an abstraction level very close to the actual CPU, which is the case for embedded hardware.

Note: C was designed around 1970, and CPUs were simpler then.
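A small illustration of how closely C constructs track machine operations (a hypothetical strlen-style routine, not from the answer):

```c
#include <stddef.h>

/* C's core operations map almost one-to-one onto machine instructions,
 * which is what "modeling the CPU" means in practice. This loop compiles
 * to little more than load/test/increment -- on the PDP-11, *p++ even
 * corresponded to a single addressing mode. */
size_t str_len(const char *s) {
    const char *p = s;
    while (*p) p++;            /* load byte, test for zero, bump pointer */
    return (size_t)(p - s);    /* pointer difference = address arithmetic */
}
```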

  • Hmm. But it's a general-purpose language still... And then again! You said it: modern processors are complex. So, a **modern** language could provide proper constructions for the low-level things such as threading, caching, etc.. – vines Jun 16 '11 at 16:18
  • +1: This is definitely *the* reason. Maybe people have tried to design newer high-level languages that capture features of modern processors, but nobody's designed a language for that that's caught on. – Ken Bloom Jun 16 '11 at 16:20
  • @vines, C is a small language with a large runtime library. All this can be done in the runtime library. It just won't migrate automatically into the standard C library so it is platform specific. –  Jun 16 '11 at 16:22
  • +1. C was created for, and initially used on a PDP-7, which had a maximum of 64kilowords of 18-bit words. Many more "modern" languages have more difficulty fitting in that kind of space. *Especially* for writing an OS like Unix. – greyfade Jun 16 '11 at 16:30
  • @greyfade: not so. UNIX originated on the PDP-7, but C did not. To quote from the preface to *The C Programming Language*: "C was originally designed for and implemented on the UNIX operating system on the DEC PDP-11, by Dennis Ritchie." – Jerry Coffin Jun 16 '11 at 17:11
  • @vines: while it's certainly reasonable to contemplate direct support for threading in the language (c.f., Concurrent C), much of the point of a cache is that it makes things faster *without* any intervention on the part of the programmer or language. – Jerry Coffin Jun 16 '11 at 17:13
  • @Jerry Coffin: My mistake. But my point stands: Even the PDP-11 has very, very little memory and a very slow CPU. – greyfade Jun 16 '11 at 17:58
  • @greyfade: That is definitely true. The PDP-11 remained in production a *long* time though, and over time it got a lot faster (but still *slow* by modern standards -- clock speeds got up into the tens of MHz, but not even close to GHz). – Jerry Coffin Jun 16 '11 at 18:16
  • Back in the day, I wrote a lot of software for the PDP-11, but not in C. Instead, we used Pascal. I didn't use C until I started developing for the 8051 microcontroller. A PDP-11 had 64K of RAM, but an 8051 had exactly 128 bytes (not kBytes). BTW, a PDP-11 ran at 16 MHz. You could overclock it by unsoldering the 16 MHz crystal, and replacing it with an 18 or 19 MHz one. However, when you did this, you ran the risk of burning up the CPU. – mkClark Jun 16 '11 at 20:21
  • @KenBloom what would these newer language provide that C could not reasonably be made to provide? –  Jan 06 '12 at 00:45
  • @ThorbjørnRavnAndersen: I think that some features of modern processors that don't map well into C include atomic access to memory, memory barriers, and SIMD instructions. Yes, C has libraries to handle the first two and there are both libraries and the optimizer for SIMD instructions, but if C were being designed today, I suspect many of these things would be first class language features, rather than the bolt-ons that they are today. – Ken Bloom Jan 15 '12 at 14:59
11

One reason for the domination is that it has the right kind of tools for the task. Having developed for embedded platforms in both Java and C/C++, I can tell you that the bare-bones approach of C and C++ is just more natural. A language that is too high-level makes the developer feel like he or she is jumping through hoops, which is quite annoying. One good example is the absence of unsigned variables in Java.

And the handy features of VM-based/interpreted languages are usually not feasible and are left out of the implementation, e.g. garbage collection.
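To illustrate the unsigned-types point above, two hypothetical helpers that rely on C's unsigned arithmetic behaving exactly like the underlying hardware (no sign extension on shifts); Java needs masking workarounds for the same operations:

```c
#include <stdint.h>

/* Shifts and masks on unsigned types behave exactly like the hardware:
 * bits shifted off the top are simply discarded, never sign-extended. */
uint8_t  high_nibble(uint8_t b) { return (uint8_t)(b >> 4); }
uint16_t swap_bytes(uint16_t v) { return (uint16_t)((v << 8) | (v >> 8)); }
```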

celebdor
  • "both Java and C/C++" - I hope you meant "all three : Java C and C++", since C and C++ are different languages. – BЈовић Aug 20 '14 at 11:53
  • @BЈовић: Replying years later to confirm that yes, I meant all three. I was using "both" following this definition: "used as a function word to indicate and stress the inclusion of each of two or more things" (two or more things) :-) – celebdor Mar 10 '16 at 23:20
10

C requires very little runtime support in and of itself, so the overhead is much lower. You're not spending memory or storage on runtime support, spending time / effort to minimize that support, or having to allow for it in the design of your project.

geekosaur
  • Does it really never happen that you *need* that functionality and reinvent it yourself? For example, large state machines built with `switch`es are horrible, and the same machines built with class hierarchies are nice and maintainable. – vines Jun 16 '11 at 16:06
  • @vines - you normally have a defined set of inputs, state machines built on switch/if ladders are clearer and more documentable than a hierarchy of magic 'behind the scenes' polymorphic calls. – Martin Beckett Jun 16 '11 at 16:13
  • @vines: State machines built with graphical utilities from which the actual state machine code is generated; even better. I agree with Martin here; OO state machines can be a major pain. Large switch statements aren't a maintenance problem if they can be autogenerated. – Ed S. Jun 16 '11 at 16:20
  • @Martin: to someone with a little experience in OO development, polymorphic calls are neither "magic" nor "behind the scenes", and the notion that huge switch/if statements are clearer and more documentable seems outright bizarre. – Michael Borgwardt Jun 16 '11 at 16:21
  • @Martin Beckett: Sorry, can't agree. At the very least it's a matter of taste... I've once seen a three-screens-spanning USB stack switch written in a not too pedantic style... Apart from being error-prone, it's not that maintainable.. – vines Jun 16 '11 at 16:23
  • Find what happens when pin27 goes high. Option 1, search for "case PIN27:" Option 2, trace an iterator over a map of functors to discover which one would be called for a PIN object that is assigned to pin 27 at runtime. The issue with a lot of OO code is that the only way to read it is to essentially run it. On a platform with no run time debugging or even a console that means tracing the code on paper or in your head. – Martin Beckett Jun 16 '11 at 16:45
  • Slight tangent to this discussion, there's a reason ladder logic (an even more primitive version of `switch`, you could say) is still used in many embedded applications. Easier to debug, easier to verify. – geekosaur Jun 16 '11 at 17:02
  • @Martin Beckett: if I have the pins numbered I'd like to have an array of them rather than a switch anyway. I understand your point, but it seems to me that for the case you describe I wouldn't want a class hierarchy too :) – vines Jun 16 '11 at 18:55
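A minimal sketch of the switch-based state-machine style debated in these comments (states, events, and transitions are all invented for illustration). Every transition is explicit and greppable: "what happens on EV_STOP?" is a plain text search.

```c
/* Hypothetical three-state machine: states and events are enums,
 * transitions live in one switch with no runtime dispatch. */
typedef enum { ST_IDLE, ST_RUNNING, ST_DONE } state_t;
typedef enum { EV_START, EV_STOP, EV_FINISH } event_t;

state_t step(state_t s, event_t e) {
    switch (s) {
    case ST_IDLE:    return (e == EV_START) ? ST_RUNNING : s;
    case ST_RUNNING: if (e == EV_STOP)   return ST_IDLE;
                     if (e == EV_FINISH) return ST_DONE;
                     return s;
    case ST_DONE:    return s;           /* terminal state */
    }
    return s;                            /* unreachable */
}
```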
9

As mentioned in other answers, C was developed in the early 1970s to replace assembly language on a minicomputer architecture. Back then, these computers typically cost tens of thousands of dollars, including memory and peripherals.

Nowadays, you can get the same or greater computer power with a 16-bit embedded microcontroller that costs four dollars or less in single quantities -- including built-in RAM and I/O controllers. A 32-bit microcontroller costs maybe a dollar or two more.

When I am programming these little guys, which is what I do 90% of the time when I am not designing the boards they sit on, I like to visualize what the processor is going to be doing. If I could program fast enough in assembler, I would do so.

I don't want all sorts of layers of abstraction. I often debug by stepping through a disassembler listing on the screen. It's a lot easier to do that when you've written the program in C to begin with.

tcrosley
  • For some embedded applications, that "dollar or two more" is very significant. Nobody's going to notice the price impact on their car, but they will on their thermostat or CD player. – David Thornley Jun 16 '11 at 21:53
  • @David Thornley, yep, agree completely, that's why I've currently got projects going with 8, 16, and 32-bit micros all at the same time for different clients. (Power consumption is another reason for going with the smaller devices.) – tcrosley Jun 17 '11 at 03:13
  • The price is determined less by the processor cost than the pin count. Boards are a lot more expensive than chips. – Yttrill Dec 13 '11 at 00:52
7

It doesn't entirely dominate, as C++ is increasingly being used now that compilers have improved and hardware performance has increased. However, C is still very popular, for a few reasons:

  1. Wide support. Pretty much every chip vendor provides a C compiler, and any example code and drivers will likely be written in C. C++ compilers are increasingly common, but not a dead cert for a given chip, and they are often buggier. You also know that any embedded engineer will be able to work in C. It's the lingua franca of the industry.

  2. Performance. Yup, you said it. Performance is still king, and in an environment where core routines are still often written in assembler, or at least optimised in C with reference to the assembly output, never underestimate the importance of this. Often embedded targets will be very low cost and have very small memories and few MIPS.

  3. Size. C++ tends to be larger. Certainly anything using the STL will be larger. Generally both in terms of program size and in memory footprint.

  4. Conservatism. It's a very conservative industry. Partly because the costs of failure are often higher and debugging is often less accessible, partly because it hasn't needed to change. For a small embedded project, C does the job well.

Luke Graham
  • See, it's the #3 one that seems to be one of the most prevalent myths around about C++. Write a type-safe container for 5 distinct types in C and you'll have caused at least as much "bloat" as using a single STL container on 5 distinct types. C programmers get around this by writing containers on opaque types (void*). Comparing THAT to an STL template is a category error. Have to admit though that it is indeed one of the most common "reasons" to prefer C. – Edward Strange Jun 16 '11 at 16:28
  • I entirely agree that to replicate the full functionality in C you end up with the same footprint as C++. The 'advantage' of C is that it allows you to be selective. On any significant project I would prefer to use C++, but sometimes target hardware is constrained to a point where that's not practical. Having said that, #1 is really the main reason in my experience. – Luke Graham Jun 17 '11 at 08:03
6

Embedded software is very different.

On a desktop app, abstractions and libraries save you a lot of development time. You have the luxury of throwing another couple megabytes or gigabytes of RAM or some 2+GHz 64-bit CPU cores at a problem, and someone else (users) is paying for that hardware. You may not know what systems the app will run on.

In an embedded project, resources are often very limited. In one project I worked on (PIC 17X-series processors) the hardware had 2Kwords of program memory, 8 levels of (in-hardware) stack and 192 bytes (< 0.2kB) of RAM. Different I/O pins had different capabilities and you configured the hardware as needed by writing to hardware registers. Debugging involves an oscilloscope and logic-analyzer.
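A hedged sketch of the coding style that 192 bytes of RAM forces: no heap, no malloc(), every buffer a fixed global sized by hand (all names and sizes here are invented for illustration):

```c
#include <stdint.h>

/* With under 200 bytes of RAM, everything is allocated statically up
 * front, so the memory map is fully known at build time. */
static uint8_t rx_buffer[16];    /* UART receive buffer    */
static uint8_t tx_buffer[16];    /* UART transmit buffer   */
static uint8_t adc_samples[8];   /* recent ADC readings    */
static uint8_t status_flags;     /* eight one-bit flags    */

/* The RAM budget is auditable at compile time: */
#define RAM_USED (sizeof rx_buffer + sizeof tx_buffer + \
                  sizeof adc_samples + sizeof status_flags)
```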

In embedded, abstractions often get in the way and would manage (and cost) resources you don't have. E.g. most embedded systems have no file system. Microwave ovens are embedded systems. Car engine controllers. Some electric toothbrushes. Some noise-cancelling headphones.

One very important factor for me in developing embedded systems is knowing and controlling what the code translates to in terms of instructions, resources, memory and execution time. Often the exact sequence of instructions controls e.g. timing for hardware interface waveforms.

Abstractions and behind-the-scenes 'magic' (e.g. a garbage collector) are great for desktop apps. Garbage collectors save you a LOT of time chasing down memory leaks, when memory is / can be dynamically allocated.

However in the real-time embedded world we need to know and control how long things take, sometimes down to nanoseconds, and can't throw another couple meg of RAM or a faster CPU at a problem. One simple example: when doing software dimming of LEDs by controlling duty cycle (the CPU had only on/off control of the LEDs), it is NOT OK for the processor to go off and do e.g. garbage collection for 100ms because the display would visibly flash bright or go out.
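The dimming scheme described above can be modeled in a few lines (a simplified sketch; the function name is invented, and real firmware would drive a port pin from a timed loop or interrupt):

```c
#include <stdint.h>

/* The LED is strictly on/off, so perceived brightness is the fraction
 * of ticks it spends on. With an 8-bit counter, duty = 64 out of 256
 * ticks gives 25% brightness. */
int led_state_at(uint8_t tick, uint8_t duty) {
    return tick < duty;          /* on for the first `duty` ticks */
}
/* A 100 ms GC pause would freeze this mid-cycle, leaving the LED stuck
 * fully on or off -- exactly the visible flash described above. */
```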

A more hypothetical example is an engine controller that directly fires spark plugs. If that CPU goes off and does garbage collection for 50ms, the engine would cut out for a moment or fire at the wrong crankshaft position, potentially stalling the engine (while passing?) or damaging it mechanically. You could get someone killed.

Technophile
  • That's just as true as it is irrelevant to C -- the problem you refer to is only the GC's behaviour... C++ doesn't have any GCs, and know what? I personally use it because of the _line-comments_ and stricter type safety =) – vines Sep 08 '15 at 19:46
6

For embedded systems, the big thing is performance. But like you said, why C and not some other performant language?

Many people so far have mentioned availability of compilers, but no one has mentioned availability of developers. A lot more developers already know C than, say, OCaml.

Those are the three biggies.