18

I'm not new to programming and I've even worked with some low level C and ASM on AVR, but I really can't get my head around a larger-scale embedded C project.

Having been spoiled by Ruby's philosophy of TDD/BDD, I'm unable to understand how people write and test code like this. I'm not saying it's bad code; I just don't understand how this can work.

I wanted to get more into some low level programming, but I really have no idea how to approach this, since it looks like a completely different mindset from the one I'm used to. I don't have trouble understanding pointer arithmetic or how memory allocation works, but when I see how complex C/C++ code looks compared to Ruby, it just seems impossibly hard.

Since I already ordered myself an Arduino board, I'd love to get more into some low level C and really understand how to do things properly, but it seems like none of the rules of high level languages apply.

Is it even possible to do TDD on embedded devices, or when developing drivers or things like a custom bootloader, etc.?

Jakub Arnold
  • Hi Darth, we really can't help you get over your fear of C, but the question about TDD on embedded devices is on-topic here: I've revised your question to feature that instead. – Nov 12 '11 at 00:15

8 Answers

18

First off, you should know that trying to understand code you didn't write is 5x harder than writing it yourself. You can learn C by reading production code, but it's going to take a lot longer than learning by doing.

Having been spoiled by Ruby's philosophy of TDD/BDD, I'm unable to understand how people write and test code like this. I'm not saying it's bad code; I just don't understand how this can work.

It's a skill; you get better at it. Most C programmers don't understand how people use Ruby, but that doesn't mean they can't.

Is it even possible to do TDD on embedded devices, or when developing drivers or things like a custom bootloader, etc.?

Well, there are books on the subject:

*Test Driven Development for Embedded C* by James W. Grenning. If a bumblebee can do it, you can too!

Keep in mind that applying practices from other languages usually doesn't work. TDD is pretty universal though.

Joey Adams
Pubby
  • Every TDD setup I have seen for my embedded systems only found easy-to-resolve errors that I would have found on my own. It never would have found what I actually need help with: the time-dependent interactions with other chips and interrupt interactions. – Kortuk Nov 12 '11 at 00:32
  • This depends on what kind of system you're working on. I've found that using TDD to test the software, coupled with good hardware abstraction, actually allows me to mock up those time-dependent interactions much more easily. The other benefit that people often overlook is that the tests, being automated, can be run at any time and don't require someone to sit on the device with a logic analyzer to make sure the software works. TDD has saved me weeks of debugging in my current project alone. Often, it's the errors we think are easy to spot that cause the failures we don't expect. – Nick Pascucci Nov 12 '11 at 23:24
  • Plus it allows development and testing off-target. – cp.engr Jun 22 '16 at 22:19
  • Can I follow this [book](https://www.amazon.ca/Test-Driven-Development-Embedded-C/dp/193435662X) for understanding TDD for non-Embedded C? For any user space C programming? – overexchange Mar 18 '17 at 17:05
17

A large variety of answers here, addressing the issue in a number of different ways.

I've been writing embedded low level software and firmware for over 25 years in a variety of languages - mostly C (but with diversions into Ada, Occam2, PL/M, and a variety of assemblers along the way).

After a long period of thought and trial and error, I have settled into a method that gets results fairly quickly and makes it fairly easy to create test wrappers and harnesses (where they ADD VALUE!).

The method goes something like this:

  1. Write a driver or hardware abstraction code unit for each major peripheral you want to use. Also write one to initialise the processor and get everything set up (this makes for a friendly environment). Typically on small embedded processors - your AVR being an example - there might be 10 - 20 such units, all small. These might be units for initialisation, A/D conversion to unscaled memory buffers, bitwise output, pushbutton input (no debounce, just sampled), pulse width modulation drivers, and UART / simple serial drivers that use interrupts and small I/O buffers. There might be a few more - e.g. I2C or SPI drivers for EEPROM, EPROM, or other I2C/SPI devices. (A sketch of one such driver interface follows this list.)

  2. For each of the hardware abstraction (HAL) / driver units, I then write a test program. This relies on a serial port (UART) and processor init - so the first test program uses those 2 units only and just does some basic input and output. This lets me verify that I can start the processor and that I have basic serial debug I/O working. Once that works (and only then) do I develop the other HAL test programs, building these on top of the known-good UART and INIT units. So I might have test programs for reading the bitwise inputs and displaying these in a nice form (hex, decimal, whatever) on my serial debug terminal. I can then move on to bigger and more complex things like EEPROM or EPROM test programs - I make most of these menu driven so I can select a test to run, run it, and see the result. I can't SCRIPT it but usually I don't need to - menu driven is good enough.

  3. Once I have all my HAL units running, I then find a way to get a regular timer tick, typically with a period somewhere between 4 and 20 ms. This must be regular and generated in an interrupt; the rollover / overflow of a counter is usually how this is done. The interrupt handler then INCREMENTS a byte-sized "semaphore". At this point you can also fiddle around with power management if you need to. The idea of the semaphore is that if its value is >0, you need to run the "main loop".

  4. The EXECUTIVE runs the main loop. It pretty much just waits on that semaphore to become non-0 (though I abstract this detail away). At this point, you can play about with counters to count these ticks (because you know the tick rate), and so you can set flags showing whether the current executive tick is for an interval of 1 second, 1 minute, or other common intervals you might want to use. Once the executive knows that the semaphore is >0, it runs a single pass through every application process's "update" function. (A minimal sketch of this tick / executive pattern also follows this list.)

  5. The application processes effectively sit alongside each other and get run regularly by an "update" tick. This is just a function called by the executive. It is effectively poor man's multi-tasking with a very simple home-grown RTOS that relies on every application entering, doing a little piece of work, and exiting. Applications need to maintain their own state variables and can't do long-running calculations, because there is no pre-emptive operating system to force fairness. OBVIOUSLY the cumulative running time of the applications should be smaller than the major tick period.
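
To make step 1 concrete, here is a minimal sketch of what one such driver unit's interface might look like; the unit and its function names (`din_init` and so on) are invented for illustration, not taken from any particular project:

```c
/* din.h - hypothetical "bitwise input" driver unit: one small,
 * self-contained interface per peripheral. The test program from
 * step 2 exercises exactly this interface over the debug UART. */
#ifndef DIN_H
#define DIN_H

#include <stdint.h>

void    din_init(void);          /* configure the input pins        */
uint8_t din_read(uint8_t pin);   /* raw sampled level, no debounce  */
uint8_t din_read_all(void);      /* all eight inputs as one byte    */

#endif /* DIN_H */
```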
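
And here is a minimal sketch of the tick / executive pattern from steps 3 to 5, assuming an AVR-style target; the timer vector, tick period, and application names are illustrative assumptions, not prescriptions:

```c
#include <stdint.h>
#include <avr/interrupt.h>          /* AVR-style target assumed */

/* Step 3: the timer interrupt only increments a byte-sized "semaphore". */
static volatile uint8_t tick_sem = 0;

ISR(TIMER0_OVF_vect)                /* assume this fires every 10 ms */
{
    tick_sem++;
}

/* Step 5: each application exposes one short-running update function
 * and keeps its own state between calls (stubbed out here). */
static void app_buttons_update(void) { /* sample and latch inputs */ }
static void app_display_update(void) { /* refresh the display     */ }

/* Step 4: the executive's main loop. */
int main(void)
{
    uint8_t tick_count = 0;

    /* processor / timer / UART init units would be called here */
    sei();                          /* enable interrupts */

    for (;;) {
        if (tick_sem == 0)
            continue;               /* or sleep here to save power */

        cli();                      /* consume one tick atomically */
        tick_sem--;
        sei();

        if (++tick_count >= 100) {  /* 100 ticks * 10 ms = 1 second */
            tick_count = 0;
            /* set a "1 second elapsed" flag for the applications */
        }

        /* one pass through every application's update function */
        app_buttons_update();
        app_display_update();
    }
}
```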

The above approach is easily extended, so you can have things like communication stacks added that run asynchronously, and comms messages can then be delivered to the applications (you add a new "rx_message_handler" function to each, and you write a message dispatcher which figures out which application to dispatch to - sketched below).
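
A hedged sketch of such a dispatcher; the message layout and the handler names are invented for illustration:

```c
#include <stdint.h>

/* Hypothetical message: one byte identifies the destination app. */
typedef struct {
    uint8_t dest;            /* which application this is for */
    uint8_t len;
    uint8_t payload[32];
} msg_t;

/* Each application gains an rx_message_handler beside its update()
 * (empty stubs here, just to keep the sketch self-contained). */
static void app_buttons_rx_message_handler(const msg_t *m) { (void)m; }
static void app_display_rx_message_handler(const msg_t *m) { (void)m; }

/* The dispatcher figures out which application to deliver to. */
void msg_dispatch(const msg_t *m)
{
    switch (m->dest) {
    case 0: app_buttons_rx_message_handler(m); break;
    case 1: app_display_rx_message_handler(m); break;
    default: break;          /* unknown destination: drop it */
    }
}
```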

This approach works for pretty much any communication system you care to name - it can (and has done) work for many proprietary systems, open standards comms systems, it even works for TCP/IP stacks.

It also has the advantage of being built up in modular pieces with well-defined interfaces. You can pull pieces out at any time and substitute different ones. At each point along the way you can add test harnesses or handlers which build upon the known-good lower layers (the stuff below). I have found that roughly 30% to 50% of a design can benefit from adding specially written unit tests, which are usually fairly easy to add.

I have taken this all a step further (an idea I nicked from somebody else who has done this) and replaced the HAL layer with a PC equivalent. So, for example, you can use C / C++ and WinForms or similar on a PC, and by writing the code CAREFULLY you can emulate each interface (e.g. EEPROM = a disk file read into PC memory) and then run the entire embedded application on a PC. The ability to use a friendly debugging environment can save a vast amount of time and effort. Usually only really big projects can justify this amount of effort.
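
For instance, the EEPROM unit can keep one interface and gain a second, PC-only implementation; the function names here are hypothetical:

```c
/* eeprom.h - one interface, two implementations (target and PC). */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

void eeprom_read(uint16_t addr, void *buf, size_t len);
void eeprom_write(uint16_t addr, const void *buf, size_t len);

/* eeprom_pc.c - PC build: the "EEPROM" is just a disk file, so the
 * whole embedded application can run under a host debugger.
 * (Sketch only: assumes eeprom.bin exists; error handling omitted.) */
void eeprom_read(uint16_t addr, void *buf, size_t len)
{
    FILE *f = fopen("eeprom.bin", "rb");
    if (f != NULL) {
        fseek(f, addr, SEEK_SET);
        size_t got = fread(buf, 1, len, f);
        (void)got;           /* sketch: short reads left unhandled */
        fclose(f);
    }
}

void eeprom_write(uint16_t addr, const void *buf, size_t len)
{
    FILE *f = fopen("eeprom.bin", "r+b");
    if (f != NULL) {
        fseek(f, addr, SEEK_SET);
        fwrite(buf, 1, len, f);
        fclose(f);
    }
}
```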

The above description is not unique to how I do things on embedded platforms - I have come across numerous commercial organisations that do similar. The way it's done usually varies vastly in implementation, but the principles are frequently much the same.

I hope the above gives a bit of a flavour... this approach works for small embedded systems that run in a few kB with aggressive battery management, through to monsters of 100K or more source lines that run permanently powered. If you run "embedded" on a big OS like Windows CE or the like, then all of the above is completely immaterial. But that's not REAL embedded programming, anyhow.

quickly_now
  • Most hardware peripherals you can't test through a UART, because quite often you are mainly interested in the timing characteristics. If you want to check an ADC sample rate, a PWM duty cycle, the behavior of some other serial peripheral (SPI, CAN etc), or simply the execution time of some part of your program, then you can't do this through a UART. Any serious embedded firmware testing includes an oscilloscope - you can't program embedded systems without one. – Nov 15 '11 at 13:36
  • Oh yes, absolutely. I just forgot to mention that one. But once you have your UART up and running, it's very easy to set up tests or test cases (which is what the question was about), stimulate things, allow user input, and get results and display them in a friendly fashion. This + your CRO makes life very easy. – quickly_now Nov 16 '11 at 03:36
3

What I've done is separate out the device-dependent code from the device-independent code, then test the device-independent code. With good modularity and discipline, you will wind up with a mostly well-tested codebase.
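
One hedged sketch of that separation: the device-independent logic takes its input through a function pointer, so a unit test can substitute a fake for the real register-level driver. All the names and the sensor scaling here are illustrative assumptions:

```c
#include <assert.h>
#include <stdint.h>

/* Device-independent logic: convert a raw 10-bit ADC reading (5 V
 * reference) to tenths of a degree, with the "read the hardware"
 * step injected as a function pointer. */
typedef uint16_t (*adc_read_fn)(void);

int16_t read_temperature_decidegrees(adc_read_fn read_adc)
{
    uint16_t raw = read_adc();          /* device-dependent part injected */
    long millivolts = (long)raw * 5000L / 1023L;
    return (int16_t)(millivolts - 500); /* TMP36-style: 10 mV/C, 500 mV offset */
}

/* In the unit test, a fake stands in for the real ADC driver. */
static uint16_t fake_adc(void) { return 307; }  /* 307/1023 * 5 V = 1.5 V */

int main(void)
{
    /* 1500 mV - 500 mV offset = 1000 -> 100.0 degrees */
    assert(read_temperature_decidegrees(fake_adc) == 1000);
    return 0;
}
```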

Paul Nathan
2

Code that has a long history of incremental development and optimizations for multiple platforms, such as the examples you picked, is usually harder to read.

The thing about C is that it is actually capable of spanning platforms over a massive range of API richness and hardware performance (and lack thereof). MacVim ran responsively on machines with over 1000X less memory and processor performance than a typical smartphone today. Could your Ruby code do that? That's one of the reasons it might look simpler than the mature C examples you picked.

hotpaw2
2

I'm in the reverse position of having spent most of the last 9 years as a C programmer, and recently working on some Ruby on Rails front-ends.

The stuff I work on in C is mostly medium-sized custom systems for controlling automated warehouses (typical cost of a few hundred thousand pounds, up to a couple of million). Example functionality includes a custom in-memory database, interfacing to machinery with some short response-time requirements, and higher-level management of warehouse workflow.

I can say first of all that we don't do any TDD. I've tried introducing unit tests on a number of occasions, but in C it is more trouble than it's worth - at least when developing custom software. But I would say TDD is far less needed in C than in Ruby. Mainly, that is because C is compiled, and if it compiles without warnings, you've already done a fairly similar amount of testing to the rspec auto-generated scaffolding tests in Rails. Ruby without unit tests is not feasible.

But what I would say is that C doesn't have to be as hard as some people make it. Much of the C standard library is a mess of incomprehensible function names, and lots of C programs follow this convention. I'm glad to say we don't, and in fact we have a lot of wrappers for standard library functionality (ST_Copy instead of strncpy, ST_PatternMatch instead of regcomp/regexec, CHARSET_Convert instead of iconv_open/iconv/iconv_close, and so on). Our in-house C code reads better to me than most other stuff I've read.
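
I don't know how their ST_Copy is actually written, but a wrapper that fixes strncpy's best-known wart (no guaranteed null termination) might look something like this guessed sketch:

```c
#include <string.h>

/* Hypothetical ST_Copy-style wrapper: like strncpy, but always
 * null-terminates and never overruns the destination buffer. */
char *ST_Copy(char *dest, const char *src, size_t dest_size)
{
    if (dest_size == 0)
        return dest;
    strncpy(dest, src, dest_size - 1);
    dest[dest_size - 1] = '\0';   /* strncpy alone does not guarantee this */
    return dest;
}
```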

But when you say rules from other, higher-level languages don't seem to apply, I would disagree. A lot of good C code 'feels' object-oriented. You often see a pattern of initialising a handle to a resource, calling some functions passing the handle as an argument, and eventually releasing the resource. Indeed, the design principles of object-oriented programming largely came from the good things people were doing in procedural languages.
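
That handle pattern, sketched with invented names (the standard library's own `FILE*` follows the same shape; the implementation would live in a matching .c file):

```c
/* log.h - the caller only ever sees an opaque handle. */
typedef struct Logger Logger;           /* incomplete type: fields hidden */

Logger *logger_open(const char *path);  /* acquire the resource           */
void    logger_write(Logger *lg, const char *msg);
void    logger_close(Logger *lg);       /* release the resource           */

/* Typical usage - reads much like object-oriented code: */
void example(void)
{
    Logger *lg = logger_open("app.log");
    if (lg != NULL) {
        logger_write(lg, "started");
        logger_close(lg);
    }
}
```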

The times when C gets really complicated are often when doing things like device drivers and OS kernels which are just fundamentally very low-level. When you're writing a higher level system, you can also use the higher level features of C and avoid the low level complexity.

One very interesting thing you may want to have a look through is the C source-code for Ruby. In the Ruby API docs (http://www.ruby-doc.org/core-1.9.3/) you can click and see the source code for the various methods. The interesting thing is this code looks quite nice and elegant - it doesn't look as complex as you might imagine.

asc99c
  • "*... you can also use the higher level features of C ...*", as there are? ;-) – alk Nov 12 '11 at 09:09
  • I mean higher level than the bit manipulation and pointer to pointer wizardry you tend to see in device driver type code! And if you're not worried about the overhead of a couple of function calls, you can make C code that really looks reasonably high level. – asc99c Nov 12 '11 at 10:23
  • "*... you can make C code that really looks reasonably high level.*", absolutly, I fully agree to that. But though "*... the higher level features...*" are not of C's, but in your head, aren't they? – alk Nov 12 '11 at 10:35
2

There is no reason why you can't. The problem is that there may not be nice "off-the-shelf" unit testing frameworks like you have in other types of development. That's OK. It just means that you have to take a "roll-your-own" approach to testing.

For instance you may have to program instrumentation to produce "fake inputs" for your A/D converters or maybe you'll have to generate a stream of "fake data" for your embedded device to respond to.
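
A minimal sketch of that roll-your-own style, here replaying a canned stream of "fake data" into a parser exactly as the device would receive it from its UART; the parser and its frame format are invented for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy stand-in for the device code under test: consumes received
 * bytes and returns 1 when a 0x0A-terminated "frame" is complete. */
static int frame_parser_feed(uint8_t byte)
{
    return byte == 0x0A;
}

/* Roll-your-own harness: feed the fake receive stream byte by byte. */
int main(void)
{
    static const uint8_t fake_rx_stream[] = { 'O', 'K', 0x0A };
    int complete = 0;
    size_t i;

    for (i = 0; i < sizeof fake_rx_stream; i++)
        complete = frame_parser_feed(fake_rx_stream[i]);

    assert(complete == 1);   /* the terminator must close the frame */
    return 0;
}
```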

If you encounter resistance to the word "TDD", call it "DVT" (design verification test); that will make the EEs more comfortable with the idea.

Angelo
0

Is it even possible to do TDD on embedded devices, or when developing drivers or things like a custom bootloader, etc.?

Some time ago I needed to write a first-level bootloader for an ARM CPU. Actually, there is one from the guys who sell this CPU, and we used a scheme where their bootloader boots our bootloader. But this was slow: we needed to flash two files into NOR flash instead of one, we needed to build the size of our bootloader into the first bootloader and rebuild it every time we changed our bootloader, and so on.

So I decided to integrate the functions of their bootloader into ours. Because it was commercial code, I had to make sure that everything worked as expected. So I modified QEMU to emulate the IP blocks of that CPU (not all of them, only those the bootloader touches) and added code to QEMU to "printf" all reads/writes to the registers that control things like the PLL, UART, SRAM controller and so on. Then I upgraded our bootloader to support this CPU, and after that compared the output of our bootloader and theirs on the emulator; this helped me catch several bugs. It was written partly in ARM assembler, partly in C. The modified QEMU also later helped me catch one bug that I could not catch using JTAG and a real ARM CPU.
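
QEMU's internals are out of scope here, but the same "printf every register access" idea can be approximated inside the firmware itself by routing register accesses through a macro; a hypothetical sketch (register name, address, and value are all invented):

```c
#include <stdint.h>
#include <stdio.h>

/* Route all register writes through one macro so a host/emulator
 * build can trace them; the target build compiles straight to a
 * volatile store. */
#ifdef TRACE_REGS
#define REG_WRITE(addr, val) \
    printf("W %08lx <- %08lx\n", (unsigned long)(addr), (unsigned long)(val))
#else
#define REG_WRITE(addr, val) \
    (*(volatile uint32_t *)(addr) = (uint32_t)(val))
#endif

#define PLL_CTRL 0x40001000u     /* hypothetical PLL control register */

void pll_init(void)
{
    REG_WRITE(PLL_CTRL, 0x1u);   /* enable PLL (illustrative value) */
}
```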

So even with C and assembler you can use tests.

Peter Mortensen
-1

Yes, it is possible to do TDD on embedded software. The people saying it is not possible, not relevant, or not applicable are not correct. There is serious value to be gained from TDD in embedded, as with any software.

The best way to do it, though, is not to run your tests on the target, but to abstract your hardware dependencies and compile and run on your host PC.

When you're doing TDD, you'll be creating and running a lot of tests, and you need software to help you do this. You want a test framework that makes it quick and easy, with automatic test discovery and mock generation.

The best option for C right now is Ceedling. Here is a post I wrote about it:

http://www.electronvector.com/blog/try-embedded-test-driven-development-right-now-with-ceedling

And it's built in Ruby! You don't need to know any Ruby to use it though.
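
For a flavour of what a test file looks like under Ceedling (which uses the Unity assertion framework), here is a minimal sketch; the `blinker` module under test is invented:

```c
#include "unity.h"
#include "blinker.h"        /* hypothetical module under test */

void setUp(void) {}         /* Unity runs this before each test */
void tearDown(void) {}      /* ...and this after each test      */

void test_blinker_starts_with_led_off(void)
{
    blinker_init();
    TEST_ASSERT_FALSE(blinker_led_is_on());
}

void test_blinker_toggles_on_tick(void)
{
    blinker_init();
    blinker_tick();
    TEST_ASSERT_TRUE(blinker_led_is_on());
}
/* Ceedling discovers these test_ functions and generates the runner. */
```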

cherno
  • answers are expected to stand on their own. Forcing readers to get to external resource to find out substance is [frowned upon at Stack Exchange](http://meta.stackexchange.com/q/225370/165773) ("read the article or check out Ceedling"). Consider [edit]ing to make it fit site quality norms – gnat Jan 28 '16 at 02:07
  • Does Ceedling have any mechanisms to support asynchronous events? One of the more challenging aspects of real-time embedded applications is that they deal with receiving inputs from very complex systems that are themselves difficult to model... – Jay Elston Feb 16 '16 at 19:36
  • @Jay It doesn't have anything specifically to support that. However I've had success testing that sort of thing with mocking, and by setting up an architecture to support it. For example, I recently worked on a project where interrupt-driven events were put into a queue and then consumed in an "event handler" state machine. This was essentially just a function that was called whenever an event occurred. When testing that function, I could mock the function call that pulled events out of the queue, and so could simulate any event occurring in the system. Test driving helps here too. – cherno Feb 18 '16 at 05:47