15

Serious help needed here. I love programming. I've been reading a bunch of books (such as K&R) and articles/forums online about the C language lately. I even tried looking into Linux code (although I was lost as to where to start, peeking into small libraries helped).

I started as a Java programmer, and in Java it's pretty cut and dried: if a program gets too big, slice it into classes and then further into functions. Follow guidelines like keeping the code readable and adding comments, and use information hiding and OOP techniques. Some of this still applies to C.

I've been coding in C now, and so far I get programs to work one way or another. A lot of people talk about performance/efficiency, algorithms/design, optimization, and maintainability. Some people stress one more than the others, but for non-professional software engineers you often hear things like: the Linux kernel devs won't just take any code.

My question is this: I plan on writing code for an 8-bit microcontroller without wasting any resources. Know that I'm coming from a Java background, so things are not the same anymore... resources/books/links/tips will be much appreciated. Performance and size now matter. What resources or tricks are there for efficient (within best practices) C code for 8-bit microcontrollers?

Also, inline assembly plays a vital role, as does sticking close to the microcontroller's standard. But are there general rules of thumb for efficiency that apply to all of them?

For example: register unsigned int variable_name; is preferred over char any time, or use uint8_t if you don't need big numbers.
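
For instance, something like this (a made-up snippet, just to illustrate the kind of choice I mean):

    #include <stdint.h>

    /* Made-up example: a counter that never exceeds 255.
       On an 8-bit micro, uint8_t keeps everything in single bytes;
       a plain int would force 16-bit arithmetic for no benefit. */
    uint8_t count_even(const uint8_t *samples, uint8_t len)
    {
        uint8_t evens = 0u;
        for (uint8_t i = 0u; i < len; i++) {
            if ((samples[i] & 1u) == 0u) {
                evens++;
            }
        }
        return evens;
    }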

EDIT: Thank you so much for all the answers and suggestions. I appreciate everyone's effort for sharing knowledge.

AceofSpades
  • 605
  • 1
  • 7
  • 12
  • 2
    This isn't often the case on an x86 processor, but on a microcontroller, if you want to make sure you get every last drop of performance, you'll probably want to use assembly instead. – Rei Miyasaka Jan 01 '12 at 01:45
  • Agreed, I'll update my question and add inline assembly. Know that I'm coming from a Java background so things are not the same anymore... resources/books/links will be much appreciated. – AceofSpades Jan 01 '12 at 01:49
  • 3
    @Rei : Hand-crafted assembly rarely, if ever, uses less memory or runs faster than what modern C compilers produce. It's a waste of time to code in assembly when it can (and should) be done in C. – mattnz Jan 01 '12 at 02:18
  • 1
    @mattnz By modern, how new are you talking about? In all honesty I haven't written code for a microcontroller in nearly a decade. – Rei Miyasaka Jan 01 '12 at 02:52
  • 2
    One simple tip: in microcontrollers, it's still generally true that "if it's simpler, it's faster". On complex chips (ARM and upwards) the hardware does so many optimizations that you never know until you test. – Javier Jan 01 '12 at 05:43
  • 1
    @ReiMiyasaka at a vast increase in maintenance cost. A good C compiler can produce almost the same code as a highly experienced programmer. –  Jan 01 '12 at 16:07
  • 1
    As a Java programmer, chances are you have a certain admiration for the word "abstract". That's a good thing to set aside. It helps that you're working in C. As others have said, more or less, just do the least that will work, and you'll be in good shape. – Mike Dunlavey Jan 01 '12 at 18:52
  • 1
    @ReiMiyasaka: I've done a lot of that, and (IMHO) the reason assembly code runs faster is not that the compiler can be out-coded, but that it's so much harder to write asm that you write much less of it. It takes real discipline to be that spare when you're writing in a compiler language. – Mike Dunlavey Jan 01 '12 at 19:08
  • 1
    @mattnz: It varies. For most mainstream processors, yes, the C compiler will probably do a better job. Occasionally, on non-mainstream processors, the C compiler will not be anywhere near smart enough. For a concrete example, the C compiler for the pixel processors on the TI 320C80 image processor was nowhere near as good as an expert human assembly language programmer - and TI freely admitted it, and told programmers in no uncertain terms to hand-code their pixel-crunching kernels. (Been there, done that, sweated blood for MONTHS over a 20-some-instruction loop.) – John R. Strohm Jan 01 '12 at 23:43
  • @mattnz, I assume you don't do much embedded programming, certainly not on 8-bit micros. The compilers produce huge, slow, ROM-consuming programs. You are much better off in assembler or a mixture of C and assembler. The compiler-friendly 16- and 32-bit instruction set architectures in this market do make C more attractive than assembler, but hand-tuned assembler is still often used because C compilers cannot compete for size and speed. Over the course of a large project, yes, the compiler wins over the human. For critical sections the human wins over the compiler; that is still the case today. – old_timer Jan 14 '12 at 16:25

5 Answers

33

I have 20+ years in embedded systems, mostly 8- and 16-bit micros. The short answer to your question is the same as for any other software development: don't optimise till you know you need to, and then don't optimise till you know what you need to optimise. Write your code so it's reliable, readable and maintainable first. Premature optimisation is as much of a problem in embedded systems as anywhere else, if not more.

When you program "without wasting any resources", do you consider your time a resource? If not, who is paying you for your time, and if no one, do you have anything better to do with it? One choice any embedded system designer has to make is the cost of hardware versus the cost of engineering time. If you will be shipping 100 units, use a bigger micro. At 100,000 units, a $1.00 saving per unit is the same as one man-year of software development (ignoring time to market, opportunity cost, etc.). At 1 million units, you start getting ROI for being obsessive about resource usage, but be careful: many an embedded project never made the 1 million mark because it was designed to sell 1 million (high initial investment with low production cost), and the company went bust before it got there.

That said, here are the things you need to consider and be aware of with (small) embedded systems, because these will stop it working in unexpected ways, not just make it go slow.

a) Stack - you usually have only a small stack size and often limited stack frame sizes. You must be aware of what your stack utilisation is at all times. Be warned, stack problems cause some of the most insidious defects.
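
A minimal sketch of the kind of thing that bites you (the buffer size and functions are made up): a big local array quietly eats the whole stack frame, where a static buffer would not:

    #include <stdint.h>

    #define REPORT_LEN 128u   /* illustrative size */

    /* Risky on a part with, say, a 256-byte stack: the whole buffer
       lives in this function's stack frame, on top of the return
       address and whatever the callers already used. */
    void format_report_on_stack(void)
    {
        char buf[REPORT_LEN];
        /* ... fill and transmit buf ... */
        (void)buf;
    }

    /* Common alternative on small micros: give the buffer static
       storage, so its cost shows up in the link map instead of at
       run time. (Not re-entrant - see the interrupt notes below.) */
    void format_report_static(void)
    {
        static char buf[REPORT_LEN];
        /* ... fill and transmit buf ... */
        (void)buf;
    }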

b) Heap - again, small heap sizes, so be careful about unwarranted memory allocation. Fragmentation becomes an issue. With these two, you need to know what you do when you run out - it does not happen on a large system thanks to OS-provided paging. i.e. when malloc returns NULL, do you check for it, and what do you do? Every malloc needs a check and a handler - code bloat? As a guide: don't use it if there's an alternative. Most small systems do not use dynamic memory for these reasons.
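
If you do use the heap, a minimal sketch of the "check and handler" point (the out-of-memory policy here is invented; every project defines its own):

    #include <stdlib.h>

    /* Hypothetical failure hook: what you do here (log, drop to a
       safe state, force a watchdog reset) is a design decision that
       has to be made up front. */
    static void out_of_memory(void)
    {
        for (;;) {
            /* e.g. wait for the watchdog to reset the part */
        }
    }

    /* Wrapper so no allocation in the program goes unchecked. */
    void *checked_malloc(size_t n)
    {
        void *p = malloc(n);
        if (p == NULL) {
            out_of_memory();
        }
        return p;
    }

On most small parts the simpler answer is the one above: size everything statically and skip malloc entirely.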

c) Hardware interrupts - You need to know how to handle these in a safe and timely manner. You also need to know how to make safe re-entrant code. For instance, C standard libs are generally not re-entrant, so should not be used inside interrupt handlers.
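
As a sketch of "safe and timely" (the vector/attribute syntax is compiler-specific and left out, and the names are made up): keep the handler tiny, share data through volatile variables the 8-bit core can update atomically, and do the real work in the main loop:

    #include <stdbool.h>
    #include <stdint.h>

    /* Shared with the interrupt handler, so both must be volatile;
       single bytes so an 8-bit core reads/writes them atomically. */
    static volatile bool    sample_ready = false;
    static volatile uint8_t last_sample  = 0u;

    /* Hypothetical ADC-complete handler: grab the data, set a flag,
       get out. No printf, no malloc, no C library calls. */
    void adc_isr(void)
    {
        last_sample  = 42u;    /* stand-in for reading the ADC register */
        sample_ready = true;
    }

    void main_loop(void)
    {
        for (;;) {
            if (sample_ready) {
                sample_ready = false;
                /* process last_sample here, outside interrupt context */
            }
        }
    }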

d) Assembly - almost always premature optimisation. At most a small amount (inlined) is needed to achieve something that C just cannot do. As an exercise, write a small method in hand-crafted assembly (from scratch). Do the same in C. Measure the performance. I bet the C will be faster, and I know it will be more readable, maintainable and extendable. Now for part 2 of the exercise - write a useful program in assembly and in C. As another exercise, have a look at how much of the Linux kernel is assembler, then read the paragraph below about the Linux kernel.

It is worth knowing how to do it; it might even be worth being proficient in the assembly languages of one or two common micros.

e) "register unsigned int variable_name", "register" is, and always has been, a hint to the compiler, not an instruction, back in the early 70's (40 years ago), it made sense. In 2012, it's a waste of keystrokes as the compilers are so smart, and micros instructions sets so complex.

Back to your Linux comment - the problem you have here is that we are not talking a mere 1 million units, we are talking hundreds of millions, with a lifetime of forever. The engineering time and cost to get it as optimal as humanly possible is worthwhile. Although it is a good example of the very best engineering practice, it would be commercial suicide for most embedded systems developers to be as pedantic as the Linux kernel requires.

mattnz
  • 21,315
  • 5
  • 54
  • 83
  • 4
    mattnz: this is one of the loveliest answers on the Stack Exchange sites. – Ahmed Masud Jan 01 '12 at 07:16
  • 1
    I can't improve on this answer. I might only add that inserting assembly code seldom makes sense for performance, but it could make sense for things like poking I/O chips or other hardware tricks that might not be easy to do in C. – Mike Dunlavey Jan 01 '12 at 18:47
  • @mattnz Thanks for the well put answer. +1 – AceofSpades Jan 01 '12 at 22:36
  • 1
    @MikeDunlavey assembler is sometimes needed for exact timing. Just finished a video overlay that uses bit banging to generate NTSC video on an I/O pin; the timing is in terms of: voltage high for 3 clock cycles, then low for 6, then... – Martin Beckett Jan 14 '12 at 16:57
  • @Martin: That makes perfect sense. It's a long time since I've coded at that level. – Mike Dunlavey Jan 14 '12 at 20:56
3

Your question ("without wasting resources") is too general, so it's hard to give much advice. Taken literally, if you don't want to waste resources, maybe you should take a step back and evaluate whether you need doing anything at all, i.e., whether you can solve the problem in other ways.

Also, useful advice is very dependent on your constraints - what kind of system are you building and what kind of CPU are you using? Is it a hard real-time system? How much memory do you have for code and data? Does it natively support all C operations (most notably multiplication and division), and for what types? More generally: read the entire data sheet and understand it.

The most important advice: keep it simple.

E.g.: forget about complex data structures (hashes, trees, possibly even linked lists) and use fixed-size arrays. Use of more complicated data structures is warranted only after you have proven by measurement that arrays are too slow.
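
A sketch of what that looks like in practice (the sizes and names are illustrative): a fixed-size table with a linear search, where a desktop programmer might reach for a hash map or a list:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define MAX_SENSORS 8u              /* capacity fixed at compile time */

    typedef struct {
        uint8_t id;
        uint8_t last_reading;
    } sensor_t;

    static sensor_t sensors[MAX_SENSORS];
    static uint8_t  sensor_count = 0u;

    /* Returns false when the table is full instead of growing it. */
    bool sensor_add(uint8_t id)
    {
        if (sensor_count >= MAX_SENSORS)
            return false;
        sensors[sensor_count].id = id;
        sensors[sensor_count].last_reading = 0u;
        sensor_count++;
        return true;
    }

    /* Linear search is perfectly adequate for 8 entries. */
    sensor_t *sensor_find(uint8_t id)
    {
        for (uint8_t i = 0u; i < sensor_count; i++) {
            if (sensors[i].id == id)
                return &sensors[i];
        }
        return NULL;
    }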

Also, don't overdesign (something Java/C# devs have a tendency to do): write straightforward procedural code, without too much layering. Abstraction has a cost!

Get comfortable with the idea of using global variables and goto [very useful for cleanups in the absence of exceptions] ;)
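
A sketch of the goto-for-cleanup pattern (the peripheral calls are stubs invented for illustration): one exit path, cleanups in reverse order of acquisition:

    #include <stdbool.h>

    /* Hypothetical peripheral layer, stubbed so the sketch compiles;
       a real driver would touch hardware registers instead. */
    static bool spi_open(void)         { return true; }
    static void spi_close(void)        { }
    static bool flash_select(void)     { return true; }
    static void flash_deselect(void)   { }
    static bool flash_write_page(const unsigned char *d) { (void)d; return true; }

    bool save_page(const unsigned char *data)
    {
        bool ok = false;

        if (!spi_open())
            goto out;
        if (!flash_select())
            goto out_close_spi;
        if (!flash_write_page(data))
            goto out_deselect;

        ok = true;                 /* success path falls through the cleanup */

    out_deselect:
        flash_deselect();
    out_close_spi:
        spi_close();
    out:
        return ok;
    }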

If you have to deal with interrupts, read about reentrancy. Writing reentrant code is very non-trivial.
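
A small sketch of the difference (the conversion function is made up): the first version hides its state in a static buffer, so a call from an interrupt can corrupt a call already in progress; the second keeps all state in the caller's buffer and on the stack, and is reentrant:

    #include <stdint.h>

    /* NOT reentrant: the static buffer is shared hidden state. */
    char *u8_to_hex_static(uint8_t v)
    {
        static const char digits[] = "0123456789ABCDEF";
        static char buf[3];
        buf[0] = digits[v >> 4];
        buf[1] = digits[v & 0x0Fu];
        buf[2] = '\0';
        return buf;
    }

    /* Reentrant: the caller supplies the storage; the const lookup
       table is read-only, so sharing it is harmless. */
    void u8_to_hex(uint8_t v, char out[3])
    {
        static const char digits[] = "0123456789ABCDEF";
        out[0] = digits[v >> 4];
        out[1] = digits[v & 0x0Fu];
        out[2] = '\0';
    }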

zvrba
  • 3,470
  • 2
  • 23
  • 22
3

I agree with mattnz's answer - for the most part. I started programming on the 8085 over 30 years ago, then the Z80, then quickly migrated to the 8031. After that I went to the 68300-series microcontrollers, then 80x86, XScale, MXL and (as of late) 8-bit PICs, which I guess means I've come full circle. For the record, I can state that FAEs at several major microprocessor manufacturers still use assembler, albeit in an object-oriented fashion for purposeful code reuse.

What I don't see in the approved answer is a discussion of the target processor type and/or proposed architecture. Is it a $0.50 8-bitter with limited memory? Is it an ARM9 core with pipelining and 8 MB of flash? A memory management coprocessor? Will it have an OS? A while (1) loop? A consumer device with an initial production run of 100,000 units? A start-up company with big ideas and dreams?

While I do agree that modern compilers do a great job of optimization, I've never worked on a project in 30 years where I didn't stop the debugger and view the generated assembly code to see just what was going on under the hood (admittedly a nightmare when pipelining and optimization come into play), so knowledge of assembly is important.

And I've never had a CEO, VP of Engineering, or customer who didn't push me to try and stuff a gallon into a quart container, or to save $0.05 by using a software solution to fix a hardware problem (hey, it's just software, right? What's so hard?). Memory (code or data) optimization will always count.

My point is that if you view the project from a pure programming point of view, you're going to get a narrower-scoped solution. mattnz has it right - get it working, then get it working faster, smaller, better - but you still need to spend A LOT of time on the requirements and deliverables before you even think about coding.

Gio
  • 131
  • 1
  • Hi Gio, please avoid unnecessary HTML in your posts, and use the [Markdown syntax](http://programmers.stackexchange.com/editing-help) instead. For `<br>` you could just press enter, and for paragraphs just leave an empty line between them. Also when you reference another answer, please add a link to it. At this point there might only be a few answers but there could be more, distributed over many pages, and it won't be very clear which answer you mean. Check out the [revision history](http://programmers.stackexchange.com/posts/127956/revisions) to see my edits. – yannis Jan 02 '12 at 02:28
  • @Gio Thanks for mentioning other important factors. +1 :) – AceofSpades Jan 02 '12 at 02:46
  • +1 - Nice expansion of my answer. – mattnz Jan 02 '12 at 03:00
1

mattnz's answer puts the key points about how to do "close-to-hardware" programming very well. This is, after all, what C is meant for.

However, I would like to add that while a strict "class" keyword doesn't exist in C, it is quite straightforward to think in terms of object-oriented programming in C even when you are close to hardware.

You may consider this answer: OO best practices for C programs, which explains this point.
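
As a small sketch of what that thinking can look like in plain C (the UART device here is hypothetical): a struct carries the object's state, and a function pointer plays the role of a virtual method:

    #include <stdint.h>
    #include <stdio.h>

    /* "Interface": per-object state plus a method slot. */
    typedef struct uart uart_t;
    struct uart {
        void   (*put_byte)(uart_t *self, uint8_t b);
        uint16_t tx_count;
    };

    /* One "class" implementing it; here it just prints, a real one
       would write to a hardware transmit register. */
    static void console_put_byte(uart_t *self, uint8_t b)
    {
        self->tx_count++;
        putchar((int)b);
    }

    static uart_t console_uart = { console_put_byte, 0u };

    /* Client code written against the interface, not the device. */
    static void send_hello(uart_t *u)
    {
        const char *msg = "hello\n";
        while (*msg != '\0')
            u->put_byte(u, (uint8_t)*msg++);
    }

    int main(void)
    {
        send_hello(&console_uart);
        return 0;
    }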

Here are some resources which will help you write good object oriented code in C.

a. Object oriented programming in C
b. this is a good place where people exchange ideas
c. and here is the full book

Another good resource I would like to suggest is:

The Write Great Code series. This is a two-volume set. The first volume covers the essential aspects of how machines work at a low level. The second volume is about "Thinking Low-Level, Writing High-Level".

Dipan Mehta
  • 10,542
  • 2
  • 33
  • 67
1

You have a few issues. First, do you want this project/code to be portable? Portability costs you performance and size; can your platform of choice and the task you are implementing tolerate the excess size and lower performance?

Yes, absolutely: on an 8-bit machine, returning unsigned chars instead of unsigned ints or shorts is one way to improve performance and size. Likewise, on a 16-bit machine use unsigned shorts, and on a 32-bit machine unsigned ints. You can quite easily see, though, that if you just used unsigned ints everywhere for portability to the systems that are taking over (ARM, for example, is pushing its way down into the lowest-power, smallest-device markets), that code is a huge ROM hog on an 8-bit micro. You could also use the C99 <stdint.h> "fast" types (uint_fast8_t and friends) and let the compiler pick the optimal size for the target.
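
A sketch of that last point (the function is made up): the "fast" types let the toolchain pick the width per target, so the same source stays lean on an 8-bit part without punishing a 32-bit one:

    #include <stdint.h>

    /* uint_fast8_t means "at least 8 bits, whatever is fastest here":
       typically one byte on an 8-bit micro, possibly a full machine
       word on a bigger core - the toolchain decides, not this code. */
    uint_fast8_t count_nonzero(const uint8_t *buf, uint_fast8_t len)
    {
        uint_fast8_t n = 0u;
        for (uint_fast8_t i = 0u; i < len; i++) {
            if (buf[i] != 0u)
                n++;
        }
        return n;
    }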

Not just inline assembly, but assembly language in general. Inline assembly is very non-portable and harder to write well than just calling an asm function. Yes, you burn the call setup and return, but in exchange you get easier development, better maintenance, and more control. The rule still applies: only write it in asm if you really need to, and only once you have done the work to conclude that the compiler output is your problem in this area and to see how much of a performance gain you can get by doing it by hand. Then back to portability and maintenance: every time you mix C and asm you might harm your portability, and you might make the project less maintainable, depending on who else is working on it or whether this is a product you are developing now that someone else has to maintain down the road. Once you have done that analysis, you automatically know whether you have to go inline or go with straight assembly. I have 25+ years in the field, write C and asm mixtures every day, live at the hardware/software layer and, well, never use inline asm. It is rarely worth the effort and too compiler-specific; I write compiler-non-specific code wherever possible (almost everywhere).

The key to your whole question is to disassemble your C code. Learn what the compiler does with your code, and with time, if you so desire, you can learn to manipulate the compiler into generating the code you want without having to resort to asm as much. With more time you can learn to manipulate the compiler to produce efficient code across multiple targets, making the code more portable without having to resort to asm. You should have no problem seeing why an unsigned char works better as a status return from a function than an unsigned int on an 8-bit micro; likewise, the unsigned char gets to be more costly on 16- and 32-bit systems (some architectures help you out, some don't). The llvm compiler system makes this education happen faster, as you can compile one program and then examine the backend output for several targets without having to have several different cross compilers (like you would with gcc).

Some 8-bit microcontrollers (all?) are very compiler-unfriendly, and no compiler produces good code for them. There is not enough demand to create a compiler market for those devices that would produce a great compiler for those targets, so the compilers that exist are there to attract more business (non-asm programmers), not because the compiler is better than hand-written asm. ARM and MIPS getting into this world are changing that model, as you then have targets whose compilers have had a lot of work done on them, compilers that produce pretty good code, etc. For micros with processors like that you of course still have cases where you have to drop down to asm, but not as often; it is a lot easier to just tell the compiler what you want it to do than to not use it. Note that manipulating a compiler does not mean ugly, unreadable code; in fact it is the opposite: nice, clean, straightforward code, perhaps with a few items rearranged. Controlling the size of your functions and the number of parameters - that kind of thing makes a huge difference in compiler output. Avoiding the gee-whiz features of the compiler or language - KISS, keep it simple, stupid - often produces much better and faster code.

old_timer
  • 969
  • 5
  • 8
  • You do not state the type of products you produce; I would assume it's either very high volume with low margin, or a specific niche market with insane margins. This is critical to deciding at a business level whether you should use a small 8-bit micro and hand-crafted assembler, or a bigger micro and C. In response to your (deleted?) comment - I do work with 8-bit micros; however, we start with a larger-than-needed micro and revise down only when and if BOM cost becomes an issue. Time to market, opportunity cost and amortised development cost allow us the luxury of adding 10 or 20 cents to the BOM. – mattnz Jan 15 '12 at 00:45