So recently in school we have started programming KL25Z boards in ARM assembly. I was wondering whether, in most industry jobs, microcontrollers are really programmed in assembly rather than C. Assembly seems to take at least 2-3 times longer to write, and I haven't noticed any significant speed improvement from it.
-
Are you just curious or are you using this as a basis for career preparation? – JeffO May 02 '16 at 18:04
-
A little bit of both, mostly just curious – user3183586 May 02 '16 at 18:05
-
This is not something we can answer, because it is essentially asking for a poll of programmers across a wide range of projects who are likely not contributors to this site. It also seems close to "what language should I learn next?" or "which technology should I learn to be useful in the marketplace?", which are certainly interesting topics in their own right, but a poor fit for the Q&A format of this site. – May 02 '16 at 18:48
-
In C more than anything else; your class is probably using asm for educational purposes, since in C you don't get to experience the architecture. In this type of work you may still have to do some asm here and there: the bootstrap, handling ISR entry and return, etc. You also have to control the linking to match the ROM/RAM locations and amounts, so assembly/disassembly knowledge is more important for a bare-metal MCU than for applications on an operating system. Just getting the MCU booting with a toolchain is the first major challenge (see the sketch below). – old_timer May 03 '16 at 03:35
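As a hedged illustration of the bootstrap work old_timer mentions, here is a minimal bare-metal Cortex-M0+ reset path sketched in C. The `_estack` symbol and `.isr_vector` section name are assumptions that must match your linker script; this is not any vendor's actual startup code.

```c
#include <stdint.h>

extern uint32_t _estack;            /* top of RAM, defined in the linker script */
int main(void);
void Reset_Handler(void);

/* Vector table: the hardware reads the initial stack pointer and the reset
   vector from the very start of flash, so the linker script must place
   this section there. */
__attribute__((section(".isr_vector"), used))
static const uint32_t vector_table[] = {
    (uint32_t)&_estack,             /* word 0: initial stack pointer */
    (uint32_t)Reset_Handler,        /* word 1: reset handler address */
};

void Reset_Handler(void) {
    /* A real startup would copy .data from flash to RAM and zero .bss
       here before calling main(). */
    main();
    for (;;) { }                    /* trap if main() ever returns */
}
```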
-
Just look at the code that comes from the chip vendor for those products. You certainly don't have to use that code, but simply checking which languages they provide libraries or example code in will give you a feel for what their customer base is looking for. – old_timer May 03 '16 at 03:37
2 Answers
Most end-user applications are written in C, a close derivative of C, or another language such as Lua or BASIC. However, a lot of the really interesting jobs with microcontrollers require a thorough understanding of assembly, because you're writing or supporting the libraries, doing things with new parts that don't have high-level language support yet, or building and troubleshooting circuits from datasheets that are written in terms of assembly instructions.
In other words, if you want to work or do hobbies using prebuilt circuits and libraries, you don't often need assembly. If you want to be the person who builds those circuits and libraries for other people to use, assembly will come up a lot. That's why schools make you do it the hard way. A sketch of that kind of register-level work follows.
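A hedged sketch of the datasheet-driven register work described above: toggling a GPIO pin through memory-mapped registers. The base address, register offset, and pin number below are illustrative assumptions, not values verified against any particular reference manual.

```c
#include <stdint.h>

/* Access a memory-mapped 32-bit hardware register. */
#define REG32(addr) (*(volatile uint32_t *)(addr))

/* Placeholder addresses -- on a real part these come straight from the
   memory-map and GPIO chapters of the datasheet. */
#define GPIO_BASE 0x400FF000u                    /* assumed GPIO block base */
#define GPIO_PTOR REG32(GPIO_BASE + 0x0Cu)       /* assumed pin-toggle register */

#define LED_PIN 18u                              /* assumed LED pin number */

static void toggle_led(void) {
    GPIO_PTOR = (1u << LED_PIN);                 /* writing 1 toggles the pin */
}
```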

-
So even if 95% of the code is written in C, it's the assembly that can be a deciding factor for getting a job. – JeffO May 03 '16 at 00:47
Most embedded solutions are written in C. The reason is that C is a very powerful language that gives the user a lot of control over the hardware, while still letting a team build the abstractions it needs.
This is the reason most semiconductor companies provide a C/C++ compiler with their toolset.
One resorts to assembly only when very precise control of hardware or timing is needed. Even that is on the decline, especially with the increasing speed of semiconductors and the use of real-time operating systems. A sketch of the timing case follows.
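A hedged illustration of the precise-timing case: a busy-wait delay. In plain C the compiler is free to restructure the loop, so the cycle count is only approximate; the GCC/Clang-style volatile inline-asm `nop` cannot be optimized away, which is the kind of guarantee that otherwise pushes people to assembly. The function names and cycle assumptions are illustrative.

```c
/* Rough delay: the compiler may unroll or transform this loop, so the
   actual cycles per iteration are not guaranteed. */
static void delay_rough(volatile unsigned n) {
    while (n--) { }
}

/* Pinned variant: the volatile inline asm forces at least one real nop
   per iteration and blocks the optimizer from deleting the loop body.
   Exact cycles per iteration still depend on the core and flash wait
   states, which is why truly cycle-exact code is written in assembly. */
static void delay_pinned(unsigned n) {
    while (n--) {
        __asm__ volatile ("nop");
    }
}
```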

-
They're written in a superset of C which was universally supported by all remotely-conforming general-purpose microcomputer compilers through the 1990s, but which hyper-modern revisionists now claim was never defined [notwithstanding that compilers in the 1990s were 100% consistent about many things not required by the Standard]. 1990s C got a reputation for speed because it minimized the need for boundary checks in cases where the platform's "natural" behavior would meet requirements. Hyper-modern C requires adding boundary checks in cases where the platform wouldn't otherwise care... – supercat May 06 '16 at 04:31
-
Can you please provide a link/description of the changes in hyper-modern C? I suspect compiler technology has evolved beyond what is expected, causing the machine code to not be a true representation of the written program. – nikhil_kotian May 06 '16 at 14:04
-
Given "int x,y;" with values 57 and INT_MAX, nearly all microcomputer implementations in 1990 would have 100% consistently yielded "0" for "x < y+1;". In 2000, it would not be uncommon for a compiler to arbitrarily return 0 or 1 in such a case unless code did something to explicitly coerce the expression into the range of "int". I had no objection to that change in semantics, provided that a compiler would interpret something like "x < (int)(y+1)" as coercing the latter value into range. Modern compilers, however, will use an expression like "x < y+1" as an indication... – supercat May 06 '16 at 14:56
-
...that tests like "y==INT_MAX" should be omitted elsewhere in the code. I regard the 2000 version as being a useful development, and the modern one as insane. If code would meet requirements whether the comparison yields 0 or 1, letting the compiler select 0 or 1 in Unspecified fashion may allow improved code generation. Further, if code needs wrapping semantics, specifying that explicitly will make code clearer. Under modern semantics, however, if y might be INT_MAX there's no way readable code can meet any kind of requirements while allowing a compiler to arbitrarily yield 0 or 1. – supercat May 06 '16 at 15:04
-
Further, on a two's-complement system, I would suggest that there is only one sensible meaning for "x < (int)(y+1)", yet the Standard requires that the code be written as "x < (int)(y+1u)" to yield predictable behavior [with the effect that if y was e.g. -2 it would be converted to UINT_MAX-1, then have 1 added yielding UINT_MAX, and then be converted back to "int" yielding -1]. IMHO, being able to have "(int)(i+1)" mean "add 1 to i, then add or subtract 2*INT_MAX+2 as needed to bring the result into the range of int" made code read much more sensibly. – supercat May 06 '16 at 15:08
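A hedged sketch of the comparison discussed in these comments: with y == INT_MAX, the signed addition in "x < y+1" overflows, which is undefined behavior in standard C, so a modern optimizer may assume it never happens; the unsigned form wraps explicitly. The conversion of the out-of-range result back to int is implementation-defined, though it behaves as shown on common two's-complement targets.

```c
#include <limits.h>
#include <stdio.h>

int main(void) {
    int x = 57, y = INT_MAX;

    /* Undefined behavior: y + 1 overflows a signed int, so a modern
       compiler may fold this comparison to 0 or 1, or assume y != INT_MAX
       and delete such checks elsewhere:
       printf("%d\n", x < y + 1);
    */

    /* Explicit wrapping: y is converted to unsigned, the addition wraps
       to INT_MAX + 1u, and the conversion back to int (implementation-
       defined, INT_MIN on two's-complement targets) gives a predictable
       result: 57 < INT_MIN is false. */
    printf("%d\n", x < (int)(y + 1u));   /* prints 0 */
    return 0;
}
```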