Right, so we have 8-bit, 16-bit and 32-bit microcontrollers in this world at the moment. All of them are often used. How different is it to program 8-bit and 16-bit microcontrollers? I mean, does it require different techniques or skills? Let's take Microchip for example. What new things does a person need to learn if they want to transition from 8-bit microcontrollers to 32-bit microcontrollers?
-
No. Certainly there are different concerns, but those are largely in the level of device-specific detail. For example, is unaligned word access permitted? (On ARM it is not - yet on x86 it is). This question is not really specific enough. – Chris Stratton Apr 17 '14 at 15:09
-
Wow guys, thanks for the answers. So there are actually very important differences that we need to take into consideration when programming 32-bit processors vs 8-bit processors. Here I was referring to C, as I think most people do not delve into assembly for programming, for reasons we all know very well. Thanks for the detailed responses, I really appreciate it. – quantum231 Apr 27 '14 at 20:52
-
With the 32-bit µCs there are a LOT more options and a LOT more registers that you have to get right. I guess it depends on what you are doing. That said, these days you can get a development board, compiler, debugger, and IDE for about $50. Back in the day that would cost close to $1000. – Feb 06 '15 at 13:28
5 Answers
In general, going from 8 to 16 to 32-bit microcontrollers means you will have fewer constraints on resources, particularly memory, and on the width of the registers used for arithmetic and logical operations. The 8-, 16-, and 32-bit monikers generally refer to both the size of the internal and external data buses and the size of the internal register(s) used for arithmetic and logical operations (there used to be just one or two, called accumulators; now there are usually register banks of 16 or 32).
I/O port sizes will also generally follow the data bus size, so an 8-bit micro will have 8-bit ports, a 16-bit will have 16-bit ports, etc.
Despite having an 8-bit data bus, many 8-bit microcontrollers have a 16-bit address bus and can address 2^16 or 64K bytes of memory (that doesn't mean they have anywhere near that implemented). But some 8-bit micros, like the low-end PICs, may have only a very limited RAM space (e.g. 96 bytes on a PIC16).
To get around their limited addressing scheme, some 8-bit micros use paging, where the contents of a page register determines one of several banks of memory to use. There will usually be some common RAM available no matter what the page register is set to.
16-bit microcontrollers are generally restricted to 64K of memory, but may also use paging techniques to get around this. 32-bit microcontrollers of course have no such restrictions and can address up to 4GB of memory.
Along with the different memory sizes is the stack size. In the lower end micros, this may be implemented in a special area of memory and be very small (many PIC16's have an 8-level deep call stack). In the 16-bit and 32-bit micros, the stack will usually be in general RAM and be limited only by the size of the RAM.
There are also vast differences in the amount of memory -- both program and RAM -- implemented on the various devices. 8-bit micros may only have a few hundred bytes of RAM, and a few thousand bytes of program memory (or much less -- for example the PIC10F320 has only 256 14-bit words of flash and 64 bytes of RAM). 16-bit micros may have a few thousand bytes of RAM, and tens of thousands of bytes of program memory. 32-bit micros often have over 64K bytes of RAM, and maybe 1/2 MB or more of program memory (the PIC32MZ2048 has 2 MB of flash and 512KB of RAM; the newly released PIC32MZ2064DAH176, optimized for graphics, has 2 MB of flash and a whopping 32MB of on-chip RAM).
If you are programming in assembly language, the register-size limitations will be very evident, for example adding two 32-bit numbers is a chore on an 8-bit microcontroller but trivial on a 32-bit one. If you are programming in C, this will be largely transparent, but of course the underlying compiled code will be much larger for the 8-bitter.
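To make that chore concrete, here is a sketch in C of what an 8-bit compiler effectively has to generate for a single 32-bit add (a real compiler emits add-with-carry instructions rather than this explicit loop; the function name is made up for illustration):

```c
#include <stdint.h>

/* What "a + b" on 32-bit values costs an 8-bit core, written out as
 * four 8-bit adds chained through a carry -- a 32-bit core does the
 * same work in a single ADD instruction. */
uint32_t add32_byte_at_a_time(uint32_t a, uint32_t b)
{
    uint32_t result = 0;
    uint8_t carry = 0;
    for (int i = 0; i < 4; i++) {
        uint8_t ai = (uint8_t)(a >> (8 * i));
        uint8_t bi = (uint8_t)(b >> (8 * i));
        uint16_t sum = (uint16_t)ai + bi + carry; /* add with carry in */
        carry = (uint8_t)(sum >> 8);              /* carry out         */
        result |= (uint32_t)(uint8_t)sum << (8 * i);
    }
    return result;
}
```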
I said largely transparent, because the size of various C data types may differ from one size of micro to another; for example, a compiler which targets an 8- or 16-bit micro may use "int" to mean a 16-bit signed variable, while on a 32-bit micro this would be a 32-bit variable. So a lot of programs use #defines to say explicitly what the desired size is, such as "UINT16" for an unsigned 16-bit variable.
If you are programming in C, the biggest impact will be the size of your variables. For example, if you know a variable will always be less than 256 (or in the range -128 to 127 if signed), then you should use an 8-bit type (unsigned char or char) on an 8-bit micro (e.g. PIC16), since using a larger size would be very inefficient. Likewise for 16-bit variables on a 16-bit micro (e.g. PIC24). If you are using a 32-bit micro (PIC32), then it doesn't really make any difference, since the MIPS instruction set has byte, word, and double-word instructions. However, on some 32-bit micros that lack such instructions, manipulating an 8-bit variable may be less efficient than a 32-bit one due to masking.
As forum member vsz pointed out, on systems where you have a variable that is larger than the default register size (e.g. a 16-bit variable on an 8-bit micro), and that variable is shared between two threads or between the base thread and an interrupt handler, one must make any operation (including just reading) on the variable atomic, that is, make it appear to be done as one instruction. This is called a critical section. The standard way to achieve this is to surround the critical section with a disable/enable interrupt pair.
So going from 32-bit systems to 16-bit, or 16-bit to 8-bit, any operations on variables of this type that are now larger than the default register size (but weren't before) need to be considered a critical section.
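A minimal sketch of such a critical section in C, where `disable_interrupts`/`enable_interrupts` are placeholders for whatever your part provides (e.g. cli()/sei() on AVR, or clearing GIE on PIC):

```c
#include <stdint.h>

/* Placeholder interrupt-control hooks -- on a real part these map to
 * instructions or SFR bits; stubs here so the sketch is self-contained. */
static void disable_interrupts(void) { /* e.g. cli() */ }
static void enable_interrupts(void)  { /* e.g. sei() */ }

/* 16-bit tick counter updated in an ISR.  On an 8-bit core the two
 * bytes are read by separate instructions, so an interrupt can fire
 * between them and hand back a torn value. */
static volatile uint16_t tick_count;

uint16_t read_ticks(void)
{
    uint16_t snapshot;
    disable_interrupts();   /* enter critical section         */
    snapshot = tick_count;  /* both bytes now read atomically */
    enable_interrupts();    /* leave critical section         */
    return snapshot;
}
```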
Another main difference, going from one PIC processor to another, is the handling of peripherals. This has less to do with word size and more to do with the type and number of resources allocated on each chip. In general, Microchip has tried to make the programming of the same peripheral used across different chips as similar as possible (e.g. timer0), but there will always be differences. Using their peripheral libraries will hide these differences to a large extent. A final difference is the handling of interrupts. Again there is help here from the Microchip libraries.

-
It might be worth noting that at the assembly language level, 8-bit processors tend to have fewer registers and less orthogonal instructions (AVR is a more RISCy exception), a consequence of design constraints when they were developed. 32-bit processors tend to be RISC descendants (Renesas' RX, a modern CISC, is one exception, and Freescale's ColdFire descends from m68k). – Apr 17 '14 at 16:16
-
To not start a new answer just for this addition, I think it's important to add that the transition from 32-bit to 16, or from 16 to 8, can cause nasty surprises, as arithmetic stops being atomic. If you add two 16-bit numbers on an 8-bit microcontroller, and use them in an interrupt, you have to take care to make them thread-safe, otherwise you might end up adding only half of it before the interrupt triggers, resulting in an invalid value in your interrupt service routine. – vsz Apr 17 '14 at 19:20
-
@vsz -- Good point, forgot about that one. Generally one should disable interrupts around any access (including just reading) of any volatile variable that is larger than the default register size. – tcrosley Apr 17 '14 at 20:51
-
Is it true that 32-bit µCs usually have 32-bit I/O interfaces? I think that anyway it's more commonly just serial communication. – clabacchio Dec 09 '14 at 12:28
-
@clabacchio My experience is that all of the I/O port *registers* are defined as 32-bit, but sometimes the top bits (16-31) are unused, so a parallel port is still 16 physical pins. In other cases, like a RTCC register, all 32 bits are used. – tcrosley Dec 09 '14 at 15:40
-
With regard to "number of registers", the PIC is an interesting case since most instructions are limited to using one particular register (W) as one of the source operands, but can directly use dozens or hundreds of registers as the other without needing intermediate or following load/store operations. – supercat Feb 06 '15 at 16:48
-
As a general rule of thumb, use `stdint.h` where available. For example, the fastest unsigned integer of at least 8 bits would be `uint_fast8_t`. It could be 16 or 32 bits under the hood, but it's guaranteed to be at least 8 bits. So, on those 32-bit micros where 32-bit operations are faster than 8-bit ones, it will be 32-bit, whereas on 8-bit micros where 8-bit operations are faster it will be 8-bit. – Piper McCorkle Dec 24 '17 at 19:14
One common difference between 8-bit and 32-bit microcontrollers is that 8-bit ones often have a range of memory and I/O space which may be accessed in a single instruction, regardless of execution context, while 32-bit microcontrollers will frequently require a multi-instruction sequence. For example, on a typical 8-bit microcontroller (HC05, 8051, PIC-18F, etc.) one may change the state of a port bit using a single instruction. On a typical ARM (32-bit), if register contents were initially unknown, a four-instruction sequence would be needed:
ldr   r0, =GPIOA
ldrh  r1, [r0, #GPIO_DDR]
orr   r1, #64
strh  r1, [r0, #GPIO_DDR]
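The C source behind a sequence like that is an ordinary read-modify-write; the register layout below is hypothetical, and it is the compiler that expands it into a load/OR/store sequence on ARM versus a single bit-set instruction on the 8-bit parts:

```c
#include <stdint.h>

/* Hypothetical memory-mapped GPIO register block; real addresses and
 * layouts vary by vendor. */
typedef struct {
    volatile uint16_t ddr;   /* data-direction register */
} gpio_t;

/* Make pin 6 an output by setting bit 6 of the direction register.
 * On an 8-bit HC05/8051/PIC-18F this can compile to one bit-set
 * instruction; on a classic ARM it becomes load/OR/store. */
static void make_pin6_output(gpio_t *port)
{
    port->ddr |= (1u << 6);
}
```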
In most projects, the controller will spend the vast majority of its time doing things other than setting or clearing individual I/O bits, so the fact that operations like clearing a port pin require more instructions often won't matter. On the other hand, there are times when code will have to "bit-bang" a lot of port manipulations, and the ability to do such things with a single instruction can prove quite valuable.
On the flip side, 32-bit controllers are invariably designed to efficiently access many kinds of data structures which can be stored in memory. Many 8-bit controllers, by comparison, are very inefficient at accessing data structures which aren't statically allocated. A 32-bit controller may perform in one instruction an array access that would take half a dozen or more instructions on a typical 8-bit controller.
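As a sketch (with a hypothetical struct), this is the kind of access a 32-bit core with base-plus-scaled-index addressing handles in a single load, while a typical 8-bit core must first compute the address with explicit multiply and add steps before it can even start the multi-byte load:

```c
#include <stdint.h>

typedef struct {
    uint16_t raw;         /* raw ADC reading        */
    int16_t  calibrated;  /* offset-corrected value */
} sample_t;

/* samples[i].calibrated requires computing base + i*4 + 2.  A 32-bit
 * core folds that into one load instruction's addressing mode; an
 * 8-bit core does the arithmetic in software, then loads two bytes. */
int16_t get_calibrated(const sample_t *samples, uint8_t i)
{
    return samples[i].calibrated;
}
```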

-
I assume you meant "bit-bang". Might be worth noting that ARM supports bit band regions (where word operations are single bit operations) and the MCU Application Specific Extension for MIPS provides Atomically Set/Clear Bit within Byte instructions (ASET/ACLR). – Apr 17 '14 at 15:56
-
@PaulA.Clayton: I haven't really looked at the MIPS in the last 20 years; as for the bit-band regions, I've never figured out a way to use them in reasonable-looking code, and even if I could use them they'd save only one instruction unless one used some insane programming trickery, in which case they might save two [load R0 with an even or odd address based upon whether the bit should be set or cleared, and adjust the offset on the store instruction as appropriate to compensate]. BTW, do you have any idea why the bit-band region uses word addresses? – supercat Apr 17 '14 at 16:06
-
@supercat: Word addressing lets you access the bit-band regions from C or C++ via pointer subscripting (`region_base[offset]`) – Ben Voigt Apr 17 '14 at 20:58
-
@BenVoigt And why could one not do that with byte addressing? (Perhaps one possible reason would be to remove the expectation/hope that two-bit and four-bit operations would be supported.) – Apr 17 '14 at 21:03
-
@BenVoigt: Having to scale the bit number by a factor of 4 will often cost an extra instruction. Actually, what I would have liked to have seen, rather than a bit-band area, would be a couple of areas which would sit at a fixed offset relative to "normal" memory accesses, but specify that writes to one area will if possible only "set" bits, and writes to the other will only "clear" bits. If the bus had separate "write-ones-enable" and "write-zeroes-enable" control bits, one could achieve the things that bit-banding allows, but in many cases avoid read-modify-write. – supercat Apr 17 '14 at 21:06
-
@paul: the bit banding I've seen defines `base [offset] = value;` as `register = (register & ~offset) | (value & offset)`. That only works if the addressing is as wide as the register. Of course, word addressing limits the range on offset. Probably it is just so that bit banding accesses use the store instructions for the native word size. – Ben Voigt Apr 17 '14 at 21:10
-
@supercat: I've seen ARM implementations with set-only and clear-only features in the memory map of many peripherals. – Ben Voigt Apr 17 '14 at 21:12
-
@BenVoigt: Many individual peripherals have such features, but they all implement them differently; also, RAM does not have such a feature. To add such a feature to a RAM array would from what I understand merely require a change to the column-control logic; no additional per-cell logic would be required. I don't know of any implementations that work that way, though. – supercat Apr 17 '14 at 21:15
The biggest practical difference is the amount of documentation, really, needed to entirely understand the entire chip. There are 8-bit microcontrollers out there that come with almost 1000 pages of documentation. Compare that to roughly 100-300 pages' worth for a 1980s 8-bit CPU and the popular peripheral chips it would be used with. A peripheral-rich 32-bit device will require you to go through 2,000-10,000 pages of documentation to understand the part. The parts with modern 3D graphics edge toward 20k pages of documentation.
In my experience, it takes about 10x as long to learn everything there is to know about a given modern 32-bit controller as it does for a modern 8-bit part. By "everything" I mean that you know how to use all of the peripherals, even in unconventional ways, and know the machine language, the assembler the platform uses as well as other tools, the ABI(s), etc.
It is not inconceivable at all that many, many designs are done with partial understanding. Sometimes that is inconsequential, sometimes it isn't. Switching platforms has to be done with the understanding that there will be a short- and mid-term price in productivity that you pay for the perceived productivity gains of a more powerful architecture. Do your due diligence.

I personally wouldn't worry too much about upgrading (8-bit -> 32-bit) within the same µC family when you're increasing specs across the board. Generally, I don't do anything out of the norm with the data types, since that can be hard to maintain down the road.
Downgrading a device code is a different story.

-
The size of the data types is determined by the compiler, not the processor architecture. An 8-bit processor can have 32-bit ints, even though it will take multiple instructions to manipulate them. – Joe Hass Apr 18 '14 at 01:30
-
@JoeHass: A compiler for an 8-bit processor *could* define `int` to be 32 bits, or even 64 for that matter, but I'm unaware of any existing 8-bit compilers which actually *do* define `int` to be larger than 16 bits, or promote 16 bit values to anything larger. – supercat Feb 06 '15 at 16:55
For one, 32-bit MCUs will eat up a whole lot more power. And they require more support circuitry.
One does not really transition to 32-bits from 8-bits... You will keep using both, often together. The bottom line is you should use (and learn) whatever is appropriate for the job. Learn ARM because well it rocks the embedded world right now and will keep on doing it. Also learn AVR or PIC because they are awesome board controllers.
You will probably experience as much distress switching from AVRs to ARMs as you would from ARM to x86 anyways, the size of the bus really doesn't make that much of a difference. All the extra advanced hardware does though. Going from standard interrupts to a vectored interrupt array with 6 priority levels will be much harder than figuring out how to count to four billion.

-
I don't know if it's accurate to claim 32-bit MCUs are intrinsically more power hungry. At least one company's ([energy micro](http://www.energymicro.com/)) entire product line is ultra-low-power MCUs, and they're all 32-bit ARM core based. – Connor Wolf Apr 18 '14 at 01:04
-
Just worked out an stm32l1 circuit that should run for 7 years on a CR2032 – Scott Seidman Apr 18 '14 at 01:20
-
Can you justify the comment that a 32-bit MCU needs more "support circuitry"? I think you are expressing several unjustified opinions here. – Joe Hass Apr 18 '14 at 01:32
-
For the simplest 8-bit microcontroller circuits, all you really need is a ceramic resonator (or none at all with an internal RC oscillator) and a single more or less stable power source. For the simplest 32-bit microcontrollers you will often need at least 2 supplies (with some exceptions; some ARM7s have internal regulators and run on a single 3.3V source) and a lot more bypass capacitors. More I/Os also means more power. – Hugo Apr 18 '14 at 01:59
-
Also I would say yes, it is in fact inevitable that a 32-bit chip will need more power than an 8-bit chip, just because of the transistor count and larger memories. However, any CPU architecture will inevitably be more efficient than others at some tasks. – Hugo Apr 18 '14 at 02:09
-
As for your stm32l1, I see your 300uA standby mode and raise you the mega1284p's 100uA standby mode @ 6MHz. – Hugo Apr 18 '14 at 02:22
-
@Hugo - I raise your 100 uA standby with the EFM32's 600 **nano** amp hibernate mode. – Connor Wolf Apr 18 '14 at 09:33
-
Also, your vectored interrupts comment doesn't make much sense, since you can get multiple priority levels in 8-bit microcontrollers (see Atmel xmega MCUs, which have 3 priority levels), and having vectored interrupts is irrelevant when every hardware device has its own independent vectors anyways. – Connor Wolf Apr 18 '14 at 09:35
-
I'm using a 32-bit Cortex-M0 processor to control a smart battery charger for an electric vehicle. It uses a single 3.3 V supply. It has an internal oscillator and PLL so I don't even need a crystal. I'm using a 28-pin DIP package but I can get a Cortex-M0 in an 8-pin DIP if I want. How can that be more complex than a typical PIC or AVR? – Joe Hass Apr 18 '14 at 13:55