4

In embedded programming, my understanding is that we do not use dynamic memory allocation, since we are working with a fixed-resource system and the code needs to be compiled for worst-case memory usage.

That being said, what if I wanted a user to enter a number between 1 and 5? This would represent how many objects of some class are created. An example would be a radio transceiver where you want to have multiple frequencies or data pipes listening simultaneously.

Should you create an array of pointers to these radio objects as part of your initialization, and then of course only use the number the user specifies and sets up? I am not sure how else to make sure the compiler plans for the worst case, although compilers are smarter than I am.
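To make this concrete, here is a rough sketch of the kind of thing I am imagining: size everything statically for the worst case and only configure what the user asks for. The `Radio` class, `MAX_PIPES`, and `init_radios()` are made-up names, not any particular driver's API.

```cpp
#include <cstdint>

// Hypothetical radio object; the real class and its configuration
// interface would come from the actual transceiver driver.
class Radio {
public:
    void configure(std::uint8_t channel) { channel_ = channel; }
    void listen() { /* start receiving on channel_ */ }
private:
    std::uint8_t channel_ = 0;
};

constexpr std::uint8_t MAX_PIPES = 5;   // the worst case the linker must plan for

static Radio radios[MAX_PIPES];         // all five exist in static storage, sized at link time
static std::uint8_t active_pipes = 0;   // how many the user actually asked for

void init_radios(std::uint8_t requested)   // 'requested' is the user's input, 1..5
{
    active_pipes = (requested > MAX_PIPES) ? MAX_PIPES : requested;
    for (std::uint8_t i = 0; i < active_pipes; ++i) {
        radios[i].configure(i);
        radios[i].listen();
    }
    // radios[active_pipes] .. radios[MAX_PIPES - 1] simply sit unused.
}
```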

ocrdu
Steve4879
  • Yes. You would have an array of statically allocated objects. Each object requires X bytes for its variables, and the linker has been told how much RAM your micro has, so it's simple arithmetic to determine whether you have enough RAM at compile time. – Kartman Aug 15 '22 at 02:35
  • Keep in mind that as soon as you're using higher-level languages, the term "static memory allocation" becomes fuzzy, to say the least. With each call to a function, for example, space for variables is dynamically allocated on the stack (except for statics or volatiles)... With OOP it gets even more complicated in the background. – kruemi Aug 15 '22 at 04:40
  • It is unclear what the question is about. Is it C++, dynamic allocation, or what? You can use dynamic allocation or static allocation. You can have a list of C++ object instances and create any number you want within the limits of available memory. Or allocate a fixed number of instances statically and use only the wanted amount. You can write dynamic object-oriented code in C and static non-object-oriented code in C++ if you want to avoid it, so the choice of C or C++ is irrelevant; it's about the mindset with which you write the code. – Justme Aug 15 '22 at 04:44
  • If the objects are fixed size, you could more efficiently declare a bounded array of the objects themselves; no need for pointers. –  Aug 15 '22 at 10:42

2 Answers

10

Consider the alternative: what else would you use the memory for?

If nothing, then it doesn't matter how you allocate it; fill a whole bunch of extra memory with gibberish, for all it matters!

Or, if you have a number of things to allocate and need to fit the maximum of all of them in the worst-case condition: do that, with static allocations at the maximums.

If you can't fit the maximum in the worst case, and can't upgrade to a bigger MCU: you will have to restrict how much can be allocated, in whatever combinations would overallocate. For example, maybe you can only have 3 or 4 pipes open, while also doing -- I don't know, an adjustable (in this case unusually long) audio buffer or something?

Put another way: don't be lazy. malloc() lets you be lazy, in certain ways; and you can get away with it a lot of the time on the PC. Even ignoring the return (error) value because it "basically" never happens.

Force yourself to think about what resources you will need in the application, and divide them up suitably.

This also leads to the thought process: "what if I do use malloc()?" Well, you can put whatever you want there [on the heap], and obviously you should be responsible about allocating and freeing objects (and avoiding use-after-free); but since you will be resource constrained (i.e., given the above assumption that maximum static allocations won't fit), you must use the error condition (out of memory) to restrict further allocations, assigning priority (i.e., which things you allocate first) as needed.

And mind that you may have to deal with fragmentation. For something like a radio, it's probably fine to just shut the whole thing down for a few milliseconds while setting up buffers and everything. You could have an array of allocated objects (just pointer arrays, fairly compact) and simply go through and dump them all (free()), then allocate the new set. No memory leaks, nothing is ever left hanging. (Or even re-init the allocator itself, literally forgetting about what allocations were assigned.) Whereas if the allocations need to be persistent and mutable (sometimes you add/remove a pipe here or there, other times you adjust the buffers, or load some files, or...), you may need quite a lot more memory than one would naively assume, due to fragmentation costing allocation efficiency.
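As a rough sketch of that dump-everything-and-rebuild idea (and of actually honoring the out-of-memory condition from the previous paragraph); the Pipe struct, its size, and the function names here are made up for illustration:

```cpp
#include <cstdlib>

// Made-up pipe object; the real thing would hold radio/buffer state.
struct Pipe { unsigned channel; char buffer[128]; };

constexpr unsigned MAX_PIPES = 5;
static Pipe* pipes[MAX_PIPES] = {};   // compact array of pointers

// Tear down every existing allocation, then build the new configuration.
void reconfigure(const unsigned* channels, unsigned count)
{
    for (unsigned i = 0; i < MAX_PIPES; ++i) {   // dump them all: nothing left hanging
        std::free(pipes[i]);                     // free(nullptr) is a no-op
        pipes[i] = nullptr;
    }

    if (count > MAX_PIPES) count = MAX_PIPES;
    for (unsigned i = 0; i < count; ++i) {
        pipes[i] = static_cast<Pipe*>(std::malloc(sizeof(Pipe)));
        if (pipes[i] == nullptr) break;          // out of memory: stop, don't ignore it
        pipes[i]->channel = channels[i];
    }
}
```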

malloc() is a rather heavyweight solution for an otherwise-static problem, anyway; mind that you might not use it at all, as such, but your own handy lightweight allocator instead. For example, if you have two data types to allocate, you might allocate a char array (or whatever) of the total size as your not-quite-heap, and just place objects into it starting from the bottom and top ends respectively, making sure objects don't overlap once it reaches capacity.
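A minimal sketch of that two-ended scheme; the pool size and function names are arbitrary choices for illustration:

```cpp
#include <cstddef>
#include <cstdint>

// One fixed byte pool: "type A" objects grow upward from the bottom,
// "type B" objects grow downward from the top.
constexpr std::size_t POOL_SIZE = 1024;
alignas(std::max_align_t) static std::uint8_t pool[POOL_SIZE];

static std::size_t bottom = 0;          // next free byte for bottom-end objects
static std::size_t top    = POOL_SIZE;  // start of the top-end region

static std::size_t align_up(std::size_t n)
{
    constexpr std::size_t a = alignof(std::max_align_t);
    return (n + a - 1) & ~(a - 1);      // keep every allocation suitably aligned
}

void* alloc_from_bottom(std::size_t bytes)   // e.g. pipe objects
{
    bytes = align_up(bytes);
    if (bottom + bytes > top) return nullptr;   // the two ends would overlap
    void* p = &pool[bottom];
    bottom += bytes;
    return p;
}

void* alloc_from_top(std::size_t bytes)      // e.g. buffers
{
    bytes = align_up(bytes);
    if (top - bottom < bytes) return nullptr;   // the two ends would overlap
    top -= bytes;
    return &pool[top];
}
```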

Or don't even use separate types, but a union of both (or a superclass, since this is C++), and just ignore the wasted space / padding in the smaller type. The advantage is that, by allocating objects of maximal size, you can free and reassign slots arbitrarily, with no worry of fragmentation. Or rather, the fragmentation is there, since you're always wasting the padding amount, but because everything stays organized (fixed size), it will never spiral out of control the way general-purpose allocation can. Obviously, this is most reasonable when the objects have similar sizes -- maybe you need 50 bytes for a pipe, 36 bytes for a file handle, etc.; whereas you wouldn't be able to (or want to, at least) allocate up-to-2048-byte buffers alongside those same objects, say. You might want to solve such problems by using separate pools instead -- which is to say, you can always mix and match static, simple-allocator, and general-malloc() methods, as long as you keep track of them all properly.
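And a minimal sketch of the fixed-size-slot version, using the example sizes above; the Pipe and FileHandle types and the slot count are made up:

```cpp
#include <cstddef>

// Every slot is as big as the largest union member, so slots can be
// freed and reused in any order without fragmentation.
struct Pipe       { char data[50]; };
struct FileHandle { char data[36]; };

union Slot {
    Pipe       pipe;
    FileHandle file;   // smaller member: the difference is the accepted waste
};

constexpr std::size_t NUM_SLOTS = 8;
static Slot slots[NUM_SLOTS];
static bool in_use[NUM_SLOTS] = {};

Slot* slot_alloc()
{
    for (std::size_t i = 0; i < NUM_SLOTS; ++i) {
        if (!in_use[i]) { in_use[i] = true; return &slots[i]; }
    }
    return nullptr;    // pool exhausted: handle it, don't ignore it
}

void slot_free(Slot* s)    // s must point into the slots[] array
{
    if (s != nullptr) in_use[s - slots] = false;
}
```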

Disclaimer: this isn't much of an EE topic; it would be better placed on CS or Stack Overflow. And I'm just a humble C programmer; I rarely use anything more than static as it is. (Feel free to comment on or edit this post for correctness.)

Tim Williams
  • I was stuck between CS and EE, as CS usually does not seem to cover embedded system programming, but it does not fit solely in EE either. But in embedded I was under the impression that you do NOT want to use malloc(). Not that it will break anything, but just the execution cost and the fact that you have a fixed hardware system, unlike a PC. Thank you for your thorough answer! – Steve4879 Aug 15 '22 at 03:32
  • @Steve4879 It depends on what you mean by embedded. Using malloc() may be perfectly fine on an ARM that runs an RTOS and has a megabyte of RAM. It certainly is OK on a Raspberry Pi running Linux with gigabytes of RAM. On a smaller PIC or AVR it may not even exist if they simply have no memory. But even 8-bit bare-metal AVRs with a few hundred or thousand bytes of memory can use C++ and dynamic allocation; just look at Arduino. – Justme Aug 15 '22 at 04:52
  • @Steve4879 There are no hard-set rules, only guidelines, unless the device outright can't do it. – DKNguyen Aug 15 '22 at 05:52
  • In the re-initialization case you still don't need `malloc` as you can use an arena allocator instead for less overhead. – user253751 Aug 16 '22 at 14:13
  • @Justme people using std::string on Arduino gives me the heebie jeebies. – user253751 Aug 16 '22 at 14:14
3

Dynamic allocation in embedded systems is only an issue if:

  1. You can't allocate all the objects you need, though this would be an issue for statically allocated objects as well.
  2. You are repeatedly allocating and de-allocating objects. The memory tends to become Swiss cheese, and then the memory manager doesn't have enough contiguous free space to allocate an object, even though there is enough total free space.

The latter tends to happen because the memory managers in embedded systems are simple in nature, though they are getting better. The more complicated / sophisticated they are, the more space they take up in flash, the longer they take to run, and they may have a cleanup/consolidation cycle to contend with. So it's a balance, like everything in engineering.

I've worked on many projects that dynamically allocate objects based on some variable that can't be known at compile time. But they tended to be objects that were created at or near power-up and then kept for the life of the run. See #2 above.

As an example of the complexity trade-off, here is the documentation from FreeRTOS:

heap_1 - the very simplest, does not permit memory to be freed.

heap_2 - permits memory to be freed, but does not coalescence adjacent free blocks.

heap_3 - simply wraps the standard malloc() and free() for thread safety.

heap_4 - coalescences adjacent free blocks to avoid fragmentation. Includes absolute address placement option.

heap_5 - as per heap_4, with the ability to span the heap across multiple non-adjacent memory areas.
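The scheme is selected by compiling exactly one of the heap_n.c source files into the build, and application code then calls FreeRTOS's portable allocator instead of the standard library directly. A rough sketch of what that looks like (the buffer size and error handling here are only illustrative):

```cpp
#include "FreeRTOS.h"   // declares pvPortMalloc() / vPortFree()

void example(void)
{
    // For heap_1, heap_2, and heap_4 the total heap is fixed at build time
    // by configTOTAL_HEAP_SIZE in FreeRTOSConfig.h, so the worst case is
    // still known up front.
    void *buf = pvPortMalloc(128);
    if (buf == NULL) {
        // Out of heap: handle it rather than ignoring it.
        return;
    }

    // ... use buf ...

    vPortFree(buf);     // freeing is not actually supported under heap_1
}
```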

Aaron
  • Allocation time--and more specifically, how deterministic that timing is--can also be an issue for some embedded systems, even if there is still enough contiguous memory for a particular allocation. If you're just routing IP packets, a few extra milliseconds now and then is probably fine, but people tend to be less forgiving about, for example, audible audio glitches, industrial control system instability, etc. – Urausgeruhtkin Aug 15 '22 at 19:18
  • That seems off-subject from the OP. Be that as it may, memory access is always the same speed on the same chip (barring defects), e.g. 0x1000 takes the same time to access as 0x120000. If the allocation is done up front, then the issues you describe won't be an issue. – Aaron Aug 16 '22 at 03:02
  • I didn't mean access time, but rather the time it takes the allocator to find the appropriate space to allocate, in particular if it has to walk a tree or list structure, which generic library implementations may well do. It seemed like you knew this since you brought up alternate implementations in FreeRTOS, but your point #2 still suggests that it is only about contiguous memory space. While performing allocations at startup is a common tactic to avoid that issue, that doesn't change the fact that it is an issue. – Urausgeruhtkin Aug 16 '22 at 12:57