
Let's say you are starting an embedded project with some known functionality. When you select a microcontroller, how do you decide how much RAM you need?

Do you use a development board, code your project first, see how much memory you have used, and then select a microcontroller that fits that memory?

Do you just pick a beefy microcontroller for a prototype and then scale down after you have a working product?

Do you pick something you are sure will be enough and, if you run out of space, upgrade to a part with a higher memory density; otherwise, you keep the existing microcontroller?

What is considered to be good practice?

efox29
  • It seems to me that it should be possible, from an information-theoretic point of view, to estimate the RAM requirement to within an order of magnitude (dimensional-reasoning style) from the task specification. Hmmm... – Frames Catherine White Nov 26 '14 at 01:07
  • If you use libraries, you can research their memory footprint. With your own code you have to go with experience. Compare the new project to old ones and determine if you expect it to be bigger or smaller. – jwsc Apr 26 '18 at 14:19

4 Answers


Personally for hobby projects I tend to use the most powerful microcontroller in the family with the right footprint. I then develop the PCB, write some code and produce a prototype.

This has the advantage that I know the small number of microcontrollers fairly well, so I can rapidly prototype without having to read a whole datasheet. I also have breakout boards and code templates for them.

If it works and I'm making more than a handful I then buy the cheapest microcontroller that has the right peripherals and enough memory for whatever I coded previously. This can be annoying if internal registers change (happens on the PIC) or if either microcontroller has extra peripherals which need to be disabled to make code work.

However, for production purposes this would let you shave a fair amount off the cost of each unit.
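A trick that can make that later swap less painful is to keep the device-specific register accesses behind one small header, so only that file changes when you move to the cheaper part. The sketch below is only an illustration under assumed names; the target macros, addresses and the "unused peripheral" example are invented placeholders, not registers from a real datasheet.

    /* board_io.h - hypothetical sketch of hiding device-specific details so a
     * port between family members only touches this file. All names and
     * addresses below are invented placeholders. */
    #ifndef BOARD_IO_H
    #define BOARD_IO_H

    #include <stdint.h>

    #if defined(TARGET_BIG_MCU)
      /* prototype part: extra peripherals, registers at different addresses */
      #define LED_PORT  (*(volatile uint8_t *)0x0F83u)
      #define ADC_CTRL  (*(volatile uint8_t *)0x0FC2u)
      static inline void disable_unused_peripherals(void) { ADC_CTRL = 0; }
    #elif defined(TARGET_SMALL_MCU)
      /* production part: no ADC, LED port lives at another address */
      #define LED_PORT  (*(volatile uint8_t *)0x0F80u)
      static inline void disable_unused_peripherals(void) { /* nothing to do */ }
    #else
      #error "Define TARGET_BIG_MCU or TARGET_SMALL_MCU"
    #endif

    #endif /* BOARD_IO_H */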

David
  • For my personal projects I tend to use a similar approach. That same method sort of creeps into the office as well with me. It's not wrong, it works, but are there better ways, etc.? Appreciate the input! – efox29 Nov 25 '14 at 09:12
  • There will definitely be better ways in a "real" environment, let's wait for other answers! – David Nov 25 '14 at 09:44
  • Absatively. Develop in a big sandbox, and cut down later. The time you will save will more than cover the extra $4 you spend per microcontroller to develop on. This works for more than hobby-level stuff - and in fact is even more important. Picture 12 people waiting around for a shift to a larger controller to happen instead of one!! – Scott Seidman Nov 25 '14 at 13:36

Of course, for a single homemade prototype it may be a good recommendation to start with the most powerful of all compatible micros and scale down afterwards.

However, if you want to win a bid, you have to tell your customer a price before you have the money to implement anything.

Therefore, good practice is to write down some kind of specification before you start programming. You know what you want to do, and you should write down how you are going to do it.

This "how" does also include thinking about a software design, answering questions like:

  • Do you need an Operating System? Which one? What resources does it need?
  • Do you want to have a layered architecture? This requires interfaces, which may consume RAM.
  • What libraries are already available and useful/necessary for your purpose, and how much memory do they need (a good library documentation answers this based on at least one reference build)?
  • What structures and variables do you have to implement for your own drivers and your application?

Summing up all those values gives you a rough estimate. How far you can trust it depends on how detailed your analysis is, and it depends on your experience :-)
Adding a margin of at least 30-50% on top of your estimate is surely a good idea.
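As a minimal sketch of what such a sum can look like (every number below, including the 48 KiB target, is an invented illustration rather than a figure from this answer), you can even let the compiler check the estimate plus margin against the candidate part:

    /* Hypothetical RAM budget - all sizes are invented for illustration (C11). */
    #define RAM_TOTAL        (48u * 1024u)   /* RAM of the candidate part        */
    #define RTOS_FOOTPRINT   ( 8u * 1024u)   /* from the RTOS documentation      */
    #define LIB_BUFFERS      ( 6u * 1024u)   /* e.g. TCP/IP or filesystem stacks */
    #define APP_DATA         (10u * 1024u)   /* your own structures and arrays   */
    #define STACKS_AND_HEAP  ( 6u * 1024u)   /* task stacks, ISR stack, heap     */

    #define RAM_ESTIMATE (RTOS_FOOTPRINT + LIB_BUFFERS + APP_DATA + STACKS_AND_HEAP)

    /* Keep the 30-50% margin suggested above on top of the raw estimate. */
    _Static_assert(RAM_ESTIMATE + RAM_ESTIMATE / 2u <= RAM_TOTAL,
                   "estimate plus 50% margin does not fit the candidate part");

If the assertion fires, you either trim the design or move to the next larger part in the family before committing to the PCB.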

Once your product is finished and you have around 80-90% of the RAM in use, you can be quite sure that your selection was right - at least regarding RAM.
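One way to actually measure that figure on real hardware is to take the static data from the linker map and check worst-case stack use with a high-water mark ("stack painting"). The sketch below is a generic illustration: the linker symbol names are assumptions that vary by toolchain, and a real implementation would paint only below the current stack pointer.

    #include <stdint.h>

    /* Assumed linker symbols marking the stack region - names vary by toolchain. */
    extern uint32_t __stack_start__;   /* lowest address of the stack  */
    extern uint32_t __stack_end__;     /* highest address of the stack */

    #define STACK_PAINT 0xDEADBEEFu

    /* Fill the stack region with a known pattern early at boot.
     * (In practice, stop a safe distance below the current stack pointer.) */
    void stack_paint(void)
    {
        for (volatile uint32_t *p = &__stack_start__; p < &__stack_end__; ++p)
            *p = STACK_PAINT;
    }

    /* Later (or from a debugger), see how far the pattern has been overwritten. */
    uint32_t stack_high_water_bytes(void)
    {
        const uint32_t *p = &__stack_start__;
        while (p < &__stack_end__ && *p == STACK_PAINT)
            ++p;
        return (uint32_t)((uint8_t *)&__stack_end__ - (uint8_t *)p);
    }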

mic
  • Re: "80-90% RAM in use". Standard practice is to ensure that you only use a maximum of 50% utilization in both CPU and memory to be able to accommodate future upgrades and bug fixes. – Dunk Nov 25 '14 at 15:04
  • @Dunk: Depends on the business. In Automotive, 80% of all resources (CPU, RAM, Flash) at SOP is commonly accepted. In, for instance, cheap consumer electronics it may be even more: How likely is it to have an upgrade in a system with a lifetime of only 2-3 years? – mic Nov 25 '14 at 15:14
  • @Dunk: I could be wrong, but it sounds like you're used to desktop-style software with dynamic memory and all the uncertainties that go along with that. The vast majority of embedded applications allocate everything statically. Guaranteed no memory leaks. Then you can use exactly 100% and be fine forever as long as you don't modify it. Of course, that only works if you have a separate stack from your working RAM or if you know exactly how the stack will behave at all times. It's a good idea to leave some space for that, but 10-20% is easily enough for what I've done. – AaronD Nov 25 '14 at 15:38
  • The bigger problems in embedded software are rogue pointers, buffer overruns, divide by zero, and things like that. Some MCUs can throw exceptions in hardware, similar to interrupts, but everything I've used will cheerfully carry on as if nothing ever happened. There will be a result of some kind, but it's probably not what you expected, and so you'll have to check for that. Some things, like arithmetic over/underflow, are easy to check for and fix immediately; other things, like rogue pointers, can go completely unnoticed until a function that worked for years decides to blow up. – AaronD Nov 25 '14 at 15:47
  • It's the developers' challenge and responsibility to ensure that NOTHING unexpected happens ;-) Usually, 80% leaves enough margin to bring bug fixes in, because a fix doesn't change code size (so much). If it does, then testing before SOP was not good enough. Cost and time pressure increase the risk of that. But cost pressure also requires keeping the microcontroller price as low as possible. Upgrading the software is a different thing, which may require a greater margin (e.g. a new codec for the sound system). – mic Nov 25 '14 at 16:03
  • OK, I'll retract my 50% comment. I've worked at 5 different companies on probably 50 projects and 20 "unique" customers and all have used the 50% rule. So that's why I apparently wrongly assumed that it was fairly standard practice. – Dunk Nov 25 '14 at 17:34
  • Whether you want to aim for an 80% target or a 50% target will depend on your customer. With a fixed spec & only bug fixes needed, 80% is fine. Unreliable spec, expected feature creep and a large enough margin to allow it might lead you to pay the extra for more headroom. We once ended up buying 2x as many micro-controllers as we needed and selected the ones that would overclock enough to give us the performance we needed; that was much cheaper than a PCB redesign for a more powerful chip. – Mark Booth Nov 25 '14 at 18:15

If only it were possible to code your embedded system first and then build the hardware. That would make everyone's life easier. Unfortunately, that also means your deadlines are out the window. Typically the hardware has to be designed long before the software is done because hardware parts frequently have long lead times.

Thus, embedded software developers will usually need to estimate their program's memory and CPU needs. Your first step should be to try to convince the hardware guys to give you the most powerful microcontroller/CPU with the most RAM possible. That seldom works, because they have requirements and goals of their own, but every once in a while you get lucky.

If that doesn't work, the next thing you'd do is a high-level software design, breaking the system down into modules by functionality. You'd then estimate the lines of code for each function of each module in the system. You can then use a formula to convert lines of code into a ballpark estimate of code memory. You would also investigate any unusual memory requirements (like large arrays) and add an estimate to accommodate them. Then add some percentage on top of that total to cover anything you missed. Then double it to meet the typical 50% utilization requirement.
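As a toy illustration of that arithmetic (the module names, line counts and the 4-words-per-line factor are all made-up assumptions, not figures from this answer), the whole estimate fits in a few lines of C:

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical per-module estimates of lines of C code. */
        const int loc[] = { 300 /* comms */, 500 /* control */, 400 /* UI */ };
        const int words_per_loc    = 4;     /* rough factor for compiled C      */
        const int big_tables_words = 2000;  /* unusual items, e.g. const tables */

        int total_loc = 0;
        for (unsigned i = 0; i < sizeof loc / sizeof loc[0]; ++i)
            total_loc += loc[i];

        int estimate = total_loc * words_per_loc + big_tables_words;
        estimate += estimate / 4;           /* some percentage for missed items */
        estimate *= 2;                      /* 50% utilization requirement      */

        printf("Budget roughly %d words of code memory\n", estimate);
        return 0;
    }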

Yes, it takes time. Yes, it is necessary to jump through all the hoops, because changing the hardware is really hard after it's built.

Dunk
  • Where can we find the formula to convert lines of code to code memory? – EasyOhm Nov 26 '14 at 10:37
  • That one depends on what language and compiler you use. If you use assembler, one line roughly equals one word of memory (whatever your chip's word size is). If you use C it might be about 3-5 words per line, and if you use C++ or something even more complex it might be a lot more. The best thing to do is to compile a few programs written in that language and compare code lines to code memory to get an average. – Dakkaron Nov 26 '14 at 13:12

Generally, microcontroller vendors put a range of memory in their devices that is suitable for typical applications. So, if you only need a few I/O pins and one SPI in a small footprint device, you will be unlikely to find anything that ships with 500 kBytes of Flash and 64 kBytes of RAM. With larger devices, which are closer to SoC packages, even the smallest is almost certainly big enough unless you're planning to do some serious number crunching such as image processing.

In a professional environment the key to picking the right microcontroller is to use historical data. You will have a record of the other projects you've developed and will know what memory and other silicon resources were required to implement each feature. You will know what the product is expected to do, so you have a good feature list and can quickly and accurately calculate the resources the microcontroller will need to provide. Trying to guess the resource requirements from an up-front design specification (written at the start of the project, when the least information about the system is available) is unreliable at the best of times. Only very experienced engineers, who have built up a comprehensive database of historical data in their own heads, will have much success with that method.

Many companies have adopted an 'Agile' approach to both software and electronic design, which involves building a 'library' of small feature boards (e.g. RS-485 boards, ADC boards, etc.) along with generic platform boards that host the microcontrollers, in a similar way to using a dev kit and plug-ins. A product can then be prototyped rapidly (within hours) by selecting and connecting the set of boards required for the features. The software is similarly assembled from library modules and can be ported and tested quickly. Once the size of the hardware-specific part of the code is known, it is usually sufficient to select the smallest part that will contain it; the exception is the one mentioned above, where the functionality of the device involves big data or very complex algorithms. This method provides an accurate, reliable and traceable methodology, using real data from real working products rather than guesses based on hopeful specifications.

(Another advantage of the Agile approach is that it allows software and electronic development to be done in parallel, with the electronics design being an exercise in integrating the set of feature boards and doing the relevant EMC and other difficult work at the same time as the application software is being developed on the prototype assemblies. Some porting and integration is still necessary, but it is done when working software and electronics are both available.)

Evil Dog Pie