24

I need to use a microcontroller in a system that must keep working, without major changes, for a long time (decades). To ensure that replacement parts will always be available, I need a microcontroller that will either stay in production for a long time or be produced by several manufacturers in a firmware-binary- and package-pin-compatible way. What can I do to ensure that the microcontroller I choose meets these criteria?

The application doesn't need much computing power. Its aim is to control motors and other industrial systems. An 8-bit microcontroller capable of changing the state of about 8-16 I/O pins at a frequency of 0.5-1 MHz is OK. An ADC may be valuable, but it can be replaced by a simple external comparator.

asky
  • 105
  • 4
user3368561
  • 379
  • 1
  • 9
  • Can you refine the kind of computing power the microcontroller requires? For a low-powered 8-bit part I would suggest the AVR family, but it might be completely inadequate for your requirements – Paulo Neves Jan 23 '16 at 14:37
  • hold on while I dust off my crystal ball ... (it's a quartz crystal ofc) – brhans Jan 23 '16 at 14:37
  • 11
    PIC is famous for this. – Scott Seidman Jan 23 '16 at 14:41
  • 4
    In industries where this is important, the "software" is designed in VHDL and implemented in an FPGA or CPLD. It can be ported to any programmable device in the future, as the function does not depend on the architecture of the device. – user1582568 Jan 23 '16 at 14:58
  • @brhans Microcontrollers have been with us since the '70s. No crystal ball is needed, just knowledge of their market history. – user3368561 Jan 23 '16 at 15:14
  • 12
    Microchip has an excellent history in this regard. You can still get a PIC 16C54 today; it was first introduced in the 1990s. I have heard Steve Sanghi (CEO of Microchip) state this as official policy. While nobody can promise what any company will do 20 years from now, using a Microchip PIC is the best choice given the information we have today. – Olin Lathrop Jan 23 '16 at 15:15
  • @user1582568 I considered this option, but I prefer to avoid making any hardware change if it is possible, only replacing broken subsystems (PCB format) with a new build of the same design. – user3368561 Jan 23 '16 at 15:16
  • Questions asking for product suggestions are off-topic here. I edited the question to make it more generic. People can give specific examples in their answers. – Adam Haun Jan 23 '16 at 15:34
  • 1
    You won't find the same MCU in production after 10 years. You will probably still get it from some store, but the price will be much higher than today and the part will be inferior to the MCUs of that time. – Marko Buršič Jan 23 '16 at 15:44
  • 1
  • It is very standard procedure to solve this issue by stockpiling all the parts on the BOM for the length of time that you intend to support the product. If cash flow is limited at the outset, you can start the stockpile by always ordering, say, 30% more parts than the current build will consume. Do be aware that these days it is not uncommon for some parts to go EOL (end of life) within one to three years!! – Michael Karas Jan 23 '16 at 16:54
  • 5
    @MarkoBuršič - that's not really true. There are lots of MCUs on the market that have been around for more than 10 years. – Chris Stratton Jan 23 '16 at 20:29
  • 2
    @ChrisStratton - but could you have predicted 10 years ago which MCUs those would be? That's why a crystal ball would be necessary now ... – brhans Jan 23 '16 at 21:06
  • 4
    @brhans I can die tomorrow and all this discussion will have been useless... This question is not about absolute certainties, but probabilities of success. – user3368561 Jan 23 '16 at 21:10
  • 1
    Can you use a PLC instead? – Shane Wealti Jan 24 '16 at 04:00
  • 2
    Atmel seems also like a good choice. Many of their current 8 bit microcontrollers are still pin-compatible and code-compatible with their pre-2000 versions. – vsz Jan 24 '16 at 09:35
  • @ShaneWealti Yes, as long as it offers all the functionality in a single package. The algorithms, although simple, may be too complex to implement in discrete logic. – user3368561 Jan 24 '16 at 11:27
  • I was at a Microchip seminar where the instructor stated that it was their policy to keep their MCUs available for a very long time. – Jeanne Pindar Jan 26 '16 at 23:20

6 Answers

26

The FPGA manufacturers say that if you use a 'soft core', that is, a microcontroller written in VHDL, then that VHDL design can be implemented on any future programmable FPGA hardware, freeing you from the likelihood of any particular piece of hardware going out of production.

To buy that argument, you would need to assume that programmable hardware will continue to be available over your timespan (which is probable), and will continue to be available in chip sizes, costs and voltages that suit your product (which I find harder to believe). To use this approach, you would have to accept that you may need to do a new hardware design to accommodate a new package, which kinda defeats your objective of no major changes.

My approach, and my advice, would be to isolate your control processing from the rest of the circuitry on a small board, and define your own interface to it, the fewer pins the better. Perhaps SPI makes a suitable interface, or a nybble bus with data read/write and address strobes. Then if your chosen processor becomes obsolete during the product lifetime, you only have to redesign and test a small board, rather than a large board with vital analogue product functions on it.
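As a concrete example of "define your own interface", here is a minimal sketch of what a frozen SPI command set for such a control board might look like. Every opcode, field and name below is invented for illustration; the point is only that the protocol, not the MCU behind it, is what gets frozen:

    /* Hypothetical command set for the isolated control board,
     * carried over SPI. Freezing a small protocol like this is what
     * lets the board behind it be redesigned around a new MCU later. */
    #include <stdint.h>

    enum ctrl_cmd {
        CMD_NOP        = 0x00,
        CMD_SET_OUTPUT = 0x10,  /* payload: desired 16-bit output state  */
        CMD_GET_INPUT  = 0x20,  /* reply:   current 16-bit input state   */
        CMD_GET_STATUS = 0x30   /* reply:   protocol version, fault bits */
    };

    /* One fixed-length transaction: 1 command byte + 2 data bytes. */
    typedef struct {
        uint8_t  cmd;   /* one of enum ctrl_cmd         */
        uint16_t data;  /* payload or reply, big-endian */
    } ctrl_frame_t;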

Program the control processor in C. Split your code strictly into a generic algorithm module and hardware interface modules. Then if particular bits of hardware have to change, you have isolated the rewrite to a small number of modules, and you are not crawling all over your code.
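As a rough illustration of that split (all function and file names here are hypothetical, not any vendor's API), the hardware interface module exposes a handful of calls and the generic algorithm uses nothing else:

    /* hal.h - hardware interface module: the only code that changes
     * if the MCU is replaced. Names are illustrative. */
    #ifndef HAL_H
    #define HAL_H
    #include <stdint.h>

    void     hal_init(void);                 /* clocks, pins, peripherals   */
    void     hal_set_outputs(uint16_t bits); /* drive the 8-16 control pins */
    uint16_t hal_read_inputs(void);          /* sample the input pins       */

    #endif

    /* motor_control.c - generic algorithm module: pure C with no
     * register access, so it ports unchanged to a future MCU. */
    #include "hal.h"

    void motor_control_step(void)
    {
        uint16_t inputs  = hal_read_inputs();
        uint16_t outputs = 0;

        if (inputs & 0x0001)    /* e.g. limit switch closed...    */
            outputs |= 0x0004;  /* ...stop the drive, set a brake */

        hal_set_outputs(outputs);
    }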

Choose a suitable voltage; I'd prefer 3.3 V to 5 V, for instance.

When you choose your small control board, you could do worse than to pick a form factor that matches an available Arduino or PIC dev board. Then your development and prototyping get a leg-up, and you could even start low-run production with bought modules before designing a lower-cost replacement.

Neil_UK
  • 158,152
  • 3
  • 173
  • 387
24

Don't forget to consider the reliability of your programming toolchain. If there's special-purpose programming hardware, it also needs to last for decades, and you have to be able to talk to it. Imagine having to dig up a 20- to 30-year-old DOS PC and install an ISA card -- don't forget to manually select the IRQ and DMA lines! Alternatively, you might have to buy an expensive niche product that offers backwards compatibility. If you might need to modify the software, remember that compiler tools and libraries also change, often much faster than the hardware.

Also consider how long the MCU needs to function. If you want it to have a decent chance of running for many decades, you need to consider things like flash memory retention and long-term failure rates. If you're going to swap out the chip every ~15 years, that's not as big a problem. Manufacturers should have this information. Instead of going cheap, you might look at MCUs designed for safety-critical applications like aerospace or automotive. They often come with redundant hardware and better quality guarantees.

One option could be to store your own spare parts. If you buy enough, you might be able to get an MCU with a custom mask ROM and avoid the programming/data retention problem altogether.

Make sure everything is very well documented. The MCU itself, the software, memory allocation, CPU instruction set, all electrical interfaces, specifications, etc.

Give user44635's answer serious consideration. What happens if your supply of replacement parts dries up in 30 years, and any reasonable replacements all have 1.8V IOs? Or the oldest chips you can find all have 32-bit ARM CPUs (which are starting to devour the 8-bit market)? A separate board gives you the option of adding voltage regulators, level shifters, and other interface hardware if the worst happens.

Adam Haun
  • 21,331
  • 4
  • 50
  • 91
  • 3
    Consider creating a virtual machine (e.g., VMware) with the complete set of software - CAD, programmers, documentation, etc. - required to work on that system. It avoids having to keep one piece of hardware dedicated to a particular task and you can backup a VM and keep multiple copies with little cost. When you need to run it in future you just need a virtual machine 'player'. I'm sure that in twenty years there will be some issues but, hopefully, not so many. – Transistor Jan 24 '16 at 10:13
  • @Transistor Of course, VMs fall flat if hardware architecture changes between now and the time that the user wants to boot the old software on a machine lacking necessary interfaces. ISA was a great example, but we can equally imagine the same thing today, e.g. if the system used a FireWire port or something else that might be about to disappear. There's only so much that can be done to keep adapting old tech to the en vogue protocols of the day. And even if the tech remains in place, this assumes the host has transparent passthrough for it. – underscore_d Nov 28 '16 at 07:08
18

While some manufacturers have a better record than others, long product life vs. obsolescence of critical components is addressed at the operations level rather than at the circuit design level.

Maintain an ongoing forecast of the quantity of microcontrollers that you will require. Monitor the supply chain. When the manufacturer announces NRND (not recommended for new designs) status, you - or your operations people - should prick up your ears. When the manufacturer announces upcoming obsolescence, they will give you the right of a last order. You procure the quantity you have forecasted and store it in a flameproof cabinet.

This is not uncommon in certified industries such as medical devices, avionics, and defense. I have seen people do this. For example, an OEM supplier X produces WiFi modules for the medical device field. The module uses a plain-vanilla civilian SoC for WiFi. The SoC is produced by Broadcom for the consumer market and is expected to stay in production for only a year or two. OEM supplier X is aware of these dynamics, so they procure 10 years' worth of these SoCs. OEM supplier X charges a premium for a part with a guaranteed long product life, and the OEM's customers stave off costly re-certification of their products.

Typically, devices that require long-term support are manufactured in relatively small quantities.

Nick Alexeev
  • 37,739
  • 17
  • 97
  • 230
16

An alternative approach is to use the most generic part you can find, and in the case of MCUs that is the 8051 and its variants. There are many sources for it, even an open-source soft-core clone, and development tools are available for any platform from DOS to Windows 10. While Microchip is commendable for its commitment, it is not possible to predict corporate appetite for mergers and acquisitions and its impact on product lines, and the PIC has only one source.

Lior Bilia
  • 7,282
  • 1
  • 20
  • 30
  • Certainly it is an option to consider. – user3368561 Jan 23 '16 at 21:11
  • MCS51 has since been dropped by its original inventor (Intel), but it seems to hold its ground on and on and on... and the architecture just has style :) – rackandboneman Jan 23 '16 at 21:23
  • 1
    The main issue with the MCS51 family is that programming support is unusually difficult for it. (There is no generic ISP mechanism for it, and HVPP is a costly and hard-to-support route in this day and age.) – ThreePhaseEel Jan 24 '16 at 01:19
  • @ThreePhaseEel The production volume is very, very small (a few units), so inefficient programming is not an issue. The most important thing is to give customers the possibility to fix problems even if I disappear. – user3368561 Jan 24 '16 at 11:32
  • @user3368561 If the production volume is very small, and you don't need high performance (so the µC probably costs < $1 in quantities of 100), then just buy 100, put 90 in a safe-deposit box, along with spare PCBs and any other critical parts (in case your facility burns down), and be done with it. – tcrosley Jan 29 '16 at 20:55
  • The 6502 falls into this category, still being made by someone... – old_timer Jan 30 '16 at 04:05
  • The 6502 is not an MCU, it is a CPU. A modern 8051 variant (8751, 8951 etc) is a much better choice. – Lior Bilia Jan 30 '16 at 09:39
6

Microchip is probably your best choice if you need pin-compatible parts. They have been very slow to fully retire even slow-selling products such as the OTP 17 series, and, as Olin says, Sanghi has expressed a corporate philosophy of maintaining supply through boom and bust, as well as continued availability of parts, which is also very important (a part you can't get for 52 weeks, as has happened to some of us with suppliers such as M*t****a, might as well have been discontinued entirely). Part obsolescence can be triggered by falling sales, but changes in process are also a factor. Microchip owns their own fabs and can stockpile chips in wafer form even if they retire a process. Fabless companies must use whatever processes they can source from the foundries.

Definitely avoid anything trendy - it's not unusual to find such parts EOL after a few years. It's hard to quantify, but parts used in cell phones would not be expected to be around all that long. A part that has been around for 5 years and is selling in volume to a stable and broad customer base (not just 3 tablet makers) is a better bet than a new chip that is in high demand right now, even though it is already 5 years into its product lifetime. In the case of parts that require qualification testing (such as radiation testing), where even changes in packaging may jeopardize that qualification, you may be able to do a lifetime buy.

For better or worse, there are very few microcontroller parts that have a true second source, and the ones that do (such as ye olde 8051 core parts) are not all that attractive in performance or cost.

As an out-of-the-box suggestion, I would consider going through the entire design process with two fairly similar parts (e.g. two ARM chips with a similar core) but from different manufacturers, and qualifying both. That would only add a small amount to the total cost if it's all done up front, but it would give much better confidence of continued supply. The downside is that every revision requires testing on both parts, and whichever gets picked as the primary source will have more field history.
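In the source code, that usually comes down to confining the part differences to a single build-time switch; a minimal sketch, assuming hypothetical build flags and header names:

    /* target.h - select one of the two qualified MCUs at build time.
     * TARGET_VENDOR_A / TARGET_VENDOR_B and the header names are
     * invented for illustration. */
    #if defined(TARGET_VENDOR_A)
    #  include "hal_vendor_a.h"   /* register/peripheral layer, part A */
    #elif defined(TARGET_VENDOR_B)
    #  include "hal_vendor_b.h"   /* register/peripheral layer, part B */
    #else
    #  error "Define TARGET_VENDOR_A or TARGET_VENDOR_B"
    #endif

The rest of the code includes only target.h, and the build then compiles and tests both variants on every revision.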

Spehro Pefhany
  • 376,485
  • 21
  • 320
  • 842
4

The simplest solution is to store enough spare parts to cover the length of time required. If your part has an MTTF of 10 years, and you need to provide support for 100 years, you need to store 10 of them. If you need to provide this support to 100 "stations," then you need a total of 1,000. To ensure these parts are available when needed, you obviously need to store them in various "safe" locations. If the cost of this "insurance policy" is reasonable, you may want to double it, to take care of any unexpected failures.
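A back-of-the-envelope check of that sizing, using the numbers above (plain expected-value arithmetic; a real provisioning plan would also budget for shelf-life loss and infant mortality):

    #include <stdio.h>

    int main(void)
    {
        double support_years = 100.0;  /* required support period       */
        double mttf_years    = 10.0;   /* mean time to failure per part */
        int    stations      = 100;    /* fielded units to keep alive   */
        double margin        = 2.0;    /* the "double it" insurance     */

        double spares = (support_years / mttf_years) * stations * margin;
        printf("store about %.0f parts\n", spares);  /* prints 2000 */
        return 0;
    }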

Guill
  • 2,430
  • 10
  • 6
  • 1
    All components have a limited shelf life. –  Feb 06 '17 at 11:01
  • 1
    @JWRM22: most (if not all) processes that limit shelf life depend exponentially on temperature. So if the spare parts are not only stored safely but also cool and dry, one can work around this. The difficulty could be, however, to know how cold is cold enough. – oliver Jun 18 '18 at 07:02