How to go about selecting a microcontroller

by Duane Benson, Screaming Circuits, TechOnline India - February 02, 2012

From time to time, I read articles about MCU selection. One in particular on EE Times, "Renesas simplifies MCU selection: What now for independents?", really got me thinking about the selection process.

Personally, I tend to use Microchip PIC processors. I could just as easily use Atmel parts or chips from a dozen other vendors; I find it very difficult to really differentiate between families in the same class. I use 8-bit PICs because, way back when, the first robot kit I bought and built used a PIC 16F877. I got familiar with it and the tools surrounding it, and I have stuck with 16F and 18F MCUs ever since.

The article I mention above referred to a new parametric search engine Renesas has on its website. Microchip has its own form of parametric search, and I expect that other manufacturers do as well. With so many permutations, it never seems quite that easy to me though. In fact, even before getting to the selection of a specific device, the choice of manufacturers can be a bigger challenge.

In making my selection for new designs, I start with the factors that will rule out a part family:

1. Can I start with larger, easy-to-handle form-factors and later replace the big parts with smaller packaged versions?
2. Can I easily move up and down through the family as I need different specific feature sets?
3. Can I use the same tool chain?

The feature set may seem like the logical place to look first, but in general, most manufacturers’ products offer the same peripherals. Everyone’s got PWM, A/D, GPIO, etc. Outside of the specific feature set, my selection parameter (1) is probably the tightest gating factor. The really high-performance chips may require memory management, critical high-speed PCB layout, or a full operating system. I don’t know of any high-performance MCUs that come in thru-hole or even wide-pitch SMT packages. That one parameter keeps me down in the 8- and 16-bit processors.

My applications tend to have low-horsepower requirements, so that’s okay. Still, I can move up in performance quite a way without package issues, thanks to the consistency up and down the range. Being familiar with an 8-bit PIC product gives me the confidence to design in one of their 16-bit or DSP products. Moving up to 32-bit means a different architecture, so the familiarity advantage goes away at that jump.

Microchip and Atmel processors both meet my selection criteria (1) and (2), as would some low-end ARM processors, 8051-derivatives and a few other families. Criterion (3) keeps me with the PICs. I know the tools and the language quirks. I can’t see enough differentiation with anything else that meets all three to incentivize me to make a change.

Cost considerations come in two forms: development cost and manufacturing cost. For low-volume designs, development cost is usually paramount. High-volume designs can more easily amortize development costs over the life of the product, making it easier to justify learning curves and tool purchases. Manufacturing cost concerns can influence the peripheral set requirements as well. Built-in PWM, I2C and such can reduce software development costs, but if you need to shave pennies off each device, then going with a plain-vanilla MCU and bit-banging the peripheral in software might make more sense.
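
To make the bit-banging tradeoff concrete, here is a minimal sketch of a software PWM cycle in C. The write_pin() and delay_us() routines are hypothetical stand-ins, not calls from any particular vendor’s library, and the 1 kHz period is an arbitrary choice.

    /* A minimal software (bit-banged) PWM cycle. write_pin() and delay_us()
       are hypothetical stand-ins for whatever the vendor's headers actually
       provide; duty is a percentage from 0 to 100. */
    #define SOFT_PWM_PERIOD_US 1000u                    /* 1 kHz PWM period */

    extern void write_pin(unsigned char level);          /* hypothetical GPIO write */
    extern void delay_us(unsigned int microseconds);     /* hypothetical busy-wait */

    void soft_pwm_cycle(unsigned char duty)
    {
        unsigned int on_us  = duty * 10u;                /* 1% = 10 us of the 1000 us period */
        unsigned int off_us = SOFT_PWM_PERIOD_US - on_us;

        write_pin(1);                                    /* drive the output high */
        delay_us(on_us);
        write_pin(0);                                    /* and low for the rest of the period */
        delay_us(off_us);
    }

The catch, of course, is that the main loop has to call soft_pwm_cycle() continuously; the CPU time spent here is exactly what a hardware PWM peripheral would have given you for free.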

Several of the newer ARM families have been created specifically to compete with 8- and 16-bit processors on cost and power consumption. Again, design costs can come into play. ARMs tend to come in tiny packages with higher clock speeds. That raises the level of hardware design required and increases the materials costs. You are likely to need PCBs with 6-mil or finer trace and space, and if your tools and cost goals don’t allow that, you may need to stay away. On the other hand, if you start out with one of the newer Cortex-M0 cores and later need to push your design faster and farther, you can more easily move up all the way to something really fast like the TI OMAP or NVIDIA Tegra. Certainly it won’t be a plug-and-play replacement, but the architecture, tool chains and design considerations will scale up on a nice, consistent path.

Jim Condon commented on a prior article I wrote on the subject: “…Next, does it support Embedded Linux (our OS of choice)? We've invested in using embedded linux and all of our software development team is experienced and trained in it. The cost of a new embedded OS in training and learning curve can easily mask the cost of a 1000 CPUs…”

Moving up to applications that require an OS, real-time or otherwise, in some ways reduces the complexity of the decision. You automatically disqualify a large portion of the available chips. But there are still some significant selections to make. Going with a general-purpose OS like Linux or Android might very well move your decision tree into something more like a PC processor decision tree: cost versus performance. The amount of expertise required at this level will likely put a lot more engineering, and less personal preference, into the design choice than is possible in lower-end applications.

Deadlines can also influence MCU choice. Very short timelines make allocation of time to learning a new architecture more difficult, or even impossible, to justify. It may make engineering sense to look at something new, but the time available might just make that impractical. In that case, you’ll need to look at a part you’ve used before, or something similar.

Thayden (another commenter on one of my previous postings) stated: “During design time, the inertia behind the use of known component families and toolsets makes me default to considering those familiar component families first. If time allows, I will investigate other vendors’ offerings to see what additional optimization (cost, space, features, power consumption, schedule, etc.) may be possible. I normally try to have the product landscape assessed before a part needs to be chosen for a particular task.”

Personal preference strikes again. There is a lot to be said for “tribal knowledge.” Once your company has a solid base of expertise with a particular product line, the efficiencies afforded by that knowledge base can overrule a lot of other considerations. I suspect that this factor tends to hold the greatest influence over the decision-making process, often even overruling cost. That would imply that choosing within a given family is the biggest challenge for a typical engineer, and getting an engineer to change families is the biggest challenge for a salesperson.

An example selection

Let’s walk through the selection process for one of my projects. Take a small motor control. I’ll be driving two motors with a dual H-bridge chip. I need I2C for talking to other boards in the system, and USB for debugging, boot-loading, and communications with a PC. I’ve got three analog signals to bring in and four LED status indicators. I want to have a few GPIO just for good measure. It won’t be a high-volume product; maybe in the hundreds. That gives hard requirements of two PWM channels, three A/D channels, I2C, USB, and six or eight additional GPIO. I’m not sure of the code requirements, but a rough estimate based on past experience suggests 16K of program memory and at least 2K of data memory. Many parts come with multiple memory options, so that’s a less critical item at this point.

I rarely have good success with parametric search engines. They can sometimes get me close, but I tend to have an easier time just looking through feature tables. The parametric search on the Microchip site gave me one option: the PIC18F25K80. I used a different processor for an earlier version of this project, so I know there are more options than that. Microchip also has a “Product Selector” that allows you to select a range of features. That gave me eight choices, including the PIC18F2455 I used in the earlier version.
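
For what it’s worth, the filtering that a parametric search or product selector performs boils down to a pass over a feature table, and you can encode the hard requirements from the motor-control example yourself. The sketch below is illustrative only: the part names and figures are made-up placeholders, not real catalog data.

    /* Illustrative only: a hand-rolled "parametric search" over a tiny part
       table. The entries are placeholders, not real catalog data; the checks
       encode the hard requirements from the motor-control example above. */
    #include <stdio.h>

    struct mcu {
        const char *name;
        int pwm_channels;
        int adc_channels;
        int has_i2c;
        int has_usb;
        int gpio_pins;
        int flash_kb;
        int ram_kb;
    };

    static const struct mcu parts[] = {
        { "PART_A", 2, 10, 1, 1, 24, 24, 2 },
        { "PART_B", 1,  8, 1, 0, 18, 16, 1 },
        { "PART_C", 4, 13, 1, 1, 33, 32, 4 },
    };

    static int meets_requirements(const struct mcu *m)
    {
        /* Two PWM, three A/D, I2C, USB, 8 spare GPIO, 16K flash, 2K RAM. */
        return m->pwm_channels >= 2 &&
               m->adc_channels >= 3 &&
               m->has_i2c && m->has_usb &&
               m->gpio_pins >= 8 &&
               m->flash_kb >= 16 &&
               m->ram_kb >= 2;
    }

    int main(void)
    {
        for (unsigned int i = 0; i < sizeof parts / sizeof parts[0]; i++)
            if (meets_requirements(&parts[i]))
                printf("candidate: %s\n", parts[i].name);
        return 0;
    }

Whether you do this in code, in a spreadsheet, or in the manufacturer’s web tool, the effect is the same: the hard requirements knock out most of the catalog quickly, and the remaining judgment calls come down to package, cost and familiarity.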

In my version of the selection process, I used product familiarity as my primary factor. If I hadn’t found anything that would work, I’d have faced the more difficult process of choosing a different product family or manufacturer. A large number of the family members have enough I/O and A/D, so the USB requirement did the most to narrow the field. Any of the eight will do the job, but some have a bit more capability than I need. Half are 28-pin devices and half are 40-pin devices. That’s a consideration I didn’t list above: PCB real estate. If a smaller pin count will do, I’ll use it and save some PCB cost or make the layout easier. Cost is not a first-order factor, so I didn’t look at that until after I had my subset.

The end result is that, in the case of this particular project, I’ll start with the 18F2455, which has 24K program flash, and then I’ll switch up to the 18F2550, with 32K program flash, if I run short on space.

About the author:

Duane Benson’s involvement in the hardware and software design world goes back to the days of the CDP1802 and Z80 up through current processors such as PIC, AVR, and ARM. In his day job, he has been dishing out PCB layout and DFM (design for manufacturing) advice via the Screaming Circuits blog since 2006.

After hours, Duane designs microcontroller and motor control boards for small robots; primarily using PIC 16F and 18F series chips. His prototype assembly experience ranges from solderless prototype boards and wire-wrap to hand-soldered surface mount parts to fully-automated machine assembly. As the author of Screaming Circuits' blog, Duane presents solutions to technical challenges brought on by smaller chip packages, shrinking support staff, and tightened schedules. He is also a contributor to industry technical publications and conferences on the topics of design for manufacturability, trends in prototyping, and ways to improve efficiency in product development efforts.

Article Courtesy: Embedded.com
