Embedded development, then and now

TechOnline India - September 22, 2009

These ripping yarns from an old-timer embedded systems developer will make other old-timers smile and new-timers thank their lucky stars.

From my picture at the bottom of this page, you might guess that I'm not exactly new to the process of developing embedded systems. This month, I'll be talking about the way the technology and techniques have changed over the years. Don't expect this to be one of those "Gee, I miss the Good Old Days" bits. It's more like a celebration of how far we've come, and how happy I am not to be in those days anymore.

Old-timers who've been there should get a kick out of the look backwards, and you new-timers can thank your lucky stars you don't still have to do things that way.

Simulating gyros
My first exposure to real-time systems actually predated the advent of microprocessors. NASA and their contractors were developing the hardware and control algorithms for spacecraft attitude control using control moment gyros (CMGs). We were developing a real-time, hardware-in-the-loop computer simulation that could test and simulate the behavior of the CMGs during typical simulated flights. I was developing the digital software; we had other computer programmers who spoke patch-cords, programming the analog side.

The problem with testing a CMG is that you need to see what it's doing in real time. That's even harder to do when the CMG is a virtual one. A CMG generates torque through changes in the angular momentum of a spinning wheel (torque being the time derivative of angular momentum: T = dH/dt). As the torque is generated, the gyro precesses, and its angular momentum vector (the H-vector) changes. What we most wanted to see was the way the H-vectors changed with time.

The NASA guys had worked out a scheme to visualize the state of the system that was, at the same time, extremely crude and extremely clever. They hooked three output lines from the analog computer to an ordinary x-y oscilloscope. The computer generated signals to display the three H-vectors on the scope, as a 3D image. In 1972, this must surely have been one of the earliest 3D graphics displays ever built.
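
The trick, presumably, was a fixed axonometric projection: the analog computer formed two linear combinations of the three vector components and fed them to the scope's x and y inputs. In C, the arithmetic amounts to something like this (the viewing angle and the names are mine, not NASA's; their patch-cord version was just a handful of summing amplifiers doing the same sums):

    /* Project a 3D vector (hx, hy, hz) onto scope (x, y) coordinates,
       isometric style.  Angles are illustrative, not the original setup. */
    void project(double hx, double hy, double hz,
                 double *scope_x, double *scope_y)
    {
        const double c30 = 0.86602540;   /* cos 30 degrees */
        const double s30 = 0.5;          /* sin 30 degrees */

        *scope_x = (hx - hy) * c30;
        *scope_y = hz + (hx + hy) * s30;
    }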

The system worked very nicely, but it did prompt me to formulate a rule of thumb I've never forgotten: if the extent of your development and testing system involves three or four PhDs sitting around on a concrete floor studying an oscilloscope, you have probably not yet optimized your test facility.

Intel comes to town
In 1974, microprocessors had arrived, and the Intel 8080 was the new kid on the block. I wanted in on the action in the worst way. I joined an existing but tiny company that already had contracts in hand. At the time, our entire development system consisted of an Intel single-board computer (SBC) based on the granddaddy of all microprocessors, the 4004. It boasted a PROM burner, and a line editor and one-line assembler in ROM. Access was via a Teletype ASR33 printing terminal. The only bulk storage was paper tape, accessed via the ASR33's 110-Baud reader/punch. Storage capacity depended on the length of your paper tape. Our configuration management system consisted of a bunch of tapes, tossed into a box.

By the time I arrived on the scene, the ROMs had been updated to support a 4040 assembler.

We had one in-circuit testing tool, which you had to see to believe. We called it the Blue Box. It was not exactly an in-circuit emulator (ICE), but it did give us a view, however darkly, into the computer and its CPU chip. There was no debugger proper, and you couldn't set breakpoints to stop the CPU. What you could do was to set a watchpoint, watching the data whistling through the data bus at the breathtaking rate of 93 kHz. All the addresses were set, and data displayed, in binary using toggle switches and LEDs. Hex was for sissies.

Here's how it worked. Using the toggle switches, you set a memory address into the Blue Box, and launched the computer program. The first time the selected memory location was accessed, the Blue Box grabbed the data on the data bus. After that, the CPU continued on its way, but you had that single byte of data to peruse at your leisure. Think of it as a memory buffer with a one-byte depth.

The main problem with this scheme was that the box could only see data that went out onto the data bus. If you needed to watch the contents of a CPU register, forget it. That data never got put onto the data bus, so you couldn't see it.

Fortunately, I didn't have to deal with this gadget for long. The SBC got updated to support the 4040, which was our target CPU. Extra hardware included an umbilical that clipped onto the CPU. I wouldn't exactly call this an ICE, because you couldn't actually emulate the 4040, only observe and control one. The assembler was still a one-line assembler (no symbolic addresses). And there were no upload/download facilities at all. Instead, we used the original SneakerNet, based on EPROM chips instead of floppies. You burned the assembled program into PROM chips, then plugged those chips into the target machine. Once the PROMs and the CPU umbilical were in place, you could do all the things we usually associate with hex debuggers, including setting breakpoints, single-stepping, and displaying or modifying any data in memory or registers. It may have been a small step for Intel, but it was a giant leap for us.

For PROMs, we used the UV-erasable 1702 EPROMs, each capable of holding an astonishing 256 bytes of data. The 1702 was a big step forward from earlier burn-once-erase-never fusible-link PROMs.

The 1702s had a little quartz window. You could erase the chip by shining UV light through the window.

That's if you had an EPROM eraser. Which we didn't. But we got the same effect the natural way. We'd simply set the EPROMs out on the hood of a car, in the bright Alabama sun. Timing was a matter of cooking until done, which took maybe 30 minutes. Success depended on the weather. No sun, no erasing.

We had a second project that involved an 8080. This one was actually my "baby." It also was for an embedded application, but I didn't have much interaction with the hardware, since it was our customer's hardware, 180 miles away in Atlanta. My job was to develop the software, which included a two-state Kalman filter, plus the floating-point and math libraries needed to compute it.

"Downloading" was still a matter of PROM-based SneakerNet, only this time it was more like PickupTruckerNet. Or FedExNet.

To support this effort, we bought an Intel Intellec 8 computer. It had no hard drive at all--bulk storage was still via paper tape. But it had lots of RAM and ROM, and supported true symbolic assemblers for both the 8080 and its 4040 brother. It also had a decent line editor, not that far removed from the ed/edlin editors of RSTS, Multics, CP/M, and Unix. Not exactly emacs, but serviceable. The hex debugger was good for debugging 8080 code. For the 4040, we still needed to SneakerNet the PROMs to the 4004 SBC.

My 8080 code was a good bit bigger than the 4040 code, so the big bottleneck was reading and punching all those paper tapes at about 8 bytes per second. We couldn't do much about the punch side--you can only make holes in paper so fast. But we could improve the reading side. We found an ultra-cheap optical tape reader (for the record, the ASR33's reader used little mechanical fingers to read the holes). To me, the new reader was a marvel of Yankee ingenuity. It was asynchronous.

You see, the paper-tape format didn't depend on tape transport speed. The data format was self-clocking, thanks to a row of little synch holes. Theoretically, you could read the tape as fast as you could get it through the reader, limited only by the I/O speeds of the Intellec parallel port.

So to read our paper tapes, we simply pulled them through the reader, much like a sailor hauling in his anchor rope. There was no takeup reel; the tape simply spilled out onto the floor, as it always had.

I went out and bought a used film editor, which had a crank-turned feed. We cobbled up a wider reel to hold the paper tape. From then on, reading a file into memory became, literally, a turn-the-crank process.


Despite the crudeness of these early development efforts, I learned a lot of lessons, several of which have persisted to this day. First and foremost, debugging on target hardware is painful. The longer you can put it off, the better. You don't even know if the target hardware is working properly. You don't know if a problem is in the software, the hardware, the power supply, the connection to the ICE, or even a bug in the ICE itself.

Perhaps one of the I/O chips is in backwards. Come to think of it, I had one case where the circuit-board layout used a mirror image of the A-to-D chip, thereby criss-crossing the pin assignments. Needless to say, those things need to get fixed before you start testing "flight" software.

Perhaps because my target machine was hours away, I formed a habit of testing as much of my software as possible on the Intellec's internal 8080. I tested the software exhaustively, carefully single-stepping through every executable instruction and comparing the results with hand checks.

I didn't just test the software in one huge glop--the "Big Bang" approach to testing. Each time I developed an algorithm, even as simple as a square root or absolute value function, I wrote a separate test driver for it and wrung it out in solitary splendor. I think they have a name for that. It's called unit testing, and it's highly recommended.
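
Just to give the flavor, here's a minimal sketch, in C and in modern dress, of what one of those stand-alone test drivers looked like in spirit. The originals were 8080 assembly, and this isqrt routine is illustrative, not the original code:

    #include <stdio.h>

    /* Function under test: integer square root, done the classic
       bit-at-a-time way (illustrative, not the original 8080 code). */
    static unsigned isqrt(unsigned x)
    {
        unsigned res = 0;
        unsigned bit = 1u << 30;    /* highest even power of two */

        while (bit > x)
            bit >>= 2;
        while (bit != 0) {
            if (x >= res + bit) {
                x -= res + bit;
                res = (res >> 1) + bit;
            } else {
                res >>= 1;
            }
            bit >>= 2;
        }
        return res;
    }

    /* The test driver: wring the function out in solitary splendor,
       checking every result against the defining inequality. */
    int main(void)
    {
        unsigned x;
        int failures = 0;

        for (x = 0; x < 10000; x++) {
            unsigned r = isqrt(x);
            /* r must satisfy r*r <= x < (r+1)*(r+1) */
            if (!(r * r <= x && x < (r + 1) * (r + 1))) {
                printf("FAIL: isqrt(%u) = %u\n", x, r);
                failures++;
            }
        }
        printf(failures ? "FAILED\n" : "All tests passed\n");
        return failures;
    }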

By the time the software got burned into ROMs and loaded into the target machine, I could never be exactly sure where a problem was. But I could be about 99.44% certain where it wasn't.

Knowing what I know now, I would have extended the testing even more to include simulating the entire system on a mainframe or desktop computer. But that lesson was still in my future.

The Z-8000
Several years later, I worked on a project based on the Zilog Z-8000. This was actually quite a nice chip, just slow because it lacked a barrel shifter. The development system, though, had to be seen to be believed. When we first got started on the project, my boss came by to say, "Your development system is here." Excited, I went to see it, only to find a circuit board about the size of a National Geographic. Not even an SBC really, just an evaluation kit. Think of Synertek's old SYM-1 evaluation board and you won't be far off the mark.

I go, "What is this?" He says, "It's your development system." I look around. I don't see a single floppy drive anywhere, much less a hard drive. I ask, "Where's my editor? Where's my assembler? Where's my debugger." He said, "They're all in the ROM." Wonderful. Yet another line-by-line assembler.

"Where's my bulk storage?" I ask. He points to the terminal --a 110-Baud thermal-print terminal with two cassette drives. He says, "What do you think the tape drives are for?"

I should explain, this was no toy project. It was a serious research project, funded at the corporate level of a huge multinational corporation, one that promised to improve product performance by a factor of ten.

And I was supposed to save the company using an evaluation kit.

Fortunately for me, I was able to scream loud enough to talk management into buying a decent, disk-based development system. I don't think I made any points with my boss, though. I learned another good lesson from this experience. It's called, "Never let a hardware guy choose your development system for you."

Understand, the term "decent" is a relative one. The new system, supplied by Zilog at an exorbitant fee, was nothing more than a CP/M computer hiding a Z80 chip. Its only bulk storage was a single 5-inch floppy drive. Even so, it was head-and-shoulders better than the silly eval kit.

The Z-8000 cross-assembler was serviceable but horribly slow. Instead of reading the whole source file into RAM and assembling it there, this assembler read the file one line at a time. After leisurely pondering that line, it would finally get around to reading the next one. I've never seen or heard of such an approach, before or since.

This project involved interfacing with real hardware: a gyro sampled at the high rate of 1,000 Hz. No single-stepping here; whatever you were going to do with the gyro data, you had better do it before the next interrupt came in. Every clock cycle counted, and I absolutely did count them all. To get the data through the system in time, I didn't have the luxury of moving data in and out of RAM. Every byte that could be, needed to be stored in CPU registers. To figure out how best to allocate the registers, I invented my own pencil-and-paper version of a graph-coloring algorithm.
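
The idea, for the curious: variables that are live at the same time "interfere" and must occupy different registers. Here's a minimal sketch in C of the greedy coloring I was doing by hand (the interference data is made up for illustration):

    #include <stdio.h>

    #define NVARS 6

    /* interfere[i][j] = 1 if variables i and j are ever live at the same
       time, and so need different registers (made-up example data). */
    static const int interfere[NVARS][NVARS] = {
        {0,1,1,0,0,0},
        {1,0,1,1,0,0},
        {1,1,0,0,1,0},
        {0,1,0,0,1,1},
        {0,0,1,1,0,1},
        {0,0,0,1,1,0},
    };

    int main(void)
    {
        int reg[NVARS];
        int i, j;

        for (i = 0; i < NVARS; i++) {
            unsigned used = 0;   /* bitmask of registers taken by neighbors */
            int r = 0;

            for (j = 0; j < i; j++)
                if (interfere[i][j])
                    used |= 1u << reg[j];
            while (used & (1u << r))
                r++;             /* lowest-numbered free register */
            reg[i] = r;
            printf("variable %d -> R%d\n", i, r);
        }
        return 0;
    }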

For this project, I applied the lesson learned in the previous ones: emulate, simulate, and unit-test every line of code before committing it to the target hardware. Because the target hardware couldn't be breakpointed, my ability to test on it was very limited.

Where was the Blue Box when I needed one? A Blue Box would have let me take a snapshot of the data without halting the unit under test (UUT). So we built one. We added a piggyback board with a second Z-8000, run from a common clock. The Z-8000 had a control pin, a chip-enable, if you like, that would pause the CPU. I needed a pause in the UUT like I needed a hole in the head, but a little math showed that I had saved enough clock cycles to let the CPU store a small amount of data--10 words or so--into an area of shared RAM. The Z-8000's block-transfer instruction let us do the move pretty quickly, and the second CPU could move the block of data into a safe place before the next major cycle. Cross-connecting the hold pins kept the CPUs from walking on each other.
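
In outline, the two halves looked something like this (sketched in C for readability; the real code was Z-8000 assembly, and the shared-RAM names here are hypothetical):

    #define SNAP_WORDS 10

    /* RAM visible to both CPUs.  In the real hardware, the UUT's
       block-transfer instruction filled this in a few microseconds. */
    volatile unsigned short snapshot_ram[SNAP_WORDS];
    volatile int snapshot_ready;

    /* UUT side: at a known point in the major cycle, dump the registers
       and variables of interest into shared RAM, then carry on. */
    void uut_take_snapshot(const unsigned short *data)
    {
        int i;
        for (i = 0; i < SNAP_WORDS; i++)
            snapshot_ram[i] = data[i];
        snapshot_ready = 1;
    }

    /* Monitor side: the second Z-8000 copies the snapshot to a safe
       place before the next major cycle overwrites it. */
    void monitor_collect(unsigned short *dest)
    {
        int i;
        if (snapshot_ready) {
            for (i = 0; i < SNAP_WORDS; i++)
                dest[i] = snapshot_ram[i];
            snapshot_ready = 0;
        }
    }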

Finally, I wrote a hex cross-debugger for the test CPU, and the tiniest bit of debug support in the UUT. This support let me read a single word from the shared RAM and store it somewhere else, including a register.

After all was said and done, the system worked pretty well, and we had the software working in time to meet a revised, delayed, revised-again delivery date.

Stated another way, we achieved every software person's goal, which is to finish before the hardware is finished.

For the record, the hardware was never finished. In operation, this system was supposed to monitor inputs from the gyro and grab only those just before and just after a certain event. The hardware designer thought it would be helpful to shuffle the data in hardware, from time-sequential order to something else. Unfortunately, the hardware sometimes dropped a data point if it came too close in time to the triggering event, so the data arrived mis-shuffled. I couldn't correct it in software, because the needed data simply wasn't there. And the hardware never got fixed.

From this experience, I was able to add a new entry to my "lessons learned" list: when working in an embedded environment, never let the hardware guy or a systems engineer make all the design decisions.


More Z-8000
I did a couple more embedded projects using the Z-8000. One was on a government contract, so we went with a high-end development system from Tektronix, costing a mere $30,000. This one actually had a hard drive, and it was a full-blown ICE. Still no high-order language, just your everyday cross-assembler with hex debugger. It got the job done, but it couldn't run at the top speed of the Z-8000, and the debugger sucked rocks. I mean, this thing was bad news.

Over time, I've learned how to build hex debuggers for just about any CPU, and I've done it many times--sometimes for hobby systems like Z-80s and 68000s. It's not exactly rocket science, after all. My debuggers are typically minimal systems, less than 1K in size. But on my worst day, I could never write a debugger as bad as this one. Some of the instructions required commas between fields, some a space. Never both together, and never a tab or more than one space. It couldn't even accept both lowercase and uppercase characters. I'd be embarrassed to have my name associated with something like that.

One other thing was notable on this project: the hardware was being developed along with the software, and its first few iterations didn't work. It also suffered from a persistent design flaw: its two CPUs were connected via FIFOs, but they ran on different clocks. Can we see the flaw in that design? No matter how deep the FIFOs, one is going to fill up to overflowing, and the other is going to run dry.
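
If you doubt it, run the numbers. A tiny sketch, with made-up but typical figures, of how long even a generous FIFO lasts when the two clocks differ by a mere 100 parts per million:

    #include <stdio.h>

    int main(void)
    {
        double f_writer = 1000000.0;   /* words/s written into the FIFO     */
        double f_reader =  999900.0;   /* words/s read out (100 ppm slower) */
        double depth    = 1024.0;      /* FIFO depth, in words              */

        /* The FIFO fills at the difference rate; depth buys time, not
           safety.  All numbers here are illustrative. */
        double drift = f_writer - f_reader;
        printf("FIFO overflows after %.1f seconds\n", depth / drift);
        return 0;
    }

That prints "FIFO overflows after 10.2 seconds." Make the reader the faster one instead, and the FIFO runs dry on the same schedule.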

On this project, I learned to expand my list of "never let" rules. Because the hardware was being developed in parallel with the software, we had to determine, by testing, if a given problem was in the software or the hardware. Whenever we told the hardware guys that we'd found a hardware problem, they wouldn't believe us. Thus began a time-consuming and often acrimonious debate.

So the new rule is and should be: Never develop software for a system that isn't built yet.

For that matter, there's a top-level rule that subsumes all the others: if you're the manager of an embedded system project, never divide your people into "hardware guys" and "software guys." They all have to be "team guys."

Proprietary computer
In the 1970s, most of the companies building military hardware used their own proprietary computer designs. Ours was no exception. At first glance, it seemed to be a ruggedized version of our 16-bit minicomputer, but it wasn't. It was a custom system with an instruction set optimized for real-time embedded systems.

My job was to build an 18-state Kalman filter and related software. And this time, we certainly couldn't say the development system wasn't big enough or fast enough. It was a Multics timeshare system, so big that it filled three floors of the computer center.

There were only a couple of catches. First, the software all had to be written in assembly language; the company had never gotten around to building a C (or any other language) compiler for this proprietary chip.

Second, the computer was 1,000 miles away. Our only connection to it was via dial-up landlines, using yet another thermal-printer terminal and a 110-Baud acoustic modem. You dialed the phone and then put the handset into a cradle in the modem.

We certainly couldn't say that the computer was too small, and it had more bulk storage than we could ever hope to fill. And the cross-assembler and hex debugger, while slow, were capable enough.

Our biggest problem with this system was the fact that our phone connection ran through the telephone switchboard. The switchboard operators had no idea what we were using the line for, but they quite naturally assumed that people were talking to each other.

When an operator looked at her switchboard, she saw one light constantly on, indicating a single call running into hours. Curious, she'd listen in to the "conversation" and hear only modem squeaks and squawks. She'd say, "Good Heavens, this connection has gone bad," and pull the plug on us. We learned to save files early and often.

Two aspects of this job stand out. First, when we started the job, I was curious as to how capable an assembler we could expect. I approached one of the other software guys, an expert who had been working with this processor for years, even decades. The conversation went like this:

Jack: "Is this a macro assembler?"

John: "A what?'

Jack: "A macro assembler."

John: "What's that?"

Jack: "You know, it lets you define macros."

John: "What's a macro?"

Jack: "You can define common blocks of code, and give them names."

John, looking at Jack quizzically but sincerely: "Why would you want to do that?"

Jack, to himself: "This may take longer than I thought."

Second, after I had some code that I felt was ready for integration, I brought it to John. I said, "Let's put this into the ROMs, please." He said, "Can't do that now. We just completed a build. You'll have to wait for the next one."

Jack: "When will that be?

John: "We usually make a build every two weeks." (He might well have added, "Whether we need it or not.")

Jack, in his usual discreet and tactful fashion: "Two weeks??? I'm used to getting a build every two seconds!" (O.K., a minor exaggeration, but still on the order of seconds.)

Fortunately, we had another option. The development system included an instruction-level emulator of the CPU. We could test all of our algorithms in the virtual CPU, attached to Multics. And that's what we did. The Kalman filter was not tightly coupled to the control system that John and the others were working on. It was basically a block of code that ran asynchronously in the background. So we could test the software very thoroughly without involving the actual hardware at all. Good thing, too, because the other guys were queued up and fighting for access to the real hardware.

How well did this testing in emulation mode work? When we were finally ready for hardware-software integration and test, we found one (count 'em, one) error: a direct rather than indirect store. We fixed it in short order. Our entire integration test required less than one 8-hour day.

This project was completed on schedule, and put in the field. The gyro system outperformed our previous systems by a factor of 10.

One final aspect of this project: The Kalman filter calls for a lot of vector and matrix operations. In fact, there were almost no scalar calculations at all. Considering that we were writing the code in assembly language, do you think I wrote a vector/matrix function library for the job?

You have to ask? Does a bear like honey?
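
The kernels of such a library are few and simple; here's their flavor in C (the originals were hand-coded assembly for that proprietary CPU, and these names are mine):

    #define N 18    /* state dimension of the Kalman filter */

    /* y = A*x: matrix times vector */
    void mat_vec(const double A[N][N], const double x[N], double y[N])
    {
        int i, j;
        for (i = 0; i < N; i++) {
            double sum = 0.0;
            for (j = 0; j < N; j++)
                sum += A[i][j] * x[j];
            y[i] = sum;
        }
    }

    /* C = A*B: the workhorse of the covariance update */
    void mat_mul(const double A[N][N], const double B[N][N], double C[N][N])
    {
        int i, j, k;
        for (i = 0; i < N; i++)
            for (j = 0; j < N; j++) {
                double sum = 0.0;
                for (k = 0; k < N; k++)
                    sum += A[i][k] * B[k][j];
                C[i][j] = sum;
            }
    }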


The 68332
My next job involving embedded systems was vastly different. First, it was years later, since I had been vectored off into aerospace jobs not involving embedded systems. Second, it was with my own company, so I had more leeway in our approaches and tool selection.

In fact, this job was very satisfying because, for the first time in my young life, I was given freedom to choose all the tools and techniques we'd be using. A hardware guy picked the Motorola 68332 chip, and the various I/O devices like angle encoders, A-to-D and D-to-A converters, gyros, and accelerometers. But I got to choose the development systems, the software tools, the ICE, the algorithms, and the general approach. We had a very nice C compiler from Intermetrics, which included an equally nice symbolic debugger. We had an ICE that was both powerful and inexpensive, thanks to its use of the Motorola JTAG port. The screen editor was Brief.

Everything ran on a PC running MSDOS. We had two computers on site: a desktop for code development, and a laptop that talked to the ICE. The devices all talked to each other via fast serial connections. No SneakerNet allowed.

Our setup was not exactly an IDE, but not far from it. We had the main feature of any IDE, which is the ability to compile, download, and test software from the screen editor. While editing a source file, I had only to press a hotkey to compile it. If there were errors, Brief would bounce me back to the editor, with the cursor poised at the location of the first error.

If the program compiled without error, another hotkey invoked a BAT file that connected to the ICE, downloaded the file, and bounced to the symbolic debugger. As I said, not quite a true IDE, but close enough to leave satisfied developers.

We didn't use an over-the-counter RTOS for this job, for two reasons. First, the job just didn't warrant it. The functionality simply wasn't complicated enough. Mostly it had just one real-time task that talked to the I/O devices, and a background task running system tests.

Second, the hardware of the 68332 did most of the work. There was a real-time clock that triggered the real-time task, a watchdog timer, and the very capable counter/timer units. The serial and parallel ports were all interrupt driven. By the time we'd written the Interrupt Service Routines (ISRs), there was not much left for an RTOS to do.
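
The resulting structure was about as simple as real-time software gets. A sketch in C (the stub functions are hypothetical placeholders, and hooking the ISR to the 68332's periodic interrupt timer is toolchain-specific, so it's omitted):

    volatile int tick;   /* set by the timer ISR, consumed by the main loop */

    static void read_sensors(void)         { /* sample the A-to-D channels */ }
    static void update_outputs(void)       { /* write the D-to-A outputs   */ }
    static void run_control_law(void)      { /* the real-time computations */ }
    static void run_background_tests(void) { /* the background self-tests  */ }

    /* Periodic-timer ISR: do the time-critical I/O, then flag the main loop. */
    void timer_isr(void)
    {
        read_sensors();
        update_outputs();
        tick = 1;
    }

    int main(void)
    {
        for (;;) {
            if (tick) {
                tick = 0;
                run_control_law();       /* the one real-time task */
            } else {
                run_background_tests();  /* the background task    */
            }
        }
    }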

On this job I applied the lessons I'd learned through the years. The approach was based on the theory that one should put off testing in the actual hardware until the very last. First, we tested algorithms in simulation mode, performing the software development and unit testing on a desktop, using Borland C. As usual, every line of code got tested using test drivers. Only after we were satisfied that the computations were working properly did we move the code to the Intermetrics compiler.

Next, we developed and tested the "flight code" in the ICE's own CPU. This let us not only test in a more realistic environment, but also do so without interfering with the hardware development.

One of the nice things about the ICE was that you could map both the CPU and the memory to be either in the ICE, or on the hardware. Our transition to the hardware was very painless because we could switch over to it one step at a time. If the software worked in one configuration, but broke if, for example, we mapped to the actual hardware ROMs, we didn't have to look far to find the source of the error. As I'd learned to expect by now, the final integration and test was pretty much a matter of writing the ROMs and pushing the go-button.

There was one aspect of this job that we did differently, and it's an approach I've used ever since. I gave you my general approach, which is to use the biggest, most powerful computer system to do the preliminary work. Use a desktop or mainframe, and test the algorithms thoroughly, even if it's in a different language, such as Matlab. There's no point downloading software to the target system, only to find out that it's executing the wrong algorithms.

Gradually move to a more realistic environment, saving execution on the target machine until the very last. And test, unit test, and single-step to exhaustion.

There is one change we made on this project, though, and it's an important one. I said that one should put off running on the target machine, but that approach only works if someone else is testing the hardware. In previous jobs, there were always hardware guys swarming over the hardware and testing it.

We couldn't count on that, though, in this case. The lead hardware designer was an analog expert -- he'd never built a digital circuit in his life. We realized that if we were going to have to depend on the hardware to work properly, we'd better test it ourselves.

So I had to modify my approach to say, by all means do your initial development on a general-purpose computer, with everything simulated. But first, test the hardware itself.

This turned out to be easy. My partner was an EE graduate, and he had a briefcase I've envied ever since. On one side, it was an ordinary briefcase, with the usual papers, pens, and documents. Flip it to the other side, and it was a complete electronics workshop, including soldering pencil, multimeter, and a whole panoply of active and passive components. This guy could build entire circuits right from his briefcase.

We did some simple tests in not much more than a day. We began by simply connecting a potentiometer to one of the A-to-D ports. We sent the digital value back out through a D-to-A port, and displayed it on a scope. Even the scope was an unnecessary frill -- a multimeter would have done just as well.

Next, we generated test waveforms like square waves and sawtooths in the CPU. We sent the digital data out through a D-to-A, and displayed that on the scope. Then we closed the loop through another pair of converters, so we could display both the generated and processed waveforms.
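
The software side of those waveform tests amounts to almost nothing. A sketch (the DAC address is hypothetical; the real one came off the schematic):

    #include <stdint.h>

    /* Hypothetical memory-mapped D-to-A register */
    #define DAC0 (*(volatile uint8_t *)0x00FF0000)

    /* Put a sawtooth on the D-to-A and watch it on the scope.  Close the
       loop through an A-to-D and a second D-to-A, and if the second trace
       tracks the first, the converters and the wiring are good. */
    void sawtooth_forever(void)
    {
        uint8_t value = 0;
        for (;;)
            DAC0 = value++;   /* wraps 255 -> 0: a clean, endless ramp */
    }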

We did similar tests with the angle encoders and discrete I/O lines. A handful of toggle switches and LEDs were enough to do the job. The whole process took about a day. Then we could turn the hardware back over to the hardware guys with confidence that the whole thing wasn't going to smoke the first time we ran it.

In this case, we didn't find a single problem in the hardware. The designer may have been new to digital systems, but he sure got it right.

In the end, this system ran, right out of the box. It was delivered on time and on budget, and it outperformed the specs by a factor of two.


The 80486 project
After being spoiled by the facilities of the 68332 project, the next one was somewhat of a letdown. This time, we were back, at least initially, to the hardware guy/software guy sort of mentality. It was a pretty big project, and I was building only two or three "apps" in the system. By the time I arrived on the scene, lots of decisions had already been made. These included the choice of the processor, the I/O devices, the C language, the development tools, and the RTOS. So, for that matter, had the program architecture. There were a lot of I/O signals, mostly analog, so the architecture was designed around an input buffer array that cycled through the A-to-D channels in a specific order. The job of our apps was to read the data at the proper time, process it, and send the results out again. Some of the data went back out to devices again, but most just went to graphic and numeric displays.

Originally, we had planned to use an industrial-quality C compiler designed for real-time, embedded applications. But for preliminary development, we agreed to use the Microsoft C compiler. In time, however, the "preliminary" part morphed into "forever."

This decision was to lead to some consternation, because we knew that the floating-point routines in Microsoft's math library weren't reentrant. To use their compiler in production, we all had to agree not to use floating point. This decision made the problem much harder.

Later, we realized that the problem was not a problem, as long as one task using the floating-point processor didn't get interrupted by another, also using it. Getting around this limitation was as easy as splitting the f.p. computations up into small sets, and disabling interrupts while processing each set. Or, even easier, simply agreeing that only one app got to use floating point.
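
In outline, the fix looked something like this (a sketch only: disable_interrupts() and enable_interrupts() stand in for whatever intrinsics the compiler actually provided -- on the '486, ultimately CLI and STI):

    void disable_interrupts(void);   /* hypothetical wrappers around CLI/STI */
    void enable_interrupts(void);

    /* Keep each burst of floating point short, with interrupts off, so the
       non-reentrant library is never entered by two tasks at once. */
    void update_channel(double *state, double input)
    {
        double a;

        disable_interrupts();
        a = 0.9 * state[0] + 0.1 * input;   /* one small set of f.p. ops */
        enable_interrupts();                /* latency stays tolerable   */

        disable_interrupts();
        state[0] = a;                       /* next small set            */
        state[1] = a * a;
        enable_interrupts();
    }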

Although the development system for this job was capable enough, it was definitely a step down from the 68332 project. There was not even the pretense of the features of an IDE--we used separate Windows and MSDOS apps for code development, downloading, and testing. We had no ICE, relying instead on remote hex debugging.

Despite the step down in our development system's capabilities, this was one of the most fun projects of my young life. That's because I got to--perhaps "was forced to" would be more accurate--use my skills from so many different technical disciplines.

My boss was a big believer in the generalist, as opposed to specialist, philosophy. He was adamant that we not be divided into "hardware guys" and "software guys," but rather know and understand all aspects of the system. Whenever I asked a question about how some gadget in the hardware worked, he'd bark "Get out the schematic and see for yourself!"

At one point, he had me checking a new circuit board to make sure it matched the schematic. There I sat, a Ph.D. physicist "software guy" and an expert on embedded software development, using a multimeter to buzz out the copper traces on a PC board.

I grumbled, "Dammit, Jim, I'm a software engineer, not a lab tech." To myself, I thought, "Man, at the salary this guy's paying me, this has got to be the most expensive circuit board validation in history."

In the end, though, I learned that the boss was absolutely right. The system was entirely too embedded, too tightly married to the hardware, to divide it into software, hardware, and systems pieces. Or control pieces or algorithmic pieces. It all had to play nicely together, and that took someone who understood the functionality of all the pieces.

Now, I love electronics; it was key in my Master's thesis. I also love microprocessors, having been in on the ground floor. And I love digital logic. And math algorithms. And physics. And software engineering. It's not often I get to use skills in all these disciplines on any one job. But to make this job work, I had to dredge up everything I'd learned in all these disciplines, plus quite a few more.

Once I got over the shock of being yanked out of my comfort zone, I didn't just accept the boss's notion of doing it all; I jumped in with both feet.

Like any high-tech company, ours had an electronics lab, and a "chief hardware guy" working in it. But he was always in the greatest of demand, and I didn't want to wait for him to have time to talk to me. Anyhow, I got tired of going into the lab to ask, "Where did the Phillips go?" or "May I borrow the multimeter?"

So I went to Radio Shack and bought myself a CARE package: A set of hand tools, a soldering station and soldering aids, a multimeter, a logic probe, and various electronic components. My desk drawer began to look more and more like my partner's briefcase.

In the end, I found myself not just involved in all aspects of the program. I was, quite literally, embedded in them. At one end of my desk was the software development system and the UUT. At the other end was an oscilloscope, multimeter, signal generator, and logic probe. Later, it included precision thermometers, air and vacuum pumps, and more. In my desk drawer were the circuit components and tools. Below that were a few custom test gadgets I'd built myself. Not far away was a PROM burner, FPGA burner, logic design software, and support software. Standing in the corner were a couple of tanks of nitrous oxide.

In the last system I built for that company, I got to exercise all my skills and then some. I had a lot of help designing the more complex parts of the circuitry, but I was involved in every other aspect. When the prototype circuit board proved to be too noisy, I found myself sitting next to the PC layout guy, showing him how best to route the power, ground, and signal lines, and where to put the isolation caps. When our first board got populated, I was the one who did it. One step at a time, testing all the way (unit testing, remember?).

When one of our fabricators got chips upside down, I was the one who found them. When a part failed, I found that too. When the pressure control system proved to be unstable, I found that too, and redesigned it. And when we learned that the sensing device required a lot of digital logic, I worked with the sensor's vendor on a fix. I picked the serial PROM that solved the problem. Then I designed the logic and burned the FPGA.

And I loved every minute of it.


Today's development systems
Until a few years ago, I used to attend the Embedded Systems Conference every year. I watched with interest as the capabilities of chips and systems improved, and the development systems and software got so very much better.

No more SneakerNet for these systems. No more hex debuggers either. It seemed that almost any device that involved digital logic also had a C compiler and a symbolic debugger. Often the toolset was the GNU toolset. Not my first choice, because I'm a GUI/IDE sort of person, but still very capable, and familiar to virtually everyone.

Today, there's been yet another dramatic shift in development environments: emphasis has moved from the GNU toolset to custom toolsets based on Visual Studio. Some vendors actually use Visual Studio; some use its look & feel. But all are tightly integrated into proper IDEs, with excellent support for cross-compiling for an embedded system. Far from SneakerNet, modern systems talk to the IDE through high-speed connections.

As I sit here typing this, there's a tiny PC board, about 2 x 3", plugged into my USB port. This is not just the core processor, mind you. It's the core processor plus its evaluation board, with a fast CPU and more memory than I used to have in my hard drive. It came with a very complete Visual Studio IDE and a built-in RTOS. I can choose from a bewildering array of pre-built software modules.

There's a second board plugged into my Ethernet port. Running their IDEs is as simple as clicking on an icon in Windows, and if their user interfaces aren't Visual Studio, they're close enough for me.

With systems like this, I'm back down to creating a new build in seconds. Awesome.

I chose three of these systems for nostalgia's sake. One has my favorite 68332 chip; two more have Z80-like processors. I can get away with choosing them because, now that I'm retired, I don't have to explain my motives to anyone.

A whole set of gadgets is based on Parallax's weird and wonderful Propeller chip.

I have one more system to buy. That will be my only concession to leading-edge technology: A high-end ARM system.

Let me see, now. Where did I put that logic probe?

Jack Crenshaw is a systems engineer and the author of Math Toolkit for Real-Time Programming. He holds a PhD in physics from Auburn University. E-mail him at jcrens@earthlink.net.
