Viewpoint: Why programmability is now a game changer

by Jacques Benkoski, TechOnline India - September 10, 2009

The common wisdom these days says that the semiconductor industry is heading for a cliff. Some even say we are like the cartoon character who has run off the edge, legs still pumping, not yet realizing there is no ground beneath him.

Rising design costs, the growing number of IP blocks, and escalating mask costs are combining to make any but the highest-volume chips seemingly uneconomical.

The pundits foresee a world in which microprocessors and a few consumer SoCs are the last remaining dinosaurs, while all other designs head into a cul-de-sac with only a narrow passage to the FPGA world for low-volume, price- and power-insensitive applications.

The trends are thus obvious and the outcome inevitable. Game Over.

Not so. The designers of that game have forgotten a few important parameters. The pressure for low-power solutions is now pervasive well beyond mobile applications.

Executing a given task in software on a generic processor architecture, rather than in dedicated hardware, has been shown to be roughly two orders of magnitude worse in cost, power, and performance.

With all due respect to embedded cores, they would have to run at impossible speeds to swallow a 5 Gbps stream, transcode video live, or steer a beam-shaping antenna array.

Some would argue that multi-core is the solution, especially with dedicated cores for specific applications.

But the multi-core programming problem remains stubbornly elusive beyond simple threading on identical cores. Nobody seems able to master the unbounded complexity of heterogeneous processing engines with different characteristics communicating over ill-characterized buses and networks-on-chip.

At the same time, the number of software engineers continues to grow while the number of hardware engineers shrinks, at least in relative terms. Yet in most cases semiconductor companies give the software away as a necessary component of a platform, hardly treating it as the valuable differentiator it actually is.

While these disputes play out, the FPGA world has been undergoing its own silent revolution. No longer seen as mere gobs of glue logic, FPGAs have emerged as an interesting implementation alternative for many applications, with power, price, and performance that let them make their way into consumer and even mobile products.

At the same time, they are emerging as a fascinating distributed compute fabric with a regular architecture of computational elements and memories. They suddenly represent a quasi-systolic-array alternative to von Neumann processors, with a much more attractive performance, cost, and power tradeoff.

That is, provided one finds a way to program the beast without resorting to the equivalent of assembly coding, i.e., RTL design.

What do these three trends have in common? They all converge on the compelling need to make integrated circuits software-programmable. Not the kind of programmability the original silicon-compiler pioneers dreamed of, but one that has evolved from the worlds of VLIW compilation and hardware accelerators.

What do we mean by that? Microprocessor designers have long known that it pays to add instructions that handle common or costly software routines in hardware. Back in the '80s we had Intel's 8087 math co-processor, which later evolved into the more elegant MultiMedia Extensions (MMX) to the famed Intel Architecture.

More recently, as ARM cores became ubiquitous on SoCs, designers were quick to recognize that the tedious tasks the little embedded microprocessor could not cope with could be offloaded in a similar way, rather than simply using the core as a master controller for a handful of unrelated hardware blocks.

The path from there to the most recent developments, which I believe are game changers, was natural, at least in hindsight.

Enter tools that come from the compilation world and can map a software description onto automatically created hardware extensions.

The electronics are, in effect, programmed in software and compiled automatically, without going through the equivalent of assembly language that RTL represents. What does that mean in the complex world just described?
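
To make the idea concrete, here is a minimal sketch of the kind of C routine such a compiler can turn into a hardware extension; the function name, tap count, and data types are purely illustrative, not taken from any particular tool.

```c
#define TAPS 16

/* A plain C kernel: a 16-tap multiply-accumulate (FIR-style).
 * To the software world this is just a loop; an HLS-style
 * compiler can unroll the fixed-bound loop into a pipelined
 * datapath of multipliers and adders in the generated hardware. */
int fir(const int sample[TAPS], const int coeff[TAPS])
{
    int acc = 0;
    for (int i = 0; i < TAPS; i++)
        acc += sample[i] * coeff[i];
    return acc;
}
```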

It radically resolves the conundrum traditional ICs were facing by providing a way to capture all the software horsepower and differentiation and map it onto hardware. The resulting architecture is clean: an embedded microprocessor at the heart of the SoC and a series of hardware accelerators neatly brought to bear on the appropriate tasks.

This architecture also tames the heterogeneous multi-core programming problem, because each hardware accelerator is seen as an accelerated subroutine; all the software differentiation can be mapped into hardware, minus the headache.
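
What does an "accelerated subroutine" look like from the software side? Here is a hedged sketch: a hypothetical drop-in replacement for the fir() routine above with the same signature, whose body hands the work to a memory-mapped accelerator. The register map, addresses, and names are invented for illustration; in practice the compilation flow would generate such a driver stub.

```c
#include <stdint.h>

/* Invented register map for the generated FIR accelerator. */
#define FIR_BASE   0x40000000u
#define FIR_IN     ((volatile int32_t  *)(FIR_BASE + 0x00)) /* 32 operand slots */
#define FIR_START  (*(volatile uint32_t *)(FIR_BASE + 0x80))
#define FIR_DONE   (*(volatile uint32_t *)(FIR_BASE + 0x84))
#define FIR_RESULT (*(volatile int32_t  *)(FIR_BASE + 0x88))

/* Same signature as the software fir(); callers never notice
 * that the body now dispatches to hardware. */
int fir(const int sample[16], const int coeff[16])
{
    for (int i = 0; i < 16; i++) {   /* load the operands */
        FIR_IN[i]      = sample[i];
        FIR_IN[16 + i] = coeff[i];
    }
    FIR_START = 1;                   /* kick off the accelerator */
    while (!FIR_DONE)                /* spin until the result is ready */
        ;
    return FIR_RESULT;
}
```

Because the call site never changes, the heterogeneous system degenerates, from the programmer's point of view, into ordinary sequential software plus a set of faster subroutines.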

The work of system software engineers, which contains so much of the differentiated and unique IP of semiconductor and system companies, can be captured elegantly. And since it is now delivered as a differentiated, low-power, high-performance hardware package, manufacturers can actually get paid for it.

The outcome is a new wave of rapid product introductions of complex, targeted ICs for ever more powerful consumer electronics, myriad mobile internet devices, gaming-platform breakthroughs, and automotive infotainment, to name only the most obvious.

No wonder Intel's Atom, ARM, Marvell, and others are on the warpath for what is likely to be the biggest semiconductor opportunity in a very long time, with unit volumes in the billions.

These compiler tools also have a dramatic impact on the FPGA world, which follows the same path: the software maps cleanly, under the same paradigm, onto its pre-architected distributed compute platform. FPGAs can now be formidable competitors to DSPs and other dedicated machines, with a better power, cost, and performance tradeoff while offering a similarly clean programming model.

But there is no reason to erect an unnecessary wall between the two worlds, as many applications will be ideally served by an off-the-shelf SoC with a companion FPGA as a hardware accelerator for specific tasks.

So one can now see a continuum of solutions: from software running on a processor, to processors with hardware accelerators on SoCs, to SoC/FPGA co-processing solutions, to FPGAs with their embedded cores and hardware accelerators, to the pure FPGA-as-a-compute-machine, all unified by a common programming paradigm.

Already, DSP processing embedded in SoCs has surpassed discrete DSPs, as reported in the DSP Silicon Strategies report, and the ITRS predicts an explosion in the number of embedded accelerators.

The number of SoC and ASIC tapeouts has shrunk and may continue to shrink, but the number of embedded processing engines is growing exponentially.

It is not clear whether they will be called Software Programmable Integrated Circuits, or whether they will ever have a name at all. But step back and look at the overall landscape, and the trend is obvious and inevitable. Play Again?

Jacques Benkoski is a Venture Partner at USVP, and Executive Chairman of the board of directors at Synfora.
