Editor's Note: This note from a reader comes in response to my recent blog (DSPs they are a-changin'), and it hits hard at one of the more problematic issues associated with advanced DSP silicon and cores: the tools to support them. Well, they'd better start catching up, or we're in for a world of hurt. Thanks for submitting this, Andrew.
I see DSPs today much as I saw FPGAs some 20-odd years ago. Twenty years ago, most logic was done in "real hardware"; then devices like CPLDs came along and allowed a degree of programmability. Programming was a pain: you had to know the chips in detail, down to the polarity of the I/O pins (they were fixed). If you were lucky you could use "equation" entry, but you still had no optimization and only limited simulation/debug capability.
Then along came two chip companies, X and A, who pushed more complicated chips called FPGAs and forecast they would do away with 90 percent of the digital chips on a board. How right they were. But initially you still had to endure the pain of designing with CUPL, ABEL, PALASM, ELLA, et al., along with simulators that were unreliable.
Then VHDL and Verilog came along, with ModelSim, and the world was standardized... ish.
The abstraction from the chip increased, and one could concentrate more on algorithms and function, not the chip, right up until now, where boards are mostly an FPGA, a DSP, and some memory. Debugging is done primarily in the simulator, where one can get low-level details of operation; if the FPGA is big, hardware coprocessors speed up simulation. The abstraction and the rest are handled automatically.
I'm not saying that one does not need to know the FPGA, but it's getting to the point where, on a team of six FPGA designers, only one has to know the ins and outs of the FPGA.
I see DSPs going the same route. C and C++ seem to be the main way DSPs are programmed, and a deep knowledge of the DSP architecture is required to program efficiently. Debugging is primarily all about putting it on the chip and seeing if the vectors pass!
The IDEs are very pretty, but way too intrusive to give much real information. Multiple processors, memory partitioning, et al. are a major headache. To change a process, or which memory is used, one needs to hack text files here and there. But change one bit of code, and that can chuck another out, to the point that design teams seem to set a memory structure once and never change it in case it breaks. Processes are allocated to a processor with "walls" around the code, so the code looks like it has a simple single processor.
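That "set it once and wall it off" approach amounts to a static allocation table fixed at design time. A minimal sketch of what such a table and its "wall" check might look like (all process names, core numbers, and memory-bank names here are invented for illustration, not taken from any real toolchain):

```python
# Hypothetical static allocation: each process is pinned to one core and
# one private memory bank, decided once at design time and never changed.
ALLOCATION = {
    "fir_filter":   {"core": 0, "bank": "L2_A"},
    "fft":          {"core": 1, "bank": "L2_B"},
    "post_process": {"core": 1, "bank": "L2_C"},
}

def check_walls(alloc):
    """The 'wall' rule: no two processes may share a memory bank.

    Returns True if every process has a private bank, False otherwise.
    """
    banks = [entry["bank"] for entry in alloc.values()]
    return len(banks) == len(set(banks))
```

The point of freezing a table like this is exactly the one above: once `check_walls` passes, each process can be written as if it owned a simple single-processor system, at the cost of never daring to touch the layout again.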
If you're unlucky, you have a "resource allocator" processor that "dynamically" assigns processes to a DSP, but then you're in trouble. DSPs are used for real-time processing: how do you "guarantee" that, under all load conditions, the required processing will be available? With the current tools, can you guarantee that to your manager, 100 percent?
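One analytical way to back up such a guarantee, rather than relying on test vectors, is the classic rate-monotonic utilization bound (Liu and Layland): a set of n periodic tasks under fixed-priority scheduling is guaranteed schedulable if total utilization stays below n(2^(1/n) - 1). A minimal sketch (the task numbers are invented for illustration):

```python
import math

def rm_schedulable(tasks):
    """Rate-monotonic schedulability check.

    tasks: list of (wcet, period) pairs, same time units.
    Returns (utilization, bound, guaranteed) where guaranteed=True
    means the set is provably schedulable under all load conditions.
    """
    n = len(tasks)
    utilization = sum(wcet / period for wcet, period in tasks)
    bound = n * (2 ** (1 / n) - 1)  # n(2^(1/n) - 1), -> ln 2 as n grows
    return utilization, bound, utilization <= bound
```

For example, three tasks with worst-case execution times and periods of (1, 4), (1, 5), and (2, 10) give a utilization of 0.65 against a bound of about 0.78, so the set is provably schedulable; a dynamic allocator offers no such closed-form answer.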
Oh, and then you strike the multicore DSP, which has, say, sRIO on only two of its four cores, plus DMA blocks. How do you model that reliably?
It's amazing what we can actually do when you think of things like that, and it goes to highlight why our gadgets, like MP4 players and TV hard-disc recorders, "freeze" every now and again. That just won't do when it's something more critical than "I missed recording the late show." The bugs are everywhere; we have just gotten used to them.
I'd love to come back in 2029 and see what we can do. I can see boards with 1,024 processors on a chip, multiple chips on a board, all programmed in a language we have yet to invent, all simulated and proven.
Otherwise, we are going to have more glitches as code gets reused and more band-aids are put around it to do more things. Just think of that when you're flying the latest Dreamliner, whose DSP software testing has been accelerated.