Reducing costs with embedded software optimization

by Nick McNamara, TechOnline India - August 25, 2009

Embedded software optimization enables OEMs to extend product life cycles, avoid expensive product redesigns, and reduce the operational costs caused by unplanned maintenance in the field.

In today's economic climate, OEMs developing products based on embedded systems face a stiff set of challenges. Whatever the financial situation, they need to maintain or increase sales of their goods in the face of tough competition, but budgets to improve or redevelop their products are often being cut dramatically.

Companies may also be unable to recruit the right engineers with the specialist skills they need, due to a lack of talent in their area, or even a hiring freeze. How can they square this circle and keep their position in the market?

In reality, products which deploy embedded software often have unrealized potential due to inefficiencies in their code. Products are regularly developed within impossible 'time-to-market' timescales, which compromise either overall quality or the functional specification.

The hardware may be capable of better performance if the software on products already in the field were fully optimized. Multiple tweaks and optimizations are usually possible, meaning the cumulative benefits are often significant.

Little details make the difference

There may also be more substantial issues in the software that cause larger performance problems, or that are perceived by the user as important even if they have no major measurable effect on the system. We all know that it's sometimes the annoying little details of a product that make all the difference to usability, and this can be enough to separate success from failure in a competitive commercial environment such as consumer gadgets.

Open source technologies pose a particular challenge, as they tend to continually evolve and may improve and change more frequently than commercially-licensed software. In particular, embedded Linux is a popular choice, but OEMs may be missing out on squeezing the last few per cent of performance out of their products if the version of Linux on their product has received no platform-specific performance optimization or customization.

As Linux is non-proprietary, the OEM is responsible for ensuring the Linux OS is optimized, and many OEMs do not have the appropriate skills to do this in-house. Taken together, these factors mean that many OEMs have a chance to improve their systems at a cost substantially lower than developing a new product. By removing issues and improving performance, the product lifecycle can be extended and revenue maintained or increased.

But how can companies achieve this goal in practice, particularly if expert skills in a particular niche technology are lacking? One approach has been developed by U.K.-based Pebble Bay Consulting: its model, called embedded software optimization (ESO), provides an example of how the cost-effective improvement of embedded software can be approached.

The ESO approach is based around four identifiable and measurable phases: analysis, development, test and maintenance. First, the code base is analysed using a range of tools and techniques, for example to understand worst-case execution-time behaviour or to perform static source-code analysis.

A typical embedded system may contain up to a million lines of source code, or even more. Checking this huge volume of code by hand is clearly not a practical option. Static analysis tries to identify code sequences that might result in buffer overflows, resource leaks, or many other reliability and security problems. Source code analysers do an excellent job at locating a significant class of defects that are not detected by compilers during standard builds and often go undetected during run-time testing or typical field operation.

Getting a clear view of how an embedded system operates at run-time is often overlooked under tight development timescales. A range of tools can be used to analyse the inefficiencies that accumulate through the evolutionary nature of software design. These may include benchmarking analysis, memory-usage analysis, networking performance measurements, and investigation of code efficiency and density. All of these help to expose problematic embedded code, and point to areas that require attention or redevelopment.

The results from the analysis phase give an insight into the existing performance of the software, and where it is perhaps not reaching its full potential. Metrics for improvement can then be defined, ensuring that all development time and effort is focussed where it will really deliver useful results.

With the performance metrics and goals defined, the next phase is development, where the code base is re-engineered to achieve the required improvements. It's important at this stage to avoid the classic 'not invented here' syndrome: the best way to get results may be re-writing existing code, writing completely new sections, or buying in third-party code from a library or other source. In practice, a combination of these approaches will often be the optimal solution.

The code is then tested to ensure the right results have been achieved, and further work can be carried out if necessary. Finally, in the maintenance phase, the performance of the new code is monitored to check that the expected behaviour is realised in practice. This model has proved successful so far across a range of embedded products.

The improvements in performance can run to tens of percent. Embedded software optimization enables OEMs to extend product life cycles, avoid expensive product redesigns, and reduce the operational costs caused by unplanned maintenance in the field.

Nick McNamara is commercial director of Pebble Bay Consulting.
