CWE: what developers of connected embedded systems need to know

by Chris Tapp and Deepu Chandran, TechOnline India - September 14, 2011

The growth in the number of remotely controlled, connected consumer devices is accelerating and now includes televisions, entertainment systems, home security, heating and even refrigerators. The associated demand for instant connectivity means that the security implications need to be understood.

A lot has been written about the potential risks that could exist if insecure software is used within military, infrastructure or medical systems, and it is easy to understand why they need to be secure.


Incidents of security breaches within consumer devices are on the increase, and steps need to be taken now to ensure that personal information held within the home environment is protected and system functionality is maintained. For example, a broadband router supplied by an ISP was found to contain a CSRF (cross-site request forgery) vulnerability that allowed a remote user to gain full administrative access using a web browser and a specially manipulated URL.

It was possible to steal security keys and configure the router to forward any traffic to any of the systems connected behind it, even though the user thought they were protected by a firewall. Another ISP, whose power-line networking equipment was found to be unsafe due to a manufacturing defect, issued a safety notice and replacement units.

Some users were surprised when, some months later, they were sent letters informing them that they were still using the faulty devices. The units were not able to connect to the Internet themselves, and their continued use could only have been detected by means of firmware running in the ADSL modem/router that was supplied by the ISP, even though this included a firewall to prevent unauthorized external access to the private network.

Any security issue, such as the one identified above, within such a customer-support feature would compromise the security of the whole network. While the firewalls within broadband routers are designed with security in mind, this is not necessarily the case with the firmware that provides other functionality. The security of the system therefore needs to be treated holistically, to ensure that there are no exploitable vulnerabilities within code not considered security-related that could otherwise lead to security compromises.
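
To make the router example concrete, the sketch below shows, in C (the language of most such firmware), the kind of check an embedded web server can apply before acting on a state-changing administration request. The request and session structures, field names and token length are all hypothetical; a real implementation would also generate the token from a cryptographically strong random source.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define CSRF_TOKEN_LEN 32             /* hypothetical token length */

/* Hypothetical request/session types, for illustration only. */
struct session {
    char csrf_token[CSRF_TOKEN_LEN];  /* issued when the admin logs in */
};

struct http_request {
    const char *method;               /* "GET", "POST", ...            */
    const char *csrf_token;           /* token echoed back by the form */
};

/* Constant-time comparison, so that an attacker cannot use response
 * timing to guess the token one byte at a time. */
static bool token_matches(const char *a, const char *b, size_t len)
{
    unsigned char diff = 0;
    for (size_t i = 0; i < len; i++) {
        diff |= (unsigned char)(a[i] ^ b[i]);
    }
    return diff == 0;
}

/* Reject any state-changing request that does not carry the token bound
 * to the authenticated session.  A forged cross-site request can name
 * the URL, but it cannot know the per-session token.  (This assumes the
 * firmware never changes state in response to a GET.) */
bool request_is_authorized(const struct session *s,
                           const struct http_request *req)
{
    if (strcmp(req->method, "GET") == 0) {
        return true;
    }
    if (req->csrf_token == NULL ||
        strlen(req->csrf_token) != CSRF_TOKEN_LEN) {
        return false;
    }
    return token_matches(s->csrf_token, req->csrf_token, CSRF_TOKEN_LEN);
}
```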

Unfortunately, “security by obscurity,” where an attempt is made to secure a system by hiding its design and implementation details, has often been considered adequate to protect non-critical systems such as consumer electronics. The deployment of large numbers of consumer devices means that the chances of this security being maintained are negligible, as a great many people have the opportunity to investigate their behaviour. An example of such a failing occurred in 2008 in Poland, when a teenager modified a remote control unit so that he could change the signals for the tram system. His actions led to the derailing of at least four trams, resulting in twelve people being injured.

The signals for the trams had been designed so that the driver could control them by using a remote control. The system was considered to be secure as the hardware was not commercially available and no thought was given to the injection of commands from any other external source, resulting in the use of data without encryption or validation.
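
A minimal sketch of the validation that was missing is shown below, again in C. Every received frame is checked for length, a protocol identifier, a known command code and a fresh sequence number before it is acted upon, and anything else is discarded. The frame layout and command set are invented for illustration; a production design would additionally authenticate each frame with a cryptographic MAC rather than accept any well-formed one.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Invented frame layout, for illustration only:
 *   byte 0    : protocol magic
 *   byte 1    : command code
 *   bytes 2-3 : 16-bit sequence number (rejects naive replays)
 */
#define FRAME_LEN   4u
#define FRAME_MAGIC 0xA5u

enum command {
    CMD_SIGNAL_STOP    = 0x01,
    CMD_SIGNAL_PROCEED = 0x02,
    CMD_MAX            = 0x02        /* highest valid command code */
};

static uint16_t last_sequence;       /* last accepted sequence number */

/* Validate an incoming frame before acting on it.  Malformed,
 * unknown or replayed frames are silently dropped. */
bool frame_is_valid(const uint8_t *buf, size_t len)
{
    if (len != FRAME_LEN) {
        return false;                /* wrong size       */
    }
    if (buf[0] != FRAME_MAGIC) {
        return false;                /* wrong protocol   */
    }
    if (buf[1] < CMD_SIGNAL_STOP || buf[1] > CMD_MAX) {
        return false;                /* unknown command  */
    }
    uint16_t seq = (uint16_t)((buf[2] << 8) | buf[3]);
    if (seq <= last_sequence) {
        return false;                /* replayed frame   */
    }
    last_sequence = seq;
    return true;
}
```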

The recent compromise of the Sony PlayStation Network clearly demonstrates how a security breach can impact a large number of people and be extremely costly to those affected. The abundance of connected devices now available to the general public arguably requires them to have stronger security attributes than systems maintained by competent administrators, as only a small percentage of users even consider the security aspects of their deployment. Given the size of the code-base associated with these devices, it is a non-trivial task to ensure that they are secure – tools, processes and best practice must all be brought to bear.

Common Weakness Enumeration (CWE)

CWE is a strategic software assurance initiative run by the public-interest, not-for-profit MITRE Corporation under a U.S. Federal grant, co-sponsored by the National Cyber Security Division of the U.S. Department of Homeland Security. It aims to advance the code security assessment industry (i.e. tool availability and capabilities) and to accelerate the uptake of these tools within organizations producing connected devices, in order to improve the software assurance and review processes that they use to ensure devices are secure.

The Common Weakness Enumeration database contains an international, community-developed, formal list of common software weaknesses that have been identified after real-world systems were exploited as a result of latent security vulnerabilities. The core weaknesses that lead to these exploits were identified by examining information on individual exploits recorded in the Common Vulnerabilities and Exposures (CVE) database (also maintained by MITRE) after their discovery in laboratories and live systems.

According to research by the National Institute of Standards and Technology (NIST), 64% of software vulnerabilities stem from programming errors. The CWE database can be used to highlight issues that are a common cause of the errors that lead to security failures within systems, making it easier to ensure that they are not present within a particular development code-base. Formal identification of issues means that strategies can be developed to mitigate their severity, and detection processes can be put in place to help ensure that they are not introduced in the first place.
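
Many of the weaknesses recorded at the coding level are mundane. For example, CWE-120 (“Buffer Copy without Checking Size of Input”) describes the classic C buffer overflow sketched below: the unchecked strcpy() can be exploited to overwrite adjacent memory, while the bounded alternative cannot. The function names and buffer size are illustrative only.

```c
#include <stdio.h>
#include <string.h>

#define NAME_LEN 16

/* CWE-120: the length of 'input' is never checked, so anything longer
 * than 15 characters overflows 'name' and corrupts adjacent memory. */
void store_name_unsafe(char *name, const char *input)
{
    strcpy(name, input);             /* weakness: unbounded copy */
}

/* Mitigated version: the copy is bounded by the destination size and
 * the result is always NUL-terminated. */
void store_name_safe(char name[NAME_LEN], const char *input)
{
    strncpy(name, input, NAME_LEN - 1);
    name[NAME_LEN - 1] = '\0';
}

int main(void)
{
    char name[NAME_LEN];
    store_name_safe(name, "a deliberately over-long user name");
    printf("stored: %s\n", name);    /* safely truncated */
    return 0;
}
```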


The CWE database

The CWE database groups the core issues into categories and is structured to allow it to be accessed in layers:

1. The lowest, most complex layer contains the full CWE list. This contains several hundred nodes and is primarily intended to be used by tool vendors and research projects;
2. The middle layer groups related CWEs. It contains tens of nodes and is aimed at software security and development practitioners;
3. The top layer groups the CWEs more broadly. It contains a minimal set of nodes that define a strategic set of vulnerabilities.

Beyond this high-level layering, there is no formal structure to the CWE database; each weakness is added to the database as it is discovered. The database is maintained in XML format, and a number of filters can be applied when reviewing its contents. This can be done either on the CWE website (cwe.mitre.org) or by downloading the full CWE XML file and applying filters locally.

The content and format of the CWE database is under constant review to ensure the accuracy and relevance of the data is maintained. For example, a recent initiative added qualifying data to each entry to show how the related vulnerability appeared and was exploited in the field.


CWE database content

The CWE database contains information on security weaknesses that have been proven to lead to exploitable vulnerabilities. These weaknesses could be at the infrastructure level (e.g., a poorly configured network and/or security appliance), policy and procedure level (e.g., sharing usernames and/or passwords) or coding level. For coding issues, all of the software languages that are associated with contemporary enterprise deployments are considered, including (but not limited to) C, C++, C#, Java and PHP.

The CWE database does not capture known coding weaknesses that have not been exploited in the field. In other words, it holds information on actual exploits, not theoretical ones.

Is CWE compliance a requirement?

U.S. Federal contracts are increasingly calling for security compliance, and others are likely to follow. Security-related issues are gaining importance within the consumer electronics sector and the design of software systems must now consider how to mitigate the risk of one component affecting another. CWE compliance may be used to demonstrate that contractual obligations on software security have been met.

CWE compliance checking tools should be included in the development environment to ensure that project security requirements are met, as trying to add security in at the end of development is very unlikely to succeed. The adoption of other security standards, such as the CERT C Secure Coding Standard, complements this objective, extending the security characteristics of an application even further.


Writing Secure Software

Software will only be truly secure if security is designed in and implemented from the start. Most vulnerabilities can be traced to coding errors or to flaws in the architecture or design. Defects in these areas are generally hard and/or expensive to fix once the system has been deployed.

Unfortunately, it is common for developers to consider application functionality to be their main objective during development and testing. Security is rarely, if ever, given the same treatment. In reality, the security of a system is one of its main (and most important) quality attributes. It is important to remember that meeting all of the requirements for a system will only ensure it is secure if those requirements include security requirements.
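
The distinction matters in practice. The function below meets a plausible functional requirement (“allocate room for n records”) and will pass ordinary functional testing, yet it contains CWE-190 (“Integer Overflow or Wraparound”): a sufficiently large n wraps the size calculation and produces an undersized allocation. The hardened version simply makes the security requirement explicit. The record layout is an invented example.

```c
#include <stdint.h>
#include <stdlib.h>

struct record {
    uint32_t id;
    char     payload[60];
};

/* Functionally "correct", but CWE-190: for a large enough n the
 * multiplication wraps around, malloc() succeeds with a tiny buffer,
 * and every later write to the array is an exploitable heap overflow. */
struct record *alloc_records_unsafe(size_t n)
{
    return malloc(n * sizeof(struct record));
}

/* The security requirement made explicit: refuse any count that
 * would overflow the size calculation. */
struct record *alloc_records_safe(size_t n)
{
    if (n == 0 || n > SIZE_MAX / sizeof(struct record)) {
        return NULL;                 /* would wrap - reject the request */
    }
    return malloc(n * sizeof(struct record));
}
```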

Figure 1 below illustrates the attributes associated with system quality. By focusing on these measures at all phases of the software development lifecycle, developers can help eliminate known weaknesses. A common understanding of the security goals and approaches to be taken during development within the team is essential to prevent the introduction of security vulnerabilities.

Figure 1 - System quality is determined by many attributes, including those relating to security. An assessment of the security risks and the establishment of secure coding practices are essential if the focus is to remain on the development of a secure system.

The risk assessment evaluates the security risks associated with the various components of the software to be developed and determines the quantitative or qualitative value of these risks in relation to a concrete situation and recognized threat. This allows the nature and impact of any potential security breach to be determined prior to deployment, helping to identify the security controls and mitigation efforts required to prevent any vulnerability resulting in an exploit.
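
As a flavour of the quantitative approach, the sketch below scores each component's exposure as likelihood multiplied by impact, each on a 1-5 scale. The components, scale and scores are invented for illustration; a real assessment would use the project's own threat model and scoring scheme.

```c
#include <stdio.h>

/* Illustrative risk scoring: exposure = likelihood x impact. */
struct risk {
    const char *component;
    int likelihood;    /* 1 = rare  .. 5 = almost certain */
    int impact;        /* 1 = minor .. 5 = catastrophic   */
};

int main(void)
{
    /* Invented example components and scores. */
    const struct risk risks[] = {
        { "web admin interface",  4, 5 },
        { "firmware update path", 2, 5 },
        { "status LED driver",    1, 1 },
    };

    for (size_t i = 0; i < sizeof risks / sizeof risks[0]; i++) {
        int exposure = risks[i].likelihood * risks[i].impact;
        printf("%-22s exposure = %2d\n", risks[i].component, exposure);
    }
    return 0;
}
```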

Once identified, the security controls and mitigation strategies can be incorporated into the system requirements. These security requirements will then cascade through the development process, where they can be traced and married up with artifacts produced during design, coding, standards compliance checking, code review and testing. Documentary evidence will then exist to demonstrate how the final product meets the security objectives that were laid down in the originating contract.


Coding standards

Coding standards are used to encourage programmers to uniformly follow a set of rules and guidelines, established at project inception, to ensure that quality objectives are met. Compliance with these standards must be enforced if those objectives are to be achieved, especially as many security issues result from the kinds of coding error that such rules target.

This compliance checking should be a formal process (ideally tool-assisted, though manual review is also possible), as it is virtually impossible for a programming team to follow all of the rules and guidelines throughout the entire code-base unaided. Adherence to the standards is a useful metric to apply when determining code quality.

CWE itself does not mandate that an automated standards checker be used. However, secure coding practices do require both static and dynamic assurance measures, and the workload will be significantly lighter and the results more accurate if tools are used. CWE-compatible static analysis tools systematically enforce the standard across all code, while dynamic analysis, which ideally takes place on the target, provides assurance that the code does not contain run-time errors.
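
As a flavour of what such tools report, the fragment below contains CWE-252 (“Unchecked Return Value”): a static analyser flags the ignored fopen() result at compile time, while dynamic analysis on the target would catch the resulting null-pointer dereference when the file is missing. The example is generic rather than specific to any one tool.

```c
#include <stdio.h>

/* CWE-252: the return value of fopen() is never checked.  On the
 * target, a missing configuration file turns this into a
 * null-pointer dereference inside fgets(). */
void load_config_unsafe(const char *path)
{
    FILE *f = fopen(path, "r");
    char line[128];
    fgets(line, sizeof line, f);      /* crashes if f == NULL */
    fclose(f);
}

/* Corrected version: every failure path is handled explicitly
 * and reported to the caller. */
int load_config_safe(const char *path)
{
    FILE *f = fopen(path, "r");
    if (f == NULL) {
        return -1;                    /* could not open the file  */
    }
    char line[128];
    if (fgets(line, sizeof line, f) == NULL) {
        fclose(f);
        return -1;                    /* empty or unreadable file */
    }
    fclose(f);
    return 0;
}
```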


Traceability

If a claim is to be made that a system complies with a security standard such as CWE, then evidence must be provided to support that claim. Traceability from the requirements to the design, verification plan and resulting test artifacts, which makes it possible to show which test results prove that a particular security requirement has been met, can be used to support such a claim.


Figure 2 - LDRA TBreq shows the traceability from requirements through the design, verification plan and source code to the final verification reports

Figure 2 above illustrates how LDRA’s TBreq maps requirements to the design specification, verification plan, source code and verification reports. Such a graphical representation makes it easy for developers to immediately spot unnecessary functionality (code with no requirement), unimplemented requirements and failed or missing test cases.


Conclusions

Adoption of a security standard such as CWE allows security quality attributes to be specified for a project. Incorporation of security attributes into the system requirements means that they can then be measured and verified before a product is put into service, significantly reducing the potential for in-the-field exploitation of latent security vulnerabilities and eliminating the associated mitigation costs.

The use of an application lifecycle management (ALM) tool to automate testing and to trace process artifacts and requirements dramatically reduces the resources needed to produce the evidentiary documentation required by certification bodies. The use of a qualified and well-integrated tool chain leverages the tool vendor’s experience, reputation and expertise in software security, helping to ensure a positive experience within the development team.

A whole-company security ethic supported by the use of standards, tools and a positive development environment ensures that security is a foundational principle. The resulting process of continual improvement helps to ensure that only dependable, trustworthy, extensible and secure systems are released for production.


About the authors:

Chris Tapp is a Field Applications Engineer at LDRA Software Technology with more than 20 years’ experience in embedded software development. He graduated from the University of Durham in 1987 and has spent most of his career working within the automotive, industrial control and information technology industries, mainly as a self-employed consultant. He serves on the MISRA C working group and is currently chairman of the MISRA C++ working group. He joined LDRA in 2007 and specializes in programming standards. Chris may be reached at chris.tapp@ldra.com

Deepu Chandran is a Field Applications Engineer with LDRA’s India office. Deepu specializes in the development, integration and certification of mission- and safety-critical systems in avionics, nuclear, industrial safety and security. With a solid background in development and testing tools, Deepu guides organizations in selecting, integrating and supporting their embedded systems from development through certification.
