Virtualization for the embedded market

by Mark Hermeling, TechOnline India - June 16, 2009

The key differences between virtualization for enterprise versus embedded.

Over the past few years, virtualization technology has been widely adopted in the enterprise market to make the best use of ever more powerful microprocessors with multiple processing cores. Virtualization has certainly proven itself in the back-office environment, and its adoption is now quickly spreading to the embedded market.

Virtualization of enterprise servers allows a single physical server to act as multiple logical servers, hosting multiple instances of Windows, Linux or other operating systems. These systems often use dual- and quad-core processors from Intel and AMD. The move to multicore is accelerating, and most vendors have presented their "many-core" roadmap for beyond four cores. The multi- and many-core chips from the enterprise market are now making their way into the embedded market: more than 25 percent of the embedded processors shipped in 2008 were multicore.

These new multicore devices are bringing a genuine change in the way embedded developers design their systems, and there are several key differences in how embedded developers look at multicore and virtualization compared with their enterprise counterparts.

Virtualization provides the capability of running multiple instances of operating systems on a single processor. Each instance is usually referred to as a virtual machine or virtual board. The virtual board provides an environment in which a guest operating system operates in isolation. A virtual machine manager (or 'hypervisor') manages the virtual boards and arbitrates scheduling as well as memory and device access. Virtualization can be utilised on single-core or multicore processors.

A multicore processor has multiple processing cores connected to shared memory. A common way to utilise a multicore processor is by running a single operating system in a symmetric multiprocessing (SMP) configuration. The single operating system instance has a single scheduler and can dispatch processes and tasks to the different cores. One of the benefits of SMP is that it makes load balancing across the different cores straightforward. However, SMP does not allow the multiple cores on the processor to execute different operating systems, for example, a general purpose operating system as well as a real-time operating system. The single operating system instance in SMP mode is also a single point of failure; if the operating system crashes, all of the cores go down with it and the whole system must reboot.
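To make the SMP idea concrete, the short sketch below (a minimal illustration assuming a Linux SMP target and the GNU toolchain; the worker count is arbitrary) spawns a handful of POSIX threads and lets the single scheduler place them, reporting which core each thread ends up on.

Listing 1: A minimal SMP illustration on Linux

/* SMP sketch: one operating system instance schedules threads across
 * all cores.  Assumes a Linux SMP target; build with: gcc -pthread */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

#define NUM_WORKERS 4           /* arbitrary; often matched to core count */

static void *worker(void *arg)
{
    long id = (long)arg;
    /* sched_getcpu() reports the core the scheduler chose for this thread. */
    printf("worker %ld running on core %d\n", id, sched_getcpu());
    return NULL;
}

int main(void)
{
    pthread_t tid[NUM_WORKERS];

    printf("online cores: %ld\n", sysconf(_SC_NPROCESSORS_ONLN));

    for (long i = 0; i < NUM_WORKERS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_join(tid[i], NULL);

    return 0;
}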

The multiple cores in the processor can also be configured in an asymmetric multiprocessing (AMP) configuration, where each core runs a separate operating system that schedules its tasks on that one core. The operating systems on the cores can either be of the same type (e.g., multiple instances of Linux) or a mix of operating systems (e.g., Wind River Linux and VxWorks).

Embedded systems are different from enterprise IT systems: they are constrained in the power they can consume, the amount of memory available and the form factor. Of course, IT systems face constraints as well, but embedded systems take them to the limit.

These constraints mean that many of the server virtualization solutions optimised for Windows and enterprise Linux are not well suited to embedded devices.

Virtualization allows designers to consolidate different pieces of functionality that have traditionally required multiple, dedicated processors into a single processor, whether single-core or multicore. This reduces the device's cost, increases its functionality, and leaves room for innovation to create differentiated devices.

Virtualization solutions for embedded need to be configurable and adaptable to run on constrained hardware. The following are some of the considerations that developers are looking for:

* Real-time responses: Real-time means fast and deterministic responses to events. Virtualization, of course, requires context switches between the different virtual boards. These context switches need to be fast and must not impact determinism.

* Reliability: A fault in one virtual board needs to be contained and must not bring the entire system down. It must also be possible to reset the faulty board individually to restore it to service.

* Boot time: The time from power-on to a responsive system is important in many industries, for example automotive.

* Memory footprint: Embedded systems have a lot more memory than they did even a couple of years ago, but memory is still on a budget. Virtualization requires memory, but the amount needs to be minimised.

Luckily, hardware and software virtualization technologies are now coming together to provide a foundation that offers enough capability with a small enough footprint for embedded devices.

Hardware support for virtualization such as Intel's VT-x and VT-d (direct I/O) provides a boost in efficiency for virtualization in embedded systems. Hardware support speeds up the administrative work that a hypervisor has to perform. This involves operations such as memory access protection, IRQ dispatching and so forth. It also provides a boost in efficiency for virtualization of devices such as communications and networking elements, which is important for telecommunications applications.
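As an illustration, the presence of VT-x can be detected from software: on Intel processors, CPUID leaf 1 reports the VMX capability in bit 5 of ECX. The sketch below uses GCC's <cpuid.h> helper; note that it only shows whether the processor has the capability, not whether firmware has enabled it or whether a hypervisor is actually using it.

Listing 2: Detecting VT-x (VMX) support via CPUID

/* Sketch: detect Intel VT-x (VMX) capability via CPUID.
 * CPUID leaf 1 reports VMX in ECX bit 5.  Requires GCC or Clang on x86. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        printf("CPUID leaf 1 not available\n");
        return 1;
    }

    if (ecx & (1u << 5))                    /* ECX bit 5 = VMX */
        printf("VT-x (VMX) capability present\n");
    else
        printf("VT-x (VMX) capability not reported\n");

    return 0;
}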

Virtualization is done through an embedded hypervisor. As mentioned before, a hypervisor is a thin administration layer running directly on the hardware (often referred to as a "type 1" hypervisor) that arbitrates the resources of the hardware between the different virtual boards. The use of a hypervisor is beneficial in both single-core and multicore scenarios. The hypervisor arbitrates time (scheduling) if multiple virtual boards run on one core, and it arbitrates resources such as memory and devices between virtual boards in both single-core and multicore configurations.

Figure 1: Example of single and multicore virtualization

In order to run an operating system inside a virtual board on a hypervisor, the OS must either be paravirtualised or the hypervisor must perform emulation. Emulation allows an operating system to run unmodified on a virtual board, and while this seems attractive at first sight, it has a serious drawback: it requires more work from the hypervisor to emulate the hardware whenever the guest operating system tries to access it. More work translates into more code, more memory, and less performance and determinism.

Instead, with paravirtualization, the operating system is modified to collaborate with the hypervisor. This provides greater performance by ensuring the fastest possible interaction between the hypervisor and the guest operating systems. The applications on the operating system continue to run unmodified. Paravirtualization also allows for direct interaction between a guest OS and a hardware device, if approved by the hypervisor, which greatly improves throughput and latency.
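The difference between the two approaches can be sketched as follows. The hypercall number and the hv_call() interface below are hypothetical (every hypervisor defines its own hypercall ABI, and this sketch does not describe any particular product); the point is that a paravirtualised guest driver calls into the hypervisor deliberately, rather than poking a device register and forcing the hypervisor to trap and emulate every access.

Listing 3: Sketch of a paravirtualised guest console driver

/* Sketch of a paravirtualised guest console driver.  The hypercall
 * number and hv_call() interface are hypothetical; every hypervisor
 * defines its own hypercall ABI. */
#include <stdio.h>
#include <string.h>

#define HV_CONSOLE_WRITE 0x01          /* hypothetical hypercall number */

/* Stand-in for the real hypercall.  In an actual paravirtualised guest
 * this would be a single trapping instruction defined by the hypervisor;
 * here it simply prints, so the sketch compiles and runs on a host. */
static long hv_call(long nr, const void *buf, size_t len)
{
    if (nr == HV_CONSOLE_WRITE)
        return (long)fwrite(buf, 1, len, stdout);
    return -1;
}

/* Paravirtualised path: the guest driver asks the hypervisor directly,
 * one deliberate call per request.  With full emulation the guest would
 * instead write to what it believes is a UART register, and the
 * hypervisor would have to trap and decode every single access. */
static long console_write_pv(const char *s)
{
    return hv_call(HV_CONSOLE_WRITE, s, strlen(s));
}

int main(void)
{
    console_write_pv("hello from a paravirtualised guest\n");
    return 0;
}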

A hypervisor can execute a virtual board, which can contain a complete guest OS; but the virtual board can also contain a "minimal executive." In this scenario, the virtual board presents an interface in which to run an executive without an OS. One of the benefits of this scenario is a very quick boot time: the minimal executive comes up first and is operational while the other operating systems take their time to boot.

Most people will consider a hypervisor in scenarios where a single core needs to execute multiple virtual boards with different operating systems (say, Linux and VxWorks). However, the hypervisor can provide benefits in other situations as well. Consider the case where the user has a dual-core processor and wants to run VxWorks on one core for deterministic, real-time behaviour and Linux on the other core for network connectivity or graphics. This is an example of an AMP configuration. Without arbitration, both operating systems have access to the full hardware, which means that, for example, Linux could step over memory owned by VxWorks and vice versa. Manually configuring each operating system instance to avoid such conflicts is complicated and prone to error.

By design, the hypervisor provides this separation between the virtual boards. The hypervisor can map every virtual board one-to-one to the cores on a multicore processor. Each virtual board then runs as the only OS on its core; there is no need (and hence no overhead) for scheduling by the hypervisor. The hypervisor only gets involved to protect memory and keep devices separated. If required, the hypervisor could even map a single OS to multiple cores within a multicore design, delivering an SMP OS over a subset of the cores.
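One way to express such a static partitioning is a configuration table that the hypervisor reads at start-up. The structure below is purely illustrative (the vb_config type and all field names are invented for this sketch, not taken from any product); it captures the idea that each virtual board is assigned its own cores, a private memory region and a set of devices, which the hypervisor then enforces.

Listing 4: Illustrative static configuration of two virtual boards

/* Illustrative static configuration for two virtual boards on a
 * dual-core device.  The vb_config type and all field names are
 * invented for this sketch; real hypervisors use their own formats. */
#include <stdint.h>

struct vb_config {
    const char *name;         /* human-readable virtual board name      */
    unsigned    core_mask;    /* bit n set => board may run on core n   */
    uint64_t    mem_base;     /* start of the board's private RAM       */
    uint64_t    mem_size;     /* size of the board's private RAM        */
    const char *devices[4];   /* devices passed through to this board   */
};

static const struct vb_config boards[] = {
    {
        .name      = "rtos_board",        /* e.g. a VxWorks guest */
        .core_mask = 0x1,                 /* pinned to core 0     */
        .mem_base  = 0x20000000,
        .mem_size  = 64 * 1024 * 1024,
        .devices   = { "can0", "timer0" },
    },
    {
        .name      = "linux_board",       /* e.g. a Linux guest   */
        .core_mask = 0x2,                 /* pinned to core 1     */
        .mem_base  = 0x24000000,
        .mem_size  = 192 * 1024 * 1024,
        .devices   = { "eth0", "gpu0" },
    },
};

/* At boot the hypervisor would walk this table, program the MMU (and
 * IOMMU where available) so each board can only touch its own memory
 * and devices, and then start each guest on its assigned core(s). */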

Another great benefit of using the hypervisor in this situation is reliability. The hypervisor is at all times in control of the hardware. It can detect whether a virtual board misbehaves (e.g., due to a memory or device access violation), and it can reboot this virtual board without affecting any of the other virtual boards in the system.
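A hypothetical sketch of that containment path is shown below; all names in it are invented for illustration and do not describe any particular product's API. When a guest access violates its memory or device assignment, the handler identifies the offending virtual board and recycles only that board, while the others keep running.

Listing 5: Illustrative fault containment in a hypervisor

/* Illustrative fault-containment path inside a hypervisor.  Every name
 * here (vb_id, hv_current_board, hv_reset_board) is invented for this
 * sketch and does not describe any particular product's API. */
#include <stdio.h>

typedef int vb_id;

/* Host-side stand-ins so the sketch compiles and runs; in a real
 * hypervisor these would query and manipulate actual board state. */
static vb_id hv_current_board(void) { return 1; }

static void hv_reset_board(vb_id board)
{
    printf("restarting virtual board %d; other boards keep running\n", board);
}

/* Invoked when a guest access violates its memory or device assignment:
 * only the offending virtual board is recycled. */
static void hv_on_access_violation(void)
{
    vb_id offender = hv_current_board();

    printf("access violation detected in virtual board %d\n", offender);
    hv_reset_board(offender);
}

int main(void)
{
    hv_on_access_violation();          /* simulate a fault for the sketch */
    return 0;
}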

Using a hypervisor to configure an AMP system gives the developer a full set of capabilities (AMP, SMP, protection, boot) to set up multicore easily. For managers, it provides confidence that they have a proven solution that offers portability and future-proofs their projects.

Figure 2: Hypervisor for separation in an AMP system

Scenarios:

So what does all this multicore and virtualization technology allow us to do differently as embedded developers?

* Reduce the number of processors in a system by consolidating them onto virtual boards in a single processor (single or multicore).

* Increase the reliability of AMP systems by guaranteeing resource (memory, devices) separation and the ability to restart virtual boards.

* Migrate existing systems into a virtual board and add more functionality in new virtual boards, providing the opportunity for reuse and innovation.

* Combine real-time, legacy and general purpose operating systems in the same device.

* Provide faster performance through the use of multicore.

The move to multicore devices results in a fundamental shift in the way embedded systems are designed and implemented. There are many options for embedded system developers to utilise the benefits of multicore. AMP, SMP and hypervisor technologies provide the capability to reduce the number of processors in a system, increase reliability, migrate legacy code and add innovative new features. All of this greatly helps to deliver the new types of devices that the market demands, with Internet connectivity and ever-increasing performance.

Reusing multicore and virtualization technology from the desktop or back office is not an effective way to optimise these devices; the constraints placed on embedded devices require a dedicated virtualization solution. Embedded virtualization is available today that does not compromise the resources of the design, giving the system designer more flexibility to differentiate the end device and to improve performance and power consumption while reducing development risk, time-to-market and cost.

Mark Hermeling is senior product manager for multicore and virtualization at Wind River.

Related links and articles:

Virtualization and componentization in embedded systems — and how it will change the way you engineer
