Using drowsy cores to lower power in multicore SoCs

by Cody Croxton, Ben Eckermann and David Lapp, TechOnline India - June 30, 2011

Cascading power management is a technique that steers tasks to a smaller number of cores during non-peak activity periods so that the idle cores can enter a minimal-power or “drowsy” state.

Multicore processing has enabled ever-higher levels of processing capability, but at a price: higher power consumption. Cascading power management is a technique that steers tasks to a smaller number of cores during non-peak activity periods so that the idle cores can enter a minimal-power or “drowsy” state.

When packet traffic increases again, the technique allows a rapid return to fully loaded conditions. Cascading power management is not simply a power-saving technique; it is also a workload management technique that distributes packet processing in a more efficient way.

Figure 1 below shows how packets are queued and distributed under the cascading power management technique. In a typical network system, the cores take data/packets in from a network, process them, then send them back out to the network.

A multicore SoC has multiple cores that are all doing the same thing, in parallel, but to different packets. The incoming traffic is kept in queues, then removed from the queues and distributed to the cores. When the cores are finished processing, they transmit the packets back to network interfaces.

The key concept behind cascading power management is how that work is distributed. In the non-cascading power management illustration, every core has some non-zero amount of traffic allocated to it, so all of the cores need to be functional. Under a traffic load that a smaller number of cores can handle, not all of the cores need to be involved in packet processing; work is steered to a smaller number of cores, allowing the idle cores to go into the drowsy power-saving mode and not wake up until they are needed.
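
The steering policy can be sketched in a few lines of C. The fragment below is only a minimal illustration of the cascading idea, not Freescale's actual mechanism: the FILL_THRESHOLD value and the queue_depth() and enqueue_to_core() helpers are assumptions made for the example.

/* Minimal sketch of cascading packet steering (illustrative only).
 * Packets go to the lowest-numbered core whose queue is below a fill
 * threshold; higher-numbered cores receive work only when the earlier
 * queues back up. Under light load the later cores therefore see no
 * traffic and can remain drowsy.
 */
#include <stddef.h>

#define NUM_CORES       4
#define FILL_THRESHOLD  32   /* assumed per-core queue watermark */

struct packet;                        /* opaque packet descriptor  */
extern size_t queue_depth(int core);  /* current backlog, per core */
extern void   enqueue_to_core(int core, struct packet *p);

/* Steer one packet; returns the core it was given to. */
int cascade_dispatch(struct packet *p)
{
    for (int core = 0; core < NUM_CORES; core++) {
        if (queue_depth(core) < FILL_THRESHOLD) {
            enqueue_to_core(core, p);   /* wakes the core if drowsy */
            return core;
        }
    }

    /* All queues are above the watermark: fall back to the most
     * lightly loaded core rather than dropping the packet. */
    int best = 0;
    for (int core = 1; core < NUM_CORES; core++)
        if (queue_depth(core) < queue_depth(best))
            best = core;
    enqueue_to_core(best, p);
    return best;
}

Under a light load only the first queue ever fills, so the higher-numbered cores receive nothing and can stay drowsy; as traffic grows, packets spill naturally onto the remaining cores.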


Reduced Energy Consumption under Light Loads

Cascading power management provides the mechanisms that enable a drowsy core, reducing power per core. The technique works dynamically, matching energy consumption to the current workload. The drowsy state provides very large power savings together with a fast wake-up time.

If network traffic is slow, the cores are not burning a lot of power while they have nothing useful to do. If network traffic increases suddenly, the system can return to fully loaded conditions quickly. The technique can apply to any network processing task, such as IP forwarding or firewall management.

How do you determine that traffic is light enough for cores to go into this low-power mode? The approach we developed is very easy for software to use and can be implemented in any network system. To track changing traffic levels, for example during times of day when network traffic is light, it is not ideal to rely on complex software monitoring or complex logic in the SoC. Ideally, there should be a relatively simple way to recognize that traffic has dropped off.

Figure 1. Cascading Power Management

We have developed a self-balancing mechanism that can distribute the traffic under heavy and light workloads according to the diagram in Figure 1 above. When traffic is lighter, packets are distributed using the cascading power management technique, as shown in the bottom half of Figure 1. As traffic becomes heavier, packets are distributed more like the top half of the diagram, in a more traditional manner.
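
From the core's point of view, the decision to doze falls out of the same distribution: a core that stops receiving packets simply runs out of work. The loop below is a hedged sketch of that idea; dequeue_packet(), process_packet(), transmit_packet(), enter_drowsy() and the idle threshold are hypothetical names used only to illustrate the flow.

/* Illustrative per-core worker loop (not Freescale's actual code).
 * The core processes whatever is steered to it; once its queue has
 * stayed empty for a while it enters the drowsy state and is woken
 * when new work arrives on its queue.
 */
struct packet;

extern struct packet *dequeue_packet(void);   /* NULL if queue empty   */
extern void           process_packet(struct packet *p);
extern void           transmit_packet(struct packet *p);
extern void           enter_drowsy(void);     /* returns after wake-up */

#define IDLE_POLLS_BEFORE_DROWSY 1000   /* assumed idle threshold */

void core_main_loop(void)
{
    unsigned idle_polls = 0;

    for (;;) {
        struct packet *p = dequeue_packet();
        if (p) {
            process_packet(p);
            transmit_packet(p);
            idle_polls = 0;              /* still busy: stay awake */
        } else if (++idle_polls >= IDLE_POLLS_BEFORE_DROWSY) {
            enter_drowsy();              /* sleeps until traffic returns */
            idle_polls = 0;
        }
    }
}

No central traffic monitor is required: because the cascade starves the higher-numbered cores first, those are the cores that reach the idle threshold and doze, which answers the light-traffic question raised earlier.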

Drowsy Core vs. DFS and DVFS

Power management in the scenario just described could potentially be controlled with techniques such as dynamic voltage and frequency scaling (DVFS) or dynamic frequency scaling (DFS), which allow on-the-fly frequency adjustment according to current system performance requirements. However, the gains from these techniques are not as great as the gains from using drowsy mode.

Take, for example, a four-core system in which only 50 percent of peak performance is required. One option is to use DVFS. With all four cores operating at 50 percent frequency and 90 percent voltage, each core may consume around 70 percent of its fully loaded power (dynamic power drops with the reduced frequency, but static power decreases only slightly because of the small voltage reduction).

The average power of the DVFS system is therefore also about 70 percent of peak power. A drowsy system, however, could have two cores drowsy and two cores at 100 percent operation. The power consumption of the drowsy-core system can then be expressed as [(2 x 20%) + (2 x 100%)] / 4 = 60%.

This 60 percent of peak system power compares favorably with the 70 percent of peak power consumed by the DVFS system. The drowsy approach also simplifies a system integrator's requirements by not requiring voltage regulators with dynamically programmable voltages, and it may increase system reliability by keeping the operating voltage constant.
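
The comparison can be checked with a short program. The 70 percent DVFS figure and the roughly 20 percent drowsy-core figure are the illustrative estimates used above, not measured data.

/* Sanity check of the DVFS vs. drowsy-core comparison above,
 * using the illustrative per-core figures from the text. */
#include <stdio.h>

int main(void)
{
    const int cores = 4;

    /* DVFS case: all four cores at 50% frequency, 90% voltage,
     * each estimated at ~70% of its full-load power. */
    double dvfs_avg = (4 * 0.70) / cores;

    /* Drowsy case: two cores fully loaded, two cores drowsy at
     * roughly 20% of full-load power (mostly retention/static). */
    double drowsy_avg = (2 * 1.00 + 2 * 0.20) / cores;

    printf("DVFS system:   %.0f%% of peak power\n", dvfs_avg * 100.0);
    printf("Drowsy system: %.0f%% of peak power\n", drowsy_avg * 100.0);
    return 0;
}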

By moving away from a DFS or DVFS environment, system power savings can better match the system's workload. The benefits of the drowsy core technique are even greater under lighter loads: in advanced process technologies, DFS and DVFS are limited by voltage floors beneath which the supply voltage cannot drop.

While DFS can reduce dynamic power, it does not scale static power down at all. Looking at Figure 1, the savings achieved by putting Core 2 and Core 3 into the drowsy state, for example, are greater than they would be if a small amount of traffic were distributed among all of the cores.

Drowsy Mode Techniques

In drowsy mode, the drowsy cores retain the state of all their registers using State Retention Power Gating (SRPG), a technique that allows the voltage supply to be reduced to zero for the majority of a block's logic gates while maintaining the supply to the block's state elements. The rest of the core is off, consuming near-zero power.

SRPG can greatly reduce power consumption when an application is in stop mode, yet it still accommodates fast wake-up times. Reducing the supply to zero in stop mode removes both dynamic and static power from the gated logic. The result is that each drowsy core consumes zero dynamic power and only minimal static power.

In the drowsy state, processor core power consumption is about 80 percent less than full power. Core voltage is down to a minimum level. The power is essentially off, except for the logic needed to turn the core back on. The core clocks are not running, so there is no dynamic power, and almost all functions within the core are shut down.

The L2 cache is still available. When the core enters the drowsy state, the L1 cache is flushed and invalidated by hardware rather than software, relieving software of the need to manage that process itself. The wake-up time has been estimated at less than 200 nanoseconds, and in many cases it will be faster.
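
As a rough illustration, the enter_drowsy() call from the earlier sketch might reduce to a request to the power controller followed by a wait, since the L1 flush and state retention are handled in hardware. The register address, the doze-request bit and the wait primitive below are assumptions for illustration only; on a real device these details come from the SoC's power-management documentation.

/* Hypothetical drowsy-entry sequence (illustration only).
 * Hardware flushes and invalidates the L1 cache and retains register
 * state via SRPG once the request is made, so the software side
 * reduces to a request followed by a wait. */
#include <stdint.h>

#define PWR_CTRL_REG ((volatile uint32_t *)0xFFE00000u) /* assumed address */
#define PWR_DOZE_REQ (1u << 0)                          /* assumed bit     */

extern void wait_for_wakeup_event(void);  /* arch-specific wait primitive */

void enter_drowsy(void)
{
    /* Ask the power controller to put this core in the drowsy state.
     * Hardware then flushes the L1 cache and gates power to the
     * core's logic while SRPG retains register state. */
    *PWR_CTRL_REG |= PWR_DOZE_REQ;

    /* Stall until new work (or another wake source) arrives.
     * Wake-up is estimated at under 200 ns in the text. */
    wait_for_wakeup_event();

    /* Execution resumes here with architectural state intact;
     * the L1 cache is cold and refills from the still-available L2. */
}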

Simple to Use

In Freescale's implementation of cascading power management, power management is controlled through a mechanism the system already uses today. The cascading power management technique adds capabilities to that existing mechanism so that a core can enter the drowsy state when no traffic is directed toward it. No configuration is necessary from the user's perspective, and it is relatively simple for programmers to modify applications to take advantage of this capability.

With the cascading power management technique, cores can achieve reduced energy consumption under light network loads and then automatically return to full function when network loads increase. Relatively simple to implement in software, this technique not only saves a great deal of power but also provides a more efficient way to distribute packet-processing loads among the cores on an SoC.

 

About the authors:

Cody Croxton is a Senior Member of Technical Staff for the Networking and Multimedia Group at Freescale Semiconductor. He is currently leading the design of a next generation Power Architecture processor core.

Ben Eckermann is a Senior Member of Technical Staff and SoC Architect for the Networking and Multimedia Group at Freescale. He is currently leading an effort in design techniques for low power for Freescale's QorIQ family.

David Lapp is a Senior System Architect for the Networking and Multimedia Group at Freescale, and previously served as Chief Technology Officer for Seaway Networks.
 
