The purpose-driven CPU: Multiple tasks in Carrier Ethernet

by Uday Mudoi, Vitesse, TechOnline India - April 13, 2011

Demands for packet-processing tasks in the public network have grown so diverse that the notion of a unified CPU solving the processing needs of either the central telecom switch or the edge access device has all but disappeared.

There is a reason one rarely hears the term "network processor" anymore.

There is a control-plane management device, there is an Ethernet switch chip, there is a datapath deep packet inspector, there is a channel-centric processor to monitor and prioritize traffic, and there may be co-processors such as TCP offload engines.  The integration capabilities of Moore's Law may allow several functions to be combined over time, but intelligence in the network is decentralized by nature, so CPU tasks remain varied in both the control plane and the data plane.

In some sense, the task of standardizing network processing is easier than it has ever been, and easier than was anticipated in the 1990s.  In previous generations, CPU designers had to optimize for both packet- and circuit-switching functions, and design packet-forwarding engines for both variable-sized frames (Ethernet, frame relay) and like-sized cells (Asynchronous Transfer Mode, or ATM). 

In practice, that meant designing for T1/E1, Ethernet, token ring, X.25, frame relay, SONET/SDH, and ATM traffic.  Today, the time-division multiplexed circuit has all but disappeared, and the ubiquitous packet-switching market has standardized on Ethernet at Layer 2 and TCP/IP at Layers 3 and 4.

This by no means implies that network design is a simple task dependent on a flat topology, particularly now that Ethernet finds an equal home in both enterprise and service-provider networks.  Robert Metcalfe, one of the inventors of the original bus-oriented Ethernet, once said "I no longer recognize my baby" of the switched hub topology that dominated Ethernet circa 2000, at the 100-Mbit and 1-Gbit levels. 

The Ethernet baby has changed just as significantly in the last ten years, adding the advanced services defined by the Metro Ethernet Forum, and such advanced monitoring and performance features as IEEE 1588 timing, Y.1731 performance monitoring, and G.8031/32 protection switching.  Most of these latter features are intended to make Ethernet in the carrier environment look more like the TDM and SONET services of previous carrier networks -- but even Ethernet in a well-defined enterprise or metro space requires different processing architectures than in years past.

The problem space for the combined MAC/switch device in an enterprise Ethernet world has been honed to one of support for faster physical interfaces.  While some switch architectures continue to add support for "deep packet inspection," most chips that perform more than a rudimentary analysis of a packet header are segmented from a central switch through a co-processing topology.  TCP offload engines, for example, are designed as separate chips configured in a look-aside role to the primary packet-forwarding engine.  The original Layer 2 Ethernet switches have done a good job adding Layer 3 support for MPLS and routing, but too many header-modification functions slow forwarding as MACs and switches are accelerated to meet 10-, 40-, and 100-Gbit/sec backbone speeds. 

The key for the enterprise packet-forwarding device, and for central switches in a metro environment, is to forward packets as quickly as the header inspection will allow, so that actual throughput can come close to meeting the advertised physical speed of the backbone, which may be as great as 40 or 100 Gbit/sec.

When the Metro Ethernet Forum began working with standards bodies such as the IEEE and the International Telecommunication Union, however, a different set of priorities took precedence over raw forwarding speed. 

Carrier-centric processors, whether for wireline or wireless services, are designed to meet the constraints of tight Quality of Service (QoS) prioritization first and foremost.  A QoS scheduling algorithm implemented in hardware may use packet prioritization schemes, such as weighted round-robin, to determine how packets are delivered in a rank ordering scheme.  The QoS processor devices provide predictability for Ethernet services that allow ironclad Service Level Agreements (SLAs) to be negotiated between carriers, or between a carrier and business customer. 
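The weighted round-robin scheme mentioned above can be sketched in a few lines. This is a hypothetical software model of what a hardware QoS scheduler does, not any particular device's implementation; the queue names and weights are invented for illustration.

```python
from collections import deque

class WrrScheduler:
    """Weighted round-robin: each queue is served up to its weight per round,
    so higher-weight (higher-priority) traffic gets more slots."""

    def __init__(self, weights):
        # weights: {queue_name: packets served per round} -- illustrative values
        self.weights = weights
        self.queues = {name: deque() for name in weights}

    def enqueue(self, queue, packet):
        self.queues[queue].append(packet)

    def schedule_round(self):
        """Serve each queue up to its weight; return packets in service order."""
        served = []
        for name, weight in self.weights.items():
            q = self.queues[name]
            for _ in range(min(weight, len(q))):
                served.append(q.popleft())
        return served

sched = WrrScheduler({"voice": 3, "video": 2, "best_effort": 1})
for i in range(4):
    sched.enqueue("voice", f"v{i}")
    sched.enqueue("best_effort", f"b{i}")
print(sched.schedule_round())  # ['v0', 'v1', 'v2', 'b0']
```

In one round, voice traffic receives three service slots for every one granted to best-effort traffic, which is the predictability that SLA negotiation depends on.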

As service providers migrate from TDM networks of the past to all-Ethernet networks of the 21st century, MACs and mappers that support advanced services must comply with emerging standards such as the previously-mentioned 1588 for timestamping; G.8031/32 for sub-50-ms failover protection switching; and Y.1731 Operations Administration and Maintenance (OAM), which allows Ethernet to offer the real-time network analysis considered the norm in SONET networks. 
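The arithmetic behind IEEE 1588 timestamping is simple enough to show directly. In the standard's delay request-response exchange, the slave computes its clock offset from four timestamps; the sketch below assumes a symmetric path (the protocol's usual assumption) and uses invented nanosecond values.

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """IEEE 1588 delay request-response math.
    t1: Sync sent (master clock), t2: Sync received (slave clock),
    t3: Delay_Req sent (slave clock), t4: Delay_Req received (master clock).
    Assumes symmetric one-way path delay; times in nanoseconds."""
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2   # mean one-way path delay
    return offset, delay

# Example: slave clock runs 500 ns ahead; true one-way delay is 100 ns.
offset, delay = ptp_offset_and_delay(t1=0, t2=600, t3=1000, t4=600)
print(offset, delay)  # 500.0 100.0
```

Hardware timestamping in the MAC or PHY matters precisely because these four values must be captured as close to the wire as possible; software timestamps add jitter that corrupts the subtraction.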


In fact, the desire to emulate SONET specs, such as protection-switching times, has driven much of the Carrier Ethernet standards work. Service providers expect an Ethernet network that is just as reliable, with as dependable a failover protection switching architecture, as its SONET predecessor.
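The failure-detection side of that sub-50-ms budget can be sketched as follows. This is a simplified software model of G.8031-style linear protection, assuming the common 3.33 ms continuity-check (CCM) interval and the convention of declaring failure after roughly 3.5 missed intervals; the class and field names are invented.

```python
CCM_INTERVAL_MS = 3.33   # common fast CCM rate for protection switching
LOSS_THRESHOLD = 3.5     # intervals of silence before declaring failure

class ProtectedPath:
    """Toy model of 1:1 linear protection: traffic rides the working path
    until its continuity checks stop, then moves to the protection path."""

    def __init__(self):
        self.active = "working"
        self.last_ccm_ms = {"working": 0.0, "protection": 0.0}

    def receive_ccm(self, path, now_ms):
        self.last_ccm_ms[path] = now_ms

    def poll(self, now_ms):
        """Fail over if the working path has been silent too long."""
        silent = now_ms - self.last_ccm_ms["working"]
        if self.active == "working" and silent > LOSS_THRESHOLD * CCM_INTERVAL_MS:
            self.active = "protection"
        return self.active

p = ProtectedPath()
p.receive_ccm("working", 10.0)
print(p.poll(12.0))   # working -- CCMs still fresh
print(p.poll(30.0))   # protection -- ~12 ms of silence triggers failover
```

With these numbers, detection alone consumes roughly 12 ms of the 50 ms budget, which is why the detection and switchover logic is typically implemented in hardware rather than left to a general-purpose CPU's scheduling latency.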

Additional tasks have arisen as multiple types of bridging, network routing, and transport are combined with merged protocols such as PBB-TE and MPLS-TP.  In attempting to address new requirements from support of IP/MPLS or hybrid IPv4/IPv6 networks, some OEMs have turned to broader use of search-centric processors, often based on special content-addressable memory such as ternary CAM (TCAM).  In some cases the TCAM search engine can be combined with other centralized control-plane or data-plane functions, but in other cases it deserves its own specialized role near the route engine.
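What a ternary CAM actually computes can be modeled in software: each entry is a (value, mask) pair in which masked-out bits are "don't care," and a real TCAM compares the key against every entry in parallel, with entry order resolving priority. The table entries below are invented IPv4 prefixes for illustration.

```python
def tcam_lookup(key, entries):
    """Return the action of the first (highest-priority) matching entry.
    A hardware TCAM evaluates all entries in one clock; this loop models
    only the matching semantics, not the parallelism."""
    for value, mask, action in entries:
        if key & mask == value & mask:  # bits where mask=0 are don't-care
            return action
    return "default"

entries = [
    # (value, mask, action) -- longer prefixes listed first for priority
    (0x0A000100, 0xFFFFFF00, "port1"),   # 10.0.1.0/24
    (0x0A000000, 0xFF000000, "port2"),   # 10.0.0.0/8
]

print(tcam_lookup(0x0A000105, entries))  # port1 (matches the /24 first)
print(tcam_lookup(0x0A050505, entries))  # port2 (falls through to the /8)
```

The same value/mask mechanism handles IPv6 prefixes and multi-field ACL keys, which is why TCAM-based search engines absorb the hybrid IPv4/IPv6 classification work so readily.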


Service Orientation and Distributed Intelligence

Given that some statistics-gathering functions necessary for OAM can be implemented in small logic blocks or realized in middleware, the custom-IC design groups within some network equipment companies have advocated integrating many Carrier Ethernet and specialized transport functions within a central network processing unit.  Tasked with ever more service-aware functions for the public network, such a device begins to resemble the proverbial Swiss army knife. 

The question that needs to be asked is not whether this can be done, but whether OAM monitoring coverage can keep up at line speed, whether protection switching can be implemented within sub-50-ms windows, and whether the single-chip solution represents the most cost-effective way to add functionality for end-to-end Ethernet.



Service-aware system topologies emphasize the ability to dial up unique service offerings on a per-port, per-link, per-emulated-circuit basis.  This suggests that an effective chip-level approach to offering efficient Carrier Ethernet services should follow the rubric of "centralize when possible, distribute when necessary."  Certainly, support for OAM, timestamping, protection switching, and transport flexibility can reside in a centralized control-plane management processor.  It can be supported indirectly in the feature set of datapath packet-forwarding engines and multi-port Ethernet switches.  But devices closer to the line interface, including MACs, framer/mappers, and even Serdes physical-layer devices, can also play important support roles in end-to-end Ethernet by being upgraded to offer Carrier Ethernet standards in hardware.

From the point of view of a semiconductor vendor, the optimal way to address the ubiquity of Ethernet as an enterprise LAN and as a global service is to offer a suite of products with end-to-end support for advanced services.  In this scenario, the devices for enterprise and metro rings can be defined in the traditional partitioning of high-density switch, high-performance MACs, high-speed Serdes transceivers, and packet-forwarding engines or search engines where appropriate.  Devices for the LAN and MAN markets can be the first to add support for higher-speed services, such as the new 40-Gbit and 100-Gbit Ethernet rates defined by the IEEE. 

It may be important in enterprise environments to add support for the aggregation of special services such as 8-Gbit Fibre Channel.

If all Ethernet devices in a portfolio are designed with support for standards such as IEEE 1588, Y.1731, and G.8031/32, devices can be adapted for use in the line cards that service more complex metropolitan topologies, and in the long-haul rings and links for full-service WANs.  As wireless operators move to Ethernet transport for backhaul, the same devices can be utilized in wireless backhaul.  It then becomes a matter of efficient software implementation to add, comprehensively or incrementally, equipment-level support for Metro Ethernet Forum Layer 2 defined services such as E-Line and E-LAN; merged transport services such as IP/MPLS; network synchronization as defined in IEEE 1588 and Synchronous Ethernet; advanced protection switching defined in G.8031/32; and Ethernet OAM as defined in Y.1731.
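The E-Line/E-LAN distinction above is a structural one: E-Line is a point-to-point Ethernet Virtual Connection between exactly two UNIs, while E-LAN is multipoint-to-multipoint among two or more. A provisioning layer can enforce that distinction directly; the function, port names, and data layout below are invented for illustration.

```python
services = {}

def provision(evc_id, service_type, unis):
    """Register an Ethernet Virtual Connection (EVC), enforcing the
    MEF topology rules: E-Line is point-to-point, E-LAN is multipoint."""
    if service_type == "E-Line" and len(unis) != 2:
        raise ValueError("E-Line is point-to-point: exactly two UNIs")
    if service_type == "E-LAN" and len(unis) < 2:
        raise ValueError("E-LAN needs at least two UNIs")
    services[evc_id] = {"type": service_type, "unis": list(unis)}

provision("evc-100", "E-Line", ["port1/vlan100", "port7/vlan100"])
provision("evc-200", "E-LAN", ["port2", "port3", "port5"])
print(sorted(services))  # ['evc-100', 'evc-200']
```

Incremental software support, as described above, amounts to extending a table like this with per-EVC attributes -- bandwidth profiles, OAM session bindings, protection-group membership -- without changing the underlying silicon.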

The service provider can expect that some functions offered by the network equipment provider will be upgraded in silicon as standards evolve or new services are offered.  But the strategy of distributing network intelligence and making all devices in an Ethernet portfolio ready for advanced services allows the network equipment provider to future-proof network nodes, from switch-routers to access devices to backhaul equipment. 

If the mappers serving such markets are protocol-flexible, they can also handle the rare cases where non-Ethernet traffic, such as Fibre Channel packets or Optical Transport Units, is aggregated into the flow -- though instances of non-Ethernet traffic, even in public networks, may diminish significantly over time.
About the Author:

Uday Mudoi, director of product marketing at Vitesse, has more than 16 years of experience in the communications and semiconductor industries.  Mr. Mudoi started his career at Siemens and joined Vitesse in 2000.  His experience in networking and communications spans network processors, Ethernet switches, green technology, Carrier Ethernet, and modem chipsets.  Mr. Mudoi holds a Bachelor of Science degree in Electrical Engineering from the Indian Institute of Technology, Kharagpur, and a Master's degree in Computer Science from North Carolina State University.  He also received an MBA from Columbia University.
