Tips and Tricks -- Honing Ethernet for new prioritization, timing duties using the Vitesse Serval switch

by Uday Mudoi, Vitesse Semiconductor, TechOnline India - December 26, 2011

The advantages of unifying on Ethernet extend to both carriers and their enterprise customers. By relying on Ethernet as the underlying common Layer 2 protocol, service providers not only flatten their own networks, but also allow their customers to enable unified services across multiple operators. One access device thus can link with many operators.

Developers of Internet access devices are familiar with using such interface chips as T1/E1 framers and ATM circuit emulation devices, all with an eye to making packet and circuit services play well together.  The advent of end-to-end Ethernet in the public network would seem to simplify the job for the OEM designing equipment for the enterprise.  However, sending out an Ethernet packet to do an isochronous circuit job is not always as simple as might be anticipated.

When Ethernet switches were first employed in the telco central office, existing Layer 2/Layer 3 switches for the LAN were simply scaled up to offer more ports and faster speeds per port, and in many cases that worked just fine.  But the development of new standards for packet prioritization and fault resilience, the product of such organizations as the IEEE, the Internet Engineering Task Force, and the Metro Ethernet Forum, has tasked the network-interface OEM with new requirements for service-aware support at the network demarcation point.

OEMs want to offer enterprise customers a platform that can interface to a service provider with Ethernet services in a manner that preserves everything the customer already knows about packet and circuit services.  This means that an access system must offer prioritization of traffic that is as fine-grained as necessary for a customer's mix of voice, data, and video services.  It also means the service protection should be similar to that found in Sonet, midband TDM (T1/E1), and Asynchronous Transfer Mode (ATM) services available in the public network.

Where once a Network Interface Device (NID) was seen as a tunnel and services platform, operating largely at Layer 2, to interconnect User Network Interfaces, the future NID becomes a more manageable and service-aware platform.

In the new NID, hardwired Operations, Administration, and Maintenance (OAM) is a virtual necessity, due to the need for performance management to be accomplished at wire speed at multiple layers in the OSI stack.  In addition, network edge devices must be able to handle a mix of fiber-based Ethernet rings and bonded copper, which will continue to be seen at the network edge.  

It is also important for the switch at the network edge to support both standard Multi-Protocol Label Switching (MPLS) and the more recently defined transport version, MPLS-TP, as real-world public networks will use a mix of bridging, L2 switching, and routing in the most practical and cost-effective manner possible.  It is not necessary for every Ethernet switch to be a full-service router, but providing support for full Layer 3 classification is a necessity.

While an enterprise customer may not fully realize the capabilities introduced by a dedicated management channel for OAM, the software provided with such a platform will allow indirect control and handoff of service prioritization to the primary service provider and an out-of-franchise operator.  This means the enterprise customer can "dial up" Service Level Agreements (SLAs) that involve a single provider or multiple providers.  In this case, a Hybrid NID enables more complex traffic management through its dedicated management channel.  An out-of-franchise operator can relinquish control of the NID UNI to a second operator through a management channel, thus allowing true multi-operator SLAs to exist.  The key to realizing a true Hybrid NID is to not lock in management to either a local or out-of-franchise service provider, but to allow the operators to share and negotiate management of services as part of the SLA.

The type of software-enabled SLA dial-up the enterprise customer will see for multi-provider OAM is the same model that can be expected in prioritizing traffic.  Some enterprise customers will not want to choose a traffic-management model, but will opt for the best available given their chosen SLAs and traffic conditions.  Others may want to choose the buffering and packet-prioritization algorithms enabled at the demarcation port.  Similarly, a majority of enterprise customers will not want to explicitly choose their ring-restoration or fault-recovery mechanisms; if there is a network fault, they will simply expect a Sonet-like restoration time of less than 50 ms.  But there may be cases where an enterprise customer wants the Hybrid NID to specify recovery mechanisms (if available from the provider), and to enable those requests via software.

As devices at the network edge adopt more complex security environments that combine bulk encryption, authentication, and other services, the hardware at the network edge should be able to scale to handle a mix of IPsec, MACsec, Public Key Infrastructure, and similar security domains.

As the public-private demarcation point adopts complex mixes of Layer 2 bridging/switching and Layer 3 routing, the enterprise customer may demand ad hoc mixes of MPLS and bridging at the edge of the network, including full MPLS pushed to the enterprise network edge.  This requires that a transparent mix of Ethernet with IP/MPLS and MPLS-TP services be provided for any label switching and bridging contingencies that might be encountered.

New generations of switch, MAC, and physical-layer chips for Ethernet, such as Vitesse Semiconductor Corp.'s Serval switch, address the new QoS and timing demands.  Serval integrates a MIPS processor to support advanced features such as multiple packet queuing policies, and has an on-board ternary CAM memory to aid in Layer 3 classification.  As a result the switch can support any combination of MPLS, MPLS-TP, and traditional bridging.

The core Serval architecture supports 8 QoS classes with up to 2,616 queues.  On-chip shared buffer memory uses per-color watermarks (green/yellow/red frames and bytes) to set up dedicated areas in memory for different service classes.  The hierarchical QoS algorithms support Weighted Random Early Detection, dual-leaky-bucket traffic shaping, per-priority flow control, and per-service queuing, meeting all Metro Ethernet Forum guidelines for Service Level Agreements.
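The dual-leaky-bucket metering behind green/yellow/red coloring can be illustrated with a short software sketch.  The logic below follows the MEF bandwidth-profile model (committed and peak rates, each with a burst size); the class name and API are hypothetical, not a Vitesse driver interface, and Serval performs this marking in hardware at wire speed:

```python
from dataclasses import dataclass

@dataclass
class DualTokenBucketMeter:
    """Hypothetical software model of a dual-token-bucket meter."""
    cir: float  # Committed Information Rate, bytes/sec
    pir: float  # Peak Information Rate, bytes/sec
    cbs: float  # Committed Burst Size, bytes
    pbs: float  # Peak Burst Size, bytes

    def __post_init__(self):
        self.c_tokens = self.cbs   # committed bucket starts full
        self.p_tokens = self.pbs   # peak bucket starts full
        self.last = 0.0

    def color(self, frame_len, now):
        """Return 'green', 'yellow', or 'red' for a frame of frame_len bytes."""
        elapsed = now - self.last
        self.last = now
        # Refill both buckets, capped at their burst sizes.
        self.c_tokens = min(self.cbs, self.c_tokens + elapsed * self.cir)
        self.p_tokens = min(self.pbs, self.p_tokens + elapsed * self.pir)
        if frame_len > self.p_tokens:
            return "red"            # exceeds peak profile: drop candidate
        if frame_len > self.c_tokens:
            self.p_tokens -= frame_len
            return "yellow"         # within peak, over committed: marked
        self.c_tokens -= frame_len
        self.p_tokens -= frame_len
        return "green"              # within committed profile
```

The color returned here is what the per-color watermarks act on: green frames draw from guaranteed buffer space, while yellow frames can be dropped first under congestion.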

Serval's hardware-based OAM supports IEEE 802.3, 802.1ag, MEF-16, and Y.1731, including the latter's up and down Maintenance Entity End Points, or MEPs.  This offers full support for OAM at port, service, and path level.  The switch supports 12 all-hardware port MEPs and 64 all-hardware path and service MEPs.
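The counter arithmetic behind Y.1731 loss measurement gives a feel for what these MEPs compute.  The helper below is a hypothetical sketch, not Serval firmware; the names follow the standard's TxFCf/RxFCf frame-counter convention, sampled at the current (tc) and previous (tp) measurement instants:

```python
# Illustrative arithmetic behind Y.1731 frame-loss measurement.
# A MEP compares how many frames it sent toward its peer during an
# interval against how many frames the peer reports having received.

def frame_loss(tx_fcf_tc, tx_fcf_tp, rx_fcf_tc, rx_fcf_tp):
    """Far-end frame loss over one measurement interval:
    frames transmitted toward the peer minus frames counted there."""
    return (tx_fcf_tc - tx_fcf_tp) - (rx_fcf_tc - rx_fcf_tp)
```

For example, if a MEP sent 1,000 frames in an interval and its peer counted 990, the measured loss is 10 frames.  Doing this per service, in hardware, is what lets an NID verify an SLA's loss objective continuously.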

While MEP support was present on Caracal, the Serval switch offers additional OAM support for continuity checks, loss measurement, delay measurement, and loopback, all in multiple modes.  Generation of OAM frames takes place entirely in hardware, and the resulting frames are handed to the switch's on-chip 416-MHz MIPS processor.  Integrating this frame-forwarding RISC core on the switch chip itself is another area where Serval provides a unique advantage in saving board real estate.  The MIPS processor has also been upgraded with support for PCI Express, DDR2 and DDR3 SDRAM interfaces, and the Vitesse Register Access Protocol for in-band reads and writes.
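The delay measurement mentioned above reduces to simple timestamp arithmetic.  The sketch below shows the two-way computation used in Y.1731 (DMM/DMR exchanges); the function is a hypothetical helper, with timestamp names loosely following the standard's forward (f) and backward (b) transmit/receive convention:

```python
# Two-way frame delay from a Y.1731-style DMM/DMR exchange:
#   tx_f: initiator transmits the measurement frame
#   rx_f: responder receives it
#   tx_b: responder transmits the reply
#   rx_b: initiator receives the reply

def two_way_frame_delay(tx_f, rx_f, tx_b, rx_b):
    """Round-trip frame delay, with the responder's turnaround time
    (tx_b - rx_f) subtracted so its processing delay does not count."""
    return (rx_b - tx_f) - (tx_b - rx_f)
```

With tx_f=0, rx_f=100, tx_b=150, and rx_b=260 (all in microseconds), the round trip is 260 µs, of which 50 µs was responder turnaround, giving a measured two-way delay of 210 µs.  Hardware timestamping is what makes these numbers trustworthy at wire speed.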

Network timing is critically important at the edge of the network for ensuring consistent and accurate SLAs.  Serval supports 1588v2 in one- and two-step modes, covering the clock types defined in the standard: master clock, slave clock, boundary clock, peer-to-peer transparent clock, and end-to-end transparent clock.  The on-chip MIPS core runs the Precision Time Protocol (PTP) stack and filtering software.
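The slave-clock arithmetic at the heart of 1588v2 can be sketched in a few lines.  The function below is illustrative, not part of any Vitesse API; t1 through t4 are the standard Sync/Delay_Req timestamps, and the usual symmetric-path assumption applies:

```python
# Offset and path-delay computation underlying 1588v2 synchronization.
#   t1: master sends Sync        t2: slave receives Sync
#   t3: slave sends Delay_Req    t4: master receives Delay_Req

def ptp_offset_and_delay(t1, t2, t3, t4):
    """Assuming a symmetric network path, returns the slave's clock
    offset from the master and the mean one-way path delay."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, mean_path_delay
```

The slave repeatedly feeds these raw measurements through filtering and servo software, such as that running on the on-chip MIPS core, to steer its local clock toward the master.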

About the Author:

Uday Mudoi, director of product marketing at Vitesse, has more than 16 years of experience in the communications and semiconductor industries.  Mudoi started his career at Siemens and joined Vitesse in 2000.  His experience in networking and communications spans a variety of technology areas, including enterprise solutions such as network processors, Ethernet switches, green technology, Carrier Ethernet, and modem chipsets.  He holds a Bachelor of Science degree in Electrical Engineering from the Indian Institute of Technology, Kharagpur, and a Master's degree in Computer Science from North Carolina State University.  He also received an MBA from Columbia University.

This article is courtesy of CommsDesign.

