Need, Standards, Salient Points and Challenges for 10GE (10 Gigabit Ethernet) Adoption

Gist: Gigabit connections, which were earlier used for interconnecting network switches (and hence formed the backbone connectivity), have become common for connecting desktops. Is 10GE (10 Gigabit Ethernet) then taking over any time soon, both for interconnecting switches and for directly connecting servers? In this article, we explore whether there is a need for 10GE ports and 10GE interconnects, the standards and types of pluggable optics for 10GE, and the salient points and limitations of 10 Gigabit Ethernet.

Need for 10GE Ports:

  • Consolidation and virtualization are two of the strategies that many data centers are implementing in order to lower server count, improve server utilization, reduce energy demand, reclaim floor space, and redeploy IT resources to higher-value projects, while maintaining or improving reliability. Virtualization has enabled dynamic allocation of resources for applications which earlier resided on separate servers. So, the 1GE connectivity to a physical server is no longer dedicated to a single server, as multiple virtual servers populate every physical server. Hence, a 10GE port on every server (whose capacity will keep increasing with more cores and memory in the future) is a reality today, and in certain cases, a must.
  • More and more applications (some of them real-time, requiring very low latency) have moved on to the IP network. Today, HD Video Conferencing/Streaming (a single MPEG-4 stream consumes about 3.75 Mbps of bandwidth), IP Video Surveillance, Centralized IP Telephony, clustered business continuity applications like ERP/CRM, E-Commerce, etc., and scheduled backups to SAN/NAS networks have increased the load on existing server room environments.
  • Virtualization and advances in storage technologies like iSCSI have enabled consolidation of storage resources like SAN/NAS. Consolidating disparate SAN networks into a centralized, low-cost, high-speed 10GE Ethernet-based storage network saves a lot of cost and management overhead.
  • HPCC (High Performance Computing Cluster), which is basically an interconnection of various high-capacity servers working together to solve big computational tasks, requires higher throughput and low latency between all the nodes in the cluster, which is offered by 10GE.
  • Connecting Top of Rack access switches for blade servers to the core switch: the aggregation layer thus formed requires fewer cables/ports connecting to the core switch when it uses 10GE links.
  • High Performance Servers: Next-generation multi-core CPUs with multi-threaded networking stacks will be able to fully utilize a 10GE connection directly from the network switch to the server. A single 10GE connection also avoids the need to interconnect multiple 1GE NICs from individual servers to switches in order to achieve higher throughput – increasing server utilization and also reducing power consumption.
  • On the WAN (Wide Area Network) side, carriers have been using SONET and ATM for high-capacity long-distance interconnections, and with the introduction of the 10GE standard, the lower cost of setting up and maintaining scalable high-speed networks is quite appealing.
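To see why a single virtualized server can outgrow a 1GE link, it helps to add up the demands. The sketch below uses the 3.75 Mbps MPEG-4 figure from above; all other per-workload numbers are hypothetical, chosen only to illustrate the arithmetic:

```python
# Rough illustration: aggregate bandwidth demand on one virtualized host.
# Only the 3.75 Mbps MPEG-4 figure comes from the article; the rest are
# assumed example values.
MBPS = 1
GBPS = 1000 * MBPS

workloads = {
    "20 virtual machines @ ~100 Mbps each": 20 * 100 * MBPS,
    "HD video streams (40 x 3.75 Mbps MPEG-4)": 40 * 3.75 * MBPS,
    "iSCSI storage traffic": 2 * GBPS,
    "Scheduled backup window to NAS": 1 * GBPS,
}

total = sum(workloads.values())
for name, bw in workloads.items():
    print(f"{name}: {bw / 1000:.2f} Gbps")
print(f"Total: {total / 1000:.2f} Gbps -> exceeds 1GE: {total > 1 * GBPS}")
```

With these example numbers, the host needs roughly 5 Gbps at peak – far beyond a single 1GE NIC, but comfortably inside 10GE.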

Need for 10GE Interconnects:

  • When a 24/48-port 1GE switch is deployed in a data centre, it is imperative that the interconnecting technology (to other switches) be faster than 1GE, as all the aggregated throughput passes through these uplinks. In this case, a 10GE or 40GE interconnect is more appropriate, and the latter is required for non-blocking performance if crucial servers are connected through the switch.
  • Interconnection of switches from multiple vendors (where stacking is not possible) can be achieved through multiple such 10GE/40GE interconnects along with link aggregation technology, to ensure that the aggregation layer has a fully non-blocking (or at least a 2.4:1 oversubscribed) architecture.
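The oversubscription ratio mentioned above is simply downstream capacity divided by uplink capacity. A minimal sketch, assuming (unrealistically, as a worst case) that every access port transmits at line rate at once:

```python
# Oversubscription ratio of an access switch's uplinks: total access-port
# bandwidth divided by total uplink bandwidth (worst-case assumption that
# all access ports are busy simultaneously).
def oversubscription(access_ports, access_gbps, uplinks, uplink_gbps):
    downstream = access_ports * access_gbps
    upstream = uplinks * uplink_gbps
    return downstream / upstream

# 48 x 1GE access ports with 2 x 10GE uplinks -> the 2.4:1 figure above
print(oversubscription(48, 1, 2, 10))   # 2.4
# 48 x 1GE access ports with 1 x 40GE uplink -> 1.2:1, near non-blocking
print(oversubscription(48, 1, 1, 40))   # 1.2
```

A ratio of 1:1 or lower means the uplinks can never be the bottleneck (non-blocking); higher ratios trade uplink cost against the risk of congestion.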

10 GE Cabling / Connectors:

Standards:

  • 10GBASE-SR, 10GBASE-LR, 10GBASE-ER and 10GBASE-ZR are the common fiber optic interface standards for 10GE. Multimode fiber is supported for shorter distances (from a few meters to a few hundred meters) and single-mode fiber for longer distances, up to 80 km.
  • 10GBASE-T is the copper interface standard that can go up to 100 meters using Cat 6a or Cat 7 cables; it supports shorter distances (55 meters) over Cat 6 cables. 10GBASE-T is backward compatible with the earlier 1000BASE-T and 100BASE-T connections.
  • 10GBASE-CX4 is another standard that supports 10GE using twin-axial cable with 24-gauge wire (the same cable used for InfiniBand); its primary application is stacking switches of the same vendor. The technology has distance limitations (about 15 meters maximum).
  • DAC (Direct Attach Cable): a low-cost option for 10GE that supports shorter distances; SFP+ ports can be used with DAC.
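The media choice largely comes down to distance. The lookup below collects the reach figures mentioned above; where the article gives no number (e.g. SR, LR, DAC), typical published values are assumed and should be verified against the specific product datasheet:

```python
# Approximate maximum reach per 10GE media type, in meters.
# Figures not stated in the article (SR, LR, DAC) are typical assumed
# values, not guarantees.
REACH_M = {
    "10GBASE-SR (multimode fiber)":    300,      # ~300 m over OM3
    "10GBASE-LR (single-mode fiber)":  10_000,
    "10GBASE-ER (single-mode fiber)":  40_000,
    "10GBASE-ZR (single-mode fiber)":  80_000,
    "10GBASE-T (Cat 6a/7 copper)":     100,
    "10GBASE-T (Cat 6 copper)":        55,
    "10GBASE-CX4 (twin-axial)":        15,
    "SFP+ Direct Attach Cable":        10,
}

def options_for(distance_m):
    """Return the 10GE media whose typical reach covers the distance."""
    return [std for std, reach in REACH_M.items() if reach >= distance_m]

print(options_for(90))     # Cat 6a copper plus all four fiber options
print(options_for(2_000))  # single-mode fiber only
```

Within the rack, DAC and CX4 are the cheapest options; across a building or campus, only fiber qualifies.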

Common Pluggable Optics for 10GE through MSAs:

  • SFP+ (for 10GE) uses the same physical dimensions as the SFP standard, and SFP transceivers are supported by SFP+ equipment – so it supports 1GE optics as well. This standard supports distances up to 80 km, and offers lower latency, lower power consumption and less heat compared to equivalent standards.
  • XFP, XENPAK and X2 are the other common types of pluggable optics supporting 10GE, primarily through MSAs.
  • Interoperability between transceivers from multiple vendors is governed mostly by MSAs – Multi Source Agreements between the various vendors.

Salient Points about 10GE

  • 10 GE is an IEEE 802.3ae Standard
  • 10 GE supports only full-duplex communication – hence lower latency and faster response than 1 GE connections.
  • Same frame format, frame size and MAC protocol as previous ethernet versions
  • Computer/server expansion interface: PCI Express (with a sufficient number of lanes) can supply more than 10 Gbps of bandwidth to accommodate a 10GE Network Interface Card.
  • Intelligent Ethernet NICs offload protocol (TCP/IP) processing from the host processor, reducing CPU utilization for 10GE connections; latency is also lower for 10GE when compared to 1GE.
  • Ethernet-based switches and interconnects can be managed by the same network management systems and protocols currently being used in data centres. For newer (non-Ethernet) protocols, the need for a separate management interface increases the cost of management.
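The PCI Express point above can be checked with simple arithmetic. Assuming a first-generation PCIe slot (2.5 GT/s per lane with 8b/10b encoding, i.e. 80% usable):

```python
# Back-of-the-envelope PCI Express bandwidth check for a 10GE NIC.
# Assumes PCIe 1.x: 2.5 GT/s per lane, 8b/10b encoding (80% efficiency).
def pcie_usable_gbps(lanes, gt_per_s=2.5, encoding_efficiency=0.8):
    return lanes * gt_per_s * encoding_efficiency

for lanes in (4, 8):
    bw = pcie_usable_gbps(lanes)
    verdict = "sufficient" if bw >= 10 else "insufficient"
    print(f"PCIe 1.x x{lanes}: {bw:.0f} Gbps usable -> {verdict} for 10GE")
```

Under these assumptions a x4 slot falls short while a x8 slot comfortably carries a 10GE NIC; later PCIe generations reach the same bandwidth with fewer lanes.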

Challenges in moving to 10GE:

  • The cost of 10GE NIC cards, as well as the price per port for 10GE switches and optical interfaces, remains very high.
  • The options for connecting 1GE ports alongside 10GE ports are limited in most switches.
  • 10GE NICs do not come built in on most servers.
  • Latency and server utilization are higher with Ethernet than with parallel technologies like InfiniBand – an issue especially for high-throughput applications.
  • Packet loss due to buffer overflows on congested ports is an issue in Ethernet – the IEEE 802.1Qau task group is working on enhanced congestion management techniques for Ethernet.

excITingIP.com
