InfiniBand, IBoE and their advantages in High Performance Computing

This article introduces InfiniBand, a technology commonly used in High Performance Computing (HPC) for Inter-Process Communication (IPC) and for server consolidation, and discusses its advantages. We also take a look at IBoE (InfiniBand over Ethernet), a technology that aims to achieve IPC consolidation over Ethernet.
InfiniBand is a switched-fabric communications link used today primarily in High Performance Computing (HPC), interconnecting clusters of servers through high performance switches that support InfiniBand. It is a point-to-point, bi-directional serial link originally intended for connecting processors to high speed peripherals such as disks. It also specifies a hardware platform for VIA (Virtual Interface Architecture), a user-level interface that lets applications talk to the network adaptor while bypassing the operating system.
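To give a feel for this user-level interface, here is a minimal sketch (not from the original article; it assumes a Linux host with the OpenFabrics libibverbs library installed) that simply enumerates the InfiniBand HCAs visible to an application:

/* List the InfiniBand devices (HCAs) present on the host.
 * Build with: gcc list_hcas.c -o list_hcas -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices;
    struct ibv_device **dev_list = ibv_get_device_list(&num_devices);

    if (!dev_list) {
        perror("ibv_get_device_list");
        return 1;
    }
    for (int i = 0; i < num_devices; i++)
        printf("Found HCA: %s\n", ibv_get_device_name(dev_list[i]));

    ibv_free_device_list(dev_list);
    return 0;
}

This same verbs interface sits underneath the higher-level MPI and storage stacks that typically run over InfiniBand.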

Advantages of InfiniBand and why it is an exciting technology:

Performance/Bandwidth: InfiniBand switches today support up to 40 Gb/s host (server) connectivity and 120 Gb/s inter-switch connectivity; the 40 Gb/s figure corresponds to a 4X QDR link (four lanes at 10 Gb/s each) and the 120 Gb/s figure to a 12X link. At the time of writing, these are among the highest interconnect speeds available.

Low Latency: In performance-critical data centre and High Performance Computing (HPC) applications, InfiniBand can deliver very low latencies (on the order of 1 microsecond end to end), thereby enabling better application performance.

Cost: If the price/performance ratio of InfiniBand is compared with that of competing technologies, InfiniBand should have one of the lowest ratios, as the HCAs (Host Channel Adaptors) and the switches are competitively priced.

Support for Consolidation: InfiniBand provides a roadmap for consolidating network, clustering and storage traffic in the near future, which reduces the power, cost and real estate required to manage multiple devices and technologies in the data centre.

Reliability: InfiniBand supports fully redundant and lossless I/O fabrics, with automated path fail-over and link-layer multi-pathing to enhance the reliability of the interconnect. It performs Cyclic Redundancy Checks at each fabric hop and end to end to ensure that data is transferred correctly.

Support for advanced protocols: InfiniBand provides direct support for advanced transport protocols such as Remote Direct Memory Access (RDMA), which lets one server read or write the memory of another without involving the remote CPU; see the sketch after this list.

Scalability: InfiniBand is a scalable solution; unlike Ethernet, it does not rely on the Spanning Tree Protocol.
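To make the RDMA point above concrete, the following minimal sketch (an illustration assuming libibverbs, not code from the article) shows the memory-registration step every RDMA application performs so that a peer can later read or write the buffer directly, bypassing the host CPU:

/* Register a buffer for remote access via RDMA.
 * Build with: gcc reg_mr.c -o reg_mr -libverbs */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    struct ibv_device **dev_list = ibv_get_device_list(NULL);
    if (!dev_list || !dev_list[0]) {
        fprintf(stderr, "no RDMA-capable device found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(dev_list[0]);
    struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;
    if (!pd) {
        fprintf(stderr, "failed to open device / allocate PD\n");
        return 1;
    }

    size_t len = 4096;
    void *buf = malloc(len);
    if (!buf)
        return 1;

    /* Registering the buffer pins it in physical memory and returns
     * the keys (lkey/rkey) the HCA uses to let a remote peer access
     * this memory without involving the local CPU. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        perror("ibv_reg_mr");
        return 1;
    }
    printf("buffer registered for RDMA, rkey=0x%x\n", mr->rkey);

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(dev_list);
    free(buf);
    return 0;
}

The rkey printed here would be exchanged with the remote peer out of band; it is what the peer presents in its RDMA read/write requests to address this memory.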

What is InfiniBand over Ethernet (IBoE) and what are its advantages?

Just like FCoE (Fibre Channel over Ethernet), which encapsulates Fibre Channel (FC) data in Ethernet frames (and thereby maintains the familiarity, interconnectivity and manageability of existing SAN software), InfiniBand over Ethernet (IBoE) encapsulates IB data in Ethernet frames, maintaining the familiarity, maturity, interfaces and management compatibility of existing IPC (Inter-Process Communication) software. This is important because there has been considerable investment in InfiniBand for HPC, IPC and clusters, so IBoE would be the preferred route for any IPC consolidation initiative.

Applications qualified using the OpenFabrics IPC protocol stack over InfiniBand can now be deployed seamlessly, with zero-copy, send/receive and RDMA semantics, over Ethernet using IBoE. This high performance over Ethernet is possible mainly due to the advent of lossless ("reliable") Ethernet, through support for Per-Priority Pause (Priority Flow Control) and the congestion management and control of the Layer 2 Ethernet medium.
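As a sketch of that portability (assuming the OpenFabrics librdmacm library; the peer address 192.0.2.1 and port 7471 are purely illustrative), the transport-neutral RDMA CM calls below work unchanged whether the route to the peer is served by a native InfiniBand HCA or an IBoE-capable Ethernet NIC:

/* Resolve a peer address through the RDMA Connection Manager.
 * Build with: gcc cm_resolve.c -o cm_resolve -lrdmacm */
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <rdma/rdma_cma.h>

int main(void)
{
    struct rdma_event_channel *ec = rdma_create_event_channel();
    struct rdma_cm_id *id;

    if (!ec || rdma_create_id(ec, &id, NULL, RDMA_PS_TCP)) {
        perror("rdma_create_id");
        return 1;
    }

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(7471);                     /* illustrative port */
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr); /* illustrative peer */

    /* The CM maps the IP address to whatever RDMA device serves that
     * route: an InfiniBand HCA or an IBoE-capable Ethernet NIC. The
     * verbs code layered above it stays identical. */
    if (rdma_resolve_addr(id, NULL, (struct sockaddr *)&dst, 2000))
        perror("rdma_resolve_addr");

    rdma_destroy_id(id);
    rdma_destroy_event_channel(ec);
    return 0;
}

Because the CM resolves an ordinary IP address to an RDMA device, the application itself never needs to know which fabric carries the traffic.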

excITingIP.com

You could stay up to date on various computer networking technologies by subscribing to this blog with your email address, in the sidebar box labelled "Get email updates when new articles are published".