Priority Flow Control (IEEE 802.1Qbb) enables FCoE

Converging Fibre Channel-based storage networks onto Ethernet has many advantages, and Fibre Channel over Ethernet (FCoE) is the protocol that enables this convergence.

But Ethernet is fundamentally a lossy communication medium: packets are dropped during congestion, and it is up to higher-level protocols to manage retransmission of lost packets. Storage networks, however, require lossless behavior at Layer 2.

Priority Flow Control, as defined by IEEE 802.1Qbb, enables lossless Ethernet networks suitable for carrying FCoE traffic alongside other classes of traffic (which may or may not require lossless delivery).


When two switches exchange data over an Ethernet network and congestion occurs, the transmitting switch keeps sending, but the receiving switch drops every packet beyond what it can hold in its buffer memory. This is what makes Ethernet networks lossy.

To create a lossless network, IEEE 802.3x PAUSE control frames were introduced. Simplistically speaking, once incoming data exceeds (or is predicted to exceed) the receiving switch's buffer memory, it sends a PAUSE frame to the transmitting switch, which immediately stops transmitting until it is signaled to resume. Done effectively, this prevents data loss.
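As a rough sketch of what such a frame looks like on the wire, the layout below follows the 802.3x definition (reserved multicast destination, MAC Control EtherType 0x8808, opcode 0x0001, pause time in 512-bit-time quanta); the helper function itself is illustrative, not a real library API:

```python
import struct

PAUSE_DEST_MAC = bytes.fromhex("0180C2000001")  # reserved multicast address
MAC_CONTROL_ETHERTYPE = 0x8808
PAUSE_OPCODE = 0x0001

def build_pause_frame(src_mac: bytes, pause_quanta: int) -> bytes:
    """Build an 802.3x PAUSE frame; pause_quanta is in units of 512 bit times."""
    frame = PAUSE_DEST_MAC + src_mac
    frame += struct.pack("!HHH", MAC_CONTROL_ETHERTYPE, PAUSE_OPCODE, pause_quanta)
    # Pad to the 60-byte minimum Ethernet frame size (before the FCS).
    frame += b"\x00" * (60 - len(frame))
    return frame

# Ask the link partner to pause for the maximum time (0xFFFF quanta).
frame = build_pause_frame(bytes.fromhex("AABBCCDDEEFF"), pause_quanta=0xFFFF)
```

A pause time of zero tells the sender it may resume immediately.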

Priority Flow Control, as defined by IEEE 802.1Qbb:

However, the PAUSE frame (described above) stops all traffic from the transmitting switch. This is undesirable, because other types of Ethernet traffic are already designed to tolerate a lossy network and should not be stopped.

To solve this issue, Priority Flow Control, defined by IEEE 802.1Qbb, was introduced: Ethernet traffic is divided into different classes, and priority can be defined per class (through mechanisms such as CoS and 802.1p).
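The per-class priority lives in the 3-bit PCP (802.1p) field of the 802.1Q VLAN tag. The field layout below follows the standard; the helper names are illustrative:

```python
def make_vlan_tci(pcp: int, dei: int, vlan_id: int) -> int:
    """Pack PCP (3 bits), DEI (1 bit) and VLAN ID (12 bits) into a 16-bit TCI."""
    assert 0 <= pcp <= 7 and dei in (0, 1) and 0 <= vlan_id <= 4095
    return (pcp << 13) | (dei << 12) | vlan_id

def pcp_of(tci: int) -> int:
    """Extract the 802.1p priority (CoS) from a TCI value."""
    return (tci >> 13) & 0x7

# CoS 3 is commonly assigned to FCoE traffic; VLAN 100 is an arbitrary example.
tci = make_vlan_tci(pcp=3, dei=0, vlan_id=100)
```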

Priority Flow Control enables PAUSE to be applied independently to each of up to eight CoS levels. Hence, if one CoS value is assigned to FCoE traffic, a PAUSE frame sent by a PFC-enabled switch stops only the FCoE traffic, while traffic carrying other CoS values continues to flow across the link.
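This works because the PFC frame (MAC Control opcode 0x0101, versus 0x0001 for plain PAUSE) carries a class-enable vector plus eight per-class pause timers. A sketch of that payload, with illustrative helper names:

```python
import struct

PFC_OPCODE = 0x0101  # Priority-based Flow Control opcode

def build_pfc_payload(pause_quanta_per_class):
    """Build the MAC-control payload of a PFC frame.

    pause_quanta_per_class maps a CoS value (0-7) to a pause time in
    512-bit-time quanta; classes not listed are left unpaused.
    """
    enable_vector = 0
    timers = [0] * 8
    for cos, quanta in pause_quanta_per_class.items():
        enable_vector |= 1 << cos
        timers[cos] = quanta
    return struct.pack("!HH8H", PFC_OPCODE, enable_vector, *timers)

# Pause only CoS 3 (e.g. the class carrying FCoE); other classes keep flowing.
payload = build_pfc_payload({3: 0xFFFF})
```

The receiver honors only the timers whose bit is set in the enable vector, which is exactly what lets FCoE be paused while best-effort traffic continues.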

This enables storage traffic to be effectively sent via (now lossless) Ethernet networks, along with other types of traffic. While FCoE is one application that is enabled by PFC, there can be others.

Each switch has a certain amount of buffer memory in which to hold data packets during congestion. The buffer size and the distance between the two switches are, among others, important considerations when enabling PFC.
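Distance matters because a PAUSE takes a round trip to take effect: the receiver must absorb whatever is still in flight on the wire after it decides to pause. A back-of-the-envelope sketch of that headroom (illustrative formula and numbers, not a vendor sizing guide):

```python
def pfc_headroom_bytes(link_gbps, cable_m, propagation_s_per_m=5e-9):
    """Bytes that can arrive between deciding to pause and the sender stopping,
    counting propagation delay only (~5 ns/m is a typical fibre/copper figure)."""
    round_trip_s = 2 * cable_m * propagation_s_per_m
    return link_gbps * 1e9 / 8 * round_trip_s

# A 10 Gb/s link over 100 m needs on the order of 1.25 KB of headroom for
# propagation alone; real sizing also adds in-transit MTU-sized frames,
# transceiver and response delays, etc.
headroom = pfc_headroom_bytes(10, 100)
```

Longer links or faster line rates grow this headroom linearly, which is why buffer size and cable distance must be considered together.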

For further information on this topic, refer to this excellent white paper published by Cisco Systems.

