Distributed Core/Leaf-Spine network architecture is gaining ground in large data center/cloud networks due to its scalability, reliability, and better performance compared with 3-tier Core-Aggregation-Edge tree networks. Maybe it’s time for enterprises and smaller networks to consider implementing Distributed Core/Leaf-Spine networks, as the architecture enables companies to start small and scale up massively. Here’s a short introduction.
A basic architecture diagram for Distributed Core/Leaf-Spine Networks is shown above. As you can see, the top layer has Spine Switches and the layer below has Leaf Switches. The Servers/Storage equipment (or) Top of the Rack (ToR) Switches connect to the Leaf Switches, as shown in the bottom of the diagram.
As you can see, every Leaf Switch connects to every Spine Switch, but Leaf Switches are not connected to each other, nor are Spine Switches connected to each other.
It is possible to have a simple Distributed Core network with 4 Leaf Switches and 2 Spine Switches (as shown above). If each Leaf Switch has 48 server-facing ports and 2 uplinks (one to each Spine Switch), the total number of servers that you can connect in this configuration will be 48 x 4 = 192. You can expand the network quickly by adding Leaf and Spine Switches – more than 6,000 servers can be connected across multiple Leaf/Spine Switches with massive backplane capacity.
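The capacity arithmetic above can be sketched as a small calculation. This is a minimal illustration, assuming each leaf dedicates a fixed number of ports to servers, separate from its uplinks; the function name is ours, not from any vendor tool:

```python
# Hypothetical capacity estimate for a small leaf-spine fabric.
# Assumes server-facing ports are separate from the uplink ports.

def max_servers(leaf_count: int, server_ports_per_leaf: int) -> int:
    """Total servers = number of leaves x server-facing ports per leaf."""
    return leaf_count * server_ports_per_leaf

# The example above: 4 leaf switches, 48 server-facing ports each.
print(max_servers(4, 48))  # 192
```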
The capacity/expandability of the network will depend on the number of ports on the Spine Switches and the number of uplinks on the Leaf Switches. With Leaf-Spine/Distributed Core networks, you can design for either a non-blocking architecture or an over-subscribed architecture, depending on your requirements and budget.
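The non-blocking vs. over-subscribed trade-off comes down to the ratio of server-facing (downlink) bandwidth to uplink bandwidth on each leaf. Here is a hedged sketch of that ratio, with hypothetical port counts and speeds chosen for illustration:

```python
def oversubscription_ratio(server_ports: int, server_port_gbps: float,
                           uplinks: int, uplink_gbps: float) -> float:
    """Downlink bandwidth divided by uplink bandwidth for one leaf switch."""
    return (server_ports * server_port_gbps) / (uplinks * uplink_gbps)

# Hypothetical leaf: 48 x 10G server ports, 4 x 40G uplinks -> 3:1 oversubscribed.
print(oversubscription_ratio(48, 10, 4, 40))  # 3.0
```

A ratio of 1.0 or less means the fabric is non-blocking: the uplinks can carry everything the servers can send; higher ratios trade performance under load for lower cost.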
The number of links between Leaf and Spine switches = (number of Leaf Switches) x (number of Spine Switches). As the network expands, the number of links increases considerably. In Distributed Core/Leaf-Spine architecture, all the links are utilized to carry data traffic, unlike Core-Distribution-Access networks where redundant links are disabled by STP. This network can be implemented at L2 using TRILL or SPB (Shortest Path Bridging); but more commonly, it is implemented at L3 using ECMP routing with BGP or OSPF.
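To make the link-count growth concrete, the formula above can be tabulated for a few hypothetical fabric sizes (the numbers are illustrative, not from any specific deployment):

```python
def fabric_links(leaf_count: int, spine_count: int) -> int:
    """Every leaf connects to every spine exactly once."""
    return leaf_count * spine_count

# Link count grows multiplicatively as the fabric expands.
for leaves, spines in [(4, 2), (16, 4), (64, 8)]:
    print(f"{leaves} leaves x {spines} spines = {fabric_links(leaves, spines)} links")
```

With ECMP, traffic is hashed across all of those equal-cost paths, which is why none of the links needs to be blocked the way STP blocks redundant paths.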
- It is possible to use low-cost 1U or 2U Spine Switches instead of expensive chassis-based Core Switches.
- It is possible to start small and expand the Spine/Leaf network by adding more switches, when required, without discarding the existing setup.
- There are networking vendors who make specialized Leaf/Spine switches.
- It is possible to configure the Distributed Core network to offer maximum redundancy/resiliency. Even if a Spine Switch fails, there will only be a performance degradation rather than a service outage.
- It is possible to achieve higher throughput/bandwidth and connect more servers with Distributed Core networks than with Core-Aggregation-Edge networks.
- Leaf/Spine networks can handle both East-West traffic (Server to Server: Cloud computing, Hadoop, etc.) and North-South traffic (Web content, Email, etc.) efficiently. The traditional three-tier model is better suited to the latter, and its expansion is limited.
- It is possible to use Standards-based protocols (even in a multi-vendor setup) to implement Leaf-Spine networks. But some vendors have developed their own proprietary protocols/fabrics, as well.
- Distributed Core networks enable Containerized (and Expandable) Data Centers.
- Networks can scale up/down/out massively and quickly.
- Can handle East-West (Server to Server) traffic efficiently.