WAN Virtualization is perhaps the next step beyond WAN link load balancing. Some of the features mentioned here might be implemented by Application Delivery Controllers as well, but the concept behind the term – treating all the WAN pipes as one big pipe and then routing each packet individually over the most appropriate link – is quite interesting. Let's read more about WAN Virtualization in this article.
First of all, WAN Virtualization is not a standard term, and it is definitely not as popular as Server Virtualization, at least not yet. But there is at least one company trying to solve WAN pipe bottlenecks beyond what the current generation of WAN Network Optimization products offers, and I thought, why not present the highlights of the technology here?
1. When there are multiple WAN connections (varying in number and type of links), one might obtain higher per-flow performance by striping packets (even those within a single flow) across multiple network paths, based on measurements of each path's characteristics.
2. Basically, multiple WAN links are treated as one large pipe, and packets are routed across whichever link is deemed fit, on a packet-by-packet basis instead of a per-flow basis.
3. For this to happen, a WAN Virtualization device (where the multiple WAN links terminate, placed on either side of the WAN connection) has to monitor the detailed characteristics of each WAN link connected to it – parameters like packet loss, latency, jitter, etc. These are measured on a packet-by-packet basis to continuously track the current status of each link and to react to sudden network inconsistencies, bursts of traffic, etc. that affect the movement of packets.
4. Even when no traffic is passing through a link, the device can send heartbeat packets to monitor the link's status in real time. This way, it is possible to determine the best and worst links (perhaps from multiple ISPs) for sending packets, and take packet routing decisions accordingly.
5. This concept provides not only bandwidth aggregation but also a way to route packets around network trouble or sudden traffic bursts as they happen, not just when a whole link is down. When multiple links are available, it chooses the path with the least congestion, packet loss, and latency to carry the maximum number of packets.
6. If a high level of reliability is required, packets are duplicated and sent across two different paths. If one copy fails to arrive, or arrives late due to sudden network congestion, it's still fine, as the other stream reaches the destination on time and one stream is sufficient. If both arrive on time, one of the streams is discarded at the destination. Think of an important Video Conference session that your CEO is going to attend – this level of reliability is perhaps for such situations.
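As a rough illustration of the per-packet routing idea in points 1 and 2, here is a minimal Python sketch. The link names, sample numbers, and scoring weights are purely hypothetical (not from any real product); the point is that each link is scored from its measured loss, latency, and jitter, and the best link is chosen fresh for every packet:

```python
def link_score(loss_pct, latency_ms, jitter_ms):
    """Lower is better; the weights here are arbitrary illustrative choices."""
    return loss_pct * 10 + latency_ms + jitter_ms * 2

def pick_link(links):
    """links: dict of link name -> (loss_pct, latency_ms, jitter_ms).
    Called per packet, so the choice tracks the latest measurements."""
    return min(links, key=lambda name: link_score(*links[name]))

# Hypothetical measurements for three WAN links
links = {
    "mpls": (0.1, 40, 2),   # low loss, moderate latency
    "dsl":  (1.5, 60, 8),   # lossier consumer broadband
    "lte":  (0.5, 90, 15),  # higher latency and jitter
}
print(pick_link(links))  # -> mpls (with these sample numbers)
```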
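The heartbeat monitoring in point 4 could be sketched like this. The `LinkMonitor` class and its smoothing factor are my own illustrative assumptions: each probe updates an exponentially weighted moving average of the link's round-trip time, so a sudden spike on one link quickly makes another link look "best":

```python
class LinkMonitor:
    """Tracks a smoothed RTT per link from periodic heartbeat probes."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha  # EWMA weight given to the newest sample
        self.rtt = {}       # link name -> smoothed RTT in ms

    def record_heartbeat(self, link, rtt_ms):
        prev = self.rtt.get(link)
        if prev is None:
            self.rtt[link] = rtt_ms
        else:
            self.rtt[link] = (1 - self.alpha) * prev + self.alpha * rtt_ms

    def best_link(self):
        """Link with the lowest smoothed RTT right now."""
        return min(self.rtt, key=self.rtt.get)

mon = LinkMonitor()
mon.record_heartbeat("isp_a", 50)
mon.record_heartbeat("isp_b", 30)
mon.record_heartbeat("isp_a", 200)  # sudden congestion spike on isp_a
print(mon.best_link())  # -> isp_b
```

The smoothing keeps one stray heartbeat from flapping the routing decision, while a sustained degradation still shifts traffic away from the bad link.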
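And the duplication scheme in point 6 – sending each packet down two paths and discarding the extra copy at the destination – might look roughly like this sequence-number-based dedup. Again, this is a hypothetical sketch, not any actual product's mechanism:

```python
def duplicate_send(seq, payload, paths):
    """Send the same sequence-numbered packet down every given path."""
    for path in paths:
        path.append((seq, payload))

class Receiver:
    """Delivers each sequence number once; any later copy is discarded."""

    def __init__(self):
        self.seen = set()
        self.delivered = []

    def receive(self, seq, payload):
        if seq in self.seen:
            return  # duplicate copy from the other path, drop it
        self.seen.add(seq)
        self.delivered.append(payload)

# Two paths modeled as simple in-order queues
path_a, path_b = [], []
duplicate_send(1, b"video-frame-1", [path_a, path_b])

rx = Receiver()
for seq, payload in path_a + path_b:  # both copies arrive
    rx.receive(seq, payload)
print(rx.delivered)  # -> [b'video-frame-1'], delivered exactly once
```

If path A lost its copy entirely, the copy on path B would still be delivered, which is the reliability point being made above.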
Isn’t this an interesting concept worth exploring further? Real-time applications like voice and video might benefit quite a bit from WAN Virtualization, as it is termed. It might also enable companies to use more than one broadband connection at branch offices with primarily download requirements, and still get the kind of reliability offered by MPLS / Internet Leased Line connections.
You could stay up to date on the various computer networking / related IT technologies by subscribing to this blog with your email address in the sidebar box that says, ‘Get email updates when new articles are published’