Network Packet Brokers and Encapsulated Traffic
Overlay networks present a new challenge:
Overlay network technologies are certainly nothing new; we talked about more than a few above. What has changed significantly is the breadth and scale of overlay technologies, driven by the growing infrastructure of cloud and data center computing combined with the explosion of virtual machine and container instances. These networks represent a paradigm shift in design: the focus moves from configuring independent network devices to configuring a service. In today's infrastructure, many encapsulation technologies share a common underlay network, and there could easily be hundreds of different overlays on a large network. These tunnels may even be ephemeral. Take VXLAN, for example: a tunnel is created and exists only as long as needed to transmit data from one VTEP (VXLAN Tunnel EndPoint) to the other. Overlays are also dynamic, with multiple potential routes to any given endpoint.
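The VXLAN header at the heart of that tunnel is compact. As a minimal sketch (the helper names here are illustrative, not from any particular library), building and parsing the 8-byte header defined in RFC 7348 looks like this in Python:

```python
import struct

def build_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348): a flags byte with the
    I bit (0x08) set to mark the VNI as valid, three reserved bytes,
    the 24-bit VNI, and one final reserved byte."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!B3xI", 0x08, vni << 8)

def parse_vni(header: bytes) -> int:
    """Recover the 24-bit VNI from a VXLAN header."""
    flags, word = struct.unpack("!B3xI", header[:8])
    if not flags & 0x08:
        raise ValueError("I flag clear: VNI not valid")
    return word >> 8
```

In a real packet this header sits between an outer UDP header (destination port 4789) and the inner Ethernet frame; the sketches below assume packets start at the VXLAN header for brevity.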
The problem at hand has changed significantly as well. In terms of monitoring, it simply doesn't work to strip the encapsulation headers away and send the de-encapsulated traffic to the monitoring tools, because the encapsulation headers themselves are critical to understanding the network. Additionally, many of the overlay networks sharing the underlay could use the same IP range despite being entirely different networks. Since the underlay is transparent to the services running on top of it, and the different overlays are logically separated by their encapsulation headers, there is no reason to ensure the overlays occupy separate IP ranges. Indeed, in many cases those responsible for configuring an overlay wouldn't know what IP ranges other overlays are using, or even whether other overlays exist at all. This is intentional by design, of course, but with respect to monitoring such a network, what are you left with if you simply strip away the encapsulation headers? First, you lose all the information regarding the underlay network. Second, there is no longer any way to identify which traffic was part of which overlay network. This not only renders monitoring tools useless; worse, it generates an entirely incorrect view of the network.
These are two completely different networks, logically separated only by the VXLAN tag; if the VXLAN headers are removed, they appear to be the exact same network!
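This ambiguity is easy to demonstrate. In the sketch below (using placeholder byte strings for the inner frames, and assuming packets start at the VXLAN header), two tenants' packets carry identical inner addressing and differ only in the 24-bit VNI, so stripping the first 8 bytes makes them indistinguishable:

```python
import struct

def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
    # Prepend the 8-byte VXLAN header (RFC 7348): I flag set, 24-bit VNI
    return struct.pack("!B3xI", 0x08, vni << 8) + inner_frame

# Two different tenants using the same inner addressing (placeholder frame)
inner = b"frame: 10.0.0.1 -> 10.0.0.2"
pkt_a = vxlan_encap(100, inner)  # tenant A, VNI 100
pkt_b = vxlan_encap(200, inner)  # tenant B, VNI 200

assert pkt_a != pkt_b          # distinguishable while headers are intact
assert pkt_a[8:] == pkt_b[8:]  # identical once the headers are stripped
```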
So, at this point we have a multi-faceted problem: as discussed in the previous article, traditional monitoring tools cannot process traffic with these encapsulation headers; however, we cannot simply strip the headers away and forward the traffic to these tools, because there would no longer be any way to identify which logical network the traffic actually belonged to.
You may be thinking “Perhaps we can approach this issue with intelligent tapping to separate these logical networks by physical interfaces.” But remember, these overlay networks are dynamic with multiple paths through the underlay. The physical link bears little to no relevance regarding where and when the overlay traffic will traverse the network.
If legacy monitoring tools are to be used, there must be some way to separate the overlay networks through filtering without removing the encapsulation headers; this is where Cubro's G5 Sessionmasters come in.
Returning to the VXLAN example, the G5 Sessionmasters (EXA48600 and EXA32100) can natively filter on the inner headers of encapsulated traffic at full line rate on every port, a feature not found on other network packet brokers. The ability to do this at line rate is critical because networks carrying massive numbers of overlays very often use multiple 100 Gbps physical links. Cubro achieves this level of performance by implementing these filtering functions in hardware. The solution to the problems described above is to filter on the VXLAN tunnel ID (the VNI), the field that logically separates one VXLAN overlay from another.
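The Sessionmasters perform this filtering in hardware; purely to illustrate the logic, here is a software sketch (with hypothetical function names, packets again assumed to start at the VXLAN header) that selects one overlay's traffic by VNI while leaving the encapsulation intact:

```python
import struct

def vni_of(vxlan_pkt: bytes) -> int:
    """Extract the 24-bit VNI from a VXLAN-encapsulated packet."""
    flags, word = struct.unpack("!B3xI", vxlan_pkt[:8])
    return word >> 8

def filter_by_vni(packets, wanted_vni):
    """Select only the packets belonging to one overlay network,
    keeping every encapsulation header intact."""
    return [p for p in packets if vni_of(p) == wanted_vni]
```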
Once we identify and separate these VXLAN overlays, it is possible to remove the headers and forward the traffic to a monitoring tool knowing, absolutely, that all of the traffic belongs to the same overlay network.
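Only after that separation is it safe to strip the header. Continuing the sketch (same assumption that the packet starts at the VXLAN header):

```python
def decapsulate(vxlan_pkt: bytes) -> bytes:
    """Strip the 8-byte VXLAN header, exposing the inner Ethernet frame.
    Safe only once traffic has already been separated by VNI; done any
    earlier, the overlay identity is lost along with the header."""
    return vxlan_pkt[8:]
```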
If this weren't complex enough, there is yet another consideration: these endpoints are often virtual instances and can move from one physical location in the underlay to another. Not only that, they could move to an entirely different underlay altogether, across the country or perhaps across the world. To the user of the overlay network nothing appears to have changed, but performance may have declined. Without any awareness of or visibility into the underlay network, the user has no way to understand or troubleshoot this performance degradation. In the third installment of this series we will discuss the particular issues relevant to this situation as well as our proposed solution.
Author - Derek Burke is a Technical Support Engineer with Cubro. Derek holds various certifications pertaining to Networking, Linux, and Security. He is in continual pursuit of new skills and knowledge and strives for a deep understanding of technologies in order to offer assistance and communicate them to others.