What is Network Encapsulation?

A traditional Ethernet frame, as described previously, contains enough data to move traffic from one device to another within the same VLAN (layer-2 segment): a source MAC and a destination MAC. If the data needs to travel outside the layer-2 boundary, a layer-3 device (traditionally a router) needs to decapsulate the frame, read the source and destination IP addresses, re-encapsulate it, and route it to the right destination. VLANs were typically restricted to physical locations, and WANs (wide-area networks) or VPN tunnels were used to route traffic between sites.

Protocols that can bridge layer-2 segments (VLANs) separated by layer-3 boundaries have long existed, but they were often proprietary or expensive. The new¹ hotness in the network world is network encapsulation. Network encapsulation lets us take an Ethernet frame from a local VLAN, perform some magic, and send it to another device far away. The remote device can then receive that frame and drop it onto a local VLAN, where it can be received by another device as if the two were layer-2 adjacent. Network encapsulation is used in both physical and virtual environments. There are two common network encapsulation protocols that we will run into with NSX and Flow: Geneve (Generic Network Virtualization Encapsulation) and VXLAN (Virtual Extensible Local Area Network).

To understand the basics, we will walk through Baby’s First VXLAN Deployment on two physical switches. NSX and Flow both leverage Geneve for their internal operations, but both are capable of leveraging VXLAN for certain use cases as well. The two protocols function very similarly, but I think VXLAN makes for a simpler example. Also, the example uses Arista configuration because it is easy to read. Our example uses two switches that are connected via a layer-3 link. We will start with a basic configuration: an uplink, a standard switchport, and a single VLAN.

LEAF-SW01:

vlan 10
 name Prod-Servers
!
ip route 0.0.0.0 0.0.0.0 192.168.100.1
!
interface Ethernet9
 no switchport
 description "Uplink to Spine"
 ip address 192.168.100.2 255.255.255.252
!
interface Ethernet1
 description "Server-A"
 switchport access vlan 10

LEAF-SW02:

vlan 10
 name Prod-Servers
!
ip route 0.0.0.0 0.0.0.0 192.168.200.1
!
interface Ethernet9
 no switchport
 description "Uplink to Spine"
 ip address 192.168.200.2 255.255.255.252
!
interface Ethernet1
 description "Server-B"
 switchport access vlan 10

Next, we are going to add a VXLAN interface. This is a service interface where we can define the VXLAN configuration. We will tell the VXLAN service to source traffic from the routed uplink, Ethernet9, and to use UDP port 4789 for VXLAN traffic. This turns the switch into a VXLAN Tunnel Endpoint (VTEP).

LEAF-SW01:

vlan 10
 name Prod-Servers
!
ip route 0.0.0.0 0.0.0.0 192.168.100.1
!
interface Ethernet9
 no switchport
 description "Uplink to Spine"
 ip address 192.168.100.2 255.255.255.252
!
interface Ethernet1
 description "Server-A"
 switchport access vlan 10
!
interface Vxlan1
 vxlan source-interface Ethernet9
 vxlan udp-port 4789

LEAF-SW02:

vlan 10
 name Prod-Servers
!
ip route 0.0.0.0 0.0.0.0 192.168.200.1
!
interface Ethernet9
 no switchport
 description "Uplink to Spine"
 ip address 192.168.200.2 255.255.255.252
!
interface Ethernet1
 description "Server-B"
 switchport access vlan 10
!
interface Vxlan1
 vxlan source-interface Ethernet9
 vxlan udp-port 4789

Next, we will define our first VXLAN segment. A VXLAN segment is defined with a number called a virtual network identifier, or VNI, which uniquely identifies that segment. While we are limited to 4094² VLAN tags, there are 16,777,215 possible VNIs³. So we will define VNI 10010 on both switches and bind it to each switch’s local VLAN 10.
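That jump in scale is simply the difference between a 12-bit and a 24-bit identifier field, which we can sanity-check with a couple of lines of Python:

```python
# VLAN tag: a 12-bit field; tags 0 and 4095 are reserved.
usable_vlans = 2**12 - 2

# VXLAN VNI: a 24-bit field; VNI 0 is not used.
usable_vnis = 2**24 - 1

print(usable_vlans)  # 4094
print(usable_vnis)   # 16777215
```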

LEAF-SW01:

vlan 10
 name Prod-Servers
!
ip route 0.0.0.0 0.0.0.0 192.168.100.1
!
interface Ethernet9
 no switchport
 description "Uplink to Spine"
 ip address 192.168.100.2 255.255.255.252
!
interface Ethernet1
 description "Server-A"
 switchport access vlan 10
!
interface Vxlan1
 vxlan source-interface Ethernet9
 vxlan udp-port 4789
 vxlan vlan 10 vni 10010

LEAF-SW02:

vlan 10
 name Prod-Servers
!
ip route 0.0.0.0 0.0.0.0 192.168.200.1
!
interface Ethernet9
 no switchport
 description "Uplink to Spine"
 ip address 192.168.200.2 255.255.255.252
!
interface Ethernet1
 description "Server-B"
 switchport access vlan 10
!
interface Vxlan1
 vxlan source-interface Ethernet9
 vxlan udp-port 4789
 vxlan vlan 10 vni 10010

Ok, that’s great. But how do the switches know about each other? There’s an easy way (which doesn’t scale) and a more complex way using dynamic routing (out of scope for this class). So let’s do the easy way: we will add something called a VTEP flood list.

LEAF-SW01:

vlan 10
 name Prod-Servers
!
ip route 0.0.0.0 0.0.0.0 192.168.100.1
!
interface Ethernet9
 no switchport
 description "Uplink to Spine"
 ip address 192.168.100.2 255.255.255.252
!
interface Ethernet1
 description "Server-A"
 switchport access vlan 10
!
interface Vxlan1
 vxlan flood vtep 192.168.200.2
 vxlan source-interface Ethernet9
 vxlan udp-port 4789
 vxlan vlan 10 vni 10010

LEAF-SW02:

vlan 10
 name Prod-Servers
!
ip route 0.0.0.0 0.0.0.0 192.168.200.1
!
interface Ethernet9
 no switchport
 description "Uplink to Spine"
 ip address 192.168.200.2 255.255.255.252
!
interface Ethernet1
 description "Server-B"
 switchport access vlan 10
!
interface Vxlan1
 vxlan flood vtep 192.168.100.2
 vxlan source-interface Ethernet9
 vxlan udp-port 4789
 vxlan vlan 10 vni 10010

That’s it. We’ve configured VXLAN and turned each switch into a VTEP, and by adding that flood list we’ve told each switch: “any time you receive broadcast, unknown-unicast, or multicast (BUM) traffic on VLAN 10, flood an encapsulated copy to the other VTEP.” Each switch then learns which remote MAC addresses live behind which VTEP from the traffic it receives over the tunnel. So when a computer on VLAN 10 behind LEAF-SW01 wants to talk to a computer on VLAN 10 behind LEAF-SW02, VXLAN happens. LEAF-SW01 takes the Ethernet frame and slaps a VNI 10010 header on top of it. This VNI-headered frame becomes the payload for a UDP/IP packet, which is sent to 192.168.200.2. When LEAF-SW02 receives it, it sees VNI 10010 in the header and drops the frame into VLAN 10. To add more switches to this configuration, you add more vxlan flood vtep statements.
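To make that forwarding decision concrete, here is a minimal Python sketch of the logic a VTEP applies to a frame on an overlay VLAN. The table contents, names, and structure are invented for illustration; real switch code learns the MAC table dynamically from the outer source IPs of received VXLAN packets rather than hard-coding it.

```python
# Hypothetical VTEP forwarding sketch (illustration only, not real switch code).
FLOOD_LIST = ["192.168.200.2"]  # from: vxlan flood vtep 192.168.200.2
MAC_TABLE = {"aa:bb:cc:dd:ee:02": "192.168.200.2"}  # learned remote MACs
BROADCAST = "ff:ff:ff:ff:ff:ff"

def forward(dst_mac: str) -> list[str]:
    """Return the remote VTEP IP(s) that get an encapsulated copy of the frame."""
    if dst_mac != BROADCAST and dst_mac in MAC_TABLE:
        # Known unicast: tunnel straight to the VTEP that owns this MAC.
        return [MAC_TABLE[dst_mac]]
    # BUM traffic: replicate to every VTEP in the flood list.
    return list(FLOOD_LIST)

print(forward("aa:bb:cc:dd:ee:02"))  # ['192.168.200.2']
print(forward(BROADCAST))            # ['192.168.200.2']
```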

So we started with a packet. We put it in a frame. We’re going to slap some VXLAN info on that frame and put it in a packet, which will end up in its own frame. It’s like matryoshka dolls, but networking! Visualized, we end up with something like this unnecessarily colorful but fun representation:
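The nesting can also be shown in code. This is a minimal Python sketch of just the 8-byte VXLAN header from RFC 7348 (a flags byte plus the 24-bit VNI); the outer UDP/IP and Ethernet layers that a real VTEP would add around it are left out:

```python
import struct

def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
    """Prepend the 8-byte VXLAN header (RFC 7348) to an inner Ethernet frame."""
    # Flags byte 0x08: the "VNI valid" bit; all other bits are reserved.
    # The VNI occupies the top 24 bits of the second 32-bit word.
    header = struct.pack("!II", 0x08 << 24, vni << 8)
    return header + inner_frame

def vxlan_decap(payload: bytes) -> tuple[int, bytes]:
    """Split a VXLAN payload back into (vni, inner_frame)."""
    flags, vni_field = struct.unpack("!II", payload[:8])
    assert flags >> 24 == 0x08, "VNI-valid flag not set"
    return vni_field >> 8, payload[8:]

frame = b"\xff" * 12 + b"\x08\x00" + b"hello"  # fake inner Ethernet frame
packet = vxlan_encap(10010, frame)
vni, inner = vxlan_decap(packet)
print(vni, inner == frame)  # 10010 True
```

In a real deployment the result of `vxlan_encap` becomes the payload of a UDP datagram to port 4789, which in turn rides inside an outer IP packet and Ethernet frame on the underlay.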

The huge benefit here is there is no longer a requirement to trunk VLANs from switch to switch. It doesn’t matter if there are 100 physical switches between LEAF-SW01 and LEAF-SW02. VLAN10 doesn’t need to exist on any of them. In fact, the two VLAN tags don’t even need to match. Same VNI? Same layer-2 boundary. When all the links between switches are routed links, there are no more loops. VLANs become switch-local, allowing for greater flexibility. A world without trunks. A world without spanning tree. Just imagine. And, you can also use the magic of BGP and the switches can automatically discover and peer with each other and start sharing information. Then you don’t need to deal with the static flood lists.

VTEPs can be more than just switches, as well. A hypervisor can be a VTEP! In fact, this is exactly how NSX and Flow work. Our deployments of NSX use Geneve instead of VXLAN, and Flow leverages both Geneve and VXLAN, but the concept is identical. The differences are under the hood and not relevant to this class. One note: Geneve tunnel endpoints are just called TEPs instead of VTEPs. Instead of trunking the VLANs between all of the hypervisors, the hypervisors simply act as TEPs. Networks are assigned a VNI, and the TEPs all peer up and exchange information about which device is where.

When a network encapsulation fabric is added to a network, you will often hear the terms underlay and overlay. The underlay, or underlying network, is the base layer of network infrastructure and configuration that everything else is deployed on and depends on. In the case of a network encapsulation fabric, the underlay is the physical routed and switched network, with traditional VLAN networking, that allows the tunnel endpoints to communicate with each other. The overlay, then, is the network encapsulation fabric itself. Both NSX and Flow describe Geneve-backed networks with the term overlay. Just as the term Virtual Private Network (VPN) is often used to describe connections between remote layer-3 segments, the term Ethernet Virtual Private Network (EVPN) is often used to describe a VXLAN deployment at scale, indicating that it creates connections between remote layer-2 segments.

If you are interested in learning more about network encapsulation concepts at a deeper level, including how you can leverage a routing protocol in the overlay to facilitate the exchange of information between TEPs at scale, I strongly recommend the Arista EVPN Design Guide, which can be downloaded from the Arista Design & Deployment Guides page. A free Arista account registration is required to access downloads.

As I said, a simple concept that can be used in exceptional ways to do remarkable things, and NSX and Flow do those remarkable things. Let’s learn about how.


  1. Not REALLY new, but comparatively new. The first widely used encapsulation of this type was GRE (RFC 2784), which was officially standardized in 2000. The most commonly used one, VXLAN (RFC 7348), was officially standardized in 2014. The one used by NSX-T, Geneve (RFC 8926), was standardized in November 2020. VXLAN’s popularity among network engineers skyrocketed in the late 2010s as a way of moving away from more costly technologies such as MPLS. ↩︎
  2. 2¹² = 4096, and since numbering starts at zero, tags run 0–4095; the first (0) and the last (4095) are reserved, so 4094 are usable. ↩︎
  3. 2²⁴ = 16,777,216; VNI 0 is not used, so 16,777,215 remain. ↩︎