Tunneling And Network Virtualization: NVGRE, VXLAN

The hype around the NVGRE and VXLAN tunneling protocols began about two years ago. It is important to remember that tunneling protocols are only a small component of a fully virtualized network: the tunnels themselves do not provide any services to the system provider, they merely define how packets are encapsulated on the wire and forwarded between VMs. In this post I'll focus on two of them: NVGRE and VXLAN. Both encapsulate L2 frames over an L3 network, and both remove the scalability limit of VLANs, which are capped at 4096 segments. As is often the case with new technologies, one standard is not enough, and the tech giants are each pushing their own to become the industry standard. I will try to explain the differences between the two.

VXLAN

Mainly driven by Cisco. The VXLAN packet header includes a 24-bit segment ID (the VNI), which allows for 16M unique virtual segments. The outer encapsulation uses UDP, and the source UDP port is pseudo-randomly generated as a hash of the MAC addresses of the original inner frame. The incentive is to keep ordinary 5-tuple-based load balancers working while preserving inter-VM packet order, and VXLAN achieves this by "reflecting" each inner MAC pair into a distinct source UDP port. L2 broadcast is converted to IP multicast: VXLAN specifies the use of IP multicast to flood within the virtual segment and relies on dynamic MAC learning. The VXLAN encapsulation increases the size of the packet by 50 bytes, as described below:

VXLAN Encapsulation
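
For reference, assuming an IPv4 underlay and no outer VLAN tag, the 50 bytes of overhead break down as follows:

outer Ethernet header: 14 bytes
outer IPv4 header:     20 bytes
outer UDP header:       8 bytes
VXLAN header:           8 bytes
total overhead:        50 bytes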

Because of the larger packets, VXLAN relies on the transport network to absorb the extra 50 bytes, typically by requiring support for jumbo frames.
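
As a minimal sketch, assuming eth0 is the underlay interface and that the NICs and switches along the path support jumbo frames, enabling them on each host could look like this:

ifconfig eth0 mtu 9000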

NVGRE

Driven mainly by Microsoft. In contrast to VXLAN, NVGRE does not ride on top of a standard transport protocol (TCP/UDP); instead it uses Generic Routing Encapsulation (GRE) as the encapsulation method, with the lower 24 bits of the GRE Key field carrying the Tenant Network Identifier (TNI). Similar to VXLAN, this 24-bit space allows for 16 million virtual networks. In order to provide flow-level granularity (which is desirable for spreading traffic across all available bandwidth), the transport network must look into the GRE header, which is not backward compatible with traditional 5-tuple load balancers – this is the main drawback (and the most important difference) of NVGRE compared to VXLAN. To improve load balancing, the draft suggests using multiple IP addresses per NVGRE host, which allows more flows to be load-balanced.

NVGRE Encapsulation
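
For comparison, assuming an IPv4 underlay and a GRE header carrying the 4-byte Key field, the NVGRE overhead is about 42 bytes:

outer Ethernet header:        14 bytes
outer IPv4 header:            20 bytes
GRE header (with Key field):   8 bytes
total overhead:               42 bytes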

NVGRE does not rely on flood-and-learn behavior over IP multicast, which makes it more scalable with respect to broadcasts, but it also makes it more hardware/vendor dependent, since it relies instead on a control plane or provisioning system to distribute address mappings. The last difference is related to fragmentation: NVGRE supports reducing the packet MTU (via path MTU discovery) to shrink intra-virtual-network packet sizes, so it does not rely on jumbo-frame support in the transport network.
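
As a minimal sketch of the MTU-reduction approach, assuming a standard 1500-byte transport MTU and the 42-byte overhead above, the VM-facing interface would simply be configured with a smaller MTU instead of enabling jumbo frames on the transport network:

ifconfig eth0 mtu 1458   # inside the VM: 1500 - 42 = 1458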

Implementation

Open vSwitch (OVS) supports both tunneling protocols. One can demonstrate simple connectivity between two VMs (on different hosts) by setting up two hosts, each running a VM, and creating a tunnel between them. Without the GRE tunnel there is no connectivity between the two VMs. A simple setup to create a tunnel between the two hosts:

1. Host 1 configuration:

ovs-vsctl add-br br_gre        # bridge that carries the tunnel (VTEP side)
ovs-vsctl add-br br_vm         # bridge that simulates the VM side
ovs-vsctl add-port br_gre eth0 # attach the physical NIC to the tunnel bridge
ifconfig eth0 0                # move the IP address from eth0 ...
ifconfig br_gre 192.168.1.100 netmask 255.255.255.0   # ... to br_gre
route add default gw 192.168.1.1 br_gre
ifconfig br_vm 10.1.2.10 netmask 255.255.255.0        # the "VM" address
ovs-vsctl add-port br_vm gre1 -- set interface gre1 type=gre options:remote_ip=192.168.1.111   # GRE tunnel to host 2

2. Host 2 configuration:

ovs-vsctl add-br br_gre
ovs-vsctl add-br br_vm
ovs-vsctl add-port br_gre eth0
ifconfig eth0 0 
ifconfig br_gre 192.168.1.111 netmask 255.255.255.0
route add default gw 192.168.1.1 br_gre
ifconfig br_vm 10.1.2.11 netmask 255.255.255.0
ovs-vsctl add-port br_vm gre1 -- set interface gre1 type=gre options:remote_ip=192.168.1.100

I created two bridges: one simulating a VM (br_vm), and one used for tunneling (the VTEP) to the other host (br_gre). eth0 is attached to br_gre, which carries the tunnel and holds the IP address. I kept it simple and put both tunnel endpoints on the same subnet; of course they could sit in different routing domains, which would be a more realistic scenario. To test it, pinging 10.1.2.11 from host 1 (and 10.1.2.10 from host 2) should now succeed: the two VM domains are connected through the tunnel. Similarly, this configuration can be changed to a VXLAN tunnel, as sketched below.
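
As a minimal sketch, replacing the GRE port with a VXLAN port on each host would look like this (the 5000 used for options:key is just an example VNI; any value works as long as both sides agree). On host 1:

ovs-vsctl del-port br_vm gre1
ovs-vsctl add-port br_vm vxlan1 -- set interface vxlan1 type=vxlan options:remote_ip=192.168.1.111 options:key=5000

On host 2, the same two commands, with options:remote_ip=192.168.1.100.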
