VMware NSX-T Data Center transport nodes

Hypervisor transport nodes are hypervisor hosts that are prepared and configured for VMware NSX-T Data Center.

On each transport node, a virtual switch provides network services to the virtual machines that run on that hypervisor. VMware NSX-T Data Center supports VMware ESXi and KVM hypervisors. The N-VDS that is implemented for KVM is based on Open vSwitch (OVS) and is platform-independent. The N-VDS can be ported to other hypervisors and serves as the foundation for implementing VMware NSX-T Data Center in other environments, such as cloud and containers. For the VMware NSX-T Data Center on VxBlock Systems design, Dell Technologies Sales Engineers deploy and support only VMware ESXi-based transport nodes, although VMware supports other transport node types.

Transport node topology 1

The VMware NSX-T Data Center for VxBlock System transport node topology 1 implementation uses the following criteria:

  • One VMware VDS uses VMNIC0 and VMNIC1 for VMware ESXi host functions, including port groups and VMkernel adapters for VMware ESXi management, VMware vSphere vMotion, and NFS.
  • One VMware N-VDS uses VMNIC2 and VMNIC3 for workload VMs.

    The VMware N-VDS carries east-west traffic and uses TEPs to create an overlay network.

  • Uplink teaming with the source port policy on the VMware N-VDS ensures load balancing.
  • An MTU value of 9000 on these vNICs allows for the overhead of GENEVE tunnel encapsulation.
  • TEP traffic requires VLAN tagging at the uplink profile, because there is no VMware VDS in front of the VMware N-VDS (see the sketch after this list).
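
The teaming policy, MTU, and transport VLAN in the preceding list are all properties of the NSX-T uplink profile that is attached to the N-VDS. The following Python sketch creates such a profile through the NSX-T Policy REST API; the manager address, credentials, profile name, uplink names, and VLAN ID are placeholder assumptions, and the payload shape should be verified against the API reference for the NSX-T release in use.

    import requests

    NSX_MANAGER = "nsx-mgr.example.local"  # placeholder NSX Manager FQDN
    PROFILE_ID = "nvds-uplink-profile"     # placeholder profile identifier

    # Uplink profile: source-port teaming, TEP transport VLAN, and 9000 MTU.
    payload = {
        "resource_type": "PolicyUplinkHostSwitchProfile",
        "display_name": PROFILE_ID,
        "teaming": {
            "policy": "LOADBALANCE_SRCID",  # source port teaming policy
            "active_list": [
                {"uplink_name": "uplink-1", "uplink_type": "PNIC"},  # maps to VMNIC2
                {"uplink_name": "uplink-2", "uplink_type": "PNIC"},  # maps to VMNIC3
            ],
        },
        "transport_vlan": 1614,  # placeholder TEP VLAN, tagged at the profile
        "mtu": 9000,             # allows for GENEVE encapsulation overhead
    }

    resp = requests.put(
        f"https://{NSX_MANAGER}/policy/api/v1/infra/host-switch-profiles/{PROFILE_ID}",
        json=payload,
        auth=("admin", "password"),  # placeholder credentials
        verify=False,                # lab only; validate certificates in production
    )
    resp.raise_for_status()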

Using a separate VMware VDS and VMware N-VDS isolates the VMware ESXi host functions from VMware NSX-T Data Center traffic, which makes troubleshooting and recovery of a host easier if a failure occurs. The following figure illustrates the topology of this design:

In this design, the transport nodes are connected to FI A and B through all four VMNICs.

The VLANs are added to the vNIC templates as follows:
  • The VLANs for Overlay and VLAN-backed segments are added to the templates for vNIC2 and vNIC3.
  • The VLANs for Management, vMotion, and NFS traffic are added to the templates for vNIC0 and vNIC1 (the sketch after this list models this separation).
  • Non-NSX site VLANs are not deployed on NSX transport nodes. NSX participation can be defined at the cluster level, so some clusters carry non-NSX site VLANs while others carry NSX VLAN-backed and overlay-backed segments.
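
The separation that the preceding list describes can be modeled as a simple mapping from vNIC templates to VLAN groups. The following purely illustrative Python sketch, in which every VLAN ID and template name is a hypothetical placeholder, sanity-checks that no template mixes host-function VLANs with NSX VLANs:

    # Hypothetical VLAN IDs and template names, for illustration only.
    HOST_FUNCTION_VLANS = {"esxi-mgmt": 105, "vmotion": 106, "nfs": 107}
    NSX_VLANS = {"overlay": 1614, "vlan-segment": 1620}

    # vNIC0/vNIC1 carry host functions; vNIC2/vNIC3 carry NSX traffic.
    VNIC_TEMPLATE_VLANS = {
        "vnic0": set(HOST_FUNCTION_VLANS.values()),
        "vnic1": set(HOST_FUNCTION_VLANS.values()),
        "vnic2": set(NSX_VLANS.values()),
        "vnic3": set(NSX_VLANS.values()),
    }

    def check_separation(templates):
        """Fail if any vNIC template mixes host-function and NSX VLANs."""
        host, nsx = set(HOST_FUNCTION_VLANS.values()), set(NSX_VLANS.values())
        for name, vlans in templates.items():
            if vlans & host and vlans & nsx:
                raise ValueError(f"{name} mixes host-function and NSX VLANs")

    check_separation(VNIC_TEMPLATE_VLANS)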

Transport node topology 2

The VMware NSX-T Data Center for VxBlock System transport node topology 2 implementation uses the following criteria:

  • In VMware vSphere 7.0, all port groups and VLANs are under the VMware VDS, including ESXi-mgmt, VMware vSphere vMotion, Uplink1, Uplink2, and Overlay.
  • Uplink teaming with the source port policy on the VMware VDS ensures load balancing.
  • An MTU value of 9000 on these vNICs allows for the overhead of GENEVE tunnel encapsulation (see the sketch after this list).
  • TEP traffic requires VLAN tagging at the uplink profile.
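
Because all port groups sit under the VMware VDS in this topology, the 9000-byte MTU is set on the VDS itself. The following pyVmomi sketch shows one way to do that; the vCenter address, credentials, and VDS name are placeholder assumptions, and it is an illustration rather than the deployment procedure for VxBlock Systems.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only; validate certs in production
    si = SmartConnect(host="vcenter.example.local",            # placeholder vCenter
                      user="administrator@vsphere.local",      # placeholder user
                      pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.DistributedVirtualSwitch], True)
        dvs = next(d for d in view.view if d.name == "dvs-compute")  # placeholder VDS
        view.Destroy()

        spec = vim.dvs.VMwareDVSConfigSpec()
        spec.configVersion = dvs.config.configVersion  # required for reconfigure
        spec.maxMtu = 9000  # allows for GENEVE encapsulation overhead
        task = dvs.ReconfigureDvs_Task(spec)
        # Real automation would wait on the task and check task.info.state here.
    finally:
        Disconnect(si)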

The following figure illustrates the topology that is used for VMware vSphere 7.0 on the transport nodes:

In this design, the transport nodes connect to FI A and B through all four VMNICs. The four-vNIC design aligns with the VxBlock System standard configuration for a VMware vSphere 7.0 compute host, allowing standardization of service profile templates across hosts with single and dual VIC configurations.

The overlay VLAN is added to the vNIC templates for vNICs 0, 1, 2, and 3, and to the ToR switch A and ToR switch B trunk ports that connect to the FIs, to extend the overlay network.
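
On the physical side, the overlay VLAN must be allowed on the ToR trunk ports toward the FIs. The following netmiko sketch illustrates that change on both ToR switches; the switch names, interfaces, credentials, and VLAN ID are placeholder assumptions, and this is not the supported VxBlock configuration procedure.

    from netmiko import ConnectHandler

    OVERLAY_VLAN = 1614                           # placeholder overlay VLAN ID
    TRUNK_PORTS = ["ethernet1/1", "ethernet1/2"]  # placeholder uplinks to the FIs

    for tor in ("tor-a.example.local", "tor-b.example.local"):  # ToR switch A and B
        conn = ConnectHandler(device_type="cisco_nxos", host=tor,
                              username="admin", password="password")  # placeholders
        commands = []
        for port in TRUNK_PORTS:
            commands += [f"interface {port}",
                         f"switchport trunk allowed vlan add {OVERLAY_VLAN}"]
        print(conn.send_config_set(commands))
        conn.disconnect()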