Edge ECMP topology

In VMware NSX-T Data Center for VxBlock Systems, ECMP provides load balancing across multiple paths northbound from the Tier 0 gateways to the Cisco Nexus 9000 Series ToR switches. VMware NSX-T Data Center supports a maximum of eight paths per edge node cluster. Each Tier 0 gateway peers with each of the two Cisco Nexus 9000 Series ToR switches, which limits the maximum number of edge node VMs in the cluster to four.

The following figure shows a minimum edge cluster configuration with two edge node VMs peering with the ToR switches, consuming four ECMP paths.

The following figure shows a maximum edge node cluster configuration with four edge node VMs peering with the ToR switches, consuming eight ECMP paths.

This design peers each edge node with each ToR switch.

ECMP routing configuration

Each Tier 0 gateway peers with each ToR switch using BGP, and traffic is distributed across the available edge node VMs using ECMP.

Edge node VM resource usage and Data Plane Development Kit

The VMware NSX-T Data Center edge node VMs implement the Data Plane Development Kit (DPDK) standard to provide high-performance packet forwarding.

NS-peering edge cluster

The engineered VxBlock System implementation of VMware NSX-T Data Center deploys medium-sized edge VMs for the NS-peering edge cluster. This cluster is used exclusively for north-south traffic flows and should not run any services other than BGP peering and ECMP. The medium appliance meets these requirements.

Production01 edge cluster

The production01 edge cluster uses large-sized edge VMs. The production01 cluster hosts various services for the VMware NSX-T Data Center environment, including load balancing, VPN services, NAT, DHCP, and the Edge Firewall. Because these services consume additional resources, a large-sized edge VM is appropriate. Use this cluster only where VMware NSX-T Data Center Tier 1 centralized services are instantiated.
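The peering arithmetic above can be sketched in a few lines. The eight-path-per-cluster limit and the two-ToR peering design come from the text; the function names are illustrative, not part of any NSX-T API.

```python
# Sketch of the ECMP path arithmetic described above. The eight-path limit
# and two-ToR design are from the text; the helper names are illustrative.

MAX_ECMP_PATHS_PER_CLUSTER = 8  # NSX-T Data Center limit per edge node cluster
TOR_SWITCHES = 2                # each T0 peers with both Nexus 9000 ToRs

def ecmp_paths(edge_node_vms: int) -> int:
    """Northbound ECMP paths consumed: one BGP peering per edge VM per ToR."""
    return edge_node_vms * TOR_SWITCHES

def max_edge_node_vms() -> int:
    """Largest edge cluster that stays within the ECMP path limit."""
    return MAX_ECMP_PATHS_PER_CLUSTER // TOR_SWITCHES

# Minimum design: two edge VMs consume four paths.
# Maximum design: four edge VMs consume all eight paths.
assert ecmp_paths(2) == 4
assert ecmp_paths(4) == 8
assert max_edge_node_vms() == 4
```

This also makes the constraint explicit: adding a fifth edge VM to the cluster would require ten paths, exceeding the eight-path limit.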
Edge cluster

The VMware NSX-T Data Center solution for VxBlock Systems includes two edge clusters. Optionally, deploy additional edge VMs after initial system deployment. All VMware-supported configurations are allowed, provided they do not interfere with the operation of the original design. The physical edge node hosts have sufficient resources to allow an additional large edge node, or several small or medium edge nodes, per host. Ensure that each host in the cluster retains some reserve capacity so that maintenance and recovery operations can complete without warnings or errors appearing in the VMware vCenter Server UI.

Segments

NSX-V included the concept of a VXLAN Network Identifier (VNI). A VNI is the equivalent of a port group on a vSphere Distributed Switch (VDS), but it sits behind an NSX-V logical switch. VMware NSX-T Data Center uses GENEVE encapsulation instead of VXLAN, so the concept of a VNI does not apply. Instead, VMware NSX-T Data Center uses a construct called an overlay-backed segment.

Transport zones

Segments are created as part of a VMware NSX-T Data Center object called a transport zone. There are VLAN-backed transport zones and overlay-backed transport zones.

Profiles

An uplink profile is a template that defines how an NSX-managed Virtual Distributed Switch (N-VDS) connects to the physical network.

Tier 0 gateway

A Tier 0 gateway has downlink connections to Tier 1 gateways and uplink connections to physical networks. The Tier 0 gateway provides a gateway service between the logical and physical networks, and enables routing, route redistribution, and BGP, as well as optional Tier 0 services.

Tier 1 gateway

A Tier 1 gateway is typically connected to a Tier 0 gateway in the northbound direction and to segments in the southbound direction. Tier 1 gateways can also provide centralized services such as load balancing, VPN services, NAT, and DHCP.
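To make the Tier 0, Tier 1, and segment relationships concrete, the following minimal Python sketch builds payloads in the style of the NSX-T Policy API. The field names (`tier0_path`, `connectivity_path`, `transport_zone_path`) follow that API, but all IDs, display names, and the subnet are illustrative assumptions, and nothing is actually sent to an NSX Manager.

```python
# Minimal sketch, NSX-T Policy API style: a Tier 1 gateway uplinked to a
# Tier 0, and an overlay-backed segment attached southbound to the Tier 1.
# All IDs, names, and the subnet below are illustrative; no request is sent.

def tier1_payload(t1_name: str, t0_id: str) -> dict:
    """Tier 1 gateway attached northbound to an existing Tier 0."""
    return {
        "display_name": t1_name,
        "tier0_path": f"/infra/tier-0s/{t0_id}",  # northbound link to the T0
    }

def segment_payload(seg_name: str, t1_id: str, tz_id: str) -> dict:
    """Overlay-backed segment connected southbound to a Tier 1 gateway."""
    return {
        "display_name": seg_name,
        "connectivity_path": f"/infra/tier-1s/{t1_id}",  # gateway downlink
        "transport_zone_path": (
            "/infra/sites/default/enforcement-points/default"
            f"/transport-zones/{tz_id}"
        ),
        "subnets": [{"gateway_address": "172.16.10.1/24"}],  # example subnet
    }

# Hypothetical objects: a production T1 behind a T0, with one app segment.
t1 = tier1_payload("prod-t1", "prod-t0")
seg = segment_payload("app-segment", "prod-t1", "overlay-tz")
```

In a real deployment these payloads would be sent as `PATCH` requests against the corresponding `/policy/api/v1/infra/...` paths; the point here is only how the objects reference one another: the segment points at the Tier 1, and the Tier 1 points at the Tier 0.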