NIC teaming on vSwitch, Distributed Switch and ESXi – Load Balancing Policy – VirtuallyThatGuy

Posted on 11 July 2018 (updated 6 December 2022) by VirtuallyThatGuy

I have long been itching to blog about NIC teaming, load balancing and failover order. Let’s start with the main benefit of NIC teaming: when uplinks are teamed, the vSwitch can spread traffic between the physical and virtual networks across some or all of the team members, and it provides passive failover in the event of a hardware failure or network outage. There are many VMware KB articles that cover this in more detail, but that’s for later.

The diagram below shows the vSwitch Teaming and Failover menu with the vmnic adapters set to active/active.

NIC teaming within a vSwitch offers several options, which become more relevant when your vSwitch uses multiple uplinks; in my homelab that means 2 uplinks per VMkernel port and per port group.
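Since this is a PowerCLI blog, it is worth noting you can inspect the current teaming and failover configuration from PowerCLI before changing anything in the UI. A minimal sketch, assuming a host called esxi01.lab.local and the default vSwitch0 (swap in your own names; the property list reflects what I would expect on the returned policy object):

# Inspect the teaming and failover policy of a standard vSwitch
Get-VirtualSwitch -VMHost "esxi01.lab.local" -Name "vSwitch0" |
    Get-NicTeamingPolicy |
    Select-Object LoadBalancingPolicy, NetworkFailoverDetectionPolicy,
                  NotifySwitches, FailbackEnabled, ActiveNic, StandbyNic, UnusedNic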

Load Balancing Options

The first point of interest is the load-balancing policy. This is basically how we tell the vSwitch to handle outbound traffic, and there are four choices on a standard vSwitch:

  • Route based on the originating virtual port

  • Route based on IP hash

  • Route based on source MAC hash

  • Use explicit failover order

Keep in mind that we’re not concerned with the inbound traffic because that’s not within our control. Traffic arrives on whatever uplink the upstream switch decided to put it on, and the vSwitch is only responsible for making sure it reaches its destination.

The first option, route based on the originating virtual port, is the default selection for a new vSwitch. Every VM and VMkernel port on a vSwitch is connected to a virtual port. When the vSwitch receives traffic from either of these objects, it assigns the virtual port an uplink and uses it for traffic. The chosen uplink will typically not change unless there is an uplink failure, the VM changes power state, or the VM is migrated via vMotion.

The second option, route based on IP hash, is used in conjunction with a link aggregation group (LAG), also called an EtherChannel or port channel. When traffic enters the vSwitch, the load-balancing policy will create a hash value of the source and destination IP addresses in the packet. The resulting hash value dictates which uplink will be used.

The third option, route based on source MAC hash, is similar to the IP hash idea, except the policy examines only the source MAC address in the Ethernet frame. To be honest, we have rarely seen this policy used in a production environment, but it can be handy for a nested hypervisor VM to help balance its nested VM traffic over multiple uplinks.

The last option, use explicit failover order, really doesn’t do any sort of load balancing. Instead, the first Active NIC on the list is used. If that one fails, the next Active NIC on the list is used, and so on, until you reach the Standby NICs. Keep in mind that if you select the Explicit Failover option and you have a vSwitch with many uplinks, only one of them will be actively used at any given time. Use this policy only in circumstances where using only one link rather than load balancing over all links is desired or required.
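If you prefer to configure this from PowerCLI rather than the UI, the four options map onto the LoadBalancingPolicy values shown below. A hedged sketch, again assuming esxi01.lab.local and vSwitch0:

# Map of UI options to PowerCLI values:
#   LoadBalanceSrcId  - route based on the originating virtual port (default)
#   LoadBalanceIP     - route based on IP hash (needs a LAG/EtherChannel upstream)
#   LoadBalanceSrcMac - route based on source MAC hash
#   ExplicitFailover  - use explicit failover order
$vSwitch = Get-VirtualSwitch -VMHost "esxi01.lab.local" -Name "vSwitch0"
Get-NicTeamingPolicy -VirtualSwitch $vSwitch |
    Set-NicTeamingPolicy -LoadBalancingPolicy LoadBalanceSrcId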

Network Failover Detection

There are two options for network failover detection.

Link Status only: This will detect the link state of the physical adapter. If the link state changes, for example, if the physical switch fails or the network cable is unplugged, failover to a working NIC will be initiated. Link Status works by checking physical (layer 1) connectivity. It is not able to determine or react to configuration issues such as misconfigured VLANs on trunks.

Beacon Probing: When this setting is enabled, beacon probes are sent out and listened for on all the NICs that are in the team. It uses the probes it receives to determine the link status and, unlike the Link Status detection method, is able to detect downstream switch failures. Beacon probing works best when there are at least 3 NICs in the team. Note: Do not use beacon probing with IP-hash load balancing.

There are some additional settings associated with failover detection.

Notify Switches. When this is set to ‘Yes’, the host will notify the physical switch the NIC is connected to whenever a failure occurs. Generally this option is set to ‘Yes’, except when a VM using Microsoft NLB in unicast mode is assigned to the port group in question, in which case it should be set to ‘No’.

Failback. Select Yes or No for the Failback policy. If ‘Yes’ is selected, traffic will fail back to the original NIC once it has recovered following a failure.
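The same cmdlets cover the failover detection, Notify Switches and Failback settings. A sketch with the same assumed host and vSwitch names; remember not to combine beacon probing with IP-hash load balancing:

# Enable beacon probing, switch notification and failback on a standard vSwitch
$vSwitch = Get-VirtualSwitch -VMHost "esxi01.lab.local" -Name "vSwitch0"
Get-NicTeamingPolicy -VirtualSwitch $vSwitch |
    Set-NicTeamingPolicy -NetworkFailoverDetectionPolicy BeaconProbing `
                         -NotifySwitches $true `
                         -FailbackEnabled $true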

Failover Order

The last policy relating to failover and load balancing is the Failover Order policy. This lets you define which adapters are active, on standby, or unused for each vSwitch or port group. The three categories available for placing NICs into are:

  • Active Adapters: NICs listed here are active and are being used for inbound/outbound traffic.

  • Standby Adapters: NICs listed here are on standby and only used when an active adapter fails.

  • Unused Adapters: NICs listed here will not be used.
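The failover order can also be set from PowerCLI by passing physical NIC objects to -MakeNicActive, -MakeNicStandby and -MakeNicUnused. A sketch using assumed host and vmnic names:

# Place uplinks into the Active and Standby categories at the vSwitch level
$vmhost  = Get-VMHost -Name "esxi01.lab.local"
$vSwitch = Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch0"
$active  = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic0
$standby = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic1
Get-NicTeamingPolicy -VirtualSwitch $vSwitch |
    Set-NicTeamingPolicy -MakeNicActive $active -MakeNicStandby $standby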

Configure Explicit Failover to Conform with VMware Best Practices

When using Explicit Failover, each port group is given its own dedicated network adapter, but each also has a standby adapter configured, which is the dedicated adapter of a different port group. For example, on the same vSwitch you could have a management port group and a vMotion port group: vmnic5 would be active for management and standby for vMotion, whilst vmnic0 would be active for vMotion and standby for management. This gives each port group its own dedicated network adapter, effectively isolating it from the impact of network activity on the others, while still allowing each port group to fail over to the remaining network adapter if its own adapter loses connectivity.
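The same cross active/standby layout can be applied per port group with PowerCLI, overriding the vSwitch defaults. A sketch assuming port groups named "Management Network" and "vMotion" on the same host:

$vmhost = Get-VMHost -Name "esxi01.lab.local"
$vmnic0 = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic0
$vmnic5 = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic5

# Management: vmnic5 active, vmnic0 standby
Get-VirtualPortGroup -VMHost $vmhost -Name "Management Network" |
    Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -MakeNicActive $vmnic5 -MakeNicStandby $vmnic0

# vMotion: vmnic0 active, vmnic5 standby
Get-VirtualPortGroup -VMHost $vmhost -Name "vMotion" |
    Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -MakeNicActive $vmnic0 -MakeNicStandby $vmnic5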


Types of port binding (VMware)

These three different types of port binding determine when ports in a port group are assigned to virtual machines:

  • Static Binding
  • Dynamic Binding
  • Ephemeral Binding
Static binding

When you connect a virtual machine to a port group configured with static binding, a port is immediately assigned and reserved for it, guaranteeing connectivity at all times. The port is disconnected only when the virtual machine is removed from the port group. You can connect a virtual machine to a static-binding port group only through vCenter Server.

Note: Static binding is the default setting, recommended for general use.

Dynamic binding

In a port group configured with dynamic binding, a port is assigned to a virtual machine only when the virtual machine is powered on and its NIC is in a connected state. The port is disconnected when the virtual machine is powered off or the NIC of the virtual machine is disconnected. Virtual machines connected to a port group configured with dynamic binding must be powered on and off through vCenter.

Dynamic binding can be used in environments where you have more virtual machines than available ports, but do not plan to have a greater number of virtual machines active than you have available ports. For example, if you have 300 virtual machines and 100 ports, but never have more than 90 virtual machines active at one time, dynamic binding would be appropriate for your port group.

Note: Dynamic binding is deprecated from ESXi 5.0, but this option is still available in vSphere Client. It is strongly recommended to use Static Binding for better performance.

Ephemeral binding

In a port group configured with ephemeral binding, a port is created and assigned to a virtual machine by the host when the virtual machine is powered on and its NIC is in a connected state. When the virtual machine powers off or the NIC of the virtual machine is disconnected, the port is deleted.

You can assign a virtual machine to a distributed port group with ephemeral port binding on ESX/ESXi and vCenter, giving you the flexibility to manage virtual machine connections through the host when vCenter is down. Although only ephemeral binding allows you to modify virtual machine network connections when vCenter is down, network traffic is unaffected by vCenter failure regardless of port binding type.

Note: Ephemeral port groups are generally only used for recovery purposes when there is a need to provision ports directly on a host, bypassing vCenter Server.

VMware Validated Designs, for example, use ephemeral port groups for the Management Domain to allow flexibility in the management cluster in the event of a vCenter outage.

If a Management Cluster is not used, then it is recommended to create an ephemeral port group on the VDS for Management workloads (including vCenter), allowing them to attach to it during a vCenter outage.
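Port binding is chosen when the distributed port group is created. A hedged PowerCLI sketch, assuming a VDS called Lab-VDS and illustrative port group names and VLAN IDs:

$vds = Get-VDSwitch -Name "Lab-VDS"

# Static binding is the default when creating a distributed port group
New-VDPortgroup -VDSwitch $vds -Name "VM-Network" -VlanId 100

# Ephemeral binding, e.g. for a management port group that stays usable if vCenter is down
New-VDPortgroup -VDSwitch $vds -Name "Mgmt-Ephemeral" -VlanId 10 -PortBinding Ephemeral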
