Monday, 16 July 2018

Configuring Distributed Switches in vCenter 6

Introduction to Distributed Switches

Version: vSphere 6.5 Update 1
600px-VMW-DGRM-vNTWRK-DIST-SWTCH-lg.jpg
Distributed vSwitches were first introduced in vSphere 4.0, and various enhancements have been made with each subsequent release of vSphere. At their heart, Distributed vSwitches are a method of centralizing the management of the virtual network into a single plane. Every VMware ESXi host added to a Distributed vSwitch inherits its configuration, and those settings are stored within vCenter rather than on the ESXi host itself. This means adding a new portgroup for a new VLAN to a cluster of ESXi hosts is a relatively trivial affair. The VMware ESXi host "caches" its Distributed vSwitch configuration to local storage, so if vCenter is unavailable for whatever reason, network communications are unaffected. However, no management of the Distributed vSwitch is possible until vCenter is restored. For this reason some virtualization admins prefer that infrastructure VMs such as vCenter, SQL, Domain Controllers and other VMware services and appliances remain on a Standard vSwitch to allow for continued management even if vCenter is offline.
Distributed vSwitches also offer some features which are easier to configure than with Standard vSwitches, such as adjusting the MTU for Jumbo Frame support. In addition, there are some unique features, which include:

  • Private Virtual LAN support (PVLAN)
  • Port Binding
  • Traffic Shaping for both inbound and outbound traffic
  • Port Policies and Port Mirroring
  • Network IO Control
  • NetFlow
  • Network Rollback and Recovery
  • Health Check
  • Enhanced LACP Support
  • An additional load-balancing option on a Distributed Portgroup called "Route based on physical NIC load"

Managing a Distributed vSwitch (Web Client)

Creating Distributed Switch

1. In the Web Client, right-click a Cluster or DataCenter, and select Distributed Switch and New Distributed Switch
Screen Shot 2016-04-28 at 10.34.56.png

Note: It is possible to export the configuration of a Distributed Switch to a file, and then import it - this can be used as a method of backing up a Distributed Switch configuration (a PowerCLI sketch of this appears at the end of this section).
2. Type a friendly name for the Distributed Switch. Names must be unique to the entire vCenter namespace, and commonly reflect a cluster of VMware ESXi hosts. Distributed vSwitches can span clusters, but many virtualization administrators prefer to curtail the scope to the hosts in a single cluster. This means administrative changes impact a smaller number of physical hosts. In this case DSswitch-GoldCluster01 was used to reflect the naming structure envisioned for the VMware Clusters that will be created.
Screen Shot 2014-02-11 at 13.49.46.png
3. Next, select what version of Distributed Switch you wish to use. Older formats are supported for backwards compatibility in situations where vCenter is managing a cluster of VMware ESXi hosts that have yet to be upgraded to the latest version.
Screen Shot 2018-02-21 at 11.19.13.png
4. Select the number of uplinks to be assigned. The default here is 4 vmnics. In our case vmnic0/1 have been assigned to a Standard vSwitch (vSwitch0), leaving another two physical NICs available (vmnic2/3). These physical adapters will be assigned to the Distributed Switch when we add the VMware ESXi hosts to the Distributed Switch in a later wizard. Network I/O Control (NIOC) allows for advanced methods of triaging and prioritising network traffic, which makes it a more sophisticated method than using Traffic Shaping on its own. Finally, you can create an initial portgroup. If you wish for VLAN tagging to be supported, this is done by modifying the settings of the portgroup after it has been created in this wizard.
Screen Shot 2014-02-11 at 13.50.16.png
5. Click Finish to complete the wizard. Notice how this wizard indicates further steps, including adding VMware ESXi hosts to the Distributed Switch.
Screen Shot 2016-04-28 at 10.48.53.png
If you switch to the Networking view in the Navigator pane of the Web Client, after a short while the Distributed Switch should appear like so:
Screen Shot 2014-02-11 at 13.50.51.png
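The note above mentions exporting the Distributed Switch configuration as a backup. A minimal PowerCLI sketch of that idea is shown below - the switch name matches the one used later in this article, while the file path and datacenter name are assumptions for illustration.
# Export the Distributed Switch configuration (including its portgroups) to a backup file
Export-VDSwitch -VDSwitch (Get-VDSwitch -Name "DSwitch0-GoldCluster01") -Destination "C:\Backups\DSwitch0-GoldCluster01.zip" -Description "Pre-change backup"

# Later, a replacement switch could be created from that backup file
New-VDSwitch -BackupPath "C:\Backups\DSwitch0-GoldCluster01.zip" -Location (Get-Datacenter -Name "New York")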

Adding VMware ESXi hosts

Once a Distributed Switch has been created we can make it accessible to the VMware ESXi hosts, and select which physical vmnics will carry network traffic out to a physical switch.
1. You can add VMware ESXi hosts by right-clicking the Distributed Switch and selecting Add and Manage Hosts...
Screen Shot 2014-02-11 at 17.03.38.png
2. Select Add Hosts
Screen Shot 2014-02-11 at 17.05.53.png
3. Click the green plus + to add hosts into the list. Enable the option to Configure identical network settings on multiple hosts (Template Mode). Template mode allows the administrator to select one reference VMware ESXi host and have its settings applied to all other hosts added to the Switch. This option massively reduces the amount of clicking needed to complete the configuration - but it does assume the physical configuration, such as vmnic numbering, is identical.
Screen Shot 2014-02-23 at 14.36.03.png
4. Select the server to be the template host, in our case we selected esx01nyc.corp.com
Screen Shot 2014-02-23 at 14.40.54.png
5. In our case we deselected Manage VMkernel Adapters, but kept Manage Physical Adapters selected. This is because we will look at migrating both virtual machine networking and VMkernel networking from Standard vSwitches to Distributed Switches later
Screen Shot 2014-02-11 at 17.08.31.png
6. Select a free physical network adapter, in our case vmnic2, and assign it to an Uplink container
Screen Shot 2014-02-23 at 14.42.24.png
7. Repeat this process for all the vmnics you wish to assign to the Distributed Switch. Once completed, click the Apply to all button to apply this configuration to all the other VMware ESXi hosts
Screen Shot 2014-02-23 at 14.44.47.png
8. vCenter will look at your management changes, and assess the likely impact on your infrastructure.
Screen Shot 2014-02-11 at 19.36.17.png
Clicking Finish will trigger the adding of the ESXi hosts to the Distributed Switch, and you can confirm the hosts are correctly added and connected:
Screen Shot 2018-02-21 at 11.25.43.png

Creating and Modifying Virtual Machine Portgroups

Note: Advanced settings and options on both the Distributed Switch and Portgroup will be covered later.
The default portgroup created alongside the Distributed Switch can be easily modified to support features such as VLAN tagging, and additional portgroups for VLANs can be easily added with the Web Client (a PowerCLI equivalent follows the steps below).
1. Right-click the target Distributed Switch, and select Distributed Port Group and New Distributed Port Group
Screen Shot 2016-04-28 at 11.38.06.png
2. Type in a friendly and unique name for the portgroup
Screen Shot 2014-02-12 at 09.09.13.png
3. Assign the VLAN Tagging option, and type in the VLAN ID
Screen Shot 2014-02-12 at 09.12.09.png
Notes: PVLAN configuration is covered later.
4. Click Finish to create the portgroup
Screen Shot 2014-02-12 at 09.26.56.png
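As a PowerCLI alternative referenced above, an existing Distributed Portgroup can also be tagged for a VLAN from the command line. A minimal sketch, assuming a portgroup named dVLAN101 and VLAN ID 101, and that the Set-VDVlanConfiguration cmdlet is available in your PowerCLI version:
# Tag an existing Distributed Portgroup with VLAN 101 (name and ID are illustrative)
Set-VDVlanConfiguration -VDPortgroup (Get-VDPortgroup -Name "dVLAN101") -VlanId 101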

Creating a VMkernel Portgroup

As well as virtual machine networking, it is possible to use Distributed Switches for VMware ESXi host networking as well. Although many VMware Admins still prefer to use Standard vSwitches for this type of functionality, there may be cases where the administrator is compelled to use just Distributed Switches. For instance, a physical server may only have two 10Gbps/20Gbps/40Gbps NIC interfaces which need to be teamed up in one switch technology - as a physical vmnic can only be used by one virtual switch at a time.
Creating a VMkernel Portgroup is a two-stage process: it involves interacting with the Distributed Switch, and with each VMware ESXi host to configure the VMkernel's IP settings, which are unique to that VMware ESXi host. In this example we will create a portgroup to enable VMware Fault Tolerance logging (a scripted equivalent follows these steps).
1. Right-click the target Distributed Switch, and select Distributed Port Group and New Distributed Port Group
Screen Shot 2016-04-28 at 11.38.06.png
2. Type in a friendly and unique name for the portgroup
Screen Shot 2014-02-12 at 10.05.22.png
3. Assign the VLAN Tagging option, and type in the VLAN ID
Screen Shot 2014-02-12 at 10.05.38.png
4. Click Finish to create the portgroup
Screen Shot 2014-02-12 at 10.46.18.png
Once the portgroup has been created we can transition to creating a VMkernel port.
5. Select the VMware ESXi host, and under the Configure tab, in the Networking section, select VMkernel Adapters
6. Click the Globe icon, to Add Host Networking
Screen Shot 2018-02-21 at 11.32.23.png
7. In the wizard, select VMKernel Adapter
Screen Shot 2014-02-12 at 13.52.03.png
8. Select the option to use an Existing Distributed Portgroup, and click the Browse button
Screen Shot 2014-02-12 at 13.55.30.png
9. In this case under VMkernel port settings we selected Fault Tolerance Logging
Screen Shot 2018-02-21 at 11.40.02.png
10. Configure appropriate IP settings as befits your network
Screen Shot 2014-02-12 at 14.26.00.png
11. After clicking Next, the Web Client should refresh, and show the new VMkernel portgroup with its assigned Distributed Switch. These configuration steps need to be repeated for each host
Screen Shot 2018-02-21 at 11.50.02.png
Note: For the configuration status for FT to switch from No to Yes, the VMware ESXi host must be part of a High Availability (HA) cluster
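For reference, the per-host VMkernel creation can also be scripted. The sketch below is an assumption-heavy illustration: it presumes a Distributed Portgroup named dFT already exists on the switch, uses made-up IP addresses, and assumes your PowerCLI version allows New-VMHostNetworkAdapter to target a Distributed Switch via -VirtualSwitch.
# Create a Fault Tolerance logging VMkernel port on each host (names and IPs are illustrative)
$vds = Get-VDSwitch -Name "DSwitch0-GoldCluster01"
$i = 1
foreach ($vmhost in Get-VMHost esx01nyc.corp.com, esx02nyc.corp.com, esx03nyc.corp.com) {
    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vds -PortGroup "dFT" -IP "172.168.5.10$i" -SubnetMask "255.255.255.0" -FaultToleranceLoggingEnabled:$true
    $i++
}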

Advanced Distributed vSwitch Settings (Web Client)

The Distributed Switch offers up a whole host of advanced settings applicable to the Switch and the portgroup. Not all of these settings will be applicable, important or supportable in your deployment. However, we have chosen to cover these in as much detail as possible - highlighting the options that appear to be most commonly required in most environments.

Topology

The topology view is a good way to get an overhead view of the Distributed Switch configuration. Here we can see the three distributed portgroups; the IP addresses used by the FT VMkernel portgroup; the physical NICs associated with the hosts; and the Uplink container. The small (i) information icon allows you to view the settings on each component.
Screen Shot 2018-02-21 at 11.55.55.png

Properties

The general properties options allow you to:
  • Rename the Distributed vSwitch
  • Decrease/Increase the number of Uplink containers, as well as rename them
  • Enable/Disable Network IO Control (Enabled by default)
  • Set a description
Screen Shot 2014-02-12 at 15.11.31.png
The Advanced properties options allow you to:
  • Set the MTU size for Jumbo Frames
  • Configure the Cisco Discovery Protocol settings, and switch to Link Layer Discovery Protocol
  • Set a name and contact details for the Administrator
Screen Shot 2018-02-21 at 12.06.02.png
The MTU value is applied to all communications passing to/from the Distributed Switch - and it's important that all paths in the communication flow are configured for the correct MTU size. If the MTU change affects a virtual machine portgroup, the MTU should also be adjusted within the guest operating system to a matching size. This is to avoid a scenario called fragmentation. If a 9000 MTU Ethernet packet encounters a 1500 MTU system, the packet will be split into 6x1500 packets, which will actually reduce performance and increase the overhead on the device/system that carries out the fragmentation.
Assuming your physical switch supports Cisco Discovery Protocol (CDP), the support can be adjusted to Listen/Advertise/Both. Listen enables the vSphere Administrator to query and return information from the physical switch - this can be useful in diagnosing configuration mismatches. Likewise, Advertise allows the Cisco administrator to query the Distributed Switch as if it were a physical switch. Both allows for a combination of Listen/Advertise.
CDP is available for both Standard and Distributed Switches, but Link Layer Discovery Protocol (LLDP) is only available on version 5.0 Distributed Switches or higher. Assuming the physical switch supports one of these protocols, you should see information behind the (i) icon on a physical adapter like so:
Screen Shot 2014-02-12 at 16.26.28.png
CDP-web.png
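The MTU and discovery protocol settings described above can also be applied from PowerCLI; a minimal sketch, assuming the switch name used elsewhere in this article:
# Enable Jumbo Frames and switch the discovery protocol to LLDP in both directions
Get-VDSwitch -Name "DSwitch0-GoldCluster01" | Set-VDSwitch -Mtu 9000 -LinkDiscoveryProtocol LLDP -LinkDiscoveryProtocolOperation Both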

Migrating to Enhanced LACP

Introduction

Recommendation: If you intend to use LACP, consider deploying it from day one rather than migrating from Distributed Uplink containers. The migration is simple, but adds additional steps that would otherwise be unnecessary.
Image Source: KB2051826
GUID-2F8C5C5F-F4AC-4DB7-90BC-963EFB87B1CD-high.png
Distributed Switches support an enhanced version of LACP which allows for the use of dynamic link aggregation. Prior to vSphere 5.5 only a single Link Aggregation Group (LAG) could be created - with the release of vSphere 5.5 up to 64 LAGs can be created per Distributed Switch. Despite these new enhancements, a number of configuration requirements do exist in terms of the current implementation - consult KB2051307 for further details. There is more detailed documentation that outlines the differences between vSphere 5.0, 5.1 and 5.5 which you should consult if you are running in a mixed environment - these are detailed in KB2051316
However, expressed briefly, LACP currently has the following limitations:
  • It is not compatible with software iSCSI multipathing.
  • It is not supported between two nested ESXi hosts (virtualized ESXi hosts).
  • LACP cannot be used in conjunction with the ESXi dump collector. For this feature to work, the VMkernel port used for management purposes must be on a vSphere Standard Switch.
  • Port Mirroring cannot be used in conjunction with LACP to mirror LACPDU packets used for negotiation and control.
  • The teaming health check does not work for LAG ports, as the LACP protocol itself is capable of ensuring the health of the individual LAG ports. However, the VLAN and MTU health checks can still check LAG ports.

The VMwareTechPub YouTube channel has a short video which explains how enhanced LACP functions, with a second video that guides you through the process.
{{#ev:youtube|QWUXL7N4HFQ||center|Understanding Enhanced LACP Support in vSphere Distributed Switches with Ravi Soundararjan}}
{{#ev:youtube|ATpGa1EqvCs||center|Configuring Enhanced LACP Support in vSphere Distributed Switches with Ravi Soundararjan}}
LAGs are similar to the Uplink containers in the Distributed Portgroup. When you define a Distributed Switch you set how many Uplink containers you require - the same is true with LAGs. You define the LAG, and then indicate how many vmnics it will support. If you intend to use LACP as the primary method for communications, you could choose to create a Distributed Switch with 0 Uplinks, and then define the LAGs. This would mean there would be no need to migrate from using Distributed Uplinks to LAGs.

Create a Link Aggregation Group (LAG)

1. Select the Distributed Switch, click the Manage tab and select the Settings column
2. Click the green + to create a LAG
3. Set a friendly name for the LAG, and specify how many network ports (or vmnics) it will support - in this case our server has four network cards, and our Standard vSwitch0 is using vmnic0/1, while the Distributed Switch is using vmnic2/3. Change the mode from Passive to Active, and this will trigger a negotiation with the physical switch. Finally, select a load-balancing algorithm supported by the physical switch
Screen Shot 2014-02-21 at 14.49.31.png

Reconfigure Portgroups to use the LAG in Standby Mode

Next we will reassign the portgroups that we want to use with LACP. This is a two-stage process to avoid a situation where VMs or other traffic become disconnected.
1. Right-click the Distributed Switch, select Manage Distributed Portgroups
2. Select Teaming and Failover
Screen Shot 2014-02-21 at 14.55.15.png
3. Select the Portgroups you wish to modify. In our case we selected dVLAN101 and dVLAN102
Screen Shot 2014-02-21 at 14.56.01.png
WARNING: During the migration phase the selected Distributed Portgroups will have connectivity to both Uplinks and LAGs. However, once the VMware ESXi hosts have their physical NIC assignments changed, it is entirely possible for a portgroup to find that its Uplink containers contain no vmnics at all - as such communication would cease. So care must be taken to avoid situations where network outages are generated.
4. Next move the LAG from Unused Uplinks to Standby Uplinks. This is an intermediary configuration as we migrate from using Uplinks to LAGs as the container for assigning physical vmnics. You cannot use LAGs and Uplinks together
Screen Shot 2014-02-21 at 14.58.46.png
If you do move the LAG in this state to the Active state, the Web Client will warn you that this is an improper configuration.
Screen Shot 2014-02-21 at 15.01.34.png
When you click Next, the Web Client will warn you that normally Uplinks and LAGs cannot be used together - and that they are only supported in this arrangement during the migration stage.
Screen Shot 2014-02-21 at 15.07.03.png
5. Click Next and Finish
Note: Only one LAG can be used per portgroup. It is not supported to add more than one LAG either to Active or to Standby. So whilst it would be perfectly possible to assign LAG1 (containing vmnic2/3) to one portgroup and LAG2 (containing vmnic4/5) to another portgroup - LAG1 and LAG2 could not be assigned to the SAME portgroup, as the screen grab below demonstrates.
Screen Shot 2014-03-03 at 12.57.21.png

Reassign vmnics from the Uplink to the LAG

1. Right-click the Distributed Switch and select Add and Manage Hosts
2. Select Manage Host Networking
3. Attach to the hosts allocated to the Distributed Switch. Enable the option Configure identical network settings on multiple hosts (Template Mode). Template mode allows the administrator to make a change to one host, and have that be the assumed configuration for all the hosts.
4. Select one of your VMware ESXi hosts to be the template host
5. Deselect Manage VMKernel Adapters, and ensure that Manage Physical Adapters is selected
6. In the Manage physical adapters page, notice how all the vmnics are assigned to the Uplink container.
Screen Shot 2014-02-21 at 15.25.52.png
Select the first vmnic, in our case this is vmnic2, and click the Assign Uplink option - in the subsequent dialog box select a free network port within the LAG group...
Screen Shot 2014-02-21 at 15.27.40.png
Once finished, click the Apply to all to have these settings applied to all the VMware ESXi hosts
Screen Shot 2014-02-21 at 15.29.57.png

Assign the LAG to be Active for Distributed Portgroups

1. Right-click the Distributed Switch, select Manage Distributed Portgroups
2. Select Teaming and Failover
Screen Shot 2014-02-21 at 14.55.15.png
3. Select the Portgroups you wish to modify. In our case we selected dVLAN101 and dVLAN102
Screen Shot 2014-02-21 at 14.56.01.png
4. Next move the LAG to be the Active uplink, and move the Uplink containers to Unused uplinks. Once the LAG is fully active, the load-balancing settings on a Distributed Portgroup are overridden by the settings on the LAG itself.
Screen Shot 2014-02-21 at 16.26.45.png
From the Topology view on the Distributed Switch you can see how it is the LAG group containing vmnic2/3 that is now responsible for the traffic.
Screen Shot 2014-02-21 at 16.41.02.png

Private VLAN

The VMwareKB YouTube channel has a short video which explains how PVLANs work, which you might find useful if you're configuring this feature for the first time.
{{#ev:youtube|u3lNqB-MBeo||center|Private VLAN on a vNetwork Distributed Switch - Concept Overview}}
Private VLANs (PVLANs) are an extension to the VLAN standard, and allow for the extension of a single VLAN into secondary PVLANs. These secondary PVLANs reside within the domain of a primary VLAN. As with VLANs, VMware's implementation of PVLANs allows the Distributed Switch to be aware of the underlying physical switches' PVLAN configuration. PVLANs are usually seen within Service Providers, ISPs and government bodies - they are less prevalent in the corporate world, and hardly ever configured in the SMB market except in unique use cases.
There are three types of Secondary PVLAN:
  • Promiscuous
  • Isolated
  • Community
Screen Shot 2014-02-26 at 15.26.07.png
Each type determines how packets are forwarded. A node attached to a Promiscuous PVLAN can send and receive packets to ANY other node residing on a Secondary PVLAN associated with it. Typically, devices such as routers are attached to Promiscuous PVLANs to act as a gateway to further networks. An Isolated PVLAN has more limited communication properties: it can only communicate to and from its configured Promiscuous PVLAN. It cannot communicate to other Isolated PVLANs or to Community PVLANs - additionally, nodes WITHIN an Isolated PVLAN cannot communicate with each other either. Isolated PVLANs are typically used to create a DMZ for firewall purposes - as one compromised node cannot be used as a source to launch attacks on others. The Community PVLAN allows a node to send and receive packets from other ports within the same PVLAN, and to communicate to the Promiscuous PVLAN. For a Distributed Switch to interact with a PVLAN implementation it must support and be enabled for 802.1Q tagging. Physical switches can be confused by the fact that in some cases each MAC address is visible in more than one VLAN tag. For this reason the physical switch must trunk to the ESXi host and not be in a secondary PVLAN.

The configuration begins with defining the Promiscuous PVLAN, and then adding the various secondary PVLANs with which it is associated. The PVLANs are defined as a flat list, with the primary PVLAN ID number in the left-hand column of the UI and the Secondary PVLANs defined on the right-hand side. Once the Promiscuous, Isolated and Community PVLANs have been defined, they can be utilised by Distributed Portgroups
Define the Promiscuous PVLAN
1. Select the Distributed Switch, and click the Configure tab
2. Select Private VLAN, and click the Edit button
3. In the subsequent dialog box, in the left-hand column click Add and type the PVLAN ID number that represents the Promiscuous PVLAN - in this case PVLAN ID 103
Screen Shot 2018-02-21 at 12.23.32.png
Notice how alongside every Promiscuous PVLAN, a secondary PVLAN is created.
4. Next in the same dialog box, in the right-hand column click Add, and type the PVLAN ID number that represents your Isolated or Community PVLAN.
Screen Shot 2014-02-21 at 11.17.05.png
5. Once the configuration is completed the Web Client will refresh to show the new configuration.
Screen Shot 2014-02-21 at 11.21.28.png
Notice how the Community (203) and Isolated (204) PVLANs are configured to speak via the Promiscuous PVLAN (103)
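The same PVLAN map can be defined with PowerCLI. A minimal sketch, assuming the IDs used above (103 primary, 203 community, 204 isolated) and the switch name used elsewhere in this article; depending on the PowerCLI version the promiscuous entry may be created implicitly when the first secondary PVLAN is added.
# Define the Promiscuous primary PVLAN and its secondary PVLANs (IDs are those used in this example)
$vds = Get-VDSwitch -Name "DSwitch0-GoldCluster01"
New-VDSwitchPrivateVlan -VDSwitch $vds -PrivateVlanType Promiscuous -PrimaryVlanId 103 -SecondaryVlanId 103
New-VDSwitchPrivateVlan -VDSwitch $vds -PrivateVlanType Community -PrimaryVlanId 103 -SecondaryVlanId 203
New-VDSwitchPrivateVlan -VDSwitch $vds -PrivateVlanType Isolated -PrimaryVlanId 103 -SecondaryVlanId 204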
Create PVLAN Enabled Distributed Portgroups
Now that the PVLANs have been defined on the Distributed Switch, Distributed Portgroups can be created to reference them. Friendly names can be used to distinguish their function and purpose
1. Right-click the Distributed Switch, and select New Distributed Portgroup
2. Type a friendly name such as pVLAN103-CorpBackBone
3. Under VLAN and VLAN Type, select Private VLAN, and select one of the Promiscuous PVLAN(s) created earlier
Screen Shot 2014-02-21 at 11.27.10.png
The drop-down list shows the type (Promiscuous, Isolated, Community) together with the Promiscuous PVLAN number first (103), followed by that PVLAN's own unique PVLAN number (203 or 204)
4. Next we can create portgroups to utilize the other PVLANs that are accessible - these can be used by VMs and virtual firewalls and routers
Screen Shot 2014-02-21 at 11.32.42.png

NetFlow

Under a portgroup's Monitoring settings it is possible to enable the NetFlow protocol. The NetFlow collector IP address and port number are configured in the NetFlow settings of the Distributed Switch. By default NetFlow is disabled on a Distributed Portgroup.
To use NetFlow you will need a collector that gathers the statistics generated by the NetFlow protocol. There are a number of free open-source NetFlow collectors as well as commercially available ones. This section uses a 30-day evaluation of Scrutinizer NetFlow & sFlow Analyzer from Plixer. They also have a free edition with some feature limitations: it supports unlimited interfaces on up to 5 routers and stores data for just 24 hours. They also have a Virtual Appliance Edition for which you can apply for a license key.
1. Start by modifying the NetFlow Settings on Distributed Switch underneath the Manage tab, and Edit settings
Screen Shot 2014-02-14 at 11.02.28.png
2. In this configuration box you will need to set the IP address of the collector, together with the port number the NetFlow collector listens on for NetFlow traffic - typically Scrutinizer listens for devices on ports 2055, 2056, 4432, 4739, 9995, 9996 and 6343 by default. The second IP address is a management address assigned to the Distributed Switch (this is not the IP address of the physical switches to which the VMware ESXi hosts are connected).
3. Next configure your Advanced Settings which control the rate of collection.
Screen Shot 2014-03-10 at 13.42.57.png
4. Finally, enable the NetFlow setting on the properties of a Distributed Portgroup
Screen Shot 2014-02-14 at 11.03.02.png
5. After a short while the NetFlow collector should start receiving information from the Distributed Switch. In the case of Scrutinizer it's possible to create groupings - gathering your various Distributed Switches into a single view, separate from other devices reporting NetFlow information.
Screen Shot 2014-03-10 at 13.50.40.png
From the Reports node within Scrutinizer, under the registered Distributed Switch, it's possible to run reports that cover different time periods and traffic types. For instance, this 24-hour report shows traffic between various VMs configured for the Distributed Switch.
Screen Shot 2014-03-10 at 13.53.32.png

Port Mirroring

Screen Shot 2014-02-21 at 11.42.43.png
Port Mirroring allows the capturing of packets between two VMs on the same Distributed Portgroup. Communication between two VMs can be viewed by a third VM running packet capturing software such as Wireshark. This can be used to facilitate network troubleshooting, and as a source of data for other network analysis appliances. Many physical switch vendors support port mirroring - so in a similar way Distributed Switches can offer like-for-like features. As with other features of the Distributed Switch, port mirroring fundamentally makes the virtual layer aware of features and functions at the physical layer - so port mirroring must also be enabled on the physical switch. Therefore consult your vendor's documentation for the steps to enable this functionality if it has not already been carried out.
Note: Descriptions come directly from VMware's online help documentation.
  • Distributed Port Mirroring: Mirror packets from a number of distributed ports to other distributed ports on the same host. If the source and the destination are on different hosts, this session type does not function. This is an important consideration as VMs can be moved from one VMware ESXi host to another via the action of a manual vMotion, Maintenance Mode or DRS.
  • Remote Mirroring Source: Mirror packets from a number of distributed ports to specific uplink ports on the corresponding host.
  • Remote Mirroring Destination: Mirror packets from a number of VLANs to distributed ports.
  • Encapsulated Remote Mirror (L3) Source: Mirror packets from a number of distributed ports to remote agent’s IP addresses. The virtual machine’s traffic is mirrored to a remote physical destination through an IP tunnel.
  • Distributed Port Mirroring (Legacy): Mirror packets from a number of distributed ports to a number of distributed ports and/or uplink ports on the corresponding host.
Port Mirroring sessions are defined on the Distributed Switch. In this example a Remote Mirroring Destination session is being used, where a monitor placed on VLAN102 is used to monitor VMs on VLAN101.
1. Select the Distributed Switch, click the Manage tab, and the Settings column
2. Underneath Port mirroring, click the Green + symbol to add a session
Screen Shot 2014-02-21 at 12.32.50.png
3. Select Remote Mirroring Destination
4. Next give the Mirror Session a friendly name, and enable it - and configure your Advanced properties
Screen Shot 2014-02-21 at 12.42.30.png
Normal I/O on destination ports (allow or disallow) is only available for uplink and distributed port destinations. If you disallow this option, mirrored traffic will be allowed out on destination ports, but no traffic will be allowed in. Mirrored packet length puts a limit on the size of mirrored frames. If this option is selected, all mirrored frames are truncated to the specified length. Sampling rate is enabled by default for all port mirroring sessions except legacy sessions.
5. In Select Sources, click the green + to add a VLAN ID. This is the location of the network sniffer. Notice the description says this is the traffic from WHICH packets are to be mirrored.
Screen Shot 2014-02-21 at 12.48.11.png
6. In the "Select Destinations" there are two methods indicate which ports on Distribute Portgroup will be monitor - using a point and click method:
Screen Shot 2014-02-21 at 12.52.40.png
Alternatively, the other method allows you to type a range of port numbers to be monitored:
Screen Shot 2014-02-21 at 12.58.16.png

Health Check

Caution: There have been anecdotal tales that enabling health check can cause problems. Allegedly this is to do with the method by which the L2 switch is being inspected. I personally have never had a problem - and I've found the feature to be incredibly useful (ML)
Important: Remember if a VLAN is inaccessible you will get two alarms - one for the VLAN and another for the MTU value. The MTU value cannot be retrieved until the VLAN is accessible. So the VLAN alarm must be resolved first before diagnosing any potential MTU mismatch issue.
Health Check allows vSphere to analyse the attributes of the Distributed Switch, and compare them to the attributes of the physical switch. It can be used to highlight misconfigurations such as invalid MTU sizes, VLANs, NIC Teaming and failover. For instance, an alarm can be raised when the Distributed Switch refers to a VLAN on a portgroup which isn't available at the physical layer. By default Health Check is not enabled.
Enabling Health Check
1. Select the Distributed Switch, and click the Configure Tab
2. Select Health check
3. Click the Edit button to enable the feature.
Screen Shot 2014-02-13 at 13.42.16.png
Viewing Health Check Status
1. Select the Distributed Switch, and click the Monitor tab
2. Select the Health Column
3. Problems will be flagged with a yellow exclamation mark on each affected host.
Screen Shot 2014-02-13 at 13.49.12.png
In this case the health check flags that although VLAN101 is available, VLAN102 is not accessible to either vmnic2 or vmnic3 on the host. With further investigation it became clear that whilst VLAN101/103/104 had the vmnic2/3 of each host enabled for tagging, VLAN102 had not been enabled for VLAN tagging. In error, the FT portgroup was not enabled for a VLAN and was attempting to use VLAN0, which did not exist.
These issues were resolved and the health check refreshed like so - with VLAN103 being used for the FT communication.
Screen Shot 2014-02-13 at 13.58.01.png

Using Network I/O Controls and Network Resource Pools

Network I/O Control together with Network Resource Pools allows for sophisticated management of all the traffic types available on a Distributed Switch. There are a number of built-in Network Resource Pools, as well as the capacity to create user-defined pools. The built-in pools support the following traffic types:
  • Fault Tolerance
  • iSCSI
  • NFS
  • vMotion
  • Management
  • vSphere Replication (VR)
  • Virtual Machine
The settings on these "System" Network Resource Pools can be modified - and no work has been done to assign them to a portgroup. VMware ESXi automatically recognises the traffic type leaving the host, and assigns the settings. By default all traffic is treated equally, except for Virtual Machine traffic which is given a share value of "High".
Screen Shot 2014-02-20 at 14.48.44.png
These default settings can be adjusted to affect the traffic.
Optionally, user-defined Network Resource Pools allow the administrator to define their own pools - and configure custom settings. This consists of a proportional "share" value. This setting is usually configured with parameters of Low, Normal and High, and a Custom option allows allocation of a number such as 100. The label or numerical value allows a higher allocation of resources to be granted when contention occurs. Under normal operations, when network resources are not scarce, the traffic types are able to use as much of the bandwidth as is available. However, when contention kicks in because of a lack of bandwidth, the share system guarantees a predictable outcome. Typically, administrators allocate a higher share value to mission-critical traffic such as iSCSI, NFS, Management and Virtual Machines - whereas ancillary traffic which is more expendable and used in background processes, such as vMotion or vSphere Replication, receives a lower value. In addition to the share allocation, a limit on bandwidth can be granted per traffic type on these built-in Network Resource Pools.
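As a purely illustrative calculation (the share values and the 10Gbps uplink below are assumptions, not defaults), the bandwidth each traffic type would receive under full contention is simply its share divided by the total of all active shares:
# Illustrative arithmetic only: proportional bandwidth under contention on a hypothetical 10Gbps uplink
$uplinkGbps = 10
$shares = @{ 'Virtual Machine' = 100; 'iSCSI' = 100; 'vMotion' = 50; 'vSphere Replication' = 25 }
$total = ($shares.Values | Measure-Object -Sum).Sum
foreach ($type in $shares.Keys) {
    $gbps = [math]::Round($uplinkGbps * $shares[$type] / $total, 2)
    "{0,-20} shares={1,3}  ~{2} Gbps under contention" -f $type, $shares[$type], $gbps
}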
Enabling Network I/O Controls
By default Network I/O Control is enabled in the settings of a Distributed Switch. You can confirm that it is still enabled by navigating to the Manage tab, Resource Allocation column
Screen Shot 2014-02-20 at 12.28.08.png
If this is not the case:
1. Right-Click the Distributed Switch, and select Edit Settings
2. Under General, ensure that Network I/O Control is set to be Enabled.
Screen Shot 2014-02-20 at 12.29.46.png
Create a new User-defined Network Resource Pool
1. Click the Distributed Switch, and select the Manage tab, and the Resource Allocation column
2. Click the green plus + to create a new Network Resource Pool
Screen Shot 2014-02-20 at 14.54.26.png
In this case the Network Resource Pool only allows for 10Mbps with a low share value, together with a QoS tag of 0. QoS tags use the IEEE 802.1p tag system to prioritise traffic at the physical switch. The higher the QoS setting, the higher the priority allocated to that frame on the switch. QoS tags begin numbering at 0 and go as high as 7.
0 - Background
1 - Best Efforts
2 - Excellent Efforts
3 - Critical Applications
4 - Video, <100ms latency
5 - Video < 10ms latency
6 - Internetwork Control
7 - Network Control
3. This process can be repeated to create as many classifications of traffic as you wish - in this case two user-defined network resource pools were created like so:
Screen Shot 2014-02-20 at 15.07.23.png
Allocate a Distributed Portgroup to a Network Resource Pool
1. Next we can assign the Network Resource Pool to the correct portgroup(s). Right-click the portgroup, and select Edit Settings
2. Under General, click the pull-down list to assign the Network Resource Pool
Screen Shot 2014-02-20 at 15.10.09.png
3. Once assigned, it is possible to see which Network Resource Pool is allocated to which portgroup, and which virtual machines are affected by its settings by being configured for that portgroup.
Screen Shot 2014-02-20 at 15.14.45.png
If you wish to see user-defined Network Resource Pools in action vExpert Eric Sloof has a good video on his YouTube Channel -

Advanced Distributed Portgroup Settings (Web Client)

General

The General settings on a Distributed Portgroup allow the administrator to:
  • Set the portgroup name
  • Control how ports are consumed and used by virtual machines
  • Set the number of ports
  • Assign a Network Resource Pool
  • Assign a description
Screen Shot 2014-02-13 at 14.26.08.png
It's possible to think of a Distributed Switch as if it were a physical switch. Physical switches come with a limited and predefined number of ports to which Ethernet-ready devices can be attached - typically 8/16/24/48 ports. Once a device is using a port it cannot be used by another device until the original device is unplugged. With a virtual switch the number of ports is configurable, and a combination of two settings, Port Binding and Port Allocation, controls how they are consumed. A Distributed Switch does have a finite number of supported ports, which is outlined in the maximum configuration guide.
Screen Shot 2018-02-21 at 12.35.18.png
Static Binding with an Elastic port allocation is enabled as the default, and was introduced in vSphere 5.0. With this configuration the number of ports used rises and falls as needed based on the creation of new VMs. For performance reasons Static Binding/Elastic is regarded as the best option. The table below outlines the other possible configurations, and the result of making the changes.
Port Binding | Port Allocation | Result
Static Binding | Elastic | Assigns a port to a VM when it connects to the portgroup; when the default 8 ports are used, another 8 are added. A VM is allocated a port on the Distributed Portgroup at creation, and is then bound to that port. Even when powered off it claims that port until it is deleted or moved to a different portgroup. Static Binding is useful if you want to override the portgroup settings with specific per-VM, per-port settings. The Advanced options control which settings are configurable on a per-port basis. By default only per-port blocking is allowed as a portgroup override.
Static Binding | Fixed | As above, but once the default 8 ports are used, no further ports are added. Increase the number of ports to account for the number of VMs. VMs with 2 NICs consume 2 ports.
Dynamic Binding | N/A | No longer recommended. Deprecated since vSphere 5.0. Retained for backwards compatibility and upgrades only.
Ephemeral Binding | N/A | Ports are created on demand; allocating a number of ports is not available. Ports are allocated on a power-on basis, and once the VM is powered off the port is available for another VM to use.
When a VM is configured for a portgroup on a Distributed Switch, it is automatically assigned a port, starting with port 0 initially. A VM can be assigned to a specific port on the portgroup if necessary. The administrator can view the VMs and the ports they are using from the Ports column on the portgroup.
Screen Shot 2018-02-21 at 12.42.18.png
The port assignment is optional, and is configured on a VM when it is allocated access to the portgroup:
Screen Shot 2018-02-21 at 12.44.45.png
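The port binding and allocation settings described above can be reviewed from PowerCLI; a minimal sketch, where the Set-VDPortgroup line is an assumption about parameter support in your PowerCLI version and is shown commented out:
# Review how each Distributed Portgroup allocates its ports
Get-VDSwitch -Name "DSwitch0-GoldCluster01" | Get-VDPortgroup | Select-Object Name, PortBinding, NumPorts

# Assumed example: switch a portgroup to Static binding with a fixed 16 ports
# Get-VDPortgroup -Name "dVLAN101" | Set-VDPortgroup -PortBinding Static -NumPorts 16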

Advanced

The Advanced settings on a Distributed Portgroup allow for the overriding of portgroup settings on a per-port basis for a VM. This allows an individual VM connected to the portgroup to have its own unique settings. Controls allow these overrides to be temporary, so the next time the VM is powered down and up again, it inherits the settings from the portgroup.
Screen Shot 2014-02-13 at 15.57.23.png
By default a VM inherits all the settings from the portgroup - and the only per-port override permitted by default is "block ports". This allows communication to a specific VM to be blocked. Any per-port settings that are applied reset at the next disconnect. You can modify the settings on a per-VM, per-port basis by locating the VM on the port and clicking Edit Distributed Port Settings:
Screen Shot 2018-02-21 at 12.56.19.png
By default you will find the other settings on the port will be disabled, and this is due to the setting under Advanced on the portgroup.

Security

Security settings on a Distributed Switch portgroup are exactly the same as those found on the properties of a Standard Switch or its portgroups. The following information is a direct copy of the information from the Standard Switch content.
Screen Shot 2014-02-13 at 16.50.22.png
By default Promiscuous Mode is set to Reject - and this prevents packet capturing software installed in a compromised virtual machine from being used to gather network traffic to facilitate a hack. Nonetheless it could be modified by a genuine network administrator to capture packets as part of network troubleshooting. Even with this option set to Reject, the VM still receives the packets addressed to it. Another reason to change this option to Accept is if you want to run intrusion detection software inside a VM. Such intrusion detection needs to be able to sniff network traffic as part of its process of protecting the network. Finally, a less well-known reason for loosening the security on promiscuous mode is to allow so-called "Nested ESXi" configurations, where ESXi is installed into a VM. This is generally done in homelab and testing environments, and is not generally recommended for production use.
MAC Address Changes and Forged Transmits are both related to MAC addresses. Certain availability software allows the guest operating system to send out traffic using a MAC address that is different from the one assigned to the virtual machine. This can happen in certain clustering environments such as Microsoft NLB or Microsoft Clustering Services (MSCS), where a "virtual IP" address is assigned a "virtual MAC" address to allow inbound/outbound access to the cluster. Configuring Reject for MAC Address Changes and Forged Transmits would cause such software to malfunction, so the setting is a compromise between security and usability.
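These three security settings can also be read and changed per Distributed Portgroup from PowerCLI; a minimal sketch, with the portgroup name as an assumption:
# Inspect the current security policy of a Distributed Portgroup
Get-VDPortgroup -Name "dVLAN101" | Get-VDSecurityPolicy

# Allow promiscuous mode (for example for an IDS appliance or a nested ESXi lab), leaving the MAC settings untouched
Get-VDPortgroup -Name "dVLAN101" | Get-VDSecurityPolicy | Set-VDSecurityPolicy -AllowPromiscuous $true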

Traffic Shaping

The settings controlling traffic shaping on a Distributed Switch are similar to those on a Standard Switch, except that with a Distributed Switch the portgroup can control both outbound (egress) and inbound (ingress) traffic. By default no traffic throttling is enabled.
Screen Shot 2014-02-13 at 16.52.20.png
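Ingress and egress shaping can likewise be scripted per Distributed Portgroup; a minimal sketch, with the portgroup name and bandwidth figures as assumptions:
# Enable egress (outbound) shaping on a Distributed Portgroup - bandwidth values are illustrative
Get-VDPortgroup -Name "dVLAN101" | Get-VDTrafficShapingPolicy -Direction Out | Set-VDTrafficShapingPolicy -Enabled $true -AverageBandwidth 100000000 -PeakBandwidth 200000000 -BurstSize 50000000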

Teaming and Failover

Screen Shot 2014-02-14 at 10.14.26.png
The NIC Teaming and Failover settings control a rich array of possible configurations and options. Load Balancing supports five different options, including (1) Route based on IP Hash, (2) Route based on MAC Hash, (3) Route based on Originating Port and (4) Use explicit failover order. Route based on originating port is essentially load-balancing controlled by a round-robin across the teamed network interfaces, and is the default because it is highly compatible with many physical switch configurations. Route based on IP Hash is considered the most optimal for performance. However, for this method to function the physical switches must be enabled for the 802.3ad Link Aggregation standard, and IP data is used to select the best physical NIC with which to send the traffic. Although a load is placed on all the NICs, the maximum throughput is limited to the maximum transfer rate of the given physical NIC.
Distributed Portgroups have an additional load-balancing option called (5) Route based on physical NIC load. This dynamically looks at the burden being taken by a NIC - once the load approaches saturation, it then distributes the load to other NICs in the team. For best performance optimization it is recommended to use EtherChannel together with Route based on IP Hash.
Screen Shot 2014-02-14 at 10.22.59.png
With the Explicit Failover Order option it is possible to mark one Uplink container as Active and another as Standby. This is an improvement on the method used by Standard Switches - firstly, the configuration needs to be done only once, affecting all the ESXi hosts added to the Distributed Switch. Secondly, because the change is made on the Uplink container - rather than the vmnic - the physical NIC layout of the ESXi host is abstracted. One host could be using vmnic5 with UpLink1, and another host could be using vmnic4 with UpLink1. This means ESXi hosts with differing numbers of physical NICs can be managed as if they were more alike.
Such a configuration looks like this:
Screen Shot 2014-02-14 at 10.30.46.png
This option works in conjunction with the Failback option of Yes/No. With Failback set to Yes, if a vmnic in UpLink1 failed, a vmnic in UpLink2 would take over - when the original vmnic became available, traffic would fail back (return) to the Active NIC. If the setting is changed to No, even if the Active NIC becomes available again, the traffic stays with the standby vmnic until such time as it experiences a failure. Such a configuration guarantees a dedicated allocation of bandwidth and is highly reliable. However, it is an uncommon configuration, as many consider dedicating a NIC to standby mode a waste of precious bandwidth. A configuration where all the physical Uplinks are marked as Active looks like this from the Distributed Switch topology view:
Screen Shot 2014-02-14 at 10.50.03.png
Whereas an Active/Standby configuration is represented like so - with the yellow line indicating which Uplink is currently taking the network load:
Screen Shot 2014-02-14 at 10.50.32.png
Network Failover Detection controls how VMware ESXi discovers that a physical network path is unavailable. It supports two options: Link Status Only and Beacon Probing. Link Status Only merely checks the NIC and its primary connection to the physical switch. It cannot validate if further upstream paths are valid. In contrast, Beacon Probing sends out a discovery packet to check the state of the networking. In theory it is a more intelligent method of detecting the true state of the network. However, in practice many switching architectures, including those from Cisco, often don't benefit from the use of beacon probing. Consult your switching vendor's best practices and recommendations before enabling beacon probing. VMware specifically advises against using Beacon Probing when a Distributed Portgroup is enabled for IP Hash load-balancing.
Notify Switches, when set to Yes, sends out a reverse ARP (RARP) packet which helps keep the physical switch aware of changes occurring in the network. Typically leaving the default in place is best practice, as it helps with the vMotion process when VMs are being moved across the network. Although the virtual MAC address of the VM does not change when it is moved from one host to another, the MAC addresses of the physical world certainly are different from one ESXi host to another. Microsoft NLB software, when configured in unicast mode, is incompatible with Notify Switches set to Yes. If Microsoft NLB is configured with multicast functionality then this is not a problem. Therefore if you are using Microsoft NLB within a VM, using multicast is recommended. Alternatively, a portgroup can be created purely for the Microsoft NLB cluster with Notify Switches set to No.
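The teaming and failover settings described above map onto a pair of PowerCLI cmdlets; a minimal sketch, with the portgroup and uplink names as assumptions:
# Use "Route based on physical NIC load" with link-status failover detection on one portgroup
Get-VDPortgroup -Name "dVLAN101" | Get-VDUplinkTeamingPolicy | Set-VDUplinkTeamingPolicy -LoadBalancingPolicy LoadBalanceLoadBased -FailoverDetectionPolicy LinkStatus -NotifySwitches $true -EnableFailback $true

# An explicit Active/Standby arrangement on the Uplink containers (uplink names are illustrative)
Get-VDPortgroup -Name "dVLAN102" | Get-VDUplinkTeamingPolicy | Set-VDUplinkTeamingPolicy -LoadBalancingPolicy ExplicitFailover -ActiveUplinkPort "Uplink 1" -StandbyUplinkPort "Uplink 2"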

Monitoring

Monitoring is where you will find the option to enable or disable NetFlow on a portgroup. NetFlow and its configuration is covered in detail in the Distributed Switch Settings earlier in this chapter.

Traffic Filtering and Marking

This feature allows the virtual network to be protected from unwanted traffic and security attacks, or to apply a QoS tag to control valid traffic. With this system you can restrict or prioritise the traffic that matches a rule. It supports CoS (Class of Service) tags at Layer 2 (values 0-7 are supported), and DSCP (Differentiated Services Code Point) values at Layer 3 (values from 0-63 are supported). Both ingress and egress traffic are supported, and there are a number of different ways of applying the rules (referred to as "qualifiers"), including VLAN tagging, Virtual Guest Tagging (VGT) if enabled, IP address/version and MAC address.
The following example is taken from the vSphere 5.5 Online Documentation Center, and shows how to mark traffic for Voice over IP communications. Voice over IP (VoIP) flows have special requirements for QoS in terms of low loss and delay. The traffic related to the Session Initiation Protocol (SIP) for VoIP usually has a DSCP tag equal to 26, which stands for Assured Forwarding Class 3 with Low Drop Probability (AF31). For example, to mark outgoing SIP UDP packets to a 192.168.101.0/24 subnet, you can use the following configuration.
1. Right-click the Distributed Portgroup, and Edit settings - and select Traffic filtering and marking
2. Change the status from Disabled, to Enabled
3. Click the green + symbol to add a New Network Traffic Rule
4. Type a friendly name such as Voice-over-IP
5. Enable the DSCP Tag option, and set the tag value to be 26 and change the Traffic Direction to be Egress
Screen Shot 2014-02-23 at 13.42.58.png
6. In the New Network Traffic Rule dialog box, click the green + to set a new qualifier. Select the option for a New IP Qualifier
Screen Shot 2014-02-23 at 13.44.31.png
7. Create a rule that sets the Protocol to UDP, with a Destination Port of 5060 and with a Source IP Address that matches 192.168.101.0 with a Prefix Length of 24
Screen Shot 2014-02-23 at 13.49.58.png
After clicking OK in the New IP Qualifier dialog, the new Network Traffic Rule is ready to be enabled
Screen Shot 2014-02-23 at 13.55.25.png

Miscellaneous

Miscellaneous contains only one setting currently: the ability to block ports on the Distributed Portgroup. In this context it would shut down all communications on all ports on the Distributed Portgroup. This setting is perhaps best configured on the properties of an individual port on the portgroup, which would disable communications just for an individual VM.
In this case disabling all ports on a portgroup would have the same effect as pulling the network cable out of a physical machine, or disabling the NIC on the VM. As such, not only could the VMs not communicate outside of their designated VLAN, they would not be able to communicate with each other.
Screen Shot 2014-02-14 at 12.05.16.png

Migrating from Standard Switches to Distributed Switches (Web Client)

Note: Currently the desktop vSphere Client supports a drag-and-drop move of multiple VMs from one portgroup to another - however, the Web Client does not.
Generally, it is recommended to start using Distributed Switches from day one for virtual machines - so as to avoid a migration process altogether. However, this may not be possible due to licensing restrictions, or because Distributed Switches were not included in the original design at the deployment phase. Moving VMs from a Standard Switch portgroup to a Distributed Portgroup can be achieved in bulk using the Web Client. However, it's perhaps wise to test the configuration before embarking on a migration, and to claim a maintenance period for the affected VMs. That way, should an outage occur, no end-users or customers would be affected. It is also possible to migrate the VMkernel networking away from a Standard Switch if necessary.

Migrating VMs from Standard Switch to a Distributed Switch

Note: This process can also be used to return VMs back to the Standard Switch portgroup if necessary.
Migrating VMs from a Standard Switch portgroup to a Distributed Switch Portgroup can be achieved using a specific option within the "Add and Manage Hosts" wizards.
Screen Shot 2014-02-14 at 14.11.24.png
However, there is a dedicated wizard designed to "Migrate VM(s) to another network" which may be easier for those wishing to move large numbers of VMs from one portgroup to another.
1. Right-click the target Distributed Switch in the Web Client, and select Migrate VM to another network
Screen Shot 2014-02-14 at 14.16.15.png
2. In the wizard, under Source Network browse for the Standard Switch portgroup name
Screen Shot 2014-02-14 at 14.17.25.png
Note: The Distributed Portgroups were named dVLAN101 and dVLAN102 to distinguish them from the Standard Switch portgroups called VLAN101 and VLAN102
3. Under Destination Network browse for the Distributed Switch portgroup name
Screen Shot 2014-02-14 at 14.19.00.png
4. Select the VMs to be migrated to the Distributed vSwitch. The migration will only occur on the interfaces within the scope of the Standard Switch portgroup. Therefore a firewall or router with multiple interfaces will find its NICs connected to both Standard and Distributed Portgroups until all the interfaces have been migrated. During this migration process, so long as the VLAN and IP configuration remains unmodified, no disconnects or dropped packets occur.
Screen Shot 2014-02-14 at 14.21.47.png
5. Confirm the VMs have been relocated to the appropriate portgroup by checking the association on the properties of the Distributed Portgroup, under the Related Objects tab and Virtual Machines column.
Screen Shot 2014-02-14 at 14.23.32.png
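The same bulk move can also be scripted for very large numbers of VMs. A minimal PowerCLI sketch, assuming the portgroup names used in this example:
# Move every VM network adapter currently on the Standard Switch portgroup VLAN101 to dVLAN101
$destination = Get-VDPortgroup -Name "dVLAN101"
Get-VM | Get-NetworkAdapter | Where-Object { $_.NetworkName -eq "VLAN101" } | Set-NetworkAdapter -Portgroup $destination -Confirm:$false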

Migrating VMkernel Ports from the Standard Switch to a Distributed Switch

Alongside virtual machine traffic, it is possible to migrate VMkernel traffic - or traffic which is concerned purely with the VMware ESXi host - from a Standard Switch to the Distributed Switch. Many administrators prefer to keep this traffic on different virtual switch layers. However, you may be compelled to move all the traffic to the Distributed Switch if you have insufficient physical NICs. For example, a physical server with just two 20Gbps NICs cannot offer tolerance and load-balancing to both switch types, as a physical adapter cannot be associated with more than one virtual switch. However, in these types of environments, commonly associated with so-called "converged blade" systems, IO Virtualization can be used to present an array of "virtual adapters" up from the physical layer regardless of the number of actual physical NICs onboard. A good example of this would be HP "Virtual Connect" technology; Cisco refers to their implementation of IO Virtualization as Cisco Adapter Fabric Extender (or Adapter FEX), and occasionally as "SingleConnect".
Begin the migration process by creating a series of target Distributed Portgroups, and ensure that the correct VLAN tagging is in place. You may wish to temporarily place a VM on these portgroups to confirm that communications are valid. For example, place a VM on a Distributed Portgroup called "dIPStorage0", allocate an IP address valid for the storage network (e.g. 172.168.3.101) and confirm you can ping the IP storage system (e.g. 172.168.3.254).
IMPORTANT: This process removes the Standard Switch VMkernel port (vmkN) and its associated portgroup. Reversing the process cannot be carried out using the same wizard. Instead the "Migrate a VMkernel Network adapter to selected switch" must be run on each VMware ESXi host.
Screen Shot 2014-02-14 at 15.08.44.png
1. Right-click the Distributed Switch and select Add and Manage Hosts
Screen Shot 2014-02-14 at 15.49.06.png
2. Select the radio button Manage Host Networking
Screen Shot 2014-02-14 at 15.15.51.png
3. Select the Attached Hosts
Screen Shot 2014-02-14 at 15.16.18.png
4. Deselect Manage Physical Adapters, so only Manage VMkernel Adapter is selected
Screen Shot 2014-02-14 at 15.17.12.png
5. Select the Source VMkernel/Portgroup you wish to migrate. In the screen grab below vmk1 on the Standard Switch portgroup vMotion is selected... Next, click Assign Port group - and from the subsequent windows select the Destination Portgroup - in this case the Distributed Portgroup called dMotion. Repeat this process for each VMware ESXi host...
Screen Shot 2014-02-14 at 15.20.13.png
6. Confirm your changes will have no impact on iSCSI communication (or NFS for that matter)
Screen Shot 2014-02-14 at 15.21.58.png
7. Once the process has completed you can confirm the VMkernel Port has been migrated by viewing the settings on the VMware ESXi host under the Manage Tab, Networking column and VMkernel Adapters.
Screen Shot 2014-02-14 at 15.23.57.png
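This host-by-host migration can also be scripted. The sketch below is an assumption about how you might do it with Add-VDSwitchPhysicalNetworkAdapter, which - in PowerCLI versions that support the -VMHostVirtualNic and -VirtualNicPortgroup parameters - migrates a VMkernel adapter onto the Distributed Switch in the same operation as a physical NIC; this is the approach you would take if that physical NIC had not already been assigned earlier.
# Migrate vmnic3 and the vMotion VMkernel port (vmk1) of one host to the Distributed Portgroup dMotion (names are illustrative)
$vds = Get-VDSwitch -Name "DSwitch0-GoldCluster01"
$vmhost = Get-VMHost -Name "esx01nyc.corp.com"
$pnic = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic3
$vmk = Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel -Name vmk1
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $pnic -VMHostVirtualNic $vmk -VirtualNicPortgroup (Get-VDPortgroup -Name "dMotion") -Confirm:$false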

Migrating VMkernel Ports from the Distributed Switch to a Standard Switch

To move a VMkernel port from a Distributed Switch to a Standard Switch, carry out these steps:
1. Select the VMware ESXi host, click the Manage Tab and the Networking column
2. Select the Standard Switch you wish to use as the target, and click the Migrate a VMkernel Network adapter to selected switch icon
Screen Shot 2014-02-14 at 16.23.50.png
3. Select the VMkernel port you wish to move
4. Type in a name for the new Standard Switch portgroup
Screen Shot 2014-02-14 at 16.24.43.png
5. Confirm the change will not impact negatively on IP based storage
Screen Shot 2014-02-14 at 16.25.26.png

Removing a Distributed Switch (Web Client)

If you want to remove a VMware ESXi host from vCenter once you are using a Distributed Switch, you cannot simply use maintenance mode to evacuate the host of VMs and then remove it from the vCenter inventory. Additionally, the host needs to be removed from the Distributed Switch. Similarly, you cannot merely delete the Distributed Switch while ESXi hosts are currently members - or while it is in use, typically by virtual machines.
If a Distributed Switch has VMs attached to it, and a delete is attempted, this will fail:
Screen Shot 2014-02-14 at 16.57.29.png
If a VMware ESXi host has no VMs allocated to it, but it is configured for a Distributed Switch, it cannot be removed:
Screen Shot 2014-02-14 at 17.00.23.png
To remove a Distributed Switch the correct procedure is to relocate all VMs and VMkernel ports (if any) to a Standard Switch, then remove the VMware ESXi hosts from the Distributed Switch, and then delete the Distributed Switch. If only one host needs to be removed, carry out the same procedure on that single VMware ESXi host.
1. Right-Click the Distributed Switch, and select Add and Manage Hosts
2. Select the radio button to Remove hosts
Screen Shot 2014-02-14 at 17.04.18.png
3. Select the hosts to remove
Screen Shot 2014-02-14 at 17.04.39.png
4. Once completed, right-click the Distributed Switch, and select All vCenter Actions, and Remove from Inventory
Screen Shot 2014-02-14 at 17.05.26.png
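The equivalent teardown can be scripted as well; a minimal sketch, assuming the VMs and VMkernel ports have already been relocated as described above:
# Remove each VMware ESXi host from the Distributed Switch, then delete the switch itself
$vds = Get-VDSwitch -Name "DSwitch0-GoldCluster01"
Remove-VDSwitchVMHost -VDSwitch $vds -VMHost (Get-VMHost esx01nyc.corp.com, esx02nyc.corp.com, esx03nyc.corp.com) -Confirm:$false
Remove-VDSwitch -VDSwitch $vds -Confirm:$false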

Creating and Managing a Distributed Switch (PowerCLI)

Creating a Distributed Switch

Screen Shot 2013-10-21 at 14.01.16.png
The PowerCLI cmdlet New-VDSwitch can create a Distributed Switch. In this case the command creates the switch and also enables LLDP in both directions, together with an MTU of 9000 and support for two Distributed Uplinks. Distributed Switches created without the -Version parameter will automatically use the most recent iteration of the Distributed Switch.
New-VDSwitch -Name "DSwitch0-GoldCluster01" -Location "New York" -LinkDiscoveryProtocol "LLDP" -LinkDiscoveryProtocolOperation "Both" -MTU 9000 -NumUplinkPorts 2

Adding Distributed Portgroups to Distributed Switch

Screen Shot 2013-10-21 at 14.01.16.png
The PowerCLI cmdlet New-VDPortgroup can be used together with a range and a foreach loop to create a series of portgroups dVLAN101 to dVLAN105, with the VLAN tag set from 101 to 105 on each portgroup created
101..105 | Foreach {
$Num = $_

New-VDPortgroup -VDSwitch (Get-VDSwitch -Name "DSwitch0-GoldCluster01") -Name dVLAN$num -VLanId $num
}

Adding VMware ESXi Hosts to the Distributed Switch

Screen Shot 2013-10-21 at 14.01.16.png
The PowerCLI cmdlet Add-VDSwitchVMHost can be used to add VMware ESXi hosts to the Distributed Switch. Using a combination of range and foreach loop we can add hosts esx01nyc.corp.com, esx02nyc.corp.com and esx03nyc.corp.com to the Distributed Switch DSwitch0-GoldCluster01.
1..3 | Foreach {
 $Num = "{0:00}" -f $_
        Add-VDSwitchVMHost -VDSwitch (Get-VDSwitch -Name "DSwitch0-GoldCluster01") -VMHost esx"$Num"nyc.corp.com
 }

Adding Physical vmnics to Distributed Switch Uplinks

Screen Shot 2013-10-21 at 14.01.16.png
The PowerCLI cmdlet Add-VDSwitchPhysicalNetworkAdapter can be used to assign physical vmnics to the Uplink containers. Again a combination of ranges and foreach loops finds each ESXi host from esx01nyc to esx03nyc, retrieving the ID of the vmnic. Each vmnic added consumes the first available Uplink container - in this case vmnic2 is added to Uplink1 and vmnic3 is added to Uplink2.
1..3 | Foreach {
 $Num = "{0:00}" -f $_
        $vmhostNetworkAdapter = Get-VMHost esx"$Num"nyc.corp.com | Get-VMHostNetworkAdapter -Physical -Name vmnic2
        Get-VDSwitch "DSwitch0-GoldCluster01" | Add-VDSwitchPhysicalNetworkAdapter -VMHostNetworkAdapter $vmhostNetworkAdapter -Confirm:$False
 }

1..3 | Foreach {
 $Num = "{0:00}" -f $_
        $vmhostNetworkAdapter = Get-VMHost esx"$Num"nyc.corp.com | Get-VMHostNetworkAdapter -Physical -Name vmnic3
        Get-VDSwitch "DSwitch0-GoldCluster01" | Add-VDSwitchPhysicalNetworkAdapter -VMHostNetworkAdapter $vmhostNetworkAdapter -Confirm:$False
 }

Enabling Health Check

Screen Shot 2013-10-21 at 14.01.16.png
PowerCLI has a number of cmdlets to manage Distributed Switches; however, some settings are accessed by using the Get-View cmdlet to retrieve parameters from the SDK. In this case this PowerCLI command enables the Health Check feature on the Distributed Switch.
Acknowledgement: This sample PowerCLI script was originally found on http://www.hypervisor.fr/?p=4229
Get-View -ViewType DistributedVirtualSwitch |
    Where-Object { $_.Config.HealthCheckConfig | Where-Object { $_.Enable -notmatch "true" } } |
    ForEach-Object {
        $_.UpdateDVSHealthCheckConfig(@(
            (New-Object VMware.Vim.VMwareDVSVlanMtuHealthCheckConfig -Property @{ enable = 1; interval = "1" }),
            (New-Object VMware.Vim.VMwareDVSTeamingHealthCheckConfig -Property @{ enable = 1; interval = "1" })
        ))
    }
