vSphere 6 Availability Enhancements

With the introduction of vSphere 6 come many new enhancements. Given that IT is primarily delivered as a service within a business, the availability of our environment is often a high priority. This new version of vSphere introduces the following enhancements:

  • Better vMotion Capabilities
  • Multi-Processor Fault Tolerance (FT) (up to 4 vCPUs)
  • App HA now supports more applications
  • vSphere Replication has better RPO (15 minutes) and scalability (2000 VMs)

There are other availability enhancements in vSphere 6, but the previous list really caught my attention, specifically the vMotion capabilities. In previous versions, moving VMs between vCenters was a little cumbersome and required a lot of manual intervention, such as scripts, or even downtime. Such a move is now possible with vSphere 6, where VMs can be migrated not only across datacenters, but also across long distances (up to 100 ms round-trip time). It is also now possible to perform vMotion tasks across virtual switches. However, it is important to understand that the vCenters have to be part of the same SSO domain for this to work.

What does all this mean to me? Well, in my opinion, these enhancements can be extremely handy for disaster prevention exercises. Take a scenario where there is advance notice of a hurricane or flood. Let’s assume that a stretched VLAN or VXLAN has been configured across two data centers with a reasonable RTT (about 100 ms or less). In this case, the option exists to move some powered-on VMs to another vCenter within the same subnet in order to prevent downtime for the business. Of course, this can also be accomplished with SRM if it is already implemented.
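Before attempting a move like this, it is worth confirming the link actually meets the RTT requirement. The sketch below parses a `ping` summary line to check against the 100 ms limit; the `sample` variable stands in for the real output of something like `ping -c 5 <remote-vmkernel-ip>` (the numbers are illustrative), and the exact summary format varies by ping implementation.

```shell
# Sample summary line standing in for: ping -c 5 <remote-vmkernel-ip>
sample="rtt min/avg/max/mdev = 42.1/55.3/70.2/8.4 ms"

# The average RTT is the fifth '/'-separated field of this summary format
avg=$(echo "$sample" | cut -d/ -f5)

# vSphere 6 long-distance vMotion tolerates up to about 100 ms RTT
if awk -v a="$avg" 'BEGIN { exit !(a < 100) }'; then
  echo "average RTT ${avg} ms: within the 100 ms vMotion limit"
else
  echo "average RTT ${avg} ms: too high for long-distance vMotion"
fi
```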

These enhancements, as well as the ones in the networking, management, and storage realms, make vSphere 6 impossible to ignore and set VMware apart from its competitors.

vSphere 6 Web Client: Yes, Let’s go there…

With vSphere 5.1, VMware introduced the new Web Client. Yes, there was another web client out there, but it was not widely used. A lot of people questioned the move to a web interface, so here are some of the reasons for the Web Client:

  • Access from any device with Web access
  • No need to install binaries in multiple locations to access the vSphere environment
  • Multi OS friendly
  • Scalable solution
  • API friendly

This first version was well received by many, but others noticed some slow response within their browsers. Well, I am happy to say that the new Web Client in vSphere is anything but slow. I know for a fact that the VMware team has spent countless hours working to get the slow response issue resolved. I was privileged to be part of a private customer Alpha test for vSphere x.y, and the difference made since the Alpha up until Beta 2 has been tremendous. I had the chance to voice concerns in many areas, the Web Client obviously being one of them, and let me tell you, VMware listens very well and does whatever needs to be done to make customers happy.

I will list some of the changes to the Web Client that I believe most customers will REALLY like.

  • Fast response times for Web Client interaction
    • Very noticeable
  • Faster logon process
  • Browser Friendly
    • Previous version had best results using Google Chrome
  • Recent Tasks (at bottom) is back
  • Drop down menu from home icon for easy, 1-click navigation
  • Core items added to left pane (Networking, Storage, VMs, Hosts)
  • vCenter Inventory Lists
  • 1-click task filtering


These are some of the many improvements in the new vSphere release that will satisfy the requests of many customers. I was extremely impressed by the speed of the Web Client, and the additional features are icing on the cake.

As you may infer, the “fat client” will play a small to non-existent role moving forward. The C# client may still be used to access individual hosts, as well as for read-only access to objects with virtual hardware version 9 and above, but vCenter tasks will have to be done through the new and improved Web Client. Based on the huge improvements and new features, I don’t think many people will miss the old client.

Web_Client

VVols: All Systems Go

After a long wait and development/marketing effort from VMware, VVols are finally ready to take over your datacenter(s).

VVols are the next generation of integration between vSphere and storage arrays. VVols leverage a set of APIs (VASA) that allow vSphere to communicate with the array and provide additional features at the VM level. VVols are based on storage policies, which in turn allow for further automation between products.

This storage abstraction provided by VVols allows for the control of storage not only at the VM level but also at the VMDK level. This is a great feature, as you can now control VMDKs as separate entities. The connections between the hosts and VVols are made through an abstraction layer known as Protocol Endpoints, which gives the user the freedom to use several protocols at once, such as FC, iSCSI, or NFS.

There are a few requirements for VVols. One of them is that the array vendor must support VVols, which means providing the vendor’s VASA APIs as well as meeting other vendor-specific requirements. In the case of a storage array vendor such as NetApp, VSC is also required.

The Policy-Based Provisioning provided by VVols brings us even closer to the Software-Defined Data Center (SDDC).


VVOLS

Deploying NetApp NFS Plug-in for VMware VAAI

NetApp’s NFS plug-in for VMware VAAI (vStorage APIs for Array Integration) enables the offload of certain tasks from the physical hosts to the storage array. Tasks such as thin provisioning and hardware acceleration can be done at the array level to reduce the workload on the ESXi hosts.

The steps necessary to deploy VAAI on the ESXi hosts as well as the NetApp storage can be accomplished using VSC or the ESXi CLI, as well as NetApp’s CLI/shell. The nice thing about VSC is that it is capable of enabling VMware vStorage for NFS on the storage and also enabling VAAI on the VMware hosts if this has not already been done.

Prior to installing NetApp’s NFS plugin for VMware VAAI, NFS datastores cannot take advantage of offloading activities such as Hardware Acceleration.

VAAI_NFS_notSupported

In order to install and configure NetApp NFS Plug-in for VMware VAAI, the following steps are necessary:

  • Enable NFSv3 on the storage system. For C-Mode, NFSv4 must also be allowed on the export policy for VAAI to work.
    • Different methods to enable vStorage between 7-Mode and C-Mode
  • Have vSphere 5.0 or later
  • Download VAAI plug-in from NetApp site
  • Copy/Install bundle on ESXi host

Enabling VMware vStorage for NFS

VMware vStorage needs to be enabled on the NetApp storage controller. Since the NetApp ONTAP 7-Mode and C-Mode commands are different, you will need to use the one that matches your array’s mode.

7-Mode

Log in to the CLI and run the following command on both nodes of the HA pair.

“options nfs.vstorage.enable on”

7M_vStorage_ON
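As a quick sanity check afterwards, running the option name with no value (“options nfs.vstorage.enable”) prints the current setting. The sketch below parses such a line; the `sample` variable stands in for the controller’s actual output, whose exact spacing may vary.

```shell
# Sample line standing in for the output of: options nfs.vstorage.enable
sample="nfs.vstorage.enable          on"

# 7-Mode prints "option-name   value"; grab the second field
state=$(echo "$sample" | awk '{print $2}')

if [ "$state" = "on" ]; then
  echo "vStorage is enabled"
else
  echo "vStorage is disabled"
fi
```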

C-Mode

In 7-Mode, the option is enabled “globally” at the controller level. In C-Mode, this option is enabled at the SVM (Storage Virtual Machine), also known as a vServer.

Log in to the cluster shell and enable vStorage on the desired vServer.

“vserver nfs modify -vserver <your SVM name> -vstorage enabled”

CM_vStorage_ON
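To verify the SVM-level setting afterwards, the cluster shell offers “vserver nfs show -vserver &lt;your SVM name&gt; -fields vstorage”. The sketch below parses output of that shape; the SVM name “svm1” and the exact column layout are illustrative stand-ins, not real command output.

```shell
# Sample standing in for: vserver nfs show -vserver svm1 -fields vstorage
sample="vserver vstorage
------- --------
svm1    enabled"

# The last row carries the SVM name and its vstorage state; grab field 2
state=$(echo "$sample" | awk 'END {print $2}')

echo "svm1 vstorage: $state"
```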


Verify that VAAI is enabled on the VMware hosts

By default, VAAI is enabled on vSphere 5.0 or later, but you can verify using the following commands from the host CLI.

“esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove”
“esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit”

If VAAI is enabled, the commands will return a 1 instead of a 0.

ESXi_vStorage_on
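Both settings can be checked in one pass with a small loop. In this sketch the `sample` line stands in for what `esxcfg-advcfg -g /DataMover/<option>` prints on a host (a line like “Value of HardwareAcceleratedMove is 1”); on a live host you would substitute the real command.

```shell
for opt in HardwareAcceleratedMove HardwareAcceleratedInit; do
  # Stand-in for: esxcfg-advcfg -g /DataMover/$opt
  sample="Value of $opt is 1"

  # Keep only the last word of the line (the 0 or 1)
  val=${sample##* }

  if [ "$val" = "1" ]; then
    echo "$opt: enabled"
  else
    echo "$opt: disabled"
  fi
done
```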

If for some reason VAAI is not enabled on the ESXi host, you can enable these settings by using the following commands:

“esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedInit”
“esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedMove”

You can also check these settings in the Web GUI by selecting the host > Manage > Advanced System Settings.

WebUI_vStorage_on


Installing Plug-in via CLI

You can install the plug-in via VSC or CLI. When using CLI, you can choose to use the online bundle (.vib) or offline bundle (.zip). I will show the offline bundle installation.

After you have downloaded the offline bundle, copy the .zip file to a datastore available to your ESXi hosts.

You can verify the contents of the bundle by running “esxcli software sources vib list -d <path of your .zip file>”. In this example, the offline bundle is located in the root of a datastore available to this host.

VIB_List

From the ESXi CLI, run the following command to install the plug-in:

“esxcli software vib install -n NetAppNasPlugin -d <path of your offline bundle>”

At this point, the NFS plug-in for VMware VAAI is installed. Remember that the host MUST be rebooted after installation, so either use vMotion to move your VMs, or schedule some downtime after hours to complete the reboot.

VIB_Install
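After the reboot, you can confirm the VIB is present with “esxcli software vib list | grep NetAppNasPlugin”. The sketch below checks a line of that shape; the `sample` line is an illustrative stand-in (the version string and date are made up, not real output).

```shell
# Stand-in for one row of: esxcli software vib list | grep NetAppNasPlugin
sample="NetAppNasPlugin   1.0-020   NetApp   VMwareAccepted   2015-03-01"

if echo "$sample" | grep -q "NetAppNasPlugin"; then
  echo "NetAppNasPlugin is installed"
else
  echo "NetAppNasPlugin is missing"
fi
```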


Installing Plug-in via VSC

VSC simplifies this installation. Before you can install the plug-in on an ESXi host, you will need to copy the .vib file from the offline bundle to the install directory of the VSC server. The default location is C:\Program Files\NetApp\Virtual Storage Console\etc\vsc\web. Also make sure that the .vib file is named NetAppNasPlugin.vib; if it is not, rename it so you don’t have to restart the VSC or NVPF service. Don’t forget to reboot the ESXi host after installing the plug-in.

VSC > Tools > NFS VAAI Tools > Install on Host > select host and reboot.

VAAI_Install_VSC

After installing the NFS VAAI plug-in, NFS datastores are now supported for Hardware Acceleration, as well as other enhancements.

VAAI_NFS_Supported
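The same check can be done from the host shell, since `esxcli storage nfs list` includes a Hardware Acceleration column. In this sketch the `sample` line stands in for one row of that output; the datastore name, IP, and export path are hypothetical, and the exact column layout may vary by ESXi build.

```shell
# Stand-in for one row of: esxcli storage nfs list
sample="nfs_ds1  192.168.1.50  /vol/nfs_ds1  true  true  false  Supported"

# The Hardware Acceleration state is the last column of the row
hwaccel=$(echo "$sample" | awk '{print $NF}')

echo "nfs_ds1 hardware acceleration: $hwaccel"
```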