vCenter Server Reduced Downtime Upgrade

I have seen some questions coming in about the Reduced Downtime Upgrade feature lately, so I figured I’d share some more information about it. This feature was introduced in vSphere 7.0 Update 3 and provides a new way of doing migration-based upgrades for vCenter Server.

Reduced Downtime Upgrade (RDU) simplifies the migration process and, as the name implies, reduces downtime for vCenter while the data is being moved/copied from the old vCenter to the new one. The only downtime happens when the services on the old vCenter are stopped and started on the new vCenter. The data is copied almost in a vMotion type of way. Pretty slick.

The main question I see is: does this apply to all deployment types, including on-premises and cloud deployments?

The answer is NO. This feature (as of right now) only applies to VMC on AWS and Project Arctic. So for now, RDU is not available/supported for on-premises deployments, but that’s not to say it won’t be supported on-premises in the future. RDU is also only available via API at the moment, and for the VMC on AWS and Project Arctic use cases, the vCenter upgrade is done by VMware Site Reliability Engineers (SREs), so you as a customer don’t need to worry about triggering the upgrade/update of vCenter Server. You can safely pass that burden on to the SREs. That alone can justify moving to VMware’s Project Arctic offering when available, IMHO.

Hopefully this post answers some questions. For more information refer to the official blog post here.

Lab ESXi err_cert_revoked in Chrome

I recently deployed a new lab and encountered an err_cert_revoked error in Chrome. Usually I click through the Chrome warnings and accept moving forward in “unsafe” mode.

However, there was no option to continue. The error indicated “You cannot visit <yoursite.com> right now because this certificate has been revoked…”

Since this is an internal lab, I don’t worry much about external certs and whatnot; I just needed to get into my lab to do some work.

Workaround

Rename hosts to the correct names

First, I found out all my hosts were named “localhost.mylab.com”, so naturally the first step was to fix the host names. Easy: go to the DCUI and change the host name on each host.
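If you’d rather not click through the DCUI on every host, the host name can also be set from the ESXi shell (a quick sketch; esx01.mylab.com is a made-up name):

esxcli system hostname set --fqdn=esx01.mylab.com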


Back up certificates and generate new ones

Once I changed all the names, I made a backup of the original certs just in case, by running the following commands in /etc/vmware/ssl:

mv rui.crt backup.rui.crt

mv rui.key backup.rui.key

Then I generated new certificates by running /sbin/generate-certificates.

Finally, I rebooted my hosts.
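For reference, the whole sequence from the ESXi shell on each host looks roughly like this (a consolidation of the steps above):

cd /etc/vmware/ssl
mv rui.crt backup.rui.crt
mv rui.key backup.rui.key
/sbin/generate-certificates
reboot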


Download the new certificate

For this step, I opened the ESXi UI in Firefox, and when I got the certificate error, I had the option to download the certificate and keychain. I clicked on PEM (cert) to download the cert.
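If you’d rather skip the browser, you can also pull the certificate from your workstation’s terminal with openssl (a sketch; the host name and output file name are made up):

echo | openssl s_client -connect esx01.mylab.com:443 2>/dev/null | openssl x509 -outform PEM > esx01.pem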


Trust Certificate

Once I downloaded the cert, I opened it on my Mac with Keychain Access. I trusted the certificate by double-clicking it and, under Trust, changing the “When using this certificate” drop-down from Use System Defaults to Always Trust.
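The same trust can be granted from the terminal with the macOS security command (a sketch; the keychain path and cert file name are assumptions, and newer macOS versions may prompt for confirmation):

security add-trusted-cert -r trustRoot -k ~/Library/Keychains/login.keychain esx01.pem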


THIS IS A LAB ENVIRONMENT (internal). DO NOT TRUST sites you are not familiar with. 


HTML 5 – vSphere and ESXi Host Web Clients

The wait is over (almost). Since the introduction of the vSphere Web Client, many admins have been slow to adopt it, and to update vSphere along with it, due to the performance of said client.

VMware has released a couple of flings related to this problem. One of them was the host web client, which lets you manage your hosts directly without the need to install the vSphere Client. This fling is now part of the latest vSphere update, 6.0 U2. A few days ago, VMware released a similar option for vCenter. Both of these options are based on HTML5 and JavaScript.

Host Web Client

As I mentioned above, starting with vSphere 6.0 U2, the host web client is already embedded into vSphere. If you do not have this update, you can still download the OVA and access the host web client that way. Currently it only works with vSphere 6.0+, but once version 5.5 U3 is released, it will also work with that version. Here is a link to download the fling.

To access the web client, you will need to add “/ui” to the end of the name/IP address of your host, for example: https://<host-name-or-IP>/ui

The client is very responsive and has a nice UI. Not all the features are currently supported, but more will be coming at some point in the near future.

host_ui


vCenter Web Client

This HTML web client is only available as a fling at the moment. You will need to deploy an OVA and register the appliance with the vCenter that you would like to manage. Since it is a fling, not all features are included; it basically focuses on VM management, but I am sure they are working to port all the features over at some point (I hope).

To deploy this OVA, you will need to enable SSH and the Bash shell on your VCSA. You can do both from the VCSA web UI. If you are running a Windows-based vCenter, refer to the fling documentation here.

vcsa_uI-shell

Prior to going through the configuration, you will need to:

  1. Create an IP Pool (if deploying via the C# Client)
    • Note: I deployed using the Web Client and it didn’t create the IP Pool for me automatically as it is supposed to, so double-check you have an IP Pool before powering on the appliance
  2. Deploy the OVA

IP_Pool

After deploying the OVA, creating an IP Pool, and enabling both SSH and Bash Shell on VCSA, it is time to configure the appliance.

  • SSH to the IP address you assigned to the appliance, using root as the user and demova as the password
  • Type shell to access the Bash shell
  • Run the following command in the Bash shell (a full example follows below)
    • /etc/init.d/vsphere-client configure --start yes --user root --vc <FQDN or IP of vCenter> --ntp <FQDN or IP of NTP server>
  • If you want to make Bash the default shell for the root account, you can run the following command from the Bash shell
    • /usr/bin/chsh -s "/bin/bash" root
  • Answer the prompt with YES
  • Then enter the credentials for your vCenter
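Put together, a configure run with made-up values for the vCenter and NTP server might look like this:

/etc/init.d/vsphere-client configure --start yes --user root --vc vcenter.mylab.com --ntp pool.ntp.org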


H5_deploy1

H5_deploy2


The HTML Web Client is pretty awesome, I gotta say, even if not all the features are there yet. It is super clean and responsive. I can’t wait for it to be embedded with a full feature set.


H5_1

H5_2

Golden Nuggets: #1 vSphere vFlash

With so many tools and features from many different vendors, it is almost impossible to research them all and find useful ones to make your job easier. Some features also provide a faster/cheaper way to solve common problems without spending a fortune; unfortunately, these “Golden Nuggets” are often underutilized. I’ll post a few quick tools that may make a big difference in someone’s environment. As always, test before deploying to production.

One of the cool features introduced in vSphere 5.5 was vFlash, which replaced swap to SSD from previous versions, but I won’t get into that. Essentially, this is a flash-based read cache on the host that functions at the vmdk level for a specific VM. It works by adding flash-based resources such as PCIe cards or SSD drives to create a vFlash pool of resources at the host level, and configuring the amount of storage to be used for host swap cache. This cache is placed on the data path of the vmdk between the host and the storage array.

Once the host is configured, you can expand the virtual disk in a VM’s properties in the Web Client and assign the amount of cache for that particular vmdk; you also have the option to select the block size (4KB to 1024KB). So, for each pool, chunks are carved out or reserved for a specific vmdk on the host where the VM is located.

vFlash_vmdk

As far as data locality and features like HA, DRS, and vMotion go, it is possible to migrate the cached data to another host while migrating a VM, as long as the other hosts have also been configured with vFlash. You may also specify not to migrate the cached data during migration.
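If you want to poke at the host side from the command line, ESXi 5.5 includes an esxcli namespace for vFlash (a sketch; output will vary by host). The first command lists the flash devices backing the pool, and the second lists the caches carved out for individual vmdks:

esxcli storage vflash device list
esxcli storage vflash cache list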

Requirements:

  • Check HCL for compatible Flash devices
  • vCenter 5.5 or later (VCSA or Windows)
  • VM hardware version 10 or later
  • vSphere vMotion if using DRS
    • Requires vFlash on hosts within the cluster


Implementing vFlash can be beneficial for resolving or minimizing performance degradation for read-intensive applications, or simply for utilizing local resources at the host level for read cache instead of, or in addition to, storage-side read caching solutions. Having local cache eliminates the “extra hop” on the network to get to cached data at the storage array.

This is a high-level view of vFlash, but in my opinion this is a nice feature that can get rid of some headaches and fire drills.


vFlash_highLevel

Image source – VMware doc (Rawlinson)


Deploying NetApp NFS Plug-in for VMware VAAI

NetApp’s NFS Plug-in for VMware VAAI (vStorage APIs for Array Integration) enables the offload of certain tasks from the physical hosts to the storage array. Tasks related to thin provisioning and hardware acceleration can be handled at the array level to reduce the workload on the ESXi hosts.

The steps necessary to deploy VAAI on the ESXi hosts as well as on the NetApp storage can be accomplished using VSC or the ESXi CLI, along with NetApp’s CLI/shell. The nice thing about VSC is that it can enable VMware vStorage for NFS on the storage and also enable VAAI on the VMware hosts if that hasn’t been done already.

Prior to installing NetApp’s NFS plugin for VMware VAAI, NFS datastores cannot take advantage of offloading activities such as Hardware Acceleration.

VAAI_NFS_notSupported

In order to install and configure NetApp NFS Plug-in for VMware VAAI, the following steps are necessary:

  • Enable NFSv3 on the storage system (in C-Mode, the export policy must also allow NFSv4 for VAAI to work)
    • The methods to enable vStorage differ between 7-Mode and C-Mode
  • Have vSphere 5.0 or later
  • Download the VAAI plug-in from the NetApp site
  • Copy/install the bundle on the ESXi hosts

Enabling VMware vStorage for NFS

VMware vStorage needs to be enabled on the NetApp storage controller. Since NetApp ONTAP 7-Mode and C-Mode commands are different, you will need to use the one for your array version.

7-Mode

Log in to the CLI and run the following command on both nodes of the HA pair.

"options nfs.vstorage.enable on"

7M_vStorage_ON
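To double-check the setting afterwards, running the option without a value should print the current state (a sketch based on standard 7-Mode behavior):

"options nfs.vstorage.enable"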

C-Mode

In 7-Mode, the option is enabled “globally” at the controller level. In C-Mode, this option is enabled at the SVM (Storage Virtual Machine), aka vServer, level.

Log in to the cluster shell and enable vStorage on the desired vServer.

"vserver nfs modify -vserver <your SVM name> -vstorage enabled"

CM_vStorage_ON
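You can verify the change from the cluster shell as well (a sketch; vstorage is the field name I’d expect in the nfs show output):

"vserver nfs show -vserver <your SVM name> -fields vstorage"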


Verify that VAAI is enabled on the VMware hosts

By default, VAAI is enabled on vSphere 5.0 or later, but you can verify using the following commands from the host CLI.

"esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove"
"esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit"

If VAAI is enabled, the commands will return 1 instead of 0.

ESXi_vStorage_on

If for some reason VAAI is not enabled on the ESXi host, you can enable them by using these commands:

"esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedInit"
"esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedMove"

You can also check these settings in the Web GUI by selecting the host > Manage > Advanced System Settings.

WebUI_vStorage_on


Installing Plug-in via CLI

You can install the plug-in via VSC or CLI. When using CLI, you can choose to use the online bundle (.vib) or offline bundle (.zip). I will show the offline bundle installation.

After you have downloaded the offline bundle, copy the .zip file to a datastore available to your ESXi hosts.

You can verify the contents of the bundle by running "esxcli software sources vib list -d <path of your .zip file>". In this example, the offline bundle is located in the root of a datastore available to this host.

VIB_List

From the ESXi CLI, run the following command to install the plug-in:

"esxcli software vib install -n NetAppNasPlugin -d <path of your offline bundle>"

At this point, the NFS Plug-in for VMware VAAI is installed. Remember that the host MUST be rebooted after installation, so either use vMotion to move your VMs, or schedule some downtime after hours to complete the reboot.

VIB_Install
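After the reboot, you can confirm the plug-in made it onto the host (a sketch; the exact VIB name can vary by plug-in version):

"esxcli software vib list | grep -i netapp"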


Installing Plug-in via VSC

VSC simplifies this installation. Before you can install the plug-in on an ESXi host, you will need to copy the .vib file from the offline bundle to the install directory of the VSC server. The default location is C:\Program Files\NetApp\Virtual Storage Console\etc\vsc\web. Also make sure that the .vib file is named NetAppNasPlugin.vib; if it’s not, rename it so you don’t have to restart the VSC or NVPF service. Don’t forget to reboot the ESXi host after installing the plug-in.

VSC > Tools > NFS VAAI Tools > Install on Host > select the host and reboot.

VAAI_Install_VSC

After installing the NFS VAAI plug-in, NFS datastores now show Hardware Acceleration as Supported, along with other enhancements.

VAAI_NFS_Supported
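You can also confirm this from the ESXi CLI; the NFS datastore listing includes a Hardware Acceleration column (a sketch):

"esxcli storage nfs list"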