HTML 5 – vSphere and ESXi Host Web Clients

The wait is over (almost). Since the introduction of the vSphere Web Client, many admins have slowed their adoption of the Web Client, and of vSphere updates in general, because of the performance of said client.

VMware has released a couple of Flings related to this problem. One of them was the host web client, which lets you manage your hosts directly without installing the vSphere Client. This Fling is now part of the latest update to vSphere, 6.0 U2. A few days ago, VMware released a similar option for vCenter. Both of these options are based on HTML 5 and JavaScript.

Host Web Client

As I mentioned before, starting with vSphere 6.0 U2 the host web client is already embedded into vSphere. If you do not have this update, you can still download the OVA and access the host web client that way. Currently it only works if you have vSphere 6.0+, but once version 5.5 U3 is released it will also work with that version. Here is a link to download the fling.

To access the web client, append “/ui” to the name/IP address of your host, for example: https://<host-name-or-IP>/ui

The client is very responsive and has a nice UI. Not all features are currently supported, but more are coming in the near future.



vCenter Web Client

This HTML web client is only available as a Fling at the moment. You will need to deploy an OVA and register the appliance with the vCenter you would like to manage. Being a Fling, not all features are included; it focuses mainly on VM management, but I am sure they are working to port the rest of the features over at some point (I hope).

To deploy this OVA, you will need to enable SSH and the Bash shell on your VCSA. You can do both from the VCSA web UI. If you are running a Windows-based vCenter, refer to the Fling documentation here.


Prior to going through the configuration, you will need to:

  1. Create an IP Pool (if deploying via the C# Client)
    • Note: I deployed using the Web Client and it did not create the IP Pool automatically as it is supposed to, so double-check that you have an IP Pool before powering on the appliance
  2. Deploy the OVA


After deploying the OVA, creating an IP Pool, and enabling both SSH and Bash Shell on VCSA, it is time to configure the appliance.

  • SSH to the IP address you gave the appliance, using root as the user and demova as the password
  • Type shell to access the Bash shell
  • Run the following command in the Bash shell:
    • /etc/init.d/vsphere-client configure --start yes --user root --vc <FQDN or IP of vCenter> --ntp <FQDN or IP of NTP server>
  • If you want to make Bash the default shell for your root account (note: this command changes the login shell, not the password), run the following from the Bash shell:
    • /usr/bin/chsh -s "/bin/bash" root
  • Answer the confirmation question with YES
  • Enter the credentials for your vCenter when prompted
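The configuration steps above can be sketched as a short SSH session. All host names and the appliance IP below are placeholders for your environment; demova is the Fling's documented default password, and note that the long options take two dashes:

```shell
# Placeholder IP; replace with the address you assigned to the appliance
ssh root@192.168.1.50           # default password: demova

# On the appliance, drop from the appliance shell into Bash
shell

# Register the client with your vCenter and start it
# (the vCenter and NTP FQDNs are placeholders)
/etc/init.d/vsphere-client configure --start yes --user root \
    --vc vcenter.lab.local --ntp ntp.lab.local
```

This is a command transcript against a remote appliance rather than a runnable script; run each step interactively and watch for the YES confirmation and the vCenter credential prompts.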




The HTML Web Client is pretty awesome, I gotta say, even if not all the features are there yet. It is super clean and responsive. I can’t wait for it to be embedded with a full feature set.




Troubleshooting vSphere PSOD

The Screen of Death, as most of us know it, is the result of a system crash. Windows has its famous Blue Screen of Death (BSOD), and VMware has the Purple Screen of Death (PSOD). Of course there is also a Black Screen of Death, which usually appears when a Windows system is missing a boot file or one or more of those files have become corrupted. Although there is a range of colors, the questions for many are: How do I fix this? How do I know what caused it?

Many admins start with the obvious and simply reboot the machine hoping it was a hiccup, but chances are there is a bigger problem going on that needs to be addressed. In VMware, just like in other systems, a core dump file is created when the stop error is generated. This is where you start digging…

Where is my DUMP…file?!?

During the purple screen, the host writes the dump file to a previously created partition called VMKcore. There is a chance that the core dump file won’t be written due to internal problems, so it is always a good idea to take a screenshot of the PSOD. Exporting the core dump file can be done via the CLI, manually from the vCenter file path (on both Windows and the appliance), or through the vSphere Client and Web Client; the last is the preferred method for most admins since it is so simple to do.

To export the logs from vSphere Web Client, use the following steps:

  • Open vSphere Web Client > Hosts & Clusters > Right click on vCenter > Export System Logs…


  • Choose the host that had the PSOD > Next


  • Make sure you select CrashDumps; all others are optional



Once you have the dump file (vmkernel-zdump….), it’s time to look for the needle in the haystack. There are a lot of entries, and this file can be overwhelming to many people, but don’t stress, it is quite simple to find. The first logical step is to find the crash entry point. You can use the time when you noticed the PSOD, or you can simply search within the log file for “@BlueScreen”.
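The search itself is a one-liner. In practice you would first pull a readable vmkernel log out of the dump, e.g. with `vmkdump_extract -l vmkernel.log vmkernel-zdump.1` (the dump file name is an example); below, a small sample log is simulated so the search step is reproducible:

```shell
# Simulated extract of a vmkernel log; real files are much larger
cat > vmkernel.log <<'EOF'
2016-03-10T08:01:02Z cpu4:33290)WARNING: E1000: Tx error
2016-03-10T08:01:03Z cpu4:33290)@BlueScreen: #PF Exception 14 in world 33290 IP 0x418039229ab1
EOF

# Find the crash entry point; -i covers case differences, -n shows the line number
grep -in "@bluescreen" vmkernel.log
```

Everything from the `@BlueScreen` line onward (the exception type and backtrace) is what you compare against VMware KB articles.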


Once you find this entry, you will see the exact cause of the PSOD. In the screenshot below, you can see that the error generated is related to E1000. That should immediately make you think vNIC/drivers, and you should also look online for any VMware KB articles on the errors generated. In this case, there is a known issue for different versions of vSphere that has already been patched; so keeping up to date on patches is very important.



The issue that triggered the PSOD in this environment was related to a fix not being applied. The workaround was to not use the E1000e NIC on the VM but rather VMXNET3. Also, you HAVE to install VMware Tools on your VMs; VMware Tools include drivers needed for your VM to work properly. In this particular instance, VMware Tools were not installed on the VM. Once the tools were installed and the vNIC was switched to VMXNET3, the issue was resolved.


Refer to VMware’s KB2059053 for more info.

Golden Nuggets: #1 vSphere vFlash

With so many tools and features from many different vendors, it is almost impossible to research them all and find the useful ones that make your job easier. Some features also provide a faster/cheaper way to solve common problems without spending a fortune; unfortunately, these “Golden Nuggets” are often underutilized. I’ll post a few quick tools that may make a big difference in someone’s environment. As always, test before deploying to production.

One of the cool features introduced in vSphere 5.5 was vFlash, which replaced the swap-to-SSD option from previous versions, but I won’t get into that. Essentially, this is a flash-based read cache on the host that functions at the vmdk level for a specific VM. The feature works by adding flash-based resources such as PCIe cards or SSD drives to create a vFlash pool of resources at the host level, and configuring the amount of storage to be used for the host swap cache. This cache sits in the data path of the vmdk, between the host and the storage array.

Once the host is configured, you can expand the virtual disk in a VM’s properties in the Web Client and assign the amount of cache for that particular vmdk, as well as select the block size (4KB – 1024KB). So, for each pool, chunks are carved out, or reserved, for a specific vmdk on the host where the VM is located.


As far as data locality goes, with features like HA, DRS, and vMotion it is possible to migrate the cached data to another host while migrating a VM, as long as the other hosts have also been configured with vFlash. You may also specify not to migrate the cached data during migration.


Requirements:

  • Check the HCL for compatible flash devices
  • vCenter 5.5 or later (VCSA or Windows)
  • VM hardware version 10 or later
  • vSphere vMotion if using DRS
    • Requires vFlash on hosts within the cluster
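Once a host is set up, the host-side vFlash configuration can also be inspected from the ESXi shell through the `esxcli storage vflash` namespace (available on ESXi 5.5+; the exact output and device names will vary per host):

```shell
# List flash devices available to (or consumed by) the vFlash resource pool
esxcli storage vflash device list

# Show the vFlash Read Cache module (vfc) status
esxcli storage vflash module list

# List the per-vmdk cache configurations active on this host
esxcli storage vflash cache list
```

These are read-only checks, handy for confirming a vMotion target host really has a vFlash pool before migrating a VM with cache reservations.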


Implementing vFlash can be beneficial for resolving or minimizing performance degradation for read-intensive applications, or simply for utilizing local resources at the host level for read cache instead of, or in addition to, storage-array read-caching solutions. Having local cache eliminates the “extra hop” on the network to get to cached data on the storage array.

This is a high-level view of vFlash, but in my opinion this is a nice feature that can get rid of some headaches and fire drills.


Image source – VMware doc (Rawlinson)


VCSA 6.0 OVA install

In my last post I talked about some of the gaps in VCSA compared to the Windows vCenter version. I mentioned that the OVA was no longer available for download; however, William Lam from VMware quickly pointed out to me that the OVA is in fact still available within the download, with a disclaimer stating that this method is not officially supported by VMware.

Anyway, the OVA is buried within the ISO. Once you have mounted the ISO, navigate to the vcsa folder; the file named vmware-vcsa (with no extension) is the actual OVA (ISO -> vcsa -> vmware-vcsa). You may need to rename the file to vmware-vcsa.ova or <other>.ova.
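As a quick sketch, assuming the ISO is mounted at /mnt/vcsa-iso (the mount point is an example; yours will differ), the copy-and-rename looks like this:

```shell
# Copy the extension-less file out of the mounted ISO and give it a .ova extension
cp /mnt/vcsa-iso/vcsa/vmware-vcsa ./vmware-vcsa.ova
```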





From that point on, the deployment is the same as before.

William works for VMware and is a super sharp guy; although he may not remember me, I had the pleasure of meeting him during the vSphere 6 onsite Alpha over a year ago. Make sure to check out his blog, virtuallyGhetto, which is full of tips and tricks.

Deploying VCSA 6.0: Mind the Gap

VMware’s VCSA 6.0 brings a lot of enhancements compared to previous versions. I would seriously consider deploying VCSA in a production environment to replace the Windows flavor. For those not familiar with VCSA, it is the virtual-appliance option for deploying vCenter in an environment. It reduces the time needed to deploy vCenter and offers an integrated database at no additional cost. Although this post may not be entirely technical, it will make you aware of possible constraints that could prevent you from deploying VCSA, before you invest too much time in it.

One of the great things about deploying VCSA instead of the Windows vCenter is the reduced cost: you avoid deploying a Windows VM and purchasing an MSSQL license. VCSA sounds great so far, but there are some gaps you need to be aware of before deploying it in an environment.



Some of the shortcomings of VCSA stem primarily from its not being a Windows VM. In some deployments, Windows vCenters have also hosted the VUM (Update Manager) components, as well as programs that provide additional capabilities to the virtual environment, such as VSC for NetApp storage, among others. This means you would still need to deploy a Windows VM to host VUM, and VSC in this case. Even though you would still be deploying such a VM, an MSSQL server/instance is no longer required, which translates into savings.

Another aspect to keep in mind is installation and migration from previous versions. There is no in-place upgrade from previous versions, but migrations are possible. With this in mind, you may want to consider just starting with a new, fresh environment. I would. The same applies to the Windows flavor. The installation method now comes as an ISO image, which may cause some confusion: to deploy VCSA, the ISO is mounted on a Windows system (it can be your computer) and the installation is done remotely.

Before installation, make sure you install the Client Integration Plugin located within the ISO under the vcsa folder.





Start the installation by launching the vcsa-setup.html file from the ISO. A web UI opens after a few seconds and gives you the option to Install or “Upgrade” (migrate). During installation, just provide the target host information and the rest of the details needed. Make sure the VCSA appliance has a proper network connection and that you can reach it from the computer deploying the appliance.


Both Windows and appliance vCenter offerings have the same scalability numbers as it relates to hosts, VMs, clusters, etc.

In conclusion, VCSA is a great choice for vCenter; just be aware of some of the constraints of not using the Windows option. By the way, the Web UI in vSphere 6 is soooo much faster!!! I’m just saying.