vSAN Encryption at Rest & In Transit: What is the difference?

In the past, I’ve written a few posts about vSAN Data-at-Rest Encryption, which became available with vSAN 6.6. You can find those posts here. vSAN 7.0 U1 adds a new encryption option: Data-in-Transit Encryption. So what is the difference? Can you only choose one, or can you enable both? Let’s find out.

vSAN Data at Rest Encryption

Data-at-Rest Encryption (D@RE) was designed to do just that: encrypt all your data once it lands on the disks being used by vSAN. This works regardless of the Storage Policy you choose, and all data replicas are encrypted at both the cache layer and the capacity layer. One major advantage of Data-at-Rest Encryption over vSphere VM Encryption is that vSAN still allows you to encrypt your data and take advantage of space-saving features such as deduplication and compression. When the data lands in cache it is encrypted using the Data Encryption Key (DEK); while the data is being destaged to the capacity layer it is decrypted, and it is here where deduplication and compression take place. Finally, when the data lands on the capacity devices, it gets encrypted once again. It is also important to highlight that the DEK is protected by the Key Encryption Key (KEK), which comes from the Key Management Server (KMS)… and this is one of the differences between the two options.
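For example, once D@RE is enabled you can confirm which KMS cluster (the source of the KEK) a host is pointing at with a quick esxcli check; this is the same command I use further down this page for retrieving the kmipClusterId:

# List the KMS configuration the host is using for D@RE, including the kmipClusterId
esxcli vsan encryption kms list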

vSAN Data in Transit Encryption

Data-in-Transit Encryption (DIT) comes in to complete the end-to-end encryption of the data while it is in transit between hosts. Data-at-Rest Encryption only encrypts the data when it lands on disk, so if someone takes a disk out of a server, all the data on it is encrypted. But what about other attacks, such as man-in-the-middle attacks? Well, this is where Data-in-Transit Encryption can protect the data. The keys used for DIT encryption are managed internally, and there is no need for a KMS. These keys are also rotated much, much faster than with D@RE: DIT encryption keys are rotated weekly by default, but you can change the rekey interval to anything from every 6 hours to every 7 days. Just like D@RE, DIT encryption works at the vSAN cluster level, so either all the hosts are protected or none are.

Here is a quick comparison between the two options:

  • Data at Rest (D@RE): encrypts data once it lands on the cache and capacity devices; the DEK is protected by a KEK that comes from an external KMS; compatible with deduplication and compression; enabled at the vSAN cluster level.
  • Data in Transit (DIT): encrypts vSAN data while it moves between hosts; keys are managed internally, so no KMS is needed; keys rotate every 7 days by default (configurable from 6 hours to 7 days); enabled at the vSAN cluster level.

FAQ

Can I enable both at the same time?

Yes. You can enable both Data-at-Rest and Data-in-Transit Encryption to get full protection in your vSAN environment. It is recommended to enable vSAN Data-at-Rest Encryption in the early stages of the cluster to minimize the time needed for the on-disk reformatting, as there is less data to move around.

What is the performance impact of turning encryption on?

There are a lot of variables that come into play when we talk about performance. However, vSAN encryption (both options) takes advantage of AES-NI to offload operations and reduce any performance hit. Most modern CPUs have AES-NI, but sometimes this feature is not enabled, so make sure to check this at deployment. Please also be mindful that enabling D@RE when the cluster already has a lot of data in it will result in large amounts of data being moved, so plan to do this during off hours if possible.

What vSAN License do I need to enable vSAN Encryption?

In order to enable Data-at-Rest and/or Data-in-Transit Encryption you will need a vSAN Enterprise or vSAN Enterprise Plus license. Refer to the licensing guide here.

How do I enable Data-In-Transit Encryption?

Enabling DIT encryption is easy. Within the vCenter UI, select the vSAN cluster > Configure > Services. Data-in-Transit Encryption can be enabled with or without Data-at-Rest Encryption. This is also where you can change the key rotation interval for the DIT encryption keys.

@GreatWhiteTec

vSAN Encryption KMS info retrieval

A few years ago I wrote a blog post about “Replacing vCenter with vSAN Encryption Enabled”. For this particular exercise, one key piece of information that needed to be retrieved was the kmipClusterId.

A couple of things have changed since then in newer versions of vSAN.

Change #1: ESXCLI commands

An easier way to retrieve this information was added in the form of an esxcli command. This command allows you to obtain a lot of information about the state of vSAN encryption and retrieve the hostKeyId, kekID, etc.

esxcli vsan encryption <option> get/list

 

So, based on this addition, you can now get the kmipClusterId needed for vCenter replacement by running esxcli vsan encryption kms list.
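As a quick sketch (the kms sub-command is the one shown above; anything else in the namespace is best verified on your build by running the namespace without arguments):

# Running the namespace without arguments lists the sub-commands available on this build
esxcli vsan encryption
# List the KMS configuration, which includes the kmipClusterId
esxcli vsan encryption kms list
# Optionally narrow the output down to the kmipClusterId (field name assumed to match the esx.conf entry)
esxcli vsan encryption kms list | grep -i kmipClusterId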

You can also still find this information in the esx.conf file, which is where the hosts store it in this particular version of vSAN (6.7 P01 – Build 15160138). Which brings me to the second update…
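For reference, on those older builds you can pull the same value straight from the file (standard esx.conf location; the exact key name in the file may vary slightly, so a case-insensitive grep keeps it simple):

# Search the host configuration file for the kmipClusterId entry
grep -i kmipClusterId /etc/vmware/esx.conf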

 

Change #2: vSAN Persistence

In vSAN 7.0 and beyond, some changes were made to how this configuration gets stored. The encryption information that was previously file based (esx.conf) is now stored in a database. This provides better concurrency for multiple readers and writers than the file-based esx.conf option, among other advantages.

The good news is that the esxcli vsan encryption command will still allow you to retrieve the information you need regarding encryption. However, if you attempt to retrieve this information from the esx.conf file, you won’t be able to find it there anymore.

Alternatively, you can retrieve the information directly from the config-store… though that is probably more info than you need. So, I’d just stick to the esxcli commands.
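If you do want to poke at the ConfigStore directly, the general query pattern on ESXi 7.0 looks like the line below. I have not verified the exact component/group/key names used for the vSAN encryption settings, so treat them as placeholders to discover on your own build:

# Generic ConfigStore query pattern on ESXi 7.0 (component, group, and key are placeholders)
configstorecli config current get -c <component> -g <group> -k <key>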

Devices unavailable for vSAN

I’ve been experiencing this scenario quite a bit lately, so I figured I’d write something about it; it also helps me refresh my memory.

Lately I’ve been helping out with a lot of VMware Cloud Foundation Proof of Concepts (VCF POCs)… That’s a mouthful!

Upon inspecting the environment I am supposed to work on, I have found hosts that were once part of a vSAN cluster but were not properly cleaned up prior to the ESXi rebuild.

The vSAN clean up process involves the following steps:

  1. Set host in Maintenance Mode
  2. Delete Disk Group(s)
    • This removes vSAN partitions as well
  3. Remove host from vSAN cluster
  4. Clean up network
  5. Remove from vCenter

As previously mentioned, I usually get called in after the vCenter is gone and hosts have been re-imaged. Now what?!?!

The steps to manually clean the hosts involve the following tasks:

Delete Disk Groups via CLI

Since we do not have access to vCenter in this case, we can delete the Disk Group via CLI using esxcli commands.

You can remove the disks by specifying the SSD cache device, which removes that device and all of its backing capacity devices, or you can do it one device at a time by using each device’s UUID.

To remove the devices one at a time, you can use the vdq -q command to list the devices and then run esxcli vsan storage remove -u <uuid>. I prefer the other method since it is a lot faster…

  • List devices by using vdq -i
  • Run esxcli vsan storage remove -s <cache device>
  • In this case we will run this command for naa.55cd2e404b66fcbb and naa.55cd2e404b4393b9 (see the consolidated example below)
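Putting it together, here is a minimal sketch of the clean-up; the device IDs are the ones from this example, so substitute your own:

# List the vSAN disk mappings (disk groups) on this host
vdq -i
# Remove each disk group by its cache-tier device; the backing capacity devices are removed with it
esxcli vsan storage remove -s naa.55cd2e404b66fcbb
esxcli vsan storage remove -s naa.55cd2e404b4393b9
# Alternative: remove devices one at a time by UUID (list the UUIDs first with vdq -q)
esxcli vsan storage remove -u <uuid>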

 

Now What?!?!

Now that we have deleted the disk groups, we need to make sure that the host doesn’t think it is still part of a vSAN cluster.

Remove Host from vSAN Cluster via CLI

  • Check vSAN cluster membership
    • esxcli vsan cluster get
  • If it belongs to a cluster, remove it
    • esxcli vsan cluster leave

Clean up Network configuration

Since the hosts were re-imaged, this should be all set to default, but in case this was not cleaned up, or was manually re-created, you can clean it up by resetting the configuration.

The following command resets the root password, overrides all the network configuration changes, and reboots the host. Don’t do this unless you are planning to reset the host to default… You’ve been warned.

/bin/firmwareConfig.py --reset

 

At this point, the cleaned host can join a new vSAN cluster, be commissioned as a host in VCF SDDC Manager, or be used as part of the VCF bring-up process for the Management Domain.

 

@GreatWhiteTec

Unable to power on VM: “'vmwarePhoton64Guest' is not supported”

I was in the process of deploying vSAN Performance Monitor and came across an error.

The error pops up as “No host is compatible with the virtual machine”

It further describes the error as “'vmwarePhoton64Guest' is not supported”

 

The FIX:

Right Click on the vSAN Performance Monitor VM

Select Compatibility > Upgrade VM Compatibility


Select yes for the upgrade. Warning: This step is NOT reversible.

Finally, select the version you want the VM to be compatible with depending on the version you are currently running.

 

More info about VM Compatibility Settings is available here.

What’s new in vSAN Encryption 6.7 U1?

I’ve written a few blog posts in the past about vSAN Data at Rest Encryption (D@RE). These posts explain how encryption works, and how the keys are handed over to vSphere. Go here for more info.

For vSAN D@RE to work properly, ESXi hosts need to be able to reach the KMS cluster during reboot operations. Yes, hopefully you have a cluster for redundancy, but a single KMS server will still work. This is necessary in order for the ESXi hosts within the vSAN cluster to be able to obtain both the Host Encryption Key (let’s call this the HEK) and the Key Encryption Key (KEK).

Wait!!! Why do we have to go to KMS again if we already received the keys?!?!

See, the Host Encryption Key and the Key Encryption Key live in a non-persistent state in memory, in the key cache. When a vSAN node (ESXi server) is rebooted, these keys go away (poof… gone). So, when vSAN encryption is enabled and the hosts are rebooted, each host needs to go out to the KMS and get those keys. So you may want to make sure that your hosts can talk to the KMS, and that the KMS has your keys, before you consider rebooting your hosts. Oh yeah, it goes without saying that the KMS should NOT be in the vSAN cluster, and you can see why.
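A quick sanity check before a reboot is to confirm that each host can actually reach the KMS on its KMIP port. ESXi ships with nc for simple port tests; the KMS hostname below is made up, and 5696 is the default KMIP port (yours may differ):

# Test TCP reachability from the ESXi host to the KMS (5696 is the default KMIP port)
nc -z kms01.lab.local 5696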

Once the HEK is obtained, the host reaches crypto-safe mode, which allows the host to get to a good operational state and continue with the boot process, at which point it asks the KMS for the KEK. If the host is not able to obtain these keys from the KMS cluster, it will continue to boot; however, the disks will not be mounted, because a host that never reached crypto-safe mode and never obtained the KEK from the KMS cannot unwrap the Data Encryption Key (DEK).
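After a reboot, you can quickly check from the host whether the vSAN devices came back mounted. esxcli vsan storage list shows the devices the host has claimed (including whether they are in CMMDS); the debug sub-command below is my assumption for 6.6 and later builds, so verify it exists on yours:

# Devices claimed by vSAN on this host, including their CMMDS state
esxcli vsan storage list
# Assumed to be available on vSAN 6.6 and later: per-disk health summary
esxcli vsan debug disk list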

In a scenario where hosts are being updated/upgraded via VUM, in most cases the hosts will do a rolling reboot as part of the VUM process. With vSAN versions 6.7 and prior, rolling reboots of hosts via VUM were allowed regardless of the state of the connection with the KMS and the availability of keys. As already described, these keys are necessary in order to properly mount the drives on each host during a reboot.

In vSAN 6.7 Update 1, VMware has added guard rails to prevent the disks of multiple hosts from unmounting due to lack of connectivity with the KMS or accidental key deletion. During an upgrade operation, VUM will place a host in Enhanced Maintenance Mode (EMM), perform updates, reboot, and exit EMM. If after a reboot the host is not able to reach crypto-safe mode, the host will not exit EMM, stalling the VUM progress. In this case the host’s drives are not mounted because it could not reach crypto-safe mode; if the upgrade were allowed to continue, all the other hosts would be upgraded as well, and all the drives backing the vSAN datastore would end up unmounted.

This new guard rail helps prevent losing all vSAN storage due to connectivity issues or accidental changes to the KMS and key availability. It also highlights one of the benefits of having an HCI solution embedded in the kernel: the ease of orchestration with other vSphere components and features makes vSAN even more appealing.