Cisco Live 2015: Day 0

Cisco_Live_0

It is conference season, and Cisco's 2015 conference is in San Diego, CA. The weather here is awesome so far. I woke up at 4am EDT, so it has been a long day already, but I'm excited about this year's conference.

The convention center has a nice flow to it, in my opinion, and the Cisco store has a lot of cool stuff, including tons of books. I already got one book and am going back tomorrow for more. A lot of tracks are included as Cisco keeps expanding its portfolio. As a datacenter, virtualization, and storage guy, I'll be attending a lot of UCS, ACI, and security sessions.

I've already run into old acquaintances and met new people. Definitely a great place to network.

Navigating the conference is pretty easy. Shuttles constantly go back and forth to and from the hotels, and there are many restaurants within walking distance of the convention center. Make sure you download the Cisco Live app; not only will you be able to see your agenda, but you can also see maps of the convention center and get directions (similar to Google Maps) to your sessions. There are tons of WAPs across the convention center, so for once you may be able to get your email while at a conference full of geeks.

More info on Cisco Live will be added throughout the conference. Stay tuned.

Deploying VVols on NTAP

VVols

Even before the release of vSphere 6, the hype for VVols has been on the upswing, and for good reason. VVols allow for granular management of VM objects within one or more datastores based on policies. I have written a few blogs about VVols, and also about the requirements within NetApp here. I tend to write about the integration between the two vendors because I really like, and believe in, their technology, and I am an advocate for both.

Anyway, deploying VVols on NetApp requires first understanding how it all works. So, with that in mind, don't forget that this is a software solution that relies on policies from both the VMware side and the NetApp side. As I explained in previous posts, deploying VVols on NetApp has certain requirements, but the one I'll focus on is the VASA Provider (VP). The VP acts as the translator between the VMware world and the storage array world, regardless of the storage vendor. Some storage vendors integrate the VP within the array; for others, it comes as an OVA.

So, from the storage side, you first need to deploy the VP and, in this case, VSC, which is NetApp's storage console within VMware. After all components have been installed, VASA will become your best friend: it will provision not only VVol datastores, but also the volumes within NetApp, automatically create exports with the proper permissions, and create the PE, among other things. The PE is a logical I/O proxy that the host sees and uses to talk to VVols on the storage side. In the case of an NFS (NAS) volume, the PE is nothing more than a mount point; in the case of iSCSI (SAN), the PE is a LUN. Again, the VASA Provider will automatically create the PE for you when you provision a VVol datastore.
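To make the division of labor concrete, here is a toy sketch of the array-side steps the VP automates when you provision a VVol datastore. The function and step names are hypothetical, purely for illustration; the real VP drives NetApp's APIs, not strings:

```python
def provision_vvol_datastore(name, protocol):
    """Toy model of what the VASA Provider automates for a new VVol
    datastore (hypothetical function, not NetApp's actual API)."""
    steps = [f"create FlexVol backing datastore '{name}'"]
    if protocol == "nfs":
        steps.append("create export with proper host permissions")
        steps.append("create PE as an NFS mount point")
    elif protocol == "iscsi":
        steps.append("map the hosts' igroup")
        steps.append("create PE as a LUN")
    else:
        raise ValueError(f"unsupported protocol: {protocol}")
    return steps

# The PE type differs by protocol, as described above:
print(provision_vvol_datastore("vvol_ds1", "iscsi")[-1])  # create PE as a LUN
```

The point is simply that all of these steps happen behind one wizard click.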

Let's start the rollout. The assumptions here are that you have already deployed VSC 6.0 and VASA 6.0, and are running vSphere 6.0 or later. On the NetApp side, it is assumed that you have ONTAP 8.2.1 or later, and that you have already created an SVM for the protocol of your preference, whether that is iSCSI, FCP/FCoE, or NFS. Up to you.

The first thing you should do if you have both NetApp and VMware, or FlexPod for that matter, is to make sure your VMware hosts have the recommended settings from NetApp. To do this, go to VSC within the VMware Web Client, click Summary, and click on the settings that are not green. VSC will open a new window and allow you to deploy those settings to the hosts. You should do this regardless of whether you are deploying VVols.

VSC_settings


The next step is to create a Storage Capability Profile (SCP) within VSC/VASA. Within VSC, go to VASA Provider for cDOT and select Storage Capability Profiles. Here you will create your own profile describing how you would like to group your storage, based on specific criteria. For example, for a high-performance profile, you might select a specific storage protocol, SSD drives, dedupe options, replication options, etc. These are the criteria VASA will use to create your storage volumes when deploying VVol datastores, and if you already created a volume, these are also the criteria used to qualify it as compliant for the desired VVol storage.

I created an SCP that required the iSCSI protocol and SAS drives; the rest was set to Any. This results in VVols being created on the SAS drives only, under the SVM that has the iSCSI protocol and LIFs configured. If there are no iSCSI SVMs, this would not work. Pretty self-explanatory, I hope.

SCP_iSCSI
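The qualification logic can be sketched in a few lines. This is a toy model of the idea, assuming a simple "every criterion must match, and Any matches everything" rule; it is not VASA's actual implementation:

```python
def matches(scp, backing):
    """True when a storage backing satisfies every SCP criterion;
    a criterion set to "any" matches everything. Toy model of the
    qualification logic, not VASA's real code."""
    return all(v == "any" or backing.get(k) == v for k, v in scp.items())

# The SCP from this post: iSCSI protocol, SAS drives, everything else Any
scp_iscsi_sas = {"protocol": "iscsi", "drive_type": "sas", "dedupe": "any"}

backings = [
    {"svm": "svm_iscsi", "protocol": "iscsi", "drive_type": "sas"},
    {"svm": "svm_nfs",   "protocol": "nfs",   "drive_type": "ssd"},
]
qualified = [b for b in backings if matches(scp_iscsi_sas, b)]
# Only svm_iscsi qualifies; the NFS/SSD backing is filtered out.
```

This is why, with no iSCSI SVMs configured, the qualified list would come back empty and the deployment could not proceed.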

Now that the SCP is created, we can provision a VVol datastore. Right-click on the cluster or host, select "VASA Provider for clustered Data ONTAP", then Provision VVol datastore.

Provision_VVol

Start the wizard, type the name of the VVol datastore, and select the desired protocol. Select the SCP that you want to include within the VVol datastore; the qualified SVMs will be available if they match the SCP you selected. For example, if you selected an SCP whose protocol calls for iSCSI and you only have one iSCSI SVM, that will be the only option, and the NFS or FCP/FCoE SVMs will not appear. If there is a qualified volume, you may select it, or you may select None to create a new one. If creating a new volume, choose the name, SCP, and other options just as you would from NetApp's System Manager. You will also have the ability to add or create more volumes for the VVol datastore. The last step is to select a default SCP that VMs will use if they do not have a VMware profile assigned to them.

VVol_Complete

This will cause VASA to talk to your NetApp array and create a volume based on the SCP specified; at the same time, VASA will create the PE, which in this case is a LUN. You can add or remove storage from the VVol datastore you created at a later time, simply by right-clicking the VVol datastore and going to the VASA settings. Below you can see the PE that the VP created within the volume during the VVol deployment process.

VVol_PE


The next step is to create a VM Storage Policy that points to the SCP. Once this policy is attached to a VM, it will "tell" the VM which datastore it is supposed to be on. So if you have a SQL VM on a high-performance policy, you know that as long as the VM is in compliance, it will run on the high-performance profile you created. To create the VM policy within the Web Client, click VM Storage Policies, select New (the scroll icon with the green + sign), give it a name, and select the vCenter. For the rule set, select the VP from the drop-down box for "Rules based on data services" and add a rule based on profile name. For the profile name option, select the SCP you created initially under VASA. This will show you which storage is compatible with the rule. Since I selected the iSCSI SCP, it shows me the iSCSI VVol datastore I have already created. This creates the VM policy that you can assign to individual VMs.

VSP_Rule1



You can also have different storage policies for the Home folder and VMDK.

VM_Policy


VM_Storage_Policy
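The compliance idea behind these policies boils down to a simple check: does the SCP backing the VM's datastore match the SCP its policy points at? Here is an illustrative sketch of that idea, with made-up names; it is not VMware's actual SPBM engine:

```python
def check_compliance(vms, datastores):
    """A VM is compliant when the SCP backing its datastore matches the
    SCP its storage policy references. Illustrative model only."""
    scp_of = {d["name"]: d["scp"] for d in datastores}
    return {vm["name"]: scp_of[vm["datastore"]] == vm["policy_scp"]
            for vm in vms}

datastores = [{"name": "vvol_iscsi", "scp": "iscsi_sas"},
              {"name": "vvol_nfs",   "scp": "nfs_ssd"}]
vms = [{"name": "sql01", "datastore": "vvol_iscsi", "policy_scp": "iscsi_sas"},
       {"name": "web01", "datastore": "vvol_nfs",   "policy_scp": "iscsi_sas"}]

status = check_compliance(vms, datastores)
# sql01 is compliant; web01 sits on the wrong storage for its policy.
```

That is the whole value proposition: the policy tells you at a glance whether your SQL VM is actually living on the storage tier you promised it.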


Pretty cool, right?!?

I hope this helps you get started with VVols.

SHIFT VMs between Hypervisors

computer_key_Shift

As more and more hypervisors emerge on the market, and more features are added to current hypervisor offerings, businesses may opt to migrate between vendors to save money, add features, consolidate to a single platform, or simply follow a strategic plan. The migration is costly, complex, time consuming, and disruptive, as it requires downtime for the VMs being migrated, which is often not an option.

If you use NetApp as your storage vendor, a solution to this problem is now available to you. NetApp has announced the release of OnCommand Shift, a tool that enables the migration of Hyper-V VMs into a VMware environment and vice versa. The conversion is automated, requires minimal intervention from the user, and takes a fraction of the time that other tools would take otherwise. OnCommand Shift supports VMware ESX/ESXi 5.0 through 6.0 and Microsoft Hyper-V 2008 R2 through 2012 R2.

How does OnCommand Shift work?

OnCommand Shift utilizes NetApp's FlexClone technology, which allows the target VM to be created within seconds or minutes. This is accomplished by creating a VMDK from, for example, a VHDX file by simply creating pointers to the existing data, rather than copying and duplicating that data in the storage array. So, regardless of the size of the VMs, the time necessary to create the new VMs will be minutes or even seconds, rather than hours or days.

Shift_FlexClone
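The pointer idea can be illustrated with a toy model. A clone starts out referencing the parent's blocks rather than copying them, and only blocks that are written to diverge. This is purely an illustration of the concept, assuming a block-map abstraction; it is not FlexClone's actual implementation:

```python
class FlexVolModel:
    """Toy model of pointer-based cloning: the clone references the
    parent's blocks instead of copying them, and only new writes
    allocate new data. Illustration only, not FlexClone internals."""
    def __init__(self, blocks):
        self.blocks = blocks                     # block number -> contents

    def clone(self):
        return FlexVolModel(dict(self.blocks))   # copy pointers, not data

    def write(self, block_no, data):
        self.blocks[block_no] = data             # diverge only where written

parent = FlexVolModel({0: "OS image", 1: "guest data"})
child = parent.clone()                           # near-instant: no data moved
child.write(1, "converted disk metadata")        # only this block diverges
```

After the write, the child still shares block 0 with the parent, which is why clone creation time is independent of VM size.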

OnCommand Shift creates a virtual hard disk file in the format of the destination hypervisor, including headers and metadata, and writes only the differences in file format instead of copying all the data from the source. Shift also collects and stores all VM information and other VM settings, removes any hypervisor-proprietary tools, and takes a backup of the VM prior to the conversion. It then proceeds to convert the VMs, say from Hyper-V to VMware, and restores the network interface configuration as well as other settings.

Shift_Overview

Requirements

  • NetApp FAS2440 or higher
  • NetApp cDOT 8.2 or later
  • NetApp NFS and CIFS/SMB license for the controllers
  • Server to control the migration process (Physical or Virtual)
    • 2 vCPUs
    • 4GB RAM
    • 250GB storage Minimum (500GB preferred)
  • Data ONTAP PowerShell Toolkit version 3.0.1 or higher
  • PowerCLI 5.1 or higher
  • Microsoft Hyper-V PowerShell cmdlets
  • Microsoft .NET Framework 3.5


Golden Nuggets: #2 NetApp VSC – Provisioning

Wizard_Lego

I previously wrote a post or two about NetApp's Virtual Storage Console for VMware vCenter, including its uses and how to install it. In this post I would like to highlight its importance in a NetApp/VMware or FlexPod environment.

NetApp's VSC is a very handy tool that will allow you to achieve many tasks in an automated fashion rather than doing them manually; therefore, it will save you time and reduce the chance of human error. During the configuration and provisioning process, the human error factor can result in a lot of frustration for the admin/engineer, as troubleshooting often ends with the discovery of a simple step that was missed. VSC is not a new tool, but there is a new version (6.0) that introduces new features and fixes. VVols require VSC 6.0, by the way.

One of the coolest features of VSC (IMHO) is provisioning storage from the VMware Web Client. If we were to create an NFS datastore for VMware manually, the process would include creating a volume, granting the correct permissions on the export, and then mounting the datastore on each host. This takes quite a bit of time and requires jumping between UIs.

VSC allows you to do all the aforementioned steps from the VMware client (Web or C#) in one easy-to-use provisioning wizard. You can provision datastores, volumes, exports, and permissions by simply right-clicking the cluster or an individual host. If you do this at the cluster level, VSC will create the volume and exports, which is cool, but the coolest part is that it will also add the hosts' IP addresses to the export with the necessary permissions, and it creates and mounts the datastore on all hosts within the cluster (NFS in this case). That alone is a good reason to have VSC, although there are many other tasks that VSC is capable of.
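As a sketch, the cluster-level workflow looks something like the following. The helper and step names are hypothetical; VSC actually drives the NetApp and vSphere APIs rather than producing a to-do list:

```python
def provision_nfs_datastore(volume, hosts):
    """Sketch of the steps VSC automates for a cluster-level NFS
    provision (hypothetical helper, illustrative only)."""
    actions = [f"create volume {volume} with the chosen size and aggregate"]
    for host in hosts:                   # export permissions, per host IP
        actions.append(f"add export rule for {host['ip']} (rw, root access)")
    for host in hosts:                   # then mount on every host
        actions.append(f"mount {volume} as a datastore on {host['name']}")
    return actions

cluster = [{"name": "esx01", "ip": "10.0.0.11"},
           {"name": "esx02", "ip": "10.0.0.12"}]
actions = provision_nfs_datastore("nfs_ds01", cluster)
# 1 volume creation + one export rule and one mount per host
```

Notice how the per-host steps multiply with cluster size; that repetition is exactly where manual provisioning invites mistakes and where VSC earns its keep.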

One of the questions I frequently see in both the VMware and NetApp communities relates to errors and failures when mounting an NFS datastore in vCenter. Often it comes down to the permissions on the exports; VSC does all this for you and prevents such issues.

Note: Please refer to NetApp’s Interoperability Matrix Tool (IMT) to determine which version you need. Specific versions are needed for VMware’s Web client and the same goes for the vSphere Client.

VSC_Cluster


VSC options for Cluster

 

VSC_Host


VSC options for Host


Golden Nuggets: #1 vSphere vFlash

Tools

With so many tools and features from many different vendors, it is almost impossible to research them all and find the useful ones that make your job easier. Some features also provide a faster/cheaper way to solve common problems without spending a fortune. Unfortunately, these "Golden Nuggets" are often underutilized. I'll post a few quick tools that may make a big difference in someone's environment. As always, test before deploying to production.

One of the cool features introduced in vSphere 5.5 was vFlash, which replaced swap-to-SSD from previous versions, but I won't get into that. Essentially, this is a flash-based read cache on the host that functions at the VMDK level for a specific VM. The feature works by adding flash resources such as PCIe cards or SSD drives to create a vFlash pool of resources at the host level, and configuring the amount of storage to be used for host swap cache. The cache sits in the data path of the VMDK, between the host and the storage array.

Once the host is configured, you can expand a virtual disk in the VM's properties in the Web Client and assign the amount of cache for that particular VMDK, with the option to select the block size (4KB – 1024KB). So, for each pool, chunks are carved out and reserved for a specific VMDK on the host where the VM is located.

vFlash_vmdk
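The reservation bookkeeping is simple to model: each VMDK carves a chunk out of the host's pool, and a reservation that does not fit is rejected. This is a toy model of the arithmetic, with made-up names, not vSphere's API:

```python
def reserve_cache(pool_gb, reservations, vmdk, size_gb):
    """Reserve part of a host's vFlash pool for one VMDK and return the
    remaining capacity. Toy model of the bookkeeping, not vSphere's API."""
    used = sum(reservations.values())
    if used + size_gb > pool_gb:
        raise ValueError("not enough vFlash capacity on this host")
    reservations[vmdk] = size_gb
    return pool_gb - used - size_gb

host_cache = {}                                            # vmdk -> reserved GB
left = reserve_cache(100, host_cache, "sql01.vmdk", 40)    # 60 GB left
left = reserve_cache(100, host_cache, "web01.vmdk", 30)    # 30 GB left
# A further 40 GB request would now fail: only 30 GB remain in the pool.
```

Keeping some headroom in the pool matters for the migration scenario below, since a destination host must be able to fit the incoming reservation too.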

As far as data locality and features like HA, DRS, and vMotion go, it is possible to migrate the cached data to another host while migrating a VM, as long as the other hosts have also been configured with vFlash. You may also specify not to migrate the cached data during a migration.

Requirements:

  • Check HCL for compatible Flash devices
  • vCenter 5.5 or later (VCSA or Windows)
  • VM hardware version 10 or later
  • vSphere vMotion if using DRS
    • Requires vFlash on hosts within the cluster


Implementing vFlash can help resolve or minimize performance degradation for read-intensive applications, or simply put local resources at the host level to work as read cache instead of, or in addition to, storage-side read caching solutions. Having local cache eliminates the "extra hop" over the network to reach cached data on the storage array.

This is a high-level view of vFlash, but in my opinion it is a nice feature that can get rid of some headaches and fire drills.


vFlash_highLevel

Image source – VMware doc (Rawlinson)