Bug Catcher: SRA/SRM testFailoverStart

I decided to write this post because I spent quite a bit of time troubleshooting this problem, only to find out that it was a bug, grrrr. So, hopefully this will save someone some hair tearing and time as well.

I was recently implementing SRM 6.1 on a NetApp cluster running clustered Data ONTAP 8.3.1. Configuration went flawlessly, and I was happy that I might complete a project early on a Friday. We decided to run a failover test to the DR site, and this is where the issue came about.

The test job would fail almost immediately after starting, with the error “Storage ports not found”. After double-checking all the settings, I had no luck finding any resources on this issue or its resolution. I checked the SRA ontap_config file to make sure the IPv4 option (isipv4) was set to match the IP format of the NFS configuration within SRM, and I checked that the firewall on the NetApp was set properly to allow communication, but everything looked correct.

I learned later on that SRA 2.1 cannot detect NetApp interfaces that are set to mgmt. With NetApp LIFs, you have the option to “bundle” your data and mgmt interfaces within the same LIF (NFS, CIFS). When you do, those interfaces are assigned the mgmt firewall-policy rather than the data firewall-policy, which is fine unless you are trying to use SRM/SRA in that setup.
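You can spot the affected interfaces from the clustershell by listing each LIF’s firewall policy (a sketch; the SVM name is a placeholder):

```
network interface show -vserver [vserver_name] -fields firewall-policy
```

Any NFS data LIF showing mgmt here is one that SRA 2.1 will fail to detect.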


The fix:

  • Create a separate/dedicated mgmt LIF for your NFS SVM (per SVM)
    • Otherwise you are removing all mgmt interfaces for that SVM without a replacement
  • Remove the mgmt option for the NFS data LIFs
  • Change the firewall-policy for the NFS data LIFs from mgmt to data
    • You can use this command to do so:

network interface modify -vserver [vserver_name] -lif [data_lif_name] -firewall-policy data
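As for the first bullet, the dedicated mgmt LIF can be created along the following lines (a sketch; the names, port, and addresses are placeholders you would adjust for your environment):

```
network interface create -vserver [vserver_name] -lif [mgmt_lif_name] -role data -data-protocol none -home-node [node_name] -home-port e0c -address 192.168.1.20 -netmask 255.255.255.0 -firewall-policy mgmt
```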

Also make sure to check the ontap_config file for SRA. This is located under C:\Program Files\VMware\VMware vCenter Site Recovery Manager\storage\sra\CMODE_ONTAP. If the IP addresses for the data LIFs are IPv4, the isipv4 option needs to be set to YES.
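For reference, the relevant line in ontap_config should end up looking something like this (illustrative only; check the SRA documentation for the exact key format):

```
isipv4=YES
```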





The bug should be fixed with SRA 3… coming soon to a datacenter near you.

Deploying VVols on NTAP

Even before the release of vSphere 6, the hype for VVols has been on the upswing, and for good reason. VVols allow for granular management of VM objects within one or more datastores based on policies. I have written a few blogs about VVols, and also about the requirements within NetApp here. I tend to write about the integration between the two vendors as I really like, and believe in, their technology, and I am an advocate for both.

Anyway, deploying VVols on NetApp first requires understanding how this all works. With that in mind, don’t forget that this is a software solution that relies on policies from both the VMware side and the NetApp side. As I explained in previous posts, deploying VVols on NetApp has certain requirements, but the one I’ll focus on is the VASA Provider (VP). The VP acts as the translator between the VMware world and the storage array world, regardless of the storage vendor. Some storage vendors integrate the VP within the array; for others it comes as an OVA.

So, from the storage side, you first need to deploy the VP, and in this case also VSC, which is NetApp’s storage console within VMware. After all components have been installed, VASA will become your best friend: it will provision not only VVol datastores, but also the volumes within NetApp, automatically create exports with the proper permissions, and create the PE, among other things. The PE (protocol endpoint) is a logical I/O proxy that the host sees and uses to talk to VVols on the storage side. In the case of an NFS (NAS) volume, the PE is nothing more than a mount point; in the case of iSCSI (SAN), the PE is a LUN. Again, the VASA Provider will automatically create the PE for you when you provision a VVol datastore.
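If you want to verify what the host ends up seeing, ESXi 6.0 can list the PEs it has discovered from the ESXi shell:

```
esxcli storage vvol protocolendpoint list
```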

Let’s start the roll-out. The assumptions here are that you have already deployed VSC 6.0 and VASA Provider 6.0, and currently have vSphere 6.0 or later. On the NetApp side, it is assumed that you have at least ONTAP 8.2.1, and that you have already created an SVM with your protocol of preference, whether that is iSCSI, FCP/FCoE, or NFS; up to you.

The first thing you should do if you have both NetApp and VMware, or FlexPod for that matter, is to make sure your VMware hosts have the recommended settings from NetApp. To do this, go to VSC within the VMware Web Client, click Summary, and click on the settings that are not green. VSC will open a new window and allow you to deploy those settings to the hosts. You should do this regardless of whether you are deploying VVols or not.



The next step is to create a Storage Capability Profile (SCP) within VSC/VASA. Within VSC, go to VASA Provider for cDOT and select Storage Capability Profiles. Here you create your own profile describing how you would like to group your storage, based on specific criteria. For example, for a high-performance profile you might select a specific storage protocol, SSD drives, dedupe options, replication options, etc. These are the criteria VASA will use to create your storage volumes when deploying VVol datastores; if you have already created a volume, these are also the criteria used to qualify it as compliant for the desired VVol storage.

I created an SCP that required the iSCSI protocol and SAS drives; the rest was set to Any. This results in VVols being created on the SAS drives only, and under an SVM that has the iSCSI protocol and LIFs configured. If there are no iSCSI SVMs, this would not work. Pretty self-explanatory, I hope.


Now that the SCP is created, we can provision a VVol datastore. Right-click on the cluster or host and select “VASA Provider for clustered Data ONTAP”, then Provision VVol datastore.


Start the wizard, type the name of the VVol datastore, and select the desired protocol. Then select the SCP you want to include within the VVol datastore; an SVM will be available for selection only if it matches the SCP. For example, if you selected the SCP/protocol that calls for iSCSI and you only have one iSCSI SVM, that will be the only option, and the NFS or FCP/FCoE SVMs will not appear. If there is a qualified volume, you may select it, or you may select None to create a new one. If creating a new volume, choose the name, SCP, and other options just as you would from NetApp’s System Manager. You also have the ability to add/create more volumes in the VVol datastore. The last step is to select a default SCP that VMs will use if they do not have a VMware storage policy assigned to them.


This causes VASA to talk to your NetApp array and create a volume based on the specified SCP; at the same time, VASA creates the PE, which in this case is a LUN. You can add/remove storage in the VVol datastore you created at a later time simply by right-clicking the VVol datastore and going to the VASA settings. Below you can see the PE that the VP created within the volume that was created during the VVol deployment process.
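On the NetApp side, you can confirm the PE LUN exists by listing the LUNs in the SVM from the clustershell (the SVM name is a placeholder):

```
lun show -vserver [vserver_name]
```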



The next step is to create a VM Storage Policy that points to the SCP. Once this policy is attached to a VM, it will “tell” the VM which datastore it is supposed to be on. So if you have a SQL VM on a high-performance policy, you know that as long as the VM is in compliance, it will run in the high-performance profile you created. To create the VM policy within the Web Client, click on VM Storage Policies and select New (the scroll icon with the green + sign), give it a name, and select the vCenter. For the rule set, select the VP from the drop-down box for “Rules based on data services” and add a rule based on profile name. For the profile name option, select the SCP you created initially under VASA. This will show you what storage is compatible with this rule. Since I selected the iSCSI SCP, it shows me the iSCSI VVol datastore I already created. This creates the VM policy that you can assign to individual VMs.




You can also have different storage policies for the Home folder and VMDK.





Pretty cool, right?!?

I hope this helps you get started with VVols.

SHIFT VMs between Hypervisors

As more and more hypervisors emerge in the market, and more features are added to existing hypervisor offerings, businesses may opt to migrate between vendors to save money, add features, consolidate on a single platform, or simply follow a strategic plan. Migration is costly, complex, time-consuming, and disruptive, as it requires downtime for the VMs being migrated, which is often not an option.

If you use NetApp as your storage vendor, a solution to this problem is now available to you. NetApp has announced the release of OnCommand Shift, a tool that enables the migration of Hyper-V VMs into a VMware environment and vice versa. The conversion is automated, requires minimal intervention from the user, and takes a fraction of the time that other tools would take. OnCommand Shift supports VMware ESX/ESXi 5.0 -> 6.0 and Microsoft Hyper-V 2008 R2 -> 2012 R2.

How does OnCommand Shift work?

OnCommand Shift utilizes NetApp’s FlexClone technology, which allows the target VM to be created within seconds or minutes. This is accomplished by creating a VMDK, for example, from a VHDX file by simply creating pointers to the existing data rather than copying and duplicating that data in the storage array. So, regardless of the size of the VMs, the time necessary to create the new VMs will be minutes or even seconds rather than hours or days.
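For a sense of what a FlexClone pointer-based copy looks like at the CLI, a single file can be cloned within a volume along these lines (a sketch; the SVM, volume, and file names are made up, and Shift drives this for you behind the scenes). Note that the clone itself only shares the existing data blocks; Shift separately writes the format-specific headers and metadata on top:

```
volume file clone create -vserver svm1 -volume vm_vol -source-path win2012.vhdx -destination-path win2012.vmdk
```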


OnCommand Shift creates a virtual hard disk file in the format of the destination hypervisor, including headers and metadata, and writes only the differences in file format instead of copying all the data from the source. Shift also collects and stores all VM information and other VM settings, removes any hypervisor-proprietary tools, and takes a backup of the VM prior to the conversion. It then converts the VMs, say from Hyper-V to VMware, and restores the network interface configuration as well as other settings.



Requirements:

  • NetApp FAS2440 or higher
  • NetApp cDOT 8.2 or later
  • NetApp NFS and CIFS/SMB license for the controllers
  • Server to control the migration process (Physical or Virtual)
    • 2 vCPUs
    • 4GB RAM
    • 250GB storage minimum (500GB preferred)
  • Data ONTAP PowerShell Toolkit version 3.0.1 or higher
  • PowerCLI 5.1 or higher
  • Microsoft Hyper-V PowerShell cmdlets
  • Microsoft .NET Framework 3.5


Golden Nuggets: #2 NetApp VSC – Provisioning

I previously wrote a post or two about NetApp’s Virtual Storage Console (VSC) for VMware vCenter, including its uses and how to install it. In this post I would like to highlight its importance in a NetApp/VMware or FlexPod environment.

NetApp’s VSC is a very handy tool that will allow you to achieve many tasks in an automated fashion rather than doing them manually; therefore, it will save you time and eliminate the possibility of human error. During configuration and provisioning, human error can cause a lot of frustration for the admin/engineer, as troubleshooting often ends with finding a simple step that was missed. VSC is not a new tool, but there is a new version (6.0) that introduces new features and fixes. VVols require VSC 6.0, by the way.

One of the coolest features of VSC (IMHO) is provisioning storage from the VMware Web Client. If we were to create an NFS datastore for VMware, the manual process would include creating a volume, granting the correct permissions on the export, and then mounting the datastore on each host. This takes quite a bit of time and requires jumping between UIs.

VSC lets you do all the aforementioned steps from the VMware client (Web or C#) in one easy-to-use provisioning wizard. You can provision datastores, volumes, exports, and permissions by simply right-clicking the cluster or an individual host. If you do this at the cluster level, VSC creates the volume and exports, which is cool, but the coolest part is that it also adds the hosts’ IP addresses with the necessary permissions to the export, and it creates and mounts the datastore on all hosts within the cluster (NFS in this case). That alone is a good reason to have VSC, albeit there are many other tasks that VSC is capable of.
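For contrast, the manual path that VSC replaces looks roughly like this (a sketch with made-up names and addresses; one export rule per host IP, and one mount per host):

```
# On the NetApp clustershell: create the volume and export permissions
volume create -vserver svm1 -volume ds01 -aggregate aggr1 -size 500g -junction-path /ds01
export-policy rule create -vserver svm1 -policyname default -clientmatch 192.168.10.11 -rorule sys -rwrule sys -superuser sys

# On each ESXi host: mount the export as an NFS datastore
esxcli storage nfs add -H 192.168.10.50 -s /ds01 -v ds01
```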

One of the frequently asked questions I see in both the VMware and NetApp communities relates to errors and failures when mounting an NFS datastore in vCenter. Oftentimes it comes down to the permissions within the exports, so VSC doing all this for you prevents such issues.

Note: Please refer to NetApp’s Interoperability Matrix Tool (IMT) to determine which version you need. Specific versions are needed for VMware’s Web client and the same goes for the vSphere Client.









VSC options for Cluster







VSC options for Host



Get your NetApp – VVols while they are HOT

Today, the long-awaited NetApp VASA Provider (VP) and the new shiny VSC console were released to general availability.

So what do VASA and VSC have to do with VVols? Everything. In previous posts I talked about both NetApp’s VSC and the VASA Provider for VMware here. These offerings, along with VAAI, provide tight integration between VMware and NetApp. The transition from VMware’s C# (fat) Client to the Web Client created the need for updated versions, and this is how VSC 6.0 and VASA Provider 6.0 were born.

Now to VVols. In order to deploy VVols with NetApp, there are a few requirements.

  • vSphere 6.0 (or later)
  • NetApp Clustered Data ONTAP 8.2.1 or later (thanks Nick for the clarification)
  • VSC 6.0
  • NetApp VASA Provider 6.0

You can see now why this announcement is such a big deal: both VSC and the VP make up the engine that powers the VVols machine. Both vSphere 6.0 and cDOT 8.2.1 have been out for a while, but those who wanted to test-drive VVols with GA code could not do so until today without using beta code.

VSC brings an additional enhancement with its new version: PowerShell cmdlets for most VSC features. These cmdlets, along with PowerCLI and NetApp’s PowerShell Toolkit, can provide tighter integration and automation between NetApp and VMware.


You can download VSC and VP from the links below: