VMworld 2014 – Day 2

Day 2

It is a little funny to see people moving a bit slower today; I should say a lot slower. That just shows you how busy this conference is. Lots of great stuff.

Day 2 started with a keynote recap of day one covering EVO, vCloud Air (formerly vCHS), vRealize, NSX, etc., and of course the liquid business models that Pat talked about yesterday.

End-user computing was reviewed a bit, in particular the new underlying architecture. The acquisition of CloudVolumes was also revisited to highlight the delivery of applications. Along with EUC and mobility, it made sense for VMware to acquire AirWatch, which in my opinion was a very smart move and a much-needed piece of the mobility puzzle.

There were some really great demos of CloudVolumes and especially EVO:RAIL. EVO:RAIL has four independent nodes in each physical box and, in the demo, migrated VMs in and out of a node automatically while that node was being upgraded. It not only has built-in redundancy, but it also utilizes VSAN technology and comes with Log Insight already installed. This is a really cool solution, and the UI is very intuitive and fast. Yes, a web UI from VMware that is actually fast.

vSphere 6.0 beta enhancements were introduced, such as FT for VMs with up to 4 vCPUs. Cross-datacenter vMotion is another one of the new features in 6.0. These are some really great enhancements, along with the noticeably faster and overall better Web Client.

VSAN – Part 3

So, I am anxiously waiting for hardware to arrive in order to make my hosts VSAN compatible. I ordered 32GB SD cards to install vSphere on, one SSD drive per host, and one 4-port CNA per host for the 10GbE interfaces that will carry VSAN traffic. While I wait for the hardware, I decided to make some time now to review my design rather than making time later to do things over again. In search of further knowledge, I tuned in to listen to my compadre Rawlinson Rivera (VMware) speak about VSAN best practices and use cases. I was very happy to learn that I am on the right path; however, I found a couple of areas where I can tweak my design. In Part Deux (2) of the VSAN topic, I spoke about using 6 magnetic disk drives and one SSD drive per host. There is nothing wrong with that design, BUT Rawlinson brought up an excellent point: when thinking about VSAN, think wide for scalability and keep failure domains in mind.

One thing I did not talk about is disk groups. Each host can have up to 5 disk groups, each with up to 7 magnetic drives + 1 SSD. There is no way I can fit 40 drives in a 1U HP DL360p, but I can, however, create several disk groups. At least 2 of them. So, I am modifying my plan to have 2 disk groups of 3 magnetic disk drives + 1 SSD per group, per host. This will allow me to sustain an SSD failure within a host without affecting too many VMs on that host, since the other disk group has its own SSD and is not touched by the failure. If you think about it, with 1 disk group, losing the only SSD takes out all of the magnetic drives behind it. With 2 disk groups, losing one SSD only affects the drives in that group and that is it. So, by breaking my 6 magnetic drives into 2 disk groups, I'm cutting the failure domain in half: an SSD failure now takes half of my disks offline instead of all of them. The only caveat is that I will need to buy an extra SSD drive per host.
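
If you set the cluster's disk claiming mode to Manual, a layout like this can also be built by hand from each host's ESXi shell. This is just a sketch, not a step-by-step guide; the device identifiers in angle brackets are placeholders, so substitute the IDs that esxcli actually reports on your own hosts.

# List local devices first and note which ones show "Is SSD: true"
esxcli storage core device list

# Disk group 1: one SSD plus three magnetic disks
esxcli vsan storage add -s <ssd-1-device-id> \
  -d <hdd-1-id> -d <hdd-2-id> -d <hdd-3-id>

# Disk group 2: the second SSD plus the remaining three magnetic disks
esxcli vsan storage add -s <ssd-2-device-id> \
  -d <hdd-4-id> -d <hdd-5-id> -d <hdd-6-id>

# Verify the two disk groups were created
esxcli vsan storage list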

On the network side, besides having 10GbE connections, I'll need to enable multicast on the switches in order for VSAN to work correctly; it uses multicast for heartbeats and other VSAN-related communication. My plan is to use the SFP+ ports on the Cisco 3750-X switches, create a separate VLAN for VSAN traffic, and then disable IGMP snooping on that VLAN so multicast packets reach all ports on that VLAN. Actually, now that I think about it, I will use TwinAx cables instead of SFP+ optics and fiber; the optics are too expensive and I'm trying to keep costs down. Here is an example of disabling IGMP snooping on a specific VLAN (5 in this case) rather than globally.

switch# config t
switch(config)# no ip igmp snooping vlan 5
switch(config)# end
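
The switch side is only half of it: the VMkernel interface sitting on that VLAN also has to be tagged for Virtual SAN traffic. Normally you would do this from the Web Client when creating the VMkernel adapter, but here is a rough sketch from the host shell, assuming the VSAN VMkernel port is vmk2 (adjust to whatever you create on your VDS):

# Tag an existing VMkernel interface (vmk2 here) for Virtual SAN traffic
esxcli vsan network ipv4 add -i vmk2

# Confirm the interface is now used for VSAN and see the multicast groups in play
esxcli vsan network list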

It is also recommended to use Network I/O Control (NIOC). Some of you may be saying, "Well, that requires a VDS, and I don't have that edition of vSphere." And I would say: well, luckily for you, VSAN includes the VDS for you to use regardless of the vSphere edition you are running. Isn't VSAN awesome?!?!

Other things to consider: there is no support for IPv6. Bummer… not really. NIC teaming is also encouraged, but you should do more research on the different scenarios based on the number of uplinks you will be using. Remember the 7Ps.

I hope by now you are starting to see how powerful VSAN is. This is not VSA on steroids; this is a whole new way of using commodity hardware to achieve performance and redundancy at a relatively low cost.

VSAN – Part Deux

So now it is time to get our hands dirty. When I do a project, I usually live by the 7Ps. What's that? The 7Ps stand for Proper Prior Planning Prevents "Pitiful" Poor Performance… yes, I cleaned it up a little. But anyway, I believe that attention to detail is important during the planning phase of any project. You don't want to buy servers for VSAN just to find out during installation that the hardware is not a supported configuration.

Since I have already started going down the hardware path, let's dig a little deeper. First off, you must be familiar with the requirements for VSAN.

  • At least one flash device (SSD drive) per host
  • At least one magnetic hard disk per host
  • A boot device – this can be a USB stick, SD card, or hard disk. However, if you boot from a hard disk, you cannot assign that device to the VSAN datastore. I personally prefer using a USB stick or SD card.
    • If using a USB or SD flash card, it needs to be at least 8GB, and this type of installation is not supported on hosts with more than 512GB of memory. More info on USB installation here.
  • A host storage controller capable of pass-through/JBOD mode. In other words, it must be able to present the drives to ESXi without RAID on top of them.
  • At least one dedicated 1GbE NIC per host for VSAN traffic; 10GbE is recommended.
  • At least 3 hosts in the VSAN cluster. Yep, that's correct, for redundancy purposes.
  • At least 6GB of memory per host.

Apart from this list, make sure you check the VSAN HCL here.
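
If your hosts are already running ESXi, a few of these requirements can be sanity-checked straight from the host shell. A quick sketch; these are standard esxcli namespaces, and the comments note which output fields to look at:

# Installed memory (needs to be at least 6GB, more if you plan on multiple disk groups)
esxcli hardware memory get

# Local devices: check the "Is SSD" field to confirm the flash device is detected as SSD
esxcli storage core device list

# Storage adapters: confirm the controller and driver match the entry on the VSAN HCL
esxcli storage core adapter list

# Physical NICs and link speed: look for the 10000Mbps uplinks you plan to dedicate to VSAN
esxcli network nic list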

Software requirements are just as important as hardware requirements for VSAN.

  • vCenter Server is required
    • It can be Windows-based or the appliance (VCSA)
    • The vCenter Server version should be 5.5 U1 or later
  • ESXi 5.5 U1 is required at a minimum
  • Obviously, you'll need the proper licenses and support.
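
A quick way to confirm the ESXi side is straight from the host shell; vCenter's own version is easiest to verify in the Web Client's About dialog.

# Shows product, version, update level, and build number for the host
esxcli system version get

# Shorter alternative
vmware -v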

Side note: there are many VSAN-ready solutions from different vendors; check this list. Going that route will save you some time compared to building your own. In my case, I want to re-use the existing hardware that my VSA solution is running on, so I'll be upgrading/adding some hardware.

Back to the upgrade.

I am running VSA on two HP DL360p Gen8s with 6 hard drives each, so I bought a third server identical to the other two. I'm basically building my own VSAN-ready solution based on the VMware compatibility guide. During my planning phase, I realized that I needed to obtain SSD drives. So I just went to the local store and grabbed a few… just joking. Of course I checked the HCL!!!

After checking the HCL and compatibility matrix, I found a few options, none of them particularly cheap, so in order to keep costs down I purchased 3 SSD drives (1 per host). I also found out that the USB drives my hosts currently boot from are smaller than 8GB, so I'm buying an SD flash card for each host to install ESXi on.

So that is pretty much it for my hardware planning. I checked the storage adapter, and the DL360p Gen8s I have came with the Smart Array P420i controller, so I'm good there. Oh wait, I only have 1GbE NICs, and those will be used for production traffic. Looks like I'll be buying NICs as well; luckily, those are cheap. That should be it for hardware, and I have all the licenses and support needed.

I'm going to set all the hardware up and will be right back with Part 3.

VSA – EOL (VSAN Part 1)

Goodbye, VSA

As some of you may know, VMware VSA reached end of life in April 2014. This applies to all flavors of VMware's vSphere Storage Appliance. This does not mean that you have to stop using VSA… at least not immediately.

As long as you have an active support contract for VMware's VSA, you are still entitled to contact support regarding any issues. However, you can no longer purchase VSA. So what exactly does this mean for those of us who have VSA in our environments? Now what?

While there is no replacement per se, there is, however, a better alternative. Yes, I said better. VMware engineers took the concept of VSA and created a much more robust solution. The new and improved solution is called VSAN, or Virtual SAN. I am not saying by any means that VSAN is built on VSA code; I am saying that they share common use cases and solutions. Last I heard, VMware was coming up with a SKU for a VSA-to-VSAN upgrade (I'm checking my sources).

VSAN is VMware's software-defined storage solution that allows you to use local storage and leverage it not only for capacity, but also for performance gains. VSAN caches reads and writes by utilizing server-side flash. There are tons of documents about VSAN and its use cases; I have included a few links from other blogs about VSAN.

So, while there is no direct/in-place upgrade available, there is a way to take your existing hardware, as long as it meets the VSAN hardware requirements, and transform it into your new VSAN solution. I'm in the process of doing this myself right now, so I will post the steps as I go.

 

In the meantime, you may want to check these links out:

 

To be continued…