VSAN – Part 3

So, I am anxiously waiting for hardware to arrive in order to make my hosts VSAN compatible. I ordered 32GB SD cards to install vSphere on, one SSD drive per host, and one 4-port CNA per host to provide the 10GbE interfaces for VSAN traffic. While I wait for the hardware, I decided to make some time now to review my design rather than making time later to do things over again. In search of further knowledge, I tuned in to listen to my compadre Rawlinson Rivera (VMware) speak about VSAN best practices and use cases. I was very happy to learn that I am on the right path; however, I found a couple of areas where I can tweak my design. In Part 2 of this VSAN series, I spoke about using six magnetic disk drives and one SSD drive per host. There is nothing wrong with that design, BUT Rawlinson brought up an excellent point: when thinking about VSAN, think wide for scalability and keep the failure domain in mind.

One thing I did not talk about is disk groups. Each host can have up to 5 disk groups, and each disk group holds up to 7 magnetic drives + 1 SSD. There is no way I can fit 40 drives in a 1U HP DL360p, but I can, however, create several disk groups; at least 2 of them. So, I am modifying my plan to have 2 disk groups per host, each with 3 magnetic disk drives + 1 SSD. This will allow me to sustain an SSD failure within a host without affecting too many VMs on that host, since the other disk group has its own SSD that is unaffected by the failure. If you think about it, with 1 disk group, a failure of the lone SSD takes out every other drive in the host; with 2 disk groups, a failed SSD only takes out the drives in its own group. So, by splitting my 6 magnetic drives into 2 disk groups, I'm cutting the impact of a single SSD failure in half. The only caveat is that I will need to buy an extra SSD drive per host.
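To make that concrete, here is a rough sketch of how the two disk groups could be built from the ESXi shell, assuming the esxcli vsan namespace that ships with vSphere 5.5; the naa.* device IDs below are placeholders for the real SSD and magnetic disk identifiers on my hosts:

esxcli vsan storage add -s naa.SSD1 -d naa.HDD1 -d naa.HDD2 -d naa.HDD3
esxcli vsan storage add -s naa.SSD2 -d naa.HDD4 -d naa.HDD5 -d naa.HDD6

Each command creates one disk group: the -s device is the SSD that fronts the group, and each -d device is a magnetic disk sitting behind it.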

On the network side, besides having 10GbE connections, I'll need to enable multicast on the switches in order for VSAN to work correctly; VSAN relies on it for heartbeats and other intra-cluster communication. My plan is to use the Cisco 3750-X switches, create a separate VLAN for the ports carrying VSAN traffic, and then disable IGMP snooping on that VLAN so multicast packets reach all ports in that VLAN. Actually, now that I think about it, I will use TwinAx cables instead of SFP+ transceivers and fiber; SFPs are too expensive and I'm trying to keep costs down. Here is an example of disabling IGMP snooping on a specific VLAN (5) rather than globally:

switch# config t
switch(config)# no ip igmp snooping vlan 5
switch(config)# end
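
On the host side, each ESXi box will also need a VMkernel port tagged for VSAN traffic on that VLAN. A minimal sketch, assuming vmk1 is the VMkernel interface I end up dedicating to VSAN:

esxcli vsan network ipv4 add -i vmk1
esxcli vsan network list

The second command simply lists the VSAN-enabled interfaces so you can confirm the tag took.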

It is also recommended to use Network I/O Control (NIOC). Some of you may be saying, "Well, that requires a VDS, and I don't have that version of vSphere." And I would say, "Well, luckily for you, VSAN includes VDS for you to use regardless of the vSphere flavor you are running." Isn't VSAN awesome?!

Other things to consider: there is no support for IPv6. Bummer… not really. NIC teaming is also encouraged, but you should do more research on the different scenarios based on the number of uplinks you will be using. Remember the 7 Ps: Proper Prior Planning Prevents Piss-Poor Performance.
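For what it's worth, if you go with an IP-hash/LACP style of teaming on the vDS, the switch side needs a matching port channel. This is just a sketch under my assumptions; the 3750-X port numbers (Te1/1/1 and Te1/1/2) and the channel-group number are placeholders for wherever your VSAN uplinks actually land:

switch# config t
switch(config)# interface range TenGigabitEthernet1/1/1 - 2
switch(config-if-range)# switchport trunk encapsulation dot1q
switch(config-if-range)# switchport mode trunk
switch(config-if-range)# channel-group 1 mode active
switch(config-if-range)# end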

I hope by now you are starting to see how powerful VSAN is. This is not VSA on steroids; this is a whole new way of using commodity hardware to achieve performance and redundancy at a relatively low cost.
