Plan for vSphere core dump on diskless hosts

Installing VMware vSphere on hardware comes with many options for the location of the partitions ESXi needs. ESXi can be installed on USB, SD (mini) cards, or local storage, and it can also boot from SAN LUNs. Before you deploy your ESXi hosts, you should be thinking about your design and the limitations (if any) of each of the boot options.

Remember that ESXi has several partitions that are created during its installation.

The 7 partitions (numbered as they appear on disk; partition number 4 is unused):

  1. System Partition
  2. Linux Native – /scratch
  3. VMFS datastore
  4. (unused)
  5. Linux Native – /bootbank
  6. Linux Native – /altbootbank
  7. vmkDiagnostics
  8. Linux Native – /store

One thing to note is that partitions 2 & 3 (/scratch & VMFS) are not present in the image below. This is because my ESXi host was installed on an SD card.
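You can check the partition layout of your own boot device with partedUtil from the ESXi shell. The device path below is just an example for an SD/USB-booted host; list the devices under /vmfs/devices/disks/ first to find yours.

```shell
# List the storage devices the host can see
ls /vmfs/devices/disks/

# Show the partition table of the boot device
# (the mpx.* path is an example -- substitute your own device)
partedUtil getptbl /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0
```

On an SD-card install you should see no partition 2 or 3 in the output, matching the image above.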

[Image: ESXi partition layout]


This post will focus on the vmkDiagnostics partition. VMware recommends keeping this partition on local storage unless it is a diskless installation, such as boot from SAN. I have seen a rapid increase in boot from SAN as more and more people transition to Cisco’s UCS blades. So, if you are booting (or planning to boot) from SAN, make sure that you create a core dump partition for your hosts. You have a few options to do this.

  • You could have the core dumps on the boot LUN; however, it is recommended that a separate LUN is created for this partition.
    • Independent HW iSCSI only (keep reading).
  • If you set the diagnostic partition on the boot LUN, make sure only that host has access to it.
    • This should already be the case anyway. A boot LUN should only be accessible to the specific host.
  • If you create a separate LUN for the diagnostic partition, you can share this LUN among many hosts.
  • You can also set up a file as a core dump location on a VMFS datastore (see caveat below).
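The partition and file options above can be configured with esxcli. A minimal sketch follows; the device identifier and datastore name are hypothetical, so substitute your own (and note the file-on-VMFS option requires ESXi 5.5 or later).

```shell
# List available and the currently active diagnostic partitions
esxcli system coredump partition list

# Activate a specific partition as the dump location
# (device name and partition number below are hypothetical)
esxcli system coredump partition set --partition=naa.600a098038303053:7

# Or create and enable a dump file on a VMFS datastore
# (datastore name is hypothetical; see the iSCSI caveat below)
esxcli system coredump file add --datastore=datastore1 --file=coredump
esxcli system coredump file set --enable=true --smart
```

The --smart flag lets the host pick a usable configured dump file rather than naming one explicitly with --path.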


That sounds pretty easy, right? Yes, but wait: there is a big caveat here.

You CANNOT place the diagnostic partition on any of the options above if you are using software iSCSI or dependent hardware initiators (iBFT). This can only be done via independent hardware iSCSI. More info here. At the time this post was written, this limitation was not version dependent.


Now what?

If you do not have any hardware HBAs, you have a couple of options.

  1. The recommended option is to set up ESXi Dump Collector
    • Requires configuration on vCenter (Windows and VCSA)
    • Available for vSphere 5.0 and later
    • Consolidates logs from many hosts
    • Easily deployed via Host Profiles or esxcli commands
  2. You could also put this on USB storage, but this requires disabling the USB Arbitrator service, which means that you will not be able to use USB passthrough on any VM.
    • I personally wouldn’t recommend this option.
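Setting up the ESXi Dump Collector on the host side is a short esxcli sequence. The vmkernel interface name and IP below are examples; use your management vmkernel port and the address of your vCenter / Dump Collector server (6500 is the default port).

```shell
# Point the host's network core dump at the collector
# (vmk0 and 192.168.1.50 are example values)
esxcli system coredump network set --interface-name=vmk0 \
    --server-ipv4=192.168.1.50 --server-port=6500
esxcli system coredump network set --enable=true

# Verify the collector is configured and reachable
esxcli system coredump network get
esxcli system coredump network check
```

The same settings can be pushed to many hosts at once through a Host Profile, which is what makes this the recommended option at scale.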

Cisco UCS: The Mighty MINI

Cisco UCS servers are not new, but many people have looked the other way simply because Cisco, a network company, was making servers. When it comes to scalability, ease of deployment, and automation, Cisco UCS is the way to go. I’m a huge fan of UCS, and an even bigger fan of FlexPod, which is the joint solution that Cisco and NetApp developed. The other reason why some companies have not looked into UCS is its Enterprise-targeted market.

Cisco was smart enough to put together a similar solution for remote offices and small/medium businesses. Meet the Cisco UCS Mini. The Mini is not new either, but many people are not aware of it, so I’m here to do the Mini justice.

The UCS Mini is not a miniature version of Cisco UCS, but rather a more converged design that saves rack space and energy while still delivering enterprise-grade capabilities through Cisco UCS Manager. There are very few differences between UCS and the UCS Mini; they mainly relate to the networking piece and some scalability limitations.

The traditional Cisco UCS is composed of a Cisco 5108 chassis, blade servers, and external Fabric Interconnects, among other components. The Fabric Interconnects are essentially Cisco Nexus switches running UCS Manager software. You connect one or more chassis to the Fabric Interconnects in order to scale without having to deploy an entirely new solution. In the UCS Mini, there are no external Fabric Interconnects; these components move inside the same Cisco 5108 blade chassis and still act as the brain of the solution via UCS Manager.

Cisco UCS with external Fabric Interconnects



Cisco UCS Mini with internal Fabric Interconnects

The price point for the UCS Mini is much lower than its big brother’s. So if you are not looking to add several chassis in the near future, the Mini may be a way to get started with this awesome technology. I’m not going to go into detail about the capabilities of UCS, maybe in another post, but I do want to highlight that the UCS Mini is an excellent solution for smaller environments. Don’t get me wrong: you can still have up to eight half-width blades (B200) or up to four full-width blades, and you can even attach a C-Series rack server (pizza box) to the UCS Mini, so you can have tons of horsepower here. You still get 1/10 Gb Ethernet, FCoE, and FC, for a maximum of 500 Gbps of throughput. So there’s nothing mini about the Cisco UCS Mini, in my opinion.

Cisco Live 2015: Day 0


It is conference season, and Cisco’s 2015 conference is in San Diego, CA. The weather here is awesome, so far. Woke up at 4am EDT, so it has been a long day already, but I’m excited about this year’s conference.

The convention center has a nice flow to it in my opinion, and the Cisco store has a lot of cool stuff, including tons of books. Already got one book, and going back tomorrow for more. A lot of tracks are included as Cisco keeps expanding their portfolio. As a datacenter, virtualization and storage guy, I’ll be attending a lot of UCS, ACI, and security sessions.

Already met old acquaintances, and met new people. Definitely a great place to network.

Navigating the conference is pretty easy. Shuttles are constantly going back and forth to/from the hotels, and there are many restaurants within walking distance of the convention center. Make sure you download the Cisco Live app: not only will you be able to see your agenda, but you can also see maps of the convention center and get directions (similar to Google Maps) to your sessions. There are tons of WAPs across the conference center, so for once you may be able to get your email while at a conference full of geeks.

More info on Cisco Live will be added throughout the conference. Stay tuned.

Cisco UCS: Intro



With the overwhelming amount of marketing fluff directed at potential customers, admins, IT managers, and directors, such potential buyers are skeptical of new technology and often opt for the status quo when it comes to vendor selection.

Some potential buyers are starting to use social media to research and inform future purchases, given its unbiased point of view. This brings me to Cisco UCS servers. For a long time I was a big fan of HP blade systems, refused to look into other technologies, and tried to steer away from unknown territory. One day I decided to look into UCS further and take one of Cisco’s hands-on Gold Labs. It turns out Cisco UCS is a very well-thought-out solution. HP is a nice solution in my opinion, but UCS delivers extra features that result in more flexible solutions.

From a high-level view, UCS delivers compute, server networking, and management in a single solution. Deploying servers is as simple as assigning policies to the blade servers for server configs, networking policies, etc. This makes deployment fast and guarantees a homogeneous deployment model.

One of the many advantages that I really like is the ability to scale out. Adding more chassis and servers does not require running additional cables to the core switches, as the fabrics are already connected. New chassis connect directly to the Fabric Interconnects, and that is it, making it very simple to add compute when necessary. This is just a high-level view of UCS, and I plan to write more about it now that I am such a fan. I hope the information I provide helps others with their decisions, solutions, and troubleshooting of Cisco UCS.