Home Lab vSphere Upgrade to vSAN 6.6

Well, I decided it was time to make the jump to vSphere 6.5 in my home lab.  You may think that's a no-brainer.  Well, not so much for my lab.  I currently have three HP DL360 G7 hosts, each with 2x quad-core CPUs and 64GB RAM, and enough storage to give me about 4.5TB of hybrid-flash vSAN storage.

I am running vCenter 6.5 with an external Platform Services Controller, but ESXi 6.0 on my hosts, because the hosts are not on the VMware Hardware Compatibility List for vSphere 6.5.  For the same reason, I am also running vSAN 6.2 as my primary storage.  I also have 5TB available via iSCSI on a Synology 1517+ SAN.  I am not sure what issues I will encounter as part of this upgrade, since my hosts will no longer be using the HP ISO.  I tried installing ESXi 6.5 with the HP ISO right when it came out, and was oh-so-rudely greeted with a fantastic PSOD.  Even though I may encounter some issues, I am going to take the leap anyway!

This adds some complexity to the mix.  I won't be able to do an in-place upgrade, because you cannot upgrade from an HP customized ISO to a standard VMware ESXi ISO, so I will need to do a fresh install.  Since I am running VMware vSAN for my primary storage, I have decided to migrate my virtual machines to my Synology.  Servers I need to move include my Horizon 7.x environment (3 servers and a handful of desktops), 2 domain controllers, a print server, a DHCP server, a KMS server, a Windows SQL server, a DFS file server, a virtual router, a Veeam server, and a Windows root CA.  I also had a Log Insight and a vROps server that I removed before the upgrade.

Another fun tidbit is that my home PC took a digger a week ago, and I have been using an Igel UD3 zero client connected to Horizon 7 in my lab to do this migration.  I will be shuffling my virtual desktop around as part of this migration.

I have 4 NICs on each host in my environment.  All my hosts are connected to a distributed switch with uplinks marked 1, 2, 3 and 4.  I will call the hosts ESX1, ESX2 and ESX3.  As for the uplinks, vmnic0 maps to NIC1, vmnic1 maps to NIC2, vmnic2 maps to NIC3, and vmnic3 maps to NIC4.

My port-groups are as follows:

Management VMkernel (1-Active, 2-Standby, 3-Unused, 4-Unused)
VMotionA VMkernel (1-Active, 2-Standby, 3-Unused, 4-Unused)
VMotionB VMkernel (1-Standby, 2-Active, 3-Unused, 4-Unused)
Server Virtual Machine VLAN (1-Active, 2-Active, 3-Unused, 4-Unused)
Desktop Virtual Machine VLAN (1-Active, 2-Active, 3-Unused, 4-Unused)
iSCSI Virtual Machine VLAN (1-Unused, 2-Unused, 3-Active, 4-Active)
iSCSI VMkernel PathA (1-Unused, 2-Unused, 3-Active, 4-Unused)
iSCSI VMkernel PathB (1-Unused, 2-Unused, 3-Unused, 4-Active)
vSAN VMkernel (1-Unused, 2-Unused, 3-Active, 4-Active)

I know this isn't an ideal layout for these networks, but having four 1Gbps NICs is a limiting factor.
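I set all of this up in the Web Client, but for anyone curious how one of these teaming layouts translates to the API, here is a rough pyVmomi sketch that puts the Management port group on uplink 1 active / uplink 2 standby.  The vCenter name, credentials, and port group/uplink names are placeholders for my lab.

```python
# Rough pyVmomi sketch: set the uplink order on the Management distributed
# port group to uplink 1 active / uplink 2 standby (uplinks 3 and 4 stay
# unused by being left out of both lists). Names and credentials are
# placeholders for my lab.
from pyVim.connect import SmartConnectNoSSL, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnectNoSSL(host="vcsa.lab.local",
                       user="administrator@vsphere.local", pwd="password")
content = si.RetrieveContent()

def find_obj(vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.DestroyView()

pg = find_obj(vim.dvs.DistributedVirtualPortgroup, "Management")

teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
    uplinkPortOrder=vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
        activeUplinkPort=["Uplink 1"],
        standbyUplinkPort=["Uplink 2"],
    )
)
spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    configVersion=pg.config.configVersion,
    defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        uplinkTeamingPolicy=teaming
    ),
)
WaitForTask(pg.ReconfigureDVPortgroup_Task(spec=spec))
Disconnect(si)
```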

Here is a high-level outline of the steps I took to get from an HP-branded ESXi 6.0 installation to a generic VMware installation of ESXi 6.5 and vSAN 6.6.

(STEP 1) My first step was to svMotion all VMs from vSAN to my Synology.  Once that was done, I removed all of my vSAN disk groups, since I wanted to start fresh when I provisioned vSAN 6.6.
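I did this through the Web Client, but the bulk storage move is easy enough to script if you prefer.  Here is a rough pyVmomi sketch; the datastore names and credentials are placeholders for my lab.

```python
# Rough pyVmomi sketch: Storage vMotion every VM registered on the vSAN
# datastore over to the Synology iSCSI datastore. Datastore names and
# credentials are placeholders for my lab.
from pyVim.connect import SmartConnectNoSSL, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnectNoSSL(host="vcsa.lab.local",
                       user="administrator@vsphere.local", pwd="password")
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
datastores = {ds.name: ds for ds in view.view}
view.DestroyView()

source = datastores["vsanDatastore"]
target = datastores["Synology-LUN1"]

# Datastore.vm lists every VM with files on that datastore.
for vm in list(source.vm):
    print("svMotioning {} to {}".format(vm.name, target.name))
    WaitForTask(vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(datastore=target)))

Disconnect(si)
```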

(STEP 2) Once that was done, I vMotioned all VMs off host 1, powered it off, and removed it from the inventory.
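If you would rather script the evacuation than click through it, something like this pyVmomi sketch would do it.  The host name and credentials are placeholders, and it assumes the VMs have already been vMotioned off the host.

```python
# Rough pyVmomi sketch: put ESX1 into maintenance mode (the VMs are already
# off it), shut it down, and remove it from the old cluster's inventory.
# Host name and credentials are placeholders for my lab.
import time
from pyVim.connect import SmartConnectNoSSL, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnectNoSSL(host="vcsa.lab.local",
                       user="administrator@vsphere.local", pwd="password")
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esx1.lab.local")
view.DestroyView()

# Maintenance mode will not complete while powered-on VMs remain on the host.
WaitForTask(host.EnterMaintenanceMode_Task(timeout=0))

# Gracefully shut the host down, give it a minute, then drop it from inventory.
WaitForTask(host.ShutdownHost_Task(force=False))
time.sleep(60)
WaitForTask(host.Destroy_Task())

Disconnect(si)
```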

(STEP 3) I proceeded to install VMware ESXi 6.5 on ESX1, reboot, change the VLAN tag for management, check both management vmnics, re-IP the host, and reboot again.

(STEP 4) I then created a VSAN65 DRS/HA cluster and added ESX1 to it.  Next, I added ESX1 to the distributed switch, putting physical adapter NIC1 on DVS uplink 1 and NICs 3/4 on uplinks 3/4.  I also set DRS to manual.
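For reference, here is roughly what creating the cluster and adding the host looks like in pyVmomi.  The cluster settings match what I did in the Web Client (HA on, DRS set to manual); the host name, credentials, and single-datacenter assumption are specific to my lab.

```python
# Rough pyVmomi sketch: create the VSAN65 cluster with HA enabled and DRS in
# manual mode, then add the freshly rebuilt ESX1. Names and credentials are
# placeholders for my lab.
from pyVim.connect import SmartConnectNoSSL, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnectNoSSL(host="vcsa.lab.local",
                       user="administrator@vsphere.local", pwd="password")
datacenter = si.RetrieveContent().rootFolder.childEntity[0]  # my lab has one datacenter

cluster_spec = vim.cluster.ConfigSpecEx(
    drsConfig=vim.cluster.DrsConfigInfo(enabled=True, defaultVmBehavior="manual"),
    dasConfig=vim.cluster.DasConfigInfo(enabled=True),
)
cluster = datacenter.hostFolder.CreateClusterEx(name="VSAN65", spec=cluster_spec)

# Add ESX1 with its root credentials. If vCenter rejects the host certificate,
# you may also need to set sslThumbprint on the ConnectSpec.
connect_spec = vim.host.ConnectSpec(
    hostName="esx1.lab.local",
    userName="root",
    password="password",
    force=True,
)
WaitForTask(cluster.AddHost_Task(spec=connect_spec, asConnected=True))

Disconnect(si)
```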

(STEP 5) Then, I migrated vmk0 for management on ESX1 from vmnic1 (NIC2) on the standard switch over to the distributed switch, where vmnic0 (NIC1) was added to uplink 1 in the previous step.  Lastly, I added vmnic1 (NIC2) to distributed switch uplink 2.  Now my management VMkernel is redundant.
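This is the step I was most careful with, since getting it wrong cuts off management.  In API terms the vmk0 move is just an UpdateVirtualNic call pointing the adapter at a distributed port group; here is a rough pyVmomi sketch with placeholder names for my lab.

```python
# Rough pyVmomi sketch: point vmk0 at the Management port group on the
# distributed switch. The IP configuration is left alone; only the backing
# changes. Names and credentials are placeholders for my lab.
from pyVim.connect import SmartConnectNoSSL, Disconnect
from pyVmomi import vim

si = SmartConnectNoSSL(host="vcsa.lab.local",
                       user="administrator@vsphere.local", pwd="password")
content = si.RetrieveContent()

def find_obj(vimtype, name):
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.DestroyView()

host = find_obj(vim.HostSystem, "esx1.lab.local")
pg = find_obj(vim.dvs.DistributedVirtualPortgroup, "Management")

nic_spec = vim.host.VirtualNic.Specification(
    distributedVirtualPort=vim.dvs.PortConnection(
        switchUuid=pg.config.distributedVirtualSwitch.uuid,
        portgroupKey=pg.key,
    )
)
host.configManager.networkSystem.UpdateVirtualNic(device="vmk0", nic=nic_spec)

Disconnect(si)
```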

(STEP 6) Next, I went to the host VMkernel networking and created / assigned IP addresses to vmk1 for VMotionA, vmk2 for VMotionB, vmk3 for iSCSIPathA, vmk4 for iSCSIPathB, and vmk5 for my vSAN network.
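Here is a rough pyVmomi sketch of the same thing: it creates the five VMkernel adapters against their distributed port groups and tags the vMotion and vSAN interfaces for their traffic types.  The port group names, IP addresses, and subnet mask are placeholders for my lab; the iSCSI vmks get bound to the software adapter in the next step.

```python
# Rough pyVmomi sketch: create vmk1-vmk5 on ESX1 against their distributed
# port groups and tag the vMotion and vSAN interfaces for their traffic types.
# Port group names, IPs, and subnet mask are placeholders for my lab.
from pyVim.connect import SmartConnectNoSSL, Disconnect
from pyVmomi import vim

si = SmartConnectNoSSL(host="vcsa.lab.local",
                       user="administrator@vsphere.local", pwd="password")
content = si.RetrieveContent()

def find_obj(vimtype, name):
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.DestroyView()

host = find_obj(vim.HostSystem, "esx1.lab.local")
net_sys = host.configManager.networkSystem
vnic_mgr = host.configManager.virtualNicManager

# (port group, IP, traffic type) -- the iSCSI vmks are only created here;
# they get bound to the software iSCSI adapter in the next step.
vmks = [
    ("VMotionA",   "10.0.20.11", "vmotion"),
    ("VMotionB",   "10.0.21.11", "vmotion"),
    ("iSCSIPathA", "10.0.30.11", None),
    ("iSCSIPathB", "10.0.31.11", None),
    ("vSAN",       "10.0.40.11", "vsan"),
]

for pg_name, ip, traffic_type in vmks:
    pg = find_obj(vim.dvs.DistributedVirtualPortgroup, pg_name)
    spec = vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False, ipAddress=ip, subnetMask="255.255.255.0"),
        distributedVirtualPort=vim.dvs.PortConnection(
            switchUuid=pg.config.distributedVirtualSwitch.uuid,
            portgroupKey=pg.key,
        ),
    )
    device = net_sys.AddVirtualNic(portgroup="", nic=spec)  # returns e.g. "vmk1"
    if traffic_type:
        vnic_mgr.SelectVnicForNicType(nicType=traffic_type, device=device)

Disconnect(si)
```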

(STEP 7) I then added an iSCSI Software Adapter, bound vmk3/vmk4 as uplinks for iSCSI, added my Synology IP address as an iSCSI target, and rescanned the HBA.  My 4 Synology LUNs then showed up.
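Again, I did this in the Web Client, but the whole sequence maps to a handful of API calls.  Here is a rough pyVmomi sketch; the Synology target IP and credentials are placeholders for my lab.

```python
# Rough pyVmomi sketch: enable the software iSCSI adapter, bind vmk3/vmk4 to
# it, add the Synology as a dynamic (send) target, and rescan. The target IP
# and credentials are placeholders for my lab.
from pyVim.connect import SmartConnectNoSSL, Disconnect
from pyVmomi import vim

si = SmartConnectNoSSL(host="vcsa.lab.local",
                       user="administrator@vsphere.local", pwd="password")
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esx1.lab.local")
view.DestroyView()

storage = host.configManager.storageSystem
iscsi_mgr = host.configManager.iscsiManager

# 1. Enable the software iSCSI initiator and find the resulting HBA.
storage.UpdateSoftwareInternetScsiEnabled(True)
storage.RefreshStorageSystem()
hba = next(a for a in storage.storageDeviceInfo.hostBusAdapter
           if isinstance(a, vim.host.InternetScsiHba))

# 2. Bind the two iSCSI VMkernel ports for multipathing.
iscsi_mgr.BindVnic(iScsiHbaName=hba.device, vnicDevice="vmk3")
iscsi_mgr.BindVnic(iScsiHbaName=hba.device, vnicDevice="vmk4")

# 3. Add the Synology as a send target, then rescan for devices and VMFS volumes.
target = vim.host.InternetScsiHba.SendTarget(address="10.0.30.5", port=3260)
storage.AddInternetScsiSendTargets(iScsiHbaDevice=hba.device, targets=[target])
storage.RescanAllHba()
storage.RescanVmfs()

Disconnect(si)
```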

(STEP 8) Since the vMotion network in this cluster is the same as in my original cluster, I was able to vMotion virtual machines straight from ESX2 and ESX3 to ESX1.

(STEP 9) Perform Steps 3-7 on ESX2 and add it to the new VSAN65 cluster.  vMotion some VMs to ESX2 to test connectivity.

(STEP 10) Perform Steps 3-7 on ESX3 and add it to the new VSAN65 cluster.  vMotion some VMs to ESX3 to test connectivity.

(STEP 11) Enable vSAN in the new VSAN65 cluster and create your disk groups.  As I said, my environment is a hybrid vSAN: each host contains a single SSD for cache and three 600GB SAS drives for capacity.  This adds up to about 4.5TB of capacity.
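If you would rather script the vSAN enablement and disk group creation, here is a rough pyVmomi sketch.  It enables vSAN with automatic disk claiming turned off, then claims the SSD as cache and the SAS disks as capacity on each host; names and credentials are placeholders for my lab.

```python
# Rough pyVmomi sketch: enable vSAN on the VSAN65 cluster with automatic disk
# claiming off, then build one hybrid disk group per host (the single SSD as
# cache, the three 600GB SAS disks as capacity). Names are placeholders.
from pyVim.connect import SmartConnectNoSSL, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnectNoSSL(host="vcsa.lab.local",
                       user="administrator@vsphere.local", pwd="password")
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "VSAN65")
view.DestroyView()

# 1. Turn vSAN on for the cluster, with automatic disk claiming disabled.
vsan_spec = vim.cluster.ConfigSpecEx(
    vsanConfig=vim.vsan.cluster.ConfigInfo(
        enabled=True,
        defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(autoClaimStorage=False),
    )
)
WaitForTask(cluster.ReconfigureComputeResource_Task(spec=vsan_spec, modify=True))

# 2. On each host, claim the eligible SSD for cache and the SAS disks for capacity.
for host in cluster.host:
    vsan_sys = host.configManager.vsanSystem
    eligible = [r.disk for r in vsan_sys.QueryDisksForVsan() if r.state == "eligible"]
    cache = next(d for d in eligible if d.ssd)
    capacity = [d for d in eligible if not d.ssd]
    mapping = vim.vsan.host.DiskMapping(ssd=cache, nonSsd=capacity)
    WaitForTask(vsan_sys.InitializeDisks_Task(mapping=[mapping]))

Disconnect(si)
```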

(STEP 12) The last step is to migrate the virtual machines back to the vSAN Datastore.

 

This is a high-level outline, and each of these actions has a number of sub-steps.  Before the migration to ESXi 6.5, my hosts were on the HCL for 6.0; that said, my RAID controller, SSD disks, and capacity disks were not on the vSAN HCL.  Now, none of my environment is on the VMware HCL.  At least I will get to test some of the new components of vSAN 6.6.  Wish me luck!
