When we decided to migrate away from our HP EVA 4000 SAN, we wanted to make sure the replacement was affordable, flexible, stable, and reliable. We also wanted a solution compatible with VMware’s vStorage APIs for Array Integration (VAAI).
We ended up purchasing eight HP LeftHand P4500 shelves and two HP LeftHand P4300 shelves. Each P4500 shelf includes twelve 600GB 15K Serial Attached SCSI (SAS) drives configured as two separate six-drive RAID 5 arrays. Each P4300 shelf includes eight 1TB 7200 RPM SATA drives configured as a single eight-drive RAID 5 array.
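As a quick sanity check on raw usable capacity, here is a minimal sketch of the RAID 5 arithmetic (one drive's worth of parity per array). Note this ignores formatting, sparing, and binary-vs-decimal gigabyte overhead, so real usable space will be lower:

```python
def raid5_usable_gb(drives: int, drive_gb: int) -> int:
    """Raw usable capacity of a single RAID 5 array: one drive lost to parity."""
    assert drives >= 3, "RAID 5 needs at least 3 drives"
    return (drives - 1) * drive_gb

# P4500 shelf: two 6-drive arrays of 600GB 15K SAS drives
p4500_shelf_gb = 2 * raid5_usable_gb(6, 600)   # 6000 GB raw usable per shelf

# P4300 shelf: one 8-drive array of 1TB (1000GB) SATA drives
p4300_shelf_gb = raid5_usable_gb(8, 1000)      # 7000 GB raw usable per shelf

print(p4500_shelf_gb, p4300_shelf_gb)
```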
The shelves all come with two 1Gb NICs that can be configured using Adaptive Load Balancing (the default, and preferred), Active Passive, or 802.3ad Link Aggregation.
Each shelf is very similar to a 2U ProLiant server, and even includes two PS/2 ports for a keyboard and mouse as well as a 15-pin VGA port. HP’s Integrated Lights-Out (iLO) module is also included.
The iSCSI SAN solution we chose included two HP ProCurve 6600 48-port gigabit switches, unconfigured out of the box. The switches will be cascaded with an Ethernet cable to form a single storage fabric. We are going to employ a standalone storage fabric with no routing of any kind, and each LeftHand node will connect one Ethernet port to each ProCurve switch.
The nodes will be configured with Adaptive Load Balancing (ALB) so that we get switch redundancy. Each node will be configured with a single 10-dot IP address in the same Class C (/24) subnet as the other nodes on the storage fabric.
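To make the addressing plan concrete, here is a small sketch using Python's `ipaddress` module. The subnet and node addresses are hypothetical placeholders, not our real ones; substitute your own 10-dot range:

```python
import ipaddress

# Hypothetical storage subnet; a Class C mask is a /24 (255.255.255.0).
storage_net = ipaddress.ip_network("10.0.50.0/24")

# Example node addresses, all in the same /24 on the storage fabric
node_ips = [ipaddress.ip_address(f"10.0.50.{host}") for host in (11, 12, 13, 14)]

# Every node must land inside the storage subnet
assert all(ip in storage_net for ip in node_ips)
print(storage_net.netmask)   # 255.255.255.0
```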
Now, I will outline the setup of a single P4500 shelf. I am assuming that you have a functioning IP storage network, as well as a StorageWorks P4000 Centralized Management Console system with the CMC software installed. It is also a good idea to configure two physical NICs on the CMC server, one on each switch in your IP storage network.
(Note: non-routable IP addresses are shown in these screen captures. If you manage to make it all the way into our standalone/isolated iSCSI network, I probably deserve to get hacked 🙂 )
Once the P4500 has been powered on and connected to a KVM, you will see the below prompt. A password is not needed at this point:
Select Login. Then click General Settings > Add Administrator and, if you want added security, create an administrator with credentials.
Under the Available Network Devices section, configure a single port on the LeftHand P4500. Leave the second port unconfigured.
Configure the NIC for Gigabit / Full Duplex
Lastly, exit out of the configuration menu.
Once you are out of the configuration menu, go to your Centralized Management Console server and launch your CMC software.
Within the menu system, choose Find -> Find Systems. You should be able to communicate with the P4500 node you just assigned an IP address to. If it does not show up, double-check your settings on the previous screens. The nodes should also be pingable from your CMC server.
Once it shows up in your CMC, you should see it listed on the left-hand pane.
Select TCP/IP Network under the P4500 node. Your TCP/IP settings will appear in the main window.
Under the TCP/IP tasks drop-down, create a new bond.
There are three bond types available: Adaptive Load Balancing, Active Passive, and 802.3ad Link Aggregation. The most common is Adaptive Load Balancing, which is what we will use for our configuration.
Once you make these network changes, the CMC will no longer be able to communicate with the node until the network adapter bond comes back up.
The CMC will ask whether you want to search for the node again. Give it a moment and proceed to re-discover your newly configured adapter bond.
Click back on the TCP/IP settings to see the new ALB bond configuration settings.
Lastly, you need to create a management group by going to Tasks -> Management Group -> New Management Group. Create an administrator account that the CMC will use to connect to your management group.
You should see your newly configured P4500 node and be able to connect the node to your management group.
Of course, at this point, your new shelf is accessible from the CMC. Typically, you will purchase more than one LeftHand node so that you can create a cluster that provides redundancy. The next few screenshots show my configuration of a 2-node cluster of P4300s.
Assuming the nodes have already been added to your management group you created in the step above, you can create a cluster.
First, you will be prompted to enter a Virtual IP address (VIP). This is the address your iSCSI initiators will use to connect to your volumes.
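Once the cluster is up, a quick way to verify that the VIP is answering is to test the iSCSI port (TCP 3260) from a host on the storage fabric. This is a minimal sketch using only the standard library; the VIP shown is a hypothetical placeholder:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical cluster VIP; iSCSI targets listen on TCP 3260 by default.
# print(port_open("10.0.50.100", 3260))
```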
Next, select the nodes to include in your cluster. The more nodes you select, the higher the level of Network RAID available. A cluster can have volumes configured with Network RAID 0 on a single node, Network RAID 10 with 2 or more nodes, Network RAID 5 with 3 or more, and Network RAID 6 with 5 or more nodes.
(NOTE: the RAID configuration on the nodes listed above is hardware RAID, which is different from Network RAID. Think of hardware RAID as redundancy within a node; Network RAID is redundancy within a cluster, i.e. a grouping of nodes. Network RAID allows you to lose an entire shelf without losing your storage.)
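The node-count rules above can be sketched as a small lookup function. This encodes only the thresholds stated here; consult the HP user manual for the authoritative requirements:

```python
def network_raid_levels(nodes: int) -> list[str]:
    """Network RAID levels available to a cluster, per the node-count rules above."""
    levels = []
    if nodes >= 1:
        levels.append("Network RAID 0")
    if nodes >= 2:
        levels.append("Network RAID 10")
    if nodes >= 3:
        levels.append("Network RAID 5")
    if nodes >= 5:
        levels.append("Network RAID 6")
    return levels

# A 2-node P4300 cluster like ours can run Network RAID 0 or 10
print(network_raid_levels(2))
```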
Lastly, you can create a volume. This volume will back a datastore that you can configure in VMware ESX(i).
A much more in-depth user manual is available at http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c02008676/c02008676.pdf
That summarizes the configuration of an HP Lefthand P4500 node, cluster and volume.
I will be posting Part 2 of this series in the next day or so, including how to configure your ESX(i) host for use with your newly configured volume.