New SAN, New Datacenter – Start to finish – Part 1 – Configuring HP Lefthand P4500/P4300 iSCSI SAN

When we decided to migrate away from our current HP EVA 4000 SAN, we wanted to make sure that the solution we chose was affordable, flexible, stable and reliable.  Along with that, we wanted to purchase a solution that was compatible with VMware’s vStorage APIs for Array Integration (VAAI).

We ended up purchasing eight HP Lefthand P4500 shelves and two HP Lefthand P4300 shelves.  Each P4500 shelf included twelve 600GB 15K Serial Attached SCSI (SAS) drives configured as two separate six-drive RAID 5 arrays.  Each P4300 shelf included eight 1TB 7200 RPM SATA drives configured as a single eight-drive RAID 5 array.
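As a quick sanity check on usable capacity, the RAID 5 math is simple: each array gives you (drives − 1) drives' worth of space.  A minimal sketch using the shelf specs above (marketing TB, with hot spares and formatting overhead ignored):

```python
# Rough usable-capacity math for the shelves described above.
# RAID 5 usable capacity per array = (drives - 1) * drive_size.

def raid5_usable_tb(drives_per_array, drive_tb, arrays=1):
    return (drives_per_array - 1) * drive_tb * arrays

# P4500: twelve 600GB drives as two six-drive RAID 5 arrays
p4500 = raid5_usable_tb(drives_per_array=6, drive_tb=0.6, arrays=2)  # 6.0 TB

# P4300: eight 1TB drives as a single eight-drive RAID 5 array
p4300 = raid5_usable_tb(drives_per_array=8, drive_tb=1.0)            # 7.0 TB

print(f"P4500 shelf: {p4500:.1f} TB usable; P4300 shelf: {p4300:.1f} TB usable")
```

Keep in mind this is raw array capacity, before the Network RAID overhead covered further down.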

The shelves all come with two 1Gb NICs that can be configured using either Adaptive Load Balancing (the default, and our preference), Active/Passive, or 802.3ad Link Aggregation.

The shelf is very similar to a 2U ProLiant server, and even includes two PS/2 ports for keyboard/mouse as well as a 15-pin analog (VGA) video output.  HP’s Integrated Lights-Out (iLO) module is also included.

The iSCSI SAN solution we chose included two HP ProCurve 6600 48-port gigabit switches, which arrive unconfigured out of the box.  The switches will be cascaded with an Ethernet cable to form a single storage fabric.  We are going to employ a standalone storage fabric with no routing of any kind.  Each Lefthand node will connect one Ethernet port to each ProCurve switch.

The nodes will be configured using Adaptive Load Balancing (ALB) so that we get switch redundancy.  Each node will be configured with a single IP address in the 10.x.x.x range, using the same Class C (/24) subnet mask as the other nodes on the storage fabric.
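If you want to sanity-check your IP plan before touching the shelves, Python’s ipaddress module makes it a one-liner per node.  A minimal sketch; the fabric subnet and node addresses below are made-up placeholders, not our real ones:

```python
import ipaddress

# Hypothetical storage fabric: one 10-dot /24 (Class C mask) for all nodes.
fabric = ipaddress.ip_network("10.0.50.0/24")
node_ips = ["10.0.50.11", "10.0.50.12", "10.0.50.13"]  # placeholder addresses

for ip in node_ips:
    addr = ipaddress.ip_address(ip)
    assert addr in fabric, f"{ip} is not on the storage fabric {fabric}"
    assert addr.is_private, f"{ip} should be a non-routable (RFC 1918) address"
print("All node IPs fall inside", fabric)
```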

Now, I will outline the setup of a single P4500 shelf.  I am assuming that you have a functioning IP storage network, as well as a StorageWorks P4000 Centralized Management Console system with the CMC software installed.  It is also a good idea to configure two physical NICs on the CMC server, one on each switch in your IP storage network.

(Note:  Non-routable IP addresses are shown in these screen captures.  If you manage to make it all the way into our standalone/isolated iSCSI network, I probably deserve to get hacked 🙂  )

Once the P4500 has been powered on and connected to a KVM, you will see the below prompt.  A password is not needed at this point:

Select Login.  Then click General Settings, then Add Administrator, and, if you want added security, create an administrator with credentials.

Under the Available Network Devices section, configure a single port on the Lefthand P4500.  Leave the second port unconfigured.

Configure the NIC for Gigabit / Full Duplex.

Lastly, exit out of the configuration menu.

Once you are out of the configuration menu, go to your Centralized Management Console server and launch your CMC software.

Within the menu system, choose Find -> Find Systems.  You should be able to communicate with the P4500 node that you assigned the IP address to.  If it does not show up, double-check your settings on the previous screens.  The nodes should also be pingable from your CMC server.
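If a node does not show up in Find Systems, ruling out basic reachability first saves a lot of head-scratching.  A minimal sketch for checking each node from the CMC server (the node addresses are placeholders, and the ping flag assumes a Windows CMC server):

```python
import subprocess

node_ips = ["10.0.50.11", "10.0.50.12"]  # placeholder node addresses

for ip in node_ips:
    # "-n 1" sends a single echo request on Windows; use "-c 1" on Linux/macOS.
    result = subprocess.run(["ping", "-n", "1", ip],
                            capture_output=True, text=True)
    status = "reachable" if result.returncode == 0 else "NOT reachable"
    print(f"{ip}: {status}")
```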

Once it shows up in your CMC, you should see it listed on the left-hand pane.

Select TCP/IP Network under the P4500 node.  Your TCP/IP settings will appear in the main window.

Under the TCP/IP tasks drop-down, create a new bond.

There are three types of bonds available.  The most common is Adaptive Load Balancing.  We will be using this bond for our configuration.

Once you make these network changes, the CMC will no longer be able to communicate with the node until the network adapter bond comes back up.

The CMC will ask whether you want to search for the node again.  Give it a moment and proceed to re-discover your newly configured adapter bond.

Click back on the TCP/IP settings to see the new ALB bond configuration settings.

Lastly, you need to create a management group by going to Tasks -> Management Group -> New Management Group.  Create an administrator that the CMC will use to connect to your Management Group.

You should see your newly configured P4500 node and be able to connect the node to your management group.

Of course, at this point, your new shelf is accessible from the CMC.  Typically, you will purchase more than one Lefthand node so you can create a cluster that provides redundancy.  The next few screenshots show my configuration of a 2-node cluster of P4300s.

Assuming the nodes have already been added to the management group you created in the step above, you can create a cluster.

First, you will be prompted to enter a Virtual IP address (VIP).  This is what your iSCSI initiators will use to connect to your volumes.
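Since iSCSI targets listen on TCP 3260, once the cluster is online you can confirm the VIP is answering before you ever touch an initiator.  A minimal sketch; the VIP shown is a placeholder:

```python
import socket

VIP = "10.0.50.100"  # placeholder cluster Virtual IP
ISCSI_PORT = 3260    # standard iSCSI target port

try:
    # If this connects, the cluster is listening for iSCSI logins on the VIP.
    with socket.create_connection((VIP, ISCSI_PORT), timeout=3):
        print(f"iSCSI target is answering at {VIP}:{ISCSI_PORT}")
except OSError as exc:
    print(f"Could not reach {VIP}:{ISCSI_PORT} - {exc}")
```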

Next, select the nodes to include in your cluster.  The more nodes you select, the higher the level of Network RAID you can use.  A cluster can have volumes configured with Network RAID 0 on a single node, Network RAID 10 with 2 or more nodes, Network RAID 5 with 3 or more, and Network RAID 6 with 5 or more nodes.

(NOTE:  the RAID configuration on the nodes listed above is hardware RAID, which is different from Network RAID.  Think of hardware RAID as redundancy within a node; Network RAID is redundancy within a cluster, i.e. a grouping of nodes.  Network RAID allows you to lose an entire shelf without losing your storage.)
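The node-count thresholds above boil down to a simple lookup.  A sketch encoding just those rules (a hypothetical helper, not part of any HP tooling):

```python
def available_network_raid(node_count):
    """Network RAID levels selectable for a volume, per the
    node-count thresholds described above."""
    levels = []
    if node_count >= 1:
        levels.append("Network RAID 0")
    if node_count >= 2:
        levels.append("Network RAID 10")
    if node_count >= 3:
        levels.append("Network RAID 5")
    if node_count >= 5:
        levels.append("Network RAID 6")
    return levels

print(available_network_raid(2))  # ['Network RAID 0', 'Network RAID 10']
```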

Lastly, you can create a volume.  This volume will back a datastore that you can configure in VMware ESX(i).

A much more in-depth user manual is available at http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c02008676/c02008676.pdf

That summarizes the configuration of an HP Lefthand P4500 node, cluster and volume.

I will be posting Part 2 of this series in the next day or so, including how to configure your ESX(i) host for use with your newly configured volume.


9 Comments

  1. Michael

    Interesting stuff, thanks for sharing!

    BR from Germany
    Michael

  2. Julian

    Interested to see part 2, especially if you use esxtop to measure kernel latency when doing a Storage vMotion between datastores on the same P4500 cluster. Can you post your results?
    Thanks

  3. Ralf

    I can’t wait to see the next chapter. ESXi v5 has some decent improvements when it comes to iSCSI config, as you no longer have to mess about in the CLI to bind the NICs etc.

    1. Matt (Post author)

      Ralf-

      I finally found time to post update 2. Enjoy! http://www.tcwd.net/vblog

  4. Steven

    I am an HP fan and set up one of these on a “proof of concept” basis a year+ ago. We just went through an entire presentation with HP engineers and SAN technical sales … then I downloaded the user manuals and did my homework. EXCEPT I never did look at the back of the P4500 units. They have two and only two RJ45 ports per unit – which they advertise as (4 NICs).

    I NEED 5 NICS to team and manage each unit. So, I figured I would just buy some 4-port GBit NICs. Turns out that the people who should know are either not talking or are hiding from me – now that I have found them out.

    I suspect that they only offer a 10Gb NIC to push you in the right direction – BUT the user manual says clearly that you can just add some GBit ports if you are not QUITE ready for 10Gb.

    Still growling, but would love the workaround – supported or not.

    QUESTION: Do you know of any NIC – HP or otherwise – that will work in these units and which I can team??

  5. DavidRa

    Hi Steven

    I’d suggest that since it’s a pretty standard DL380-ish server, and it probably has a PCIe riser card of some description, you could add a pair of the dual-port Intel Pro/1000 PT cards and have them recognised. That’s certainly what I plan to do once I have my two P4500 units (delivery hopefully tomorrow 🙂 )

    1. DavidRa

      Actually … based on this document http://vstorage.wordpress.com/2010/04/01/in-and-around-the-hp-p4500-g2-san/ I’d suggest that adding two dual-port (or quad-port) NICs is absolute child’s play.

  6. ulysis

    Just an inquiry: why did we use only a single IP for the NIC bond? Is that the best practice?

  7. Sajjad Rezaei

    was useful thx
