The arrow of time

Ivan Voras' blog

HP "LeftHand"

I've seen an HP "LeftHand" / StorageWorks P4000 SAN device recently and came away quite impressed. One thing that occurred to me is: why didn't anyone try this before? Certainly both Linux (to a lesser extent) and FreeBSD (to a somewhat greater one) have contained the pieces for it for some years now. In fact, several people have built such setups privately or internally for their companies, but there was apparently never a concentrated effort to sell one.

What LeftHand is, is a "network RAID" SAN product, built around Linux with some custom internal software, on completely commodity / COTS ProLiant hardware. It basically offers iSCSI storage with redundancy built over the network (Ethernet) across multiple servers. Each box is a separate, complete server containing an arbitrary set of drives in a RAID volume. Multiple boxes are then combined in a RAID-like setup (becoming a RAIS - Redundant Array of Inexpensive Servers), offering SAN volumes at a desired RAID level using the servers as lower-level storage. An example setup might consist of three boxes, each with 8 drives in RAID-5, exporting three volumes: one RAID-5 volume spanning the three servers (in effect making this a RAID-55 setup), one RAID-1 volume spanning two servers (RAID-15), and one volume served from only one server.

This has been possible in FreeBSD at least since ZFS was imported, around 3 years ago, but can also be achieved with "lesser" file systems, a volume manager and software RAID. Here is how the example setup could be achieved:

  1. Configure three individual servers (s1, s2, s3) with some drives; in each server, make a single large ZFS RAID volume from all the drives, or use a hardware RAID controller and create a simple ZFS volume on top of it (boot from an internal USB key if bootability or operating-system disk space is an issue - lots of modern servers have an internal USB port for exactly this sort of thing, e.g. for VMware).
  2. Plan your end-layout. Let's say each server holds 10 TB of user-available storage and we want to use 6 TB from each server to create the big RAID-Z volume, and the rest will go into either the RAID-1 volume or the "plain" volume.
  3. Use "zfs create -V" to create one 6 TB zvol and one 4 TB zvol on each server.
  4. Export these volumes via iSCSI, using ports/net/istgt or via ggated(8).
  5. Plan which node will be the "head" for each volume. You could also introduce a separate "head node" which would only import the iSCSI volumes, but it could become a bottleneck. Let's say that s1 will be the head for the RAID-Z volume, s2 for the RAID-1 volume and s3 for the plain 4 TB volume.
  6. On s1, import the other two 6 TB zvols via iSCSI with iscsi_initiator(4), on s2 import the one other 4 TB volume from s1, and on s3 do nothing in this step.
  7. On s1, create a new RAID-Z volume from the one local 6 TB zvol and the two iSCSI-imported ones; on s2, create a new RAID-1 ZFS volume from the one local 4 TB zvol and the one iSCSI-imported zvol; on s3, just use the previously created zvol.
  8. You can now use all of the created storage devices however you want. In the case of LeftHand, the end result is again exported over iSCSI, but you can also simply create a file system on the end volumes and use them locally. In retrospect, you could probably shave off quite a few heavy layers by using ZFS only for the end volumes and using hardware RAID to produce the volumes from step 3.
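The steps above can be sketched as a handful of commands. Note that the hostnames, pool names, iSCSI nicknames and device names below are assumptions for illustration only, not names any particular setup would actually use:

```shell
# On each server (s1, s2, s3): one big local pool and two zvols.
zpool create tank raidz da0 da1 da2 da3 da4 da5 da6 da7   # step 1
zfs create -V 6T tank/big                                 # step 3
zfs create -V 4T tank/small
# Step 4: configure net/istgt to export /dev/zvol/tank/big and
# /dev/zvol/tank/small as iSCSI targets (istgt.conf not shown here).

# Step 6, on s1 (head for the RAID-Z volume): attach the remote
# 6 TB zvols with iscsi_initiator(4); "s2-big" and "s3-big" are
# nicknames assumed to be defined in /etc/iscsi.conf.
iscontrol -c /etc/iscsi.conf -n s2-big    # appears as e.g. /dev/da8
iscontrol -c /etc/iscsi.conf -n s3-big    # appears as e.g. /dev/da9

# Step 7: network-level RAID-Z from one local and two imported legs.
zpool create bigtank raidz /dev/zvol/tank/big da8 da9
```

The RAID-1 volume on s2 is built the same way, only with `zpool create ... mirror` over the local and the imported 4 TB zvols.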

Why would someone use such a setup, especially considering it is considerably more complex than just using simple DAS or SAN storage with a single level of RAID? First and foremost, it's a cheap way to introduce multi-server storage redundancy while also increasing space. If you use ZFS on the end-result volumes you can automagically extend storage space by adding more boxes. With some fancy scripting, hot failover can be implemented.
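In its most minimal, manual form, that "fancy scripting" could amount to something like the sketch below; the pool name and iSCSI nickname are assumptions carried over from a hypothetical three-node setup, and a real implementation would need health checks and fencing on top:

```shell
# If the head node for the RAID-Z volume dies, a surviving node can
# log in to the remaining iSCSI leg(s) and force-import the pool;
# RAID-Z keeps working (degraded) with one leg missing.
iscontrol -c /etc/iscsi.conf -n s3-big   # assumed /etc/iscsi.conf nickname
zpool import -f bigtank                  # -f: pool was last in use by the dead head
```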

Of course, Ethernet speed is an issue. A setup like this will only work well with either 10 Gbit NICs or a carefully planned network setup with multiple 1 Gbit NICs (which is the way the low-end LeftHand models work).
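On FreeBSD, the multiple-NIC variant would typically use lagg(4) link aggregation; a minimal /etc/rc.conf sketch, where the interface names and the address are assumptions for this example:

```shell
# Aggregate two gigabit NICs with LACP (the switch must speak LACP too).
ifconfig_em0="up"
ifconfig_em1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 192.0.2.10/24"
```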

Why would you buy LeftHand when a setup like this can be done with FreeBSD (and even saner Linuxes)? Because the LeftHand product has a GUI (albeit an awkwardly packaged one: written in Java, but shipped with native installers and requiring its own bundled cut-down Java runtime rather than a generic one) which condenses all these steps into a few mouse clicks.


(On a tangential topic, ZFS v28 is ready for testing! It brings deduplication, RAIDZ3, log device removal and more!)

#1 Re: HP "LeftHand"

Added on 2010-09-01T03:29 by Matthew Horan

We've been using HP/LeftHand SANs at my company for years now.  There have been some bumps in the road -- especially when HP bought LeftHand a few years ago.  However, the concept around the hardware is pretty awesome.

We've done some pretty extensive testing of the network RAID features in the field -- both in a controlled test environment, and as a result of hardware failures.  We've seen some funky stuff -- for example, a failed drive bringing down an entire node -- but the cool thing is that even if an entire node goes down, your storage will remain online.

Coupled with NIC bonding (LACP, etc.), the HP/LeftHand SAN is a solid platform, aside from its occasional failures.  We've had some issues with HP support when it comes to failures, e.g. telling us to upgrade firmware versions instead of diagnosing an issue and finding a root cause -- and in some cases, firmware upgrades have caused entire units (or clusters) to be offline for hours (days in one case).

I've not had issues using the standard JVM with the CMC.  In fact, I just fired it up in OpenJDK on my Karmic laptop.  Perhaps something has changed with the version of the CMC that HP provided to you.

It's always nice to read about others using this product.  When we first deployed these units, there was very little information on them available.  Now that they've grown in deployment, it's interesting to see what others are doing with the units.

#2 Re: HP "LeftHand"

Added on 2010-09-01T11:11 by Ivan Voras

I didn't try it in practice but the overall impression was very good.

On JVM problems: I naturally tried to run the console on FreeBSD, with OpenJDK 6 and a local build of jdk16, and both failed with "unsupported Java version". Maybe it was detecting the OS rather than the Java version.

#3 Re: HP "LeftHand"

Added on 2010-09-01T16:05 by joeavg

Ivan, why are you saying that Linux contains the pieces to a lesser extent?

#4 Re: HP "LeftHand"

Added on 2010-09-01T17:24 by Ivan Voras

Because GEOM and ZFS either separately or especially together are more flexible and powerful than the Linux LVM and md subsystems with the common file systems like ext3/4.

#5 Re: HP "LeftHand"

Added on 2010-09-02T02:08 by Ben

Not sure if you're familiar with Isilon, but a similar idea.

#6 Re: HP "LeftHand"

Added on 2010-09-02T03:54 by Calvin Zito

Hi Ivan - I work for HP Storage and am familiar with the P4000 (used to be LeftHand but isn't anymore).  I manage our blog and we talk about the P4000 often.  You can find it at  I'm also very active on Twitter: 

Calvin Zito

#7 Re: HP "LeftHand"

Added on 2010-09-03T16:13 by Ivan Voras

If Isilon does it then I'm happy - they use and support FreeBSD! Unfortunately I probably cannot try it in my backwards part of the planet.

#8 Re: HP "LeftHand"

Added on 2010-09-03T16:14 by Ivan Voras

@HP: I just found out the retail price of the LeftHand setup I've tried and laughed. It's too expensive!
