Because of a chronic money shortage at this job, I've configured a single-CPU VMware host with two SATA drives, and was later surprised to find I couldn't mirror them within the host OS (i.e. ESXi). The alternative is atrocious, but under the circumstances it's the best I could do: I create virtual hard drives, one on each physical drive, and do software mirroring within the guest OS. I have no idea how this setup will behave on drive failure; at the very least, I suppose there is absolutely no chance of anything resembling hot replacement of physical drives.
Anyway, in this particular instance I've set up 8.0-RC1 and used ZFS for the mirroring. One of the reasons for this is to enable ZFS compression on the actual data sets, which turned out nicely, though not as nicely as expected (only a 1.25x compression ratio - I guess the data simply doesn't compress much). Real-world performance is, as expected, nothing to brag about in any respect, and bonnie++ just confirms it.
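For reference, the guest-side setup looks roughly like this; the device names and pool/dataset names are assumptions, not what's actually on my system:

```shell
# Create a mirrored pool from the two virtual disks (one per physical drive).
# Device names (da1, da2) and the pool name "tank" are illustrative.
zpool create tank mirror da1 da2

# One dataset without compression and one with gzip, for comparison.
zfs create tank/plain
zfs create -o compression=gzip tank/gzip

# After loading data, check the achieved compression ratio.
zfs get compressratio tank/gzip
```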
I tested two datasets within the same pool, one without compression and one with gzip compression. The interesting parts of the benchmark results are:
- Sequential output: without compression: 30 MB/s; with compression: 23 MB/s
- Sequential rewrite: without compression: 16 MB/s; with compression: 21 MB/s
- Sequential input: without compression: 60 MB/s; with compression: 96 MB/s
- File creation (random or sequential): without compression: 7600 files/s; with compression:
The SATA drives individually (and without VMware emulation) easily pull between 50 and 90 MB/s. Bonnie++ reports CPU usage of around 20%, which is completely bogus; measuring with 'top' reveals much higher CPU usage, almost entirely in "sys" time (50% for the uncompressed dataset, 100% for the compressed one). Evidently, bonnie++ writes easily compressible data - possibly zeroed blocks - which explains the higher read rates on the compressed dataset.
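The numbers above came from plain bonnie++ runs against each dataset; the invocation was roughly as follows (the mount points, file size, and user are assumptions):

```shell
# Run bonnie++ against each dataset. -d sets the test directory, -s the
# total file size (it should be well above RAM so the ARC can't cache
# everything), and -u the unprivileged user to run as.
bonnie++ -d /tank/plain -s 4g -u nobody
bonnie++ -d /tank/gzip  -s 4g -u nobody
```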
I really do not recommend this configuration to anyone who can manage an alternative - absolutely every aspect of it is suboptimal: only two SATA drives, VMware introducing its own IO slowdowns, and mirroring inside the guest VMs, so both performance and reliability suffer greatly. Additionally, this setup is very problematic to use with rdiff-backup (or, I suspect, any similar utility working over ssh), because ZFS combined with VMware's slow IO results in frequent "pauses" during which the operating system is, for all practical purposes, locked up for several seconds.
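I haven't verified that it cures the pauses in this setup, but the usual ZFS-on-FreeBSD tuning for this kind of stall is to cap the ARC via /boot/loader.conf; the value below is only an example:

```shell
# /boot/loader.conf - cap the ZFS ARC so it doesn't fight the guest's
# workload for memory. The 512M figure is illustrative; size it to leave
# enough RAM free for everything else running in the VM.
vfs.zfs.arc_max="512M"
```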
On a somewhat older "raw" (non-virtualized) system running 7.1 with the old ZFS (ZFSv6) and three SATA drives in RAIDZ, the achieved performance is around 40 MB/s for writes and 125 MB/s for reads (both without compression).
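For comparison, that older pool is just a plain single-parity RAIDZ over three disks, created along these lines (device names are again assumptions):

```shell
# Three-drive single-parity RAIDZ; device names and pool name are illustrative.
zpool create tank raidz ad4 ad6 ad8
```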