This time I tested RAID-5 performance, using the same hardware as in my last RAID-10 benchmark.
I created a RAID-5 volume consisting of 6 disks on a 3510 head unit with 2 controllers, using random optimization. I also created a software RAID-5 (aka RAID-Z) group using ZFS on 6 identical disks in a 3510 JBOD. Both the HW and SW RAIDs were connected to the same host (a v440) and tested with filebench's varmail workload. The results are below.
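For reference, the software side of such a setup takes only a couple of commands. This is a minimal sketch, not my exact invocation: the pool name and the c2tXd0 device names are hypothetical placeholders for the six JBOD disks, and the filebench step is described in a comment because its exact syntax varies between releases.

```shell
# Create a single RAID-Z group from the six JBOD disks
# (device names are hypothetical -- substitute your own).
zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0

# Disable access-time updates, as in the tests below.
zfs set atime=off tank

# Then point filebench's varmail personality at a directory on the
# pool (e.g. /tank) and run the workload.
```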
These tests show that software RAID-5 in ZFS can not only match hardware RAID-5, it can even be faster. The same was true for RAID-10: ZFS software RAID-10 was faster than hardware RAID-10.
Please note that I tested HW RAID on a 3510 FC array, not on some junky PCI RAID card.
1. ZFS on HW RAID5 with 6 disks, atime=off
IO Summary: 444386 ops 7341.7 ops/s, (1129/1130 r/w) 36.1mb/s, 297us cpu/op, 6.6ms latency
IO Summary: 438649 ops 7247.0 ops/s, (1115/1115 r/w) 35.5mb/s, 293us cpu/op, 6.7ms latency
2. ZFS with software RAID-Z with 6 disks, atime=off
IO Summary: 457505 ops 7567.3 ops/s, (1164/1164 r/w) 37.2mb/s, 340us cpu/op, 6.4ms latency
IO Summary: 457767 ops 7567.8 ops/s, (1164/1165 r/w) 36.9mb/s, 340us cpu/op, 6.4ms latency
3. There's some problem with UFS in snv_44, so the UFS test was run on S10U2 instead; see test #4.
4. UFS on HW RAID5 with 6 disks, noatime, S10U2 + patches (the same filesystem mounted as in 3)
IO Summary: 393167 ops 6503.1 ops/s, (1000/1001 r/w) 32.4mb/s, 405us cpu/op, 7.5ms latency
IO Summary: 394525 ops 6521.2 ops/s, (1003/1003 r/w) 32.0mb/s, 407us cpu/op, 7.7ms latency
5. ZFS with software RAID-Z with 6 disks, atime=off, S10U2 + patches (the same disks as in test #2)
IO Summary: 461708 ops 7635.5 ops/s, (1175/1175 r/w) 37.4mb/s, 330us cpu/op, 6.4ms latency
IO Summary: 457649 ops 7562.1 ops/s, (1163/1164 r/w) 37.0mb/s, 328us cpu/op, 6.5ms latency
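To put the numbers above side by side, here is a small awk snippet that averages the two runs of each test and computes the relative throughput differences (the figures are taken directly from the IO Summary lines above):

```shell
awk 'BEGIN {
  hw  = (7341.7 + 7247.0) / 2   # ZFS on HW RAID-5, snv_44
  rz  = (7567.3 + 7567.8) / 2   # ZFS RAID-Z, snv_44
  ufs = (6503.1 + 6521.2) / 2   # UFS on HW RAID-5, S10U2
  rz2 = (7635.5 + 7562.1) / 2   # ZFS RAID-Z, S10U2
  printf "RAID-Z vs ZFS on HW RAID-5: %+.1f%%\n", (rz  / hw  - 1) * 100
  printf "RAID-Z vs UFS on HW RAID-5 (S10U2): %+.1f%%\n", (rz2 / ufs - 1) * 100
}'
```

So RAID-Z comes out a few percent ahead of ZFS on the hardware RAID-5 volume, and well ahead of UFS on the same hardware volume.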
See my post on zfs-discuss@opensolaris.org list for more details.
I have also found some benchmarks comparing ZFS, UFS, ReiserFS and EXT3 on the same x86 hardware - ZFS was, of course, the fastest. See here and here.