I used a 3510 head unit with 73GB 15K RPM disks, with a RAID-10 volume made of 12 disks in one enclosure.
On the other server (same server specs) I used 3510 JBODs with the same disk model.
I used filebench to generate the workloads. The "varmail" workload was run for 60s, with two runs for each config.
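For anyone who wants to reproduce the runs, a single 60s varmail run looks roughly like the sketch below (the filebench install path and workload syntax differ between versions, and /tank/fs is just a placeholder for the filesystem under test):

    # start an interactive filebench session and run varmail for 60s
    filebench
    filebench> load varmail
    filebench> set $dir=/tank/fs
    filebench> run 60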
1. ZFS filesystem on a HW LUN with atime=off:
IO Summary: 499078 ops 8248.0 ops/s, (1269/1269 r/w) 40.6mb/s, 314us cpu/op, 6.0ms latency
IO Summary: 503112 ops 8320.2 ops/s, (1280/1280 r/w) 41.0mb/s, 296us cpu/op, 5.9ms latency
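For reference, the ZFS-on-HW-LUN case needs nothing more than a pool on top of the single LUN exported by the array; a minimal sketch, assuming the RAID-10 volume shows up as one device (the device name is made up):

    # one-device pool on the LUN exported by the 3510 RAID controller
    zpool create tank c2t40d0
    # disable access-time updates
    zfs set atime=off tank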
2. UFS filesystem on a HW LUN with maxcontig=24 and noatime:
IO Summary: 401671 ops 6638.2 ops/s, (1021/1021 r/w) 32.7mb/s, 404us cpu/op, 7.5ms latency
IO Summary: 403194 ops 6664.5 ops/s, (1025/1025 r/w) 32.5mb/s, 406us cpu/op, 7.5ms latency
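The UFS settings can be applied at newfs time or afterwards with tunefs; a sketch with placeholder device and mount point:

    # create UFS on the HW LUN with maxcontig=24
    newfs -C 24 /dev/rdsk/c2t40d0s0
    # (or change it on an existing filesystem: tunefs -a 24 /dev/rdsk/c2t40d0s0)
    # mount without access-time updates
    mount -F ufs -o noatime /dev/dsk/c2t40d0s0 /mnt/ufs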
3. ZFS filesystem with atime=off on a ZFS raid-10 made of 12 disks from one enclosure:
IO Summary: 558331 ops 9244.1 ops/s, (1422/1422 r/w) 45.2mb/s, 312us cpu/op, 5.2ms latency
IO Summary: 537542 ops 8899.9 ops/s, (1369/1369 r/w) 43.5mb/s, 307us cpu/op, 5.4ms latency
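A ZFS raid-10 over 12 disks is just a stripe of six 2-way mirrors; a sketch of how such a pool can be created (the cXtYd0 names are placeholders for the twelve JBOD drives):

    # six mirrored pairs striped together = raid-10 across 12 disks
    zpool create tank \
        mirror c3t0d0 c3t1d0 \
        mirror c3t2d0 c3t3d0 \
        mirror c3t4d0 c3t5d0 \
        mirror c3t6d0 c3t7d0 \
        mirror c3t8d0 c3t9d0 \
        mirror c3t10d0 c3t11d0
    zfs set atime=off tank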
In other tests, HW RAID and ZFS software RAID showed about the same performance.
So it looks like, at least for some workloads, ZFS software RAID can be faster than HW RAID.
Also, please note that the HW RAID was done on a real HW array and not some crappy PCI RAID card.
For more details, see my post on the zfs-discuss list.
Who makes the RAID head that you reference, and was ZFS running CRC checks on the software RAID? Also, what was the size of the array, and how long did you run the tests?
The array was a Sun StorEdge 3510.
12x 73GB 15K disk drives, and each test ran for 60s. You can actually find all of this in the post.
Checksums were enabled in ZFS (the default).
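If you want to verify that on a pool, something like this shows the current setting (tank is a placeholder pool name):

    zfs get checksum tank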
Hi,
The Sun 3510 is an OK RAID controller, but it is a bit dated already, no?
I have seen faster performance on a Dell PERC 6/E (LSI, newer chipset) than on Sun 3510/3511 type gear, and that is even with 'low end' SATA disks in the Dell brick on the PERC RAID.
Just a thought,
Tim