Friday, November 03, 2006

Thumper throughput

For some testing I'm creating 8 RAID-5 devices under SVM with a 128k interleave size right now. It's really amazing how much throughput the x4500 server can deliver. Right now all of those RAID-5 volumes together are generating over 2GB/s of write throughput! Woooha! It can write more data to its disks than most (all?) Intel servers can read or write to memory :))))


bash-3.00# metainit d101 -r c0t0d0s0 c1t0d0s0 c4t0d0s0 c6t0d0s0 c7t0d0s0 -i 128k
d101: RAID is setup
bash-3.00# metainit d102 -r c0t1d0s0 c1t1d0s0 c5t1d0s0 c6t1d0s0 c7t1d0s0 -i 128k
d102: RAID is setup
bash-3.00# metainit d103 -r c0t2d0s0 c1t2d0s0 c5t2d0s0 c6t2d0s0 c7t2d0s0 -i 128k
d103: RAID is setup
bash-3.00# metainit d104 -r c0t4d0s0 c1t4d0s0 c4t4d0s0 c6t4d0s0 c7t4d0s0 -i 128k
d104: RAID is setup
bash-3.00# metainit d105 -r c0t3d0s0 c1t3d0s0 c4t3d0s0 c5t3d0s0 c6t3d0s0 c7t3d0s0 -i 128k
d105: RAID is setup
bash-3.00# metainit d106 -r c0t5d0s0 c1t5d0s0 c4t5d0s0 c5t5d0s0 c6t5d0s0 c7t5d0s0 -i 128k
d106: RAID is setup
bash-3.00# metainit d107 -r c0t6d0s0 c1t6d0s0 c4t6d0s0 c5t6d0s0 c6t6d0s0 c7t6d0s0 -i 128k
d107: RAID is setup
bash-3.00# metainit d108 -r c0t7d0s0 c1t7d0s0 c4t7d0s0 c5t7d0s0 c6t7d0s0 c7t7d0s0 -i 128k
d108: RAID is setup
bash-3.00#
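
The write load shown below comes from SVM initializing the freshly created RAID-5 volumes in the background. To check how far each volume has got, metastat should report the state and initialization progress of every metadevice; just a quick pointer, output omitted here:

bash-3.00# metastat -c      # concise one-line-per-metadevice summary
bash-3.00# metastat d101    # full status for a single volume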


bash-3.00# iostat -xnzCM 1 | egrep "device| c[0-7]$"
[first output omitted as it's the average since reboot]
extended device statistics
r/s w/s Mr/s Mw/s wait actv wsvc_t asvc_t %w %b device
0.0 367.5 0.0 367.5 0.0 8.0 0.0 21.7 0 798 c0
0.0 389.5 0.0 389.5 0.0 8.0 0.0 20.5 0 798 c1
0.0 276.4 0.0 276.4 0.0 6.0 0.0 21.7 0 599 c4
5.0 258.4 0.0 258.4 0.0 6.0 0.0 22.9 0 602 c5
0.0 394.5 0.0 394.5 0.0 8.0 0.0 20.2 0 798 c6
0.0 396.5 0.0 396.5 0.0 8.0 0.0 20.1 0 798 c7
extended device statistics
r/s w/s Mr/s Mw/s wait actv wsvc_t asvc_t %w %b device
0.0 376.0 0.0 376.0 0.0 8.0 0.0 21.2 0 798 c0
0.0 390.0 0.0 390.0 0.0 8.0 0.0 20.5 0 798 c1
0.0 281.0 0.0 281.0 0.0 6.0 0.0 21.3 0 599 c4
0.0 250.0 0.0 250.0 0.0 6.0 0.0 24.0 0 599 c5
0.0 392.0 0.0 392.0 0.0 8.0 0.0 20.4 0 798 c6
0.0 386.0 0.0 386.0 0.0 8.0 0.0 20.7 0 798 c7
extended device statistics
r/s w/s Mr/s Mw/s wait actv wsvc_t asvc_t %w %b device
0.0 375.0 0.0 375.0 0.0 8.0 0.0 21.3 0 798 c0
0.0 407.0 0.0 407.0 0.0 8.0 0.0 19.6 0 798 c1
0.0 275.0 0.0 275.0 0.0 6.0 0.0 21.8 0 599 c4
0.0 247.0 0.0 247.0 0.0 6.0 0.0 24.2 0 599 c5
0.0 388.0 0.0 388.0 0.0 8.0 0.0 20.6 0 798 c6
0.0 382.0 0.0 382.0 0.0 8.0 0.0 20.9 0 798 c7
^C
bash-3.00# bc
376.0+390.0+281.0+250.0+392.0+386.0
2075.0
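
Instead of adding the per-controller Mw/s figures by hand in bc, the sum can be scripted. A rough one-liner, assuming all six controllers show activity in every sample (so the last six matching lines belong to one sample) and that Mw/s is the fourth field of the iostat -xnzCM output:

bash-3.00# iostat -xnzCM 1 3 | egrep " c[0-7]$" | tail -6 | awk '{ mw += $4 } END { printf "%.1f MB/s\n", mw }'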

5 comments:

  1. Cool performance stats indeed, but why would you use anything but ZFS on a Thumper if you're running Solaris 10 on it already?

  2. Actually I have used only ZFS. I was just curious what the performance difference between RAID-Z and SVM RAID-5 is. The numbers I put in this blog entry show only the write throughput while the SVM RAID-5 groups are being created. Later I actually did some testing and compared the results to a ZFS-only solution. I chose ZFS here, but I must say there are cases when using ZFS+SVM could actually make sense.

  3. Hi Milek,

    Many thanks for this information. We're actually looking at the Thumper platform to replace our old (V440-based) RDBMS servers.
    We run mainly data warehousing stuff on our boxes, so I'd be interested to know how much CPU was left when running the benchmark on your Thumper system. Is there still room for some analytic (= CPU-intensive) queries on the 4 cores?

    Jochen

  4. We're looking at the x4500 as well. Do you have any recommendations with regard to ZFS RAID groupings?

  5. 1. With a RAID-10 setup, in most tests there was still plenty of CPU left. In real life, in most cases I guess ZFS won't consume more than 10-20% of CPU. But you should always check with your application, especially as the x4500 can't be expanded with more CPUs.

    2. It depends on what you want to achieve and what the requirements of your application are. I ended up with 4 RAID-Z2 groups of 11 disks each, so two disks are left for the system (mirrored) and another two disks serve as hot spares, everything in one pool. A rough sketch of that layout is shown below, after the comments.

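
For reference, a minimal sketch of the pool layout described in the last comment: four 11-disk RAID-Z2 groups plus two hot spares in a single pool, leaving two drives for the mirrored system disks. The pool name and the exact disk-to-group mapping are made up for illustration (judging by the metainit commands above, the boot disks on this box are c5t0d0 and c5t4d0, which the sketch leaves out), so adjust both to your own machine:

zpool create tank \
    raidz2 c0t0d0 c1t0d0 c4t0d0 c6t0d0 c7t0d0 c0t1d0 c1t1d0 c4t1d0 c5t1d0 c6t1d0 c7t1d0 \
    raidz2 c0t2d0 c1t2d0 c4t2d0 c5t2d0 c6t2d0 c7t2d0 c0t3d0 c1t3d0 c4t3d0 c5t3d0 c6t3d0 \
    raidz2 c7t3d0 c0t4d0 c1t4d0 c4t4d0 c6t4d0 c7t4d0 c0t5d0 c1t5d0 c4t5d0 c5t5d0 c6t5d0 \
    raidz2 c7t5d0 c0t6d0 c1t6d0 c4t6d0 c5t6d0 c6t6d0 c7t6d0 c0t7d0 c1t7d0 c4t7d0 c5t7d0 \
    spare  c6t7d0 c7t7d0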