Thursday, May 14, 2009

Open Storage and Data Caching

I've been playing with the Open Storage 7410 appliance recently. Although I've been using its GUI for quite some time now, thanks to the FishWorks beta program, it still amazes me how good it is, especially when you compare it to NetApp or Data Domain.

One of the really good things about Open Storage is that it allows for quite a lot of read/write cache (currently up to 128GB). If that is still not enough, you can put in up to ~600GB of additional read cache in the form of SSDs. What this means in practice is that many real-life workloads will fit entirely into the cache, which in turn provides excellent performance. In a way this is nothing new, except for... economics! Try to find any other NAS product on the market in which you can put ~600GB of cache, in the same price range as Open Storage. You won't find anything like it.
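Under the hood, the SSD read cache is ZFS's L2ARC. On the 7410 this is configured through the appliance interface rather than by hand, but in plain ZFS terms the idea looks roughly like this (a sketch only; the device names are made up for illustration):

```
# Sketch: a double-parity pool with SSDs attached as L2ARC read cache.
# On the 7410 the appliance software drives this, not raw zpool commands.
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
zpool add tank cache c2t0d0 c2t1d0   # SSDs become read (L2ARC) cache devices
zpool iostat -v tank                 # per-device ops, including the cache devices
```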

I created a disk pool out of 20x 1TB SATA disk drives protected with RAID-DP (aka RAIDZ2, which is an implementation of RAID-6). Now, RAIDZ2 is known for very bad random read performance across multiple streams if the data is not cached. Using filebench, I ran a random read workload against a dataset of 10GB (let's say a small MySQL database) with 16 active streams. The 7410 appliance had been rebooted prior to the test, so all caches were clean. As you can see in the screenshot below, at the beginning it was able to sustain ~400 NFSv3 operations per second. After about 50 minutes it delivered ~12,000 NFSv3 operations per second, which saturated my 1GbE link. Over the same period the average latency of NFS operations was getting smaller and smaller, as was the number of operations going to physical disks. At some point all the data was in cache and there were no operations to physical disks at all.
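The exact workload file isn't shown above, but a minimal filebench definition along these lines would reproduce the setup: a 10GB preallocated file read at random in 8KB chunks by 16 threads (the path and names here are assumptions, not the ones I actually used):

```
# Hypothetical filebench workload: 16 streams of 8KB random reads over 10GB
define file name=largefile1,path=/mnt/s7410,size=10g,prealloc,reuse
define process name=rand-read,instances=1
{
  thread name=rand-thread,memsize=5m,instances=16
  {
    flowop read name=rand-read1,filename=largefile1,iosize=8k,random
  }
}
run 3600
```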

The appliance could certainly do much more if I used more GbE links or 10GbE links. Now, remember that I used 20x 1TB SATA disk drives in a RAID-DP configuration to get this performance, and that it could sustain it for workloads with a working set size of up to ~600GB. To put these numbers into perspective: one 15K FC disk drive can deliver at most ~250 8KB random reads per second. You would need almost 100 such disk drives configured in RAID-10 to match this performance, and you would still get less capacity (even assuming 300GB 15K FC drives).
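The drive-count claim is easy to sanity-check with simple arithmetic (a sketch following the counting in the paragraph above, where every data spindle is doubled by its RAID-10 mirror):

```python
# Back-of-the-envelope version of the comparison, using only
# the numbers quoted in the text.
nfs_ops = 12000          # 8KB random reads/s once the working set is cached
ops_per_fc_disk = 250    # rough ceiling for one 15K FC drive

spindles = nfs_ops / ops_per_fc_disk      # drives needed for the raw read rate
raid10_spindles = spindles * 2            # each drive mirrored in RAID-10
print(raid10_spindles)                    # 96.0 -> "almost 100" drives

usable_fc_gb = raid10_spindles / 2 * 300  # usable GB with 300GB 15K FC drives
usable_sata_gb = (20 - 2) * 1000          # 20x 1TB RAIDZ2: 2 drives of parity
print(usable_fc_gb, usable_sata_gb)       # 14400.0 vs 18000 -> less capacity
```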

Open Storage is a game changer for a lot of workloads, both in terms of delivered performance and cost - currently there isn't really anything on the market which can match it.
