Friday, August 30, 2013

Is Big Iron Coming Back?

In the past decade servers have become boring: almost everything runs on cheap x86 servers which mainly differ by color. Now, 96 CPU sockets, 1,152 cores, 9,216 threads, and up to 96 TB of RAM in a single server? I wouldn't mind playing with such a monster...

Friday, August 16, 2013

What's New in OpenZFS

Matt Ahrens talks about new features in open (Illumos) ZFS. Some of the performance improvements Matt discusses have had their equivalents in Solaris 11 for some time now, though, and there are many more, for example:

6282155 arc doesn't always need to make a copy
6281079 ZFS I/O priority inversion
6914162 Dedup of null blocks could use special treatment
6662450 L2ARC in memory overhead should be reduced
6957289 ARC metadata limit can have serious adverse effect on dedup performance
6896307 arc_meta_limit modernization
Solaris 11 ZFS also has encryption, up to 1MB recordsize, the RAID-Z/mirror hybrid allocator, and 4K sector support, among other new features and improvements. On the other hand, I like the LZ4 support in Illumos, which is not in Solaris 11... It is good to see that both Illumos and Oracle are innovating around ZFS. From an end-user perspective it is a shame, in a way, that they do not actively share code. On the other hand, a little bit of competition might be good after all. We will see.

P.S. The video contains more updates on other technologies around Illumos, and they are worth watching as well.

Deduplication on ZFS and NetApp

Recently I came across a case where the deduplication ratio for the same data is lower on NetApp than on ZFS. This document may explain why - see the limits for dedup on NetApp starting on page 26. Apparently NetApp will silently stop deduping data after a specific limit is reached, and that limit varies across models and ONTAP versions.

Does anyone have other ideas why the effectiveness of dedup on ZFS might be higher for the same data (assuming the same or similar block size)?
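One factor worth checking is block alignment: fixed-block dedup only matches data that lands on identical block boundaries, so the same bytes shifted by a non-multiple of the block size dedupe much worse. Here is a toy sketch of that effect - not ZFS or ONTAP code, just fixed-block hashing over synthetic data:

```python
import hashlib
import os

def dedup_ratio(data: bytes, block_size: int) -> float:
    """Logical blocks divided by unique blocks, hashing fixed-size chunks."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    unique = {hashlib.sha256(b).digest() for b in blocks}
    return len(blocks) / len(unique)

pattern = os.urandom(4096)                       # one 4 KB "record" of data
aligned = pattern * 8                            # copies on 4 KB boundaries
shifted = pattern * 4 + b"\x00" * 512 + pattern * 4  # second half shifted 512 B

print(dedup_ratio(aligned, 4096))   # 8.0  - every 4 KB block is identical
print(dedup_ratio(shifted, 4096))   # 2.25 - the shift breaks 4 KB alignment
print(dedup_ratio(shifted, 512))    # ~7.2 - smaller blocks realign the copies
```

Anything that changes how writes line up with block boundaries (block size, small inserted gaps, metadata interleaving) can therefore move the measured dedup ratio quite a bit, even for byte-identical data.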

Wednesday, August 14, 2013


What if you have ZFS deployed on SAN in a clustered environment and you require a dedicated SLOG? It would be really helpful if you could create a small LUN (2-4GB) directly out of a disk array's cache - this would be perfect for a SLOG. All reads and writes to such a LUN would be serviced entirely from the array's cache, meaning low latency and no double writes to back-end disks/SSDs for synchronous I/O. Some disk arrays actually do provide such a feature; for example, see Hitachi's Cache Residency Manager.