Recently I gave a talk on running OpenAFS services on top of Solaris 11 x86 / ZFS. The talk was split into two parts. The first part is about the $$ benefits of transparent ZFS compression when running on 3rd-party x86 hardware (it also makes sense when running on Sun/Oracle kit - in some cases even more so). This part also discusses some ideas about running AFS on internal disks instead of directly attached disk arrays, which, again thanks to ZFS's built-in compression, becomes worthwhile and delivers even more $$ savings.
The main message of this part is that if your data compresses well (above 2x), running OpenAFS on ZFS can deliver similar or even better performance, but most importantly it can save you lots of $$, both in acquisition costs and in the cost of running an AFS plant. In most cases you should even be able to re-use the x86 hardware you already have. The beauty of AFS is that we were able to migrate data from Linux to Solaris/ZFS in place, re-using the same x86 HW, and all of this was completely transparent to all clients (keep in mind we are talking about PBs of data) - this is truly the cloud file system. I think OpenAFS is one of the under-appreciated technologies in the market.
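To give a flavour of how little is involved, enabling compression on a dataset backing an AFS fileserver partition and later checking the achieved ratio looks roughly like this (the pool and dataset names below are hypothetical examples, not taken from the talk):

```shell
# Create a compressed ZFS dataset and mount it as an AFS /vicepa partition
# (pool name "tank" and dataset layout are made-up examples)
zfs create -o compression=gzip -o mountpoint=/vicepa tank/vicepa

# Later, check how well the data actually compresses;
# a compressratio above 2x is where the $$ argument gets compelling
zfs get compressratio tank/vicepa
```

Compression is transparent to the fileserver, so no OpenAFS-side changes are needed.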
The second part is about using DTrace, both on dev and on production systems, to find scalability and performance bottlenecks, as well as other bugs. Two easy, real-life examples are discussed which resulted in considerable improvements in the scalability and performance of some OpenAFS operations, along with other examples of D scripts that provide top-like output with statistics (slide #32 is an example from a Solaris NFS server serving VMware clients, displaying different stats per VM from a single file system...). DTrace has proven to be a very powerful and helpful tool for us, although it is hard to put a specific $ value on what it brings.
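As a taste of what such a top-like D script looks like (a generic sketch, not one of the actual scripts from the talk), a one-liner that periodically prints a per-process syscall count:

```shell
# Generic example of a top-like DTrace one-liner: count syscalls per process,
# print and reset the aggregation every 5 seconds (requires root on Solaris)
dtrace -n 'syscall:::entry { @[execname] = count(); }
           tick-5s { printa(@); trunc(@); }'
```

The per-VM NFS stats on slide #32 follow the same pattern, just keyed on client/file information from the NFS provider instead of `execname`.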
The slides should be available here.