I see that many people are asking about a ZFS + Sun Cluster solution. Sun Cluster 3.2, which does support ZFS (among many other new features), should be released soon. Solaris 10 is free, and Sun Cluster is free as well. On top of that, installing Sun Cluster is just a few clicks in the GUI installer and voila! A few more commands and we have a ZFS pool under Sun Cluster management.
Below is an example (using the new SC32 commands; the old ones are still available for backward compatibility) of how to configure a 2-node HA-NFS cluster with ZFS - as you can see, it's really quick & easy.
Nodes: nfs-1 nfs-2
ZFS pool: files
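The pool itself is created beforehand, on storage visible to both nodes, and on one node only. A minimal sketch - the mirror layout and the c2t0d0/c3t0d0 device names are only placeholders, not the actual configuration; files/conf will hold the NFS resource configuration referenced by Pathprefix:
# zpool create files mirror c2t0d0 c3t0d0
# zfs create files/conf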
# clresourcegroup create -n nfs-1,nfs-2 -p Pathprefix=/files/conf/ nfs-files
# clreslogicalhostname create -g nfs-files -h nfs-1 nfs-files-net
# clresourcetype register SUNW.HAStoragePlus
# clresource create -g nfs-files -t SUNW.HAStoragePlus -x Zpools=files nfs-files-hastp
# clresourcegroup online -e -m -M nfs-files
# mkdir /files/conf/SUNW.nfs
# vi /files/conf/SUNW.nfs/dfstab.nfs-files-shares
[put nfs shares here related to pool files]
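For illustration, an entry in that dfstab could look like the line below - the /files/data path is just an example dataset, not part of the setup above:
share -F nfs -o rw -d "HA NFS" /files/data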
# clresourcetype register SUNW.nfs
# clresource create -g nfs-files -t SUNW.nfs -p Resource_dependencies=nfs-files-hastp nfs-files-shares
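Once everything is online, the setup can be checked and a manual switchover tested - a quick sketch, assuming the standard SC32 commands (not part of the original walkthrough):
# clresourcegroup status nfs-files
# clresourcegroup switch -n nfs-2 nfs-files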
ps. Right now it's available as the Sun Cluster 3.2 beta. I already have two SC32 beta clusters running with ZFS and must say it just works. There were some minor problems at the beginning, but the developers from the Sun Cluster team helped so quickly that I'm still impressed - thank you, guys! Right now it works perfectly.
9 comments:
When might Cluster 3.2 (final) be released - any guesses?
The current plan of record for GA is early 2007.
Is the ZFS filesystem mounted and active on both nodes in the cluster at the same time, or is this a failover cluster? I didn't know that ZFS could be shared in the same way that QFS and VxFS can be.
No, the ZFS filesystem is configured as a failover filesystem. Architectural discussions about a shared read/write ZFS are going on now; when that feature will be available is anyone's guess.
Can ZFS be set up with a single read/write node and multiple readers via SAN-shared disk? Specifically I'm thinking of backup, but multiple readers aren't uncommon in HPC applications either, and ZFS could help performance hugely there too.
No, it can't - at least not right now. However, have a look at the 'zfs send' and 'zfs receive' commands. If you want multiple readers, maybe you should try QFS.
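For example, a point-in-time copy can be pushed to another host roughly like this - the files/data dataset, the backup-1 host and the tank pool are only placeholders for illustration:
# zfs snapshot files/data@backup
# zfs send files/data@backup | ssh backup-1 zfs receive tank/data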
Hi Robert,
How do you define the number of NFS threads for SC 3.2 - in other words, was this bug fixed?
Just wondering, does this also sync the data between the two nodes, or do you have to use a SAN to share the filesystem?
napobo3: yes, it's fixed. All you have to do is set the maximum number of threads in /etc/default/nfsd.
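For example, something like this in /etc/default/nfsd (the value is only an example, tune it to your workload):
NFSD_SERVERS=1024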
anon: ZFS is not a shared filesystem, at least not yet. Putting it under the cluster gives you automatic failover, not sharing.