OVERVIEW:
The new feature should be available in build 131.
Some practices in data centers are built around the use of a volume
manager's ability to clone data. An administrator will attach a set of
disks to mirror an existing configuration, wait for the resilver to
complete, and then physically detach those disks and move them to a new
location.
Currently in ZFS, the only way to achieve this is to use zpool offline
to disable a set of disks, zpool detach to permanently remove them once
they have been offlined, move the disks over to a new host, force-import
the moved disks with zpool import -f, and then zpool detach the disks
that were left behind.
This is cumbersome and prone to error, and even then the new pool
cannot be imported on the same host as the original.
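As a rough sketch of that workaround as described above (device names
are hypothetical, one mirror half per step):

on the original host:
# zpool offline tank c2t0d0
# zpool detach tank c2t0d0
then, after physically moving the offlined disk to the new host:
# zpool import -f tank
# zpool detach tank c1t0d0
where c1t0d0 is the mirror half that was left behind.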
PROPOSED SOLUTION:
Introduce a "zpool split" command. This will allow an administrator to
extract one disk from each mirrored top-level vdev and use them to create
a new pool with an exact copy of the data. The new pool can then be
imported on any machine that supports that pool's version.
See implementation details.
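For illustration, the whole operation might then reduce to this (pool
names hypothetical):

# zpool split tank tank2
# zpool import tank2

The split leaves the new pool exported, so it can be imported on
another machine or, as noted in the comments below, on the same host.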
6 comments:
The same could be achieved with the snapshot tools:
create a blank zpool on the new disk (/newdisk)
# zfs snapshot tank/home@split
# zfs send tank/home@split > /newdisk/split.save
# zfs receive newdisk/home < /newdisk/split.save
# zpool export newdisk
then you can remove your disk, insert it into whichever machine you want, and re-import it.
(I'm pretty sure you can use the same tools for root zpools and without creating a new zpool on the new disk, but I don't have access to my test zone so working purely from memory).
I should have added that my method also allows the two copies to live on the same host (unlike the mirror method).
Yes, that'll get you a copy of that dataset. However, using the split command:
# zpool split tank newdisk
Is all you need, and gets you ALL the datasets at once. It also doesn't require you to have double the space on newdisk (the space used to store the "split.save" zfs stream plus the extracted data). It also relies on the mirroring code within the kernel to perform the copy when "zpool attach" is run, rather than copying the data between userspace and kernel space twice, which would be a performance killer.
And it is possible to import it on the same host, simply by using the zpool import command.
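For example, immediately after the split above:

# zpool import newdisk

Both tank and newdisk can then be online on the same host, something
the offline/detach workaround cannot do.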
I'd already stated that the copy produced by my method can be imported on the same host.
As for the other points: with some scripting it would be pretty easy to export ALL the datasets at once too (and /without/ having to hardcode the dataset names into the script!). Plus, depending on how clever you are with your script, you shouldn't need double the disk space; see the sketch after this comment.
I don't mean to state that "split" would be a useless tool - just that the previous method seems a slightly dangerous work-around for performing something that could already be done safely with standard ZFS functionality.
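A minimal sketch of that scripted variant, assuming the pool is named
tank, a blank pool newdisk has already been created, and recursive
streams (zfs send -R) are available:

# zfs snapshot -r tank@split
# zfs send -R tank@split | zfs receive -d -F newdisk
# zpool export newdisk

Piping send into receive avoids the intermediate split.save file, so no
extra scratch space is needed.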
man zfs: send/receive can be run in a pipe.
Do you want a snapshot or a clone?
Regards
Thanks... very useful & informative.