One of the very useful features in Solaris is the ability to identify physical disk locations on supported hardware (mainly Oracle x86 and SPARC servers). This not only makes it easier to identify a faulty disk that needs to be replaced, but also makes OS installation more robust, as you can specify the physical disk locations in a given server model where the OS should be installed.
Here is example output from the diskinfo tool on an x5-2l server:
$ diskinfo
D:devchassis-path             c:occupant-compdev
---------------------------   ---------------------
/dev/chassis/SYS/HDD00/disk   c0t5000CCA01D3A1A24d0
/dev/chassis/SYS/HDD01/disk   c0t5000CCA01D2EB40Cd0
/dev/chassis/SYS/HDD02/disk   c0t5000CCA01D30FD90d0
/dev/chassis/SYS/HDD03/disk   c0t5000CCA032018CB4d0
...
/dev/chassis/SYS/RHDD0/disk   c0t5000CCA01D34EB38d0
/dev/chassis/SYS/RHDD1/disk   c0t5000CCA01D315288d0
The server supports 24 disks in the front and another two disks in the back.
We use the front disks for data and the two disks in the back for the OS.
In the past we used a RAID controller to mirror the two OS disks,
while all the disks in the front were presented in pass-through mode (JBOD) and managed by ZFS.
Recently I started looking into using ZFS to mirror the OS disks as well. Notice in the output above that the two disks in the back of the x5-2l server are identified as SYS/RHDD0 and SYS/RHDD1.
This is very useful, as with SAS the CTD is different for each disk and would also change if a disk were replaced, while the SYS/[R]HDDn location always stays the same.
See also my older blog entry on how this information is presented in other subsystems (FMA or ZFS).
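Because the receptacle name is stable, it can also be used to look up whichever CTD currently occupies that slot. Here is a minimal sketch, assuming only the default diskinfo output shown above (the ctd_for helper name is mine, not a system command):

# Hypothetical helper: resolve a receptacle name (e.g. RHDD0) to the CTD
# currently occupying that slot, by matching the default diskinfo output.
ctd_for() {
    diskinfo | awk -v path="/dev/chassis/SYS/$1/disk" '$1 == path { print $2 }'
}

ctd_for RHDD0    # on the box above this would print c0t5000CCA01D34EB38d0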
Below is a part of the AI manifest which defines that the OS should be installed on the two rear disks and mirrored by ZFS:
<target>
  <disk in_vdev="mirror" in_zpool="rpool" whole_disk="true">
    <disk_name name="SYS/RHDD0" name_type="receptacle"></disk_name>
  </disk>
  <disk in_vdev="mirror" in_zpool="rpool" whole_disk="true">
    <disk_name name="SYS/RHDD1" name_type="receptacle"></disk_name>
  </disk>
  <logical>
    <zpool is_root="true" name="rpool">
      <vdev name="mirror" redundancy="mirror"/>
    </zpool>
  </logical>
</target>
In our environment the AI manifest is generated per server from a configuration management system, based on a host profile. This means that for x5-2l servers we generate an AI manifest as shown above, but on some other servers we want the OS to be installed on a RAID volume, and on a general server which doesn't fall into any specific category we install the OS on boot_disk. So depending on the server we generate different target sections in the AI manifest. This is similar to derived manifests in AI, but instead of being separate from the configuration management system, in our case it is part of it.
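As an illustration only (our actual tooling works differently; the PROFILE variable and the snippet file names below are made up), the per-profile selection boils down to something like:

# Emit the <target> section of the AI manifest based on the host profile.
case "$PROFILE" in
  x5-2l)   # mirrored rpool on SYS/RHDD0 + SYS/RHDD1, as shown above
      cat x5-2l_target.xml ;;
  raid)    # OS installed on a hardware RAID volume
      cat raid_target.xml ;;
  *)       # generic server: install on boot_disk
      cat boot_disk_target.xml ;;
esac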