Sunday, February 28, 2010

ReadyBoost

I didn't know that Windows has a technology similar to ZFS's L2ARC, called ReadyBoost. Nice.

I'm building my new home NAS server and I'm seriously considering putting the OS on a USB pen drive, leaving all the SATA disks for data only. It looks like with modern USB drives the OS should actually boot faster than from a SATA disk, thanks to much better seek times. I'm planning on doing some experiments first.

Thursday, February 25, 2010

ZVOLs - Write Cache

When you create a ZFS volume its write cache is disabled by default, meaning that all writes to the volume will be synchronous. Sometimes it might be handy, though, to be able to enable the write cache for a particular zvol. I wrote a small C program which lets you check whether the write cache is enabled, and enable or disable it for a specified zvol.

First let's check whether the write cache is disabled for the zvol rpool/iscsi/vol1:

milek@r600:~/progs# ./zvol_wce /dev/zvol/rdsk/rpool/iscsi/vol1
Write Cache: disabled

Now let's issue 1000 writes:

milek@r600:~/progs# ptime ./sync_file_create_loop /dev/zvol/rdsk/rpool/iscsi/vol1 1000

real 12.013566363
user 0.003144874
sys 0.104826470

So it took 12 seconds, and I also confirmed that the writes were actually being issued to the disk drive. Let's enable the write cache now and repeat the 1000 writes:

milek@r600:~/progs# ./zvol_wce /dev/zvol/rdsk/rpool/iscsi/vol1 1
milek@r600:~/progs# ./zvol_wce /dev/zvol/rdsk/rpool/iscsi/vol1
Write Cache: enabled

milek@r600:~/progs# ptime ./sync_file_create_loop /dev/zvol/rdsk/rpool/iscsi/vol1 1000

real 0.239360231
user 0.000949655
sys 0.019019552

Worked fine.
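
For reference, sync_file_create_loop is not shown here. Conceptually it just issues the requested number of small writes to the given raw device in a loop, so each write either has to reach stable storage (write cache disabled) or can be cached (write cache enabled). A rough, hypothetical stand-in - not the exact program - could look like this:

/* writeloop.c - hypothetical stand-in for sync_file_create_loop:
   issue <count> small writes to a raw device in a loop.
   Note: it overwrites whatever is in the first sector. */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>

int
main(int argc, char **argv)
{
    char buf[512];
    int fd, i, count;

    if (argc < 3) {
        fprintf(stderr, "usage: %s <raw device> <count>\n", argv[0]);
        exit(1);
    }
    count = atoi(argv[2]);

    if ((fd = open(argv[1], O_WRONLY|O_LARGEFILE)) == -1) {
        perror("open");
        exit(2);
    }

    memset(buf, 0xa5, sizeof (buf));
    for (i = 0; i < count; i++) {
        /* rewrite the first sector over and over */
        if (pwrite(fd, buf, sizeof (buf), 0) != sizeof (buf)) {
            perror("pwrite");
            exit(3);
        }
    }

    close(fd);
    return (0);
}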

The zvol_wce program is not idiot-proof and it doesn't check whether the operation succeeded. You should be able to compile it by issuing: gcc -o zvol_wce zvol_wce.c

milek@r600:~/progs# cat zvol_wce.c

/* Robert Milkowski
   http://milek.blogspot.com
*/

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stropts.h>
#include <sys/dkio.h>

int
main(int argc, char **argv)
{
    char *path;
    int wce = 0;
    int fd;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <raw zvol device> [0|1]\n", argv[0]);
        exit(1);
    }

    path = argv[1];

    if ((fd = open(path, O_RDONLY|O_LARGEFILE)) == -1)
        exit(2);

    if (argc > 2) {
        /* set the write cache state: any non-zero argument enables it */
        wce = atoi(argv[2]) ? 1 : 0;
        (void) ioctl(fd, DKIOCSETWCE, &wce);
    } else {
        /* report the current write cache state */
        (void) ioctl(fd, DKIOCGETWCE, &wce);
        printf("Write Cache: %s\n", wce ? "enabled" : "disabled");
    }

    close(fd);
    exit(0);
}

Wednesday, February 10, 2010

Dell - No 3rd Party Disk Drives Allowed

Third-party drives not permitted:
"[...]
Is Dell preventing the use of 3rd-party HDDs now?
[....]
Howard_Shoobe at Dell.com:

Thank you very much for your comments and feedback regarding exclusive use of Dell drives. It is common practice in enterprise storage solutions to limit drive support to only those drives which have been qualified by the vendor. In the case of Dell's PERC RAID controllers, we began informing customers when a non-Dell drive was detected with the introduction of PERC5 RAID controllers in early 2006. With the introduction of the PERC H700/H800 controllers, we began enabling only the use of Dell qualified drives. There are a number of benefits for using Dell qualified drives in particular ensuring a positive experience and protecting our data. While SAS and SATA are industry standards there are differences which occur in implementation. An analogy is that English is spoken in the UK, US and Australia. While the language is generally the same, there are subtle differences in word usage which can lead to confusion. This exists in storage subsystems as well. As these subsystems become more capable, faster and more complex, these differences in implementation can have greater impact. Benefits of Dell's Hard Disk and SSD drives are outlined in a white paper on Dell's web site at http://www.dell.com/downloads/global/products/pvaul/en/dell-hard-drives-pov.pdf"

I understand that they won't support 3rd-party disk drives, but blocking a server (a RAID card) from using such disks altogether is something new - an interesting comment here.

IBM 2010: Customers in Revolt

From my own experience their sales people are very aggressive, with an attitude of "sell first and let someone else worry later". While I always take any vendor's claims with a grain of salt, I have learnt to double or even triple check IBM's.

Tuesday, February 09, 2010

Power 7

Now let's wait for some benchmarks. I only wish Solaris were running on them as well, as right now you have to go either the legacy AIX route or the not-so-mature Linux route - not an ideal choice.

Thursday, February 04, 2010

Data Corruption - ZFS saves the day, again

We came across an interesting issue with data corruption and I think it might be interesting to some of you. While preparing a new cluster deployment and filling it up with data, we suddenly started to see the messages below:

XXX cl_runtime: [ID 856360 kern.warning] WARNING: QUORUM_GENERIC: quorum_read_keys error:
Reading the registration keys failed on quorum device /dev/did/rdsk/d7s2 with error 22.

The d7 quorum device was marked as offline and we could not bring it online again. There isn't much in the documentation about the above message, except that it is probably a firmware problem on the disk array and that we should contact the vendor. But let's first investigate what is really going on.

By looking at the source code I found that the above message is printed from within quorum_device_generic_impl::quorum_read_keys(), and it will only happen if quorum_pgre_key_read() returns with return code 22 (actually with any code other than 0 or EACCES, but from the syslog message we already suspect that the return code is 22).

quorum_pgre_key_read() calls quorum_scsi_sector_read() and passes its return code on as its own. quorum_scsi_sector_read() will return an error only if quorum_ioctl_with_retries() returns an error or if the sector fails validation (a checksum mismatch or a bad PGRE id string).

This is the relevant source code:

int
quorum_scsi_sector_read(
[...]
    error = quorum_ioctl_with_retries(vnode_ptr, USCSICMD, (intptr_t)&ucmd,
        &retval);
    if (error != 0) {
        CMM_TRACE(("quorum_scsi_sector_read: ioctl USCSICMD "
            "returned error (%d).\n", error));
        kmem_free(ucmd.uscsi_rqbuf, (size_t)SENSE_LENGTH);
        return (error);
    }

    //
    // Calculate and compare the checksum if check_data is true.
    // Also, validate the pgres_id string at the beg of the sector.
    //
    if (check_data) {
        PGRE_CALCCHKSUM(chksum, sector, iptr);

        // Compare the checksum.
        if (PGRE_GETCHKSUM(sector) != chksum) {
            CMM_TRACE(("quorum_scsi_sector_read: "
                "checksum mismatch.\n"));
            kmem_free(ucmd.uscsi_rqbuf, (size_t)SENSE_LENGTH);
            return (EINVAL);
        }

        //
        // Validate the PGRE string at the beg of the sector.
        // It should contain PGRE_ID_LEAD_STRING[1|2].
        //
        if ((os::strncmp((char *)sector->pgres_id, PGRE_ID_LEAD_STRING1,
            strlen(PGRE_ID_LEAD_STRING1)) != 0) &&
            (os::strncmp((char *)sector->pgres_id, PGRE_ID_LEAD_STRING2,
            strlen(PGRE_ID_LEAD_STRING2)) != 0)) {
            CMM_TRACE(("quorum_scsi_sector_read: pgre id "
                "mismatch. The sector id is %s.\n",
                sector->pgres_id));
            kmem_free(ucmd.uscsi_rqbuf, (size_t)SENSE_LENGTH);
            return (EINVAL);
        }
    }
    kmem_free(ucmd.uscsi_rqbuf, (size_t)SENSE_LENGTH);

    return (error);
}

With a simple DTrace script I could verify whether quorum_scsi_sector_read() does indeed return 22, and also print what else is going on within the function:

56 -> __1cXquorum_scsi_sector_read6FpnFvnode_LpnLpgre_sector_b_i_ 6308555744942019 enter
56 -> __1cZquorum_ioctl_with_retries6FpnFvnode_ilpi_i_ 6308555744957176 enter
56 <- __1cZquorum_ioctl_with_retries6FpnFvnode_ilpi_i_ 6308555745089857 rc: 0
56 -> __1cNdbg_print_bufIdbprintf6MpcE_v_ 6308555745108310 enter
56 -> __1cNdbg_print_bufLdbprintf_va6Mbpcrpv_v_ 6308555745120941 enter
56 -> __1cCosHsprintf6FpcpkcE_v_ 6308555745134231 enter
56 <- __1cCosHsprintf6FpcpkcE_v_ 6308555745148729 rc: 2890607504684
56 <- __1cNdbg_print_bufLdbprintf_va6Mbpcrpv_v_ 6308555745162898 rc: 1886718112
56 <- __1cNdbg_print_bufIdbprintf6MpcE_v_ 6308555745175529 rc: 1886718112
56 <- __1cXquorum_scsi_sector_read6FpnFvnode_LpnLpgre_sector_b_i_ 6308555745188599 rc: 22
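
The script itself is nothing special - it also traces the functions called from within quorum_scsi_sector_read(), which is where the dbprintf()/os::sprintf() lines above come from. Just confirming the return code could be done with a one-liner along these lines, using the mangled C++ symbol seen in the output; in an fbt return probe arg1 is the function's return value (a sketch, not the exact script I used):

# dtrace -n 'fbt::__1cXquorum_scsi_sector_read6FpnFvnode_LpnLpgre_sector_b_i_:return { printf("rc: %d", (int)arg1); }'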

From the above output we know that quorum_ioctl_with_retries() returns 0, so the EINVAL must come from the validation block - either the checksum or the PGRE id check. As a CMM_TRACE() is clearly being called (the dbprintf() in the output above) and there are only three of them in the function, let's check with DTrace which one it is:

21 -> __1cNdbg_print_bufIdbprintf6MpcE_v_ 6309628794339298 quorum_scsi_sector_read: checksum mismatch.
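
That trace can be produced with something as simple as the one-liner below - for a C++ member function like dbg_print_buf::dbprintf() arg0 is the this pointer and arg1 is the format string passed in by CMM_TRACE() (again a sketch, not the exact invocation):

# dtrace -n 'fbt::__1cNdbg_print_bufIdbprintf6MpcE_v_:entry { trace(stringof(arg1)); }'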

So now I knew exactly which part of the code was causing the quorum device to be marked offline. The issue might have been caused by many things: a bug in the disk array firmware, a problem on the SAN, a bug in an HBA's firmware, a bug in the qlc driver, a bug in the Sun Cluster software, or... However, because the symptoms suggest data corruption, and we were loading the cluster with a copy of a database, we might have had a bigger issue than just an offline quorum device.

The configuration is such that we use ZFS to mirror between two disk arrays. We had been restoring a couple of TBs of data into the pool and had read almost nothing back. Thankfully it is ZFS, so we could force a re-check of all data in the pool - and I did. ZFS found 14 corrupted blocks and even identified which file is affected. The interesting thing here is that for all of these blocks both copies, on both sides of the mirror, were affected. This almost eliminates the possibility of a firmware problem on the disk arrays and suggests that the issue was caused by something misbehaving on the host itself, although an issue somewhere on the SAN is still a possibility. It is very unlikely to be a bug in ZFS, as the corruption affected the reservation keys as well, which have basically nothing to do with ZFS. Since then we have kept writing more and more data into the pool, I have been repeating scrubs, and I'm not getting any new corrupted blocks, nor is the quorum misbehaving (I fixed it by temporarily adding another quorum device, removing and re-adding the original one, and then removing the temporary one).
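
For reference, the re-check and the per-file damage report are just the usual commands below (the pool name is a placeholder); zpool status -v lists the files affected by permanent errors:

# zpool scrub mypool
# zpool status -v mypool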

While I still have to find out what caused the data corruption, the most important thing here is ZFS. Just think about it - what would have happened if we had been running on any other file system, like UFS, VxFS, ext3, ext4, JFS, XFS, ...? Well, almost anything: some data could have been silently corrupted, some files lost, the system could have crashed, fsck could have been forced to run for many hours and still not be able to fix the filesystem - and it definitely wouldn't be able to detect data corruption within files - or everything could have been running fine for days or months and then the system would suddenly panic when an application tried to access the corrupted blocks for the first time. And what has actually happened thanks to ZFS? All corrupted blocks were identified; unfortunately both mirrored copies were affected so ZFS can't fix them, but it did identify the single file affected by all of these blocks. We can just remove that file, which is only 2GB, and restore it again. And all of this while the system was running - we haven't even stopped the restore, nor did we have to start it from the beginning. Most importantly, there is no uncertainty about the state of the filesystem or the data within it.

The other important conclusion is that DTrace is a sysadmin's best friend :)