Friday, December 03, 2010
Religion in IT
Interesting statement in a searchdatacenter article about the IDC numbers:
"When you sell against Dell, you sell against price. When you sell against HP, you sell against technical stuff -- the feeds and speeds. When you're up against IBM, you're not selling against boxes but against solutions or business outcomes that happen to include hardware. But, when you get to the Sun guys, it's about religion. You can't get to those guys. One guy told me last year that he would get off his Sun box when he dies."
Thursday, December 02, 2010
Linux, O_SYNC and Write Barriers
I stumbled across a very interesting discussion on a Linux kernel mailing list. It is dated August 2009, so you may have already read it.
There is a related RH bug.
I'm a little bit surprised by RH's attitude in this ticket. IMHO they should have fixed it and maybe provided a tunable to enable/disable the new behavior, instead of keeping the broken implementation. But at least in recent man pages they have clarified it in the Notes section of open(2):
"POSIX provides for three different variants of synchronized I/O, corresponding to the flags O_SYNC, O_DSYNC, and O_RSYNC. Currently (2.6.31), Linux only implements O_SYNC, but glibc maps O_DSYNC and O_RSYNC to the same numerical value as O_SYNC. Most Linux file systems don't actually implement the POSIX O_SYNC semantics, which require all metadata updates of a write to be on disk on returning to userspace, but only the O_DSYNC semantics, which require only actual file data and metadata necessary to retrieve it to be on disk by the time the system call returns."Then there is another even more interesting discussion about write barriers:
"All of them fail to commit drive caches under some circumstances;and also this one:
even fsync on ext3 with barriers enabled (because it doesn't
commit a journal record if there were writes but no inode change
with data=ordered)."
"No, fsync() doesn't always flush the drive's write cache. It oftenThis is really scary. I wonder how many developers knew about it especially when coding for Linux when data safety was paramount. Sometimes it feels that some Linux developers are coding to win benchmarks and do not necessarily care about data safety, correctness and standards like POSIX. What is even worse is that some of them don't even bother to tell you about it in official documentation (at least the O_SYNC/O_DSYNC issue is documented in the man page now).
does, any I think many people are under the impression it always does, but it doesn't.
Try this code on ext3:
fd = open ("test_file", O_RDWR | O_CREAT | O_TRUNC, 0666);
while (1) {
        char byte;
        usleep (100000);
        pwrite (fd, &byte, 1, 0);
        fsync (fd);
}
It will do just over 10 write ops per second on an idle system (13 on
mine), and 1 flush op per second.
That's because ext3 fsync() only does a journal commit when the inode has changed. The inode mtime is changed by write only with 1 second granularity. Without a journal commit, there's no barrier, which translates to not flushing disk write cache.
If you add "fchmod (fd, 0644); fchmod (fd, 0664);" between the write
and fsync, you'll see at least 20 write ops and 20 flush ops per
second, and you'll hear the disk seeking more. That's because the
fchmod dirties the inode, so fsync() writes the inode with a journal
commit.
It turns out even _that_ is not sufficient according to the kernel
internals. A journal commit uses an ordered request, which isn't the
same as a flush potentially, it just happens to use flush in this
instance. I'm not sure if ordered requests are actually implemented
by any drivers at the moment. If not now, they will be one day.
We could change ext3 fsync() to always do a journal commit, and depend on the non-existence of block drivers which do ordered (not flush) barrier requests. But there's lots of things wrong with that. Not least, it sucks performance for database-like applications and virtual machines, a lot due to unnecessary seeks. That way lies wrongness.
Rightness is to make fdatasync() work well, with a genuine flush (or
equivalent (see FUA), only when required, and not a mere ordered
barrier), no inode write, and to make sync_file_range()[*] offer the
fancier applications finer controls which reflect what they actually
need."
Thursday, November 25, 2010
Monday, November 15, 2010
Solaris 11 Express
Solaris 11 Express is out, and it doesn't look like Oracle is treating it as a beta - at least not officially.
But then why not call it Solaris 11? I guess it is partly due to marketing reasons and partly because it is not entirely ready yet and there are some components which require further work.
Monday, October 04, 2010
ZFS Encryption
It took much longer than expected. If I were to speculate, I would say the integration was delayed on purpose so that it would land only after public access to the onnv gate was closed, so third parties like Nexenta could not take advantage of it before Oracle does. Hopefully once Solaris 11 is out we will see the source code as well. It probably also means that it will be in Solaris 11 Express.
Wednesday, September 29, 2010
Nexenta's take on ZFS, Oracle and NetApp
Friday, September 24, 2010
Oracle to buy NetApp?
"Oracle boss Larry Ellison said he'd love to have the 60 per cent of NetApp's business that plugs NetApp boxes into Oracle software, hinting that a NetApp purchase could be on his mind.
[...] A combination of Oracle and NetApp would be a truly formidable server/storage play, even if the IBM NetApp reseller deal subsequently collapsed. Will Oracle actually make an approach? Will NetApp respond positively? The key there is whether it truly believes an independent, best-of-breed stance is viable or whether the stack-in-a-box, centralised IT approach is a sustained tank attack that threatens to flatten its business.
A future inside Oracle in those circumstances might look a lot better than a slow fade into a Unisys-like state while EMC, buoyed up with VMware and products above raw storage arrays, grows and prospers."
Bloomberg also reports on Oracle acquisitions plans.
Wednesday, September 22, 2010
New SPARC T3 Systems
Oracle announced several new SPARC systems based on the SPARC T3 processor. SPARC T3 is the next iteration of SPARC in the throughput arena: 16 cores running at 1.65 GHz, so 128 threads per socket, with on-chip PCIe 2.0 and on-chip 10GbE... The Register reports on the new systems as well.
The line starts with the SPARC T3-1 server with one 16-core 1.65 GHz SPARC T3 processor, giving you 128 threads packaged in a 2U chassis.
Above this you will find the SPARC T3-2 with two SPARC T3 processors. It gives you 256 threads and 256 GB of memory in 3 RU.
At the high end is the SPARC T3-4 with four SPARC T3 CPUs, giving you 512 threads, 512 GB of main memory with 8 GB DIMMs, and 16 PCI Express modules. This system is somewhat larger: 5 RU.
Finally, there is a new blade for the Blade 6000 chassis: the SPARC T3-1B. As the name suggests, it provides one SPARC T3 processor, so 128 threads on one blade.
If you are interested in a brief summary of what T3 is, read Joerg's post about it here.
Tuesday, September 14, 2010
OpenIndiana
So it has begun... Is it going to survive in the long term?
Friday, September 10, 2010
OpenIndiana
OpenIndiana is a continuation of the OpenSolaris operating system. It was conceived during the period of uncertainty following the Oracle takeover of Sun Microsystems, after several months passed with no binary updates made available to the public. The formation proved timely, as Oracle discontinued OpenSolaris soon after in favour of Solaris 11 Express, a binary distribution with a more closed development model to debut later this year.
OpenIndiana is part of the Illumos Foundation, and provides a true open source community alternative to Solaris 11 and Solaris 11 Express, with an open development model and full community participation.
Announcement Details
We will be holding a press conference in London (UK) at the JISC offices at 6:30pm UK Time (BST, GMT+1) to formally announce the project and provide full details, including information on gaining access to download our first development release. In addition, we will be broadcasting live on the internet – please see http://openindiana.org/announcement for details of attending in person or on the web.
We believe this announcement will deliver the distribution the community has long sought after.
Solaris 10 9/10
- Installation Enhancements
- SPARC: Support for ITU Construction Tools on SPARC Platforms - In this release, the itu utility has been modified to support booting a SPARC based system with the install-time updates (ITU) process.
- Oracle Solaris Auto Registration - before you ask ... you can disable it
- Oracle Solaris Upgrade Enhancement for Oracle Solaris Zone-Cluster Nodes
- Virtualization Enhancements for Oracle Solaris Zones
- Migrating a Physical Oracle Solaris 10 System Into a Zone - cool, my RfE found its way into Solaris 10
- Host ID Emulation
- Updating Packages by Using the New zoneadm attach -U Option
- Virtualization Enhancements for Oracle VM Server for SPARC (formerly known as LDOMs)
- Memory Dynamic Reconfiguration Capability
- Virtual Disk Multipathing Enhancements
- Static Direct I/O
- Virtual Domain Information Command and API - the virtinfo command
- System Administration Enhancements
- Oracle Solaris ZFS Features and Enhancements
- ZFS device replacement enhancements - namely autoexpand
- some changes to the zpool list command
- Holding ZFS snapshots
- Triple parity RAID-Z (raidz3)
- The logbias property
- Log device removal - at last
- ZFS storage pool recovery
- New ZFS system process – In this release, each storage pool has an associated process, zpool-poolname
- Splitting a mirrored ZFS storage pool (zpool split)
- Fast Crash Dump
- x86: Support for the IA32_ENERGY_PERF_BIAS MSR
- Support for Multiple Disk Sector Size
- iSCSI Initiator Tunables
- Sparse File Support in the cpio Command
- x86: 64-Bit libc String Functions Improvements With SSE
- Automated Rebuilding of sendmail Configuration Files
- Automatic Boot Archive Recovery
- Security Enhancements
- net_access Privilege
- x86: Intel AES-NI Optimization
- Language Support Enhancements
- New Oracle Solaris Unicode Locales
- Device Management Enhancements
- iSCSI Boot
- iSER Initiator
- New Hot-Plugging Features
- AAC RAID Power Management
- Driver Enhancements
- x86: HP Smart Array HBA Driver
- x86: Support for Broadcom NetXtreme II 10 Gigabit Ethernet NIC Driver
- x86: New SATA HBA Driver, bcm_sata, for Broadcom HT1000 SATA Controllers
- Support for SATA/AHCI Port Multiplier
- Support for Netlogic NLP2020 PHY in the nxge Driver
- Freeware Enhancements
- GNU TAR Version 1.23
- Firefox 3.5
- Thunderbird 3
- Less Version 436
- Networking Enhancements
- BIND 9.6.1 for the Oracle Solaris 10 OS
- GLDv3 Driver APIs
- IPoIB Connected Mode
- Open Fabrics User Verbs Primary Kernel Components
- InfiniBand Infrastructure Enhancements
- X11 Windowing Enhancements
- Support for the setxkbmap Command
- New Chipset Support
- ixgbe Driver to Integrate Intel Shared Code Version 3.1.9
- Broadcom Support to bge Networking Driver
- x86: Fully Buffered DIMM Idle Power Enhancement
- Fault Management Architecture Enhancements
- FMA Support for AMD's Istanbul Based Systems
- Several Oracle Solaris FMA Enhancements
- Diagnostic Tools Enhancements
- Sun Validation Test Suite 7.0ps9
- Enhancements to the mdb Command to Improve the Debugging Capability of kmem and libumem
NetApp, Oracle Drop ZFS Lawsuits
"NetApp Inc. and Oracle Corp. have agreed to dismiss patent lawsuits against each other, putting to bed a battle that had been ongoing since 2007.
The terms of the agreement weren't disclosed, and the companies said they are seeking to have the lawsuits dismissed without prejudice.
NetApp President and Chief Executive Tom Georgens said Thursday the companies would continue to collaborate in the future.
NetApp, which sells data-storage systems, and Sun Microsystems, which has since been bought by Oracle, began the fight in 2007, with each company alleging the other was infringing some of its patents.
Lawsuits were filed on both companies' behalf, with NetApp alleging that Sun's ZFS file-system-management technology infringed a number of its patents. Sun at the time also said NetApp products infringed its patents."
See also The Register article.
Saturday, August 28, 2010
The Future of Solaris
- Adam Leventhal's take on Solaris future
- Bryan Cantrill's comments on Illumos
- Garrett D'Amore - The Hand May Be Forced
- Garrett D'Amore - Illumos
- Ben Rockwood - Illumos Shines New Light
- Ben Rockwood - OpenSolaris R.I.P.
- Steve Stallion - OpenSolaris is Dead
- Jim Laurent - Solaris 11 in 2011
- The Register - Next Solaris prepped for 2011
Saturday, July 31, 2010
SMF/FMA Update
PSARC/2009/617 Software Events Notification Parameters CLI
PSARC/2009/618 snmp-notify: SNMP Notification Daemon for Software Events
PSARC/2009/619 smtp-notify: Email Notification Daemon for Software Events
PSARC/2010/225 fmd for non-global Solaris zones
PSARC/2010/226 Solaris Instance UUID
PSARC/2010/227 nvlist_nvflag(3NVPAIR)
PSARC/2010/228 libfmevent additions
PSARC/2010/257 sysevent_evc_setpropnvl and sysevent_evc_getpropnvl
PSARC/2010/265 FMRI and FMA Event Stability, 'ireport' category 1 event class, and the 'sw' FMRI scheme
PSARC/2010/278 FMA/SMF integration: instance state transitions
PSARC/2010/279 Modelling panics within FMA
PSARC/2010/290 logadm.conf upgrade
6392476 fmdump needs to pretty-print
6393375 userland ereport/ireport event generation interfaces
6445732 Add email notification agent for FMA and software events
6804168 RFE: Allow an efficient means to monitor SMF services status changes
6866661 scf_values_destroy(3SCF) will segfault if is passed NULL
6884709 Add snmp notification agent for FMA and software events
6884712 Add private interface to tap into libfmd_msg macro expansion capabilities
6897919 fmd to run in a non-global zone
6897937 fmd use of non-private doors is not safe
6900081 add a UUID to Solaris kernel image for use in crashdump identification
6914884 model panic events as a defect diagnosis in FMA
6944862 fmd_case_open_uuid, fmd_case_uuisresolved, fmd_nvl_create_defect
6944866 log legacy sysevents in fmd
6944867 enumerate svc scheme in topo
6944868 software-diagnosis and software-response fmd modules
6944870 model SMF maintenance state as a defect diagnosis in FMA
6944876 savecore runs in foreground for systems with zfs root and dedicated dump
6965796 Implement notification parameters for SMF state transitions and FMA events
6968287 SUN-FM-MIB.mib needs to be updated to reflect Oracle information
6972331 logadm.conf upgrade PSARC/2010/290
Thursday, July 29, 2010
Dell and HP Continue Supporting Solaris
- Oracle today announced Dell and HP will certify and resell Oracle Solaris, Oracle Enterprise Linux and Oracle VM on their respective x86 platforms.
- Customers will have full access to Oracle’s Premier Support for Oracle Solaris, Oracle Enterprise Linux and Oracle VM running on Dell and HP servers. This will enable fast and accurate issue resolution and reduced risk in a company’s operating environment.
- Customers who subscribe to Oracle Premier Support will benefit from Oracle’s continuing investment in Oracle Solaris, Oracle Enterprise Linux and Oracle VM and the resulting innovation in future updates.
Wednesday, July 21, 2010
Saturday, June 26, 2010
Wednesday, June 16, 2010
zpool scrub - 4GB/s
OpenSolaris 2009.06 + an entry-level 2U x86 server (Sun Fire X4270) + a 1U xxxx array.
# iostat -xnzCM 1|egrep "device|c[0123]$"
[...]
extended device statistics
r/s w/s Mr/s Mw/s wait actv wsvc_t asvc_t %w %b device
8182.1 0.0 1022.1 0.0 0.1 152.8 0.0 18.7 0 1077 c0
8179.1 0.0 1021.7 0.0 0.1 148.7 0.0 18.2 0 1076 c1
8211.0 0.0 1025.9 0.0 0.1 162.8 0.0 19.8 0 1081 c2
8218.0 0.0 1026.8 0.0 0.1 164.5 0.0 20.0 0 1085 c3
extended device statistics
r/s w/s Mr/s Mw/s wait actv wsvc_t asvc_t %w %b device
8080.2 0.0 1010.0 0.0 0.1 168.3 0.0 20.8 0 1070 c0
8080.2 0.0 1010.0 0.0 0.1 167.6 0.0 20.7 0 1071 c1
8165.2 0.0 1020.3 0.0 0.1 166.0 0.0 20.3 0 1079 c2
8157.2 0.0 1019.3 0.0 0.1 151.4 0.0 18.6 0 1080 c3
extended device statistics
r/s w/s Mr/s Mw/s wait actv wsvc_t asvc_t %w %b device
8192.0 0.0 1023.4 0.0 0.1 174.6 0.0 21.3 0 1085 c0
8190.9 0.0 1023.1 0.0 0.1 174.2 0.0 21.3 0 1085 c1
8140.9 0.0 1016.9 0.0 0.1 145.5 0.0 17.9 0 1078 c2
8138.9 0.0 1016.7 0.0 0.1 142.7 0.0 17.5 0 1075 c3
extended device statistics
r/s w/s Mr/s Mw/s wait actv wsvc_t asvc_t %w %b device
8129.1 0.0 1015.6 0.0 0.1 153.0 0.0 18.8 0 1066 c0
8125.2 0.0 1015.1 0.0 0.1 155.1 0.0 19.1 0 1067 c1
8156.2 0.0 1018.8 0.0 0.1 162.1 0.0 19.9 0 1074 c2
8159.2 0.0 1019.2 0.0 0.1 162.0 0.0 19.9 0 1076 c3
extended device statistics
r/s w/s Mr/s Mw/s wait actv wsvc_t asvc_t %w %b device
8177.9 0.0 1022.0 0.0 0.1 165.0 0.0 20.2 0 1088 c0
8184.9 0.0 1022.9 0.0 0.1 165.2 0.0 20.2 0 1085 c1
8209.9 0.0 1026.1 0.0 0.1 162.4 0.0 19.8 0 1085 c2
8204.9 0.0 1025.5 0.0 0.1 161.8 0.0 19.7 0 1087 c3
extended device statistics
r/s w/s Mr/s Mw/s wait actv wsvc_t asvc_t %w %b device
8236.4 0.0 1029.2 0.0 0.1 170.1 0.0 20.7 0 1092 c0
8235.4 0.0 1029.0 0.0 0.1 170.2 0.0 20.7 0 1093 c1
8215.4 0.0 1026.4 0.0 0.1 165.3 0.0 20.1 0 1091 c2
8220.4 0.0 1027.0 0.0 0.1 164.9 0.0 20.1 0 1090 c3
Then, with small I/Os, it can sustain over 400k IOPS - and more HBAs should deliver even more performance.
It is really amazing how fast technology is progressing.
Achieving the above numbers 10 years ago would have cost a small fortune.
Saturday, June 12, 2010
Heat Maps
Thursday, June 10, 2010
Friday, May 28, 2010
DTrace TCP and UDP providers
Last night two new DTrace providers were integrated.
They should be available in build 142 of OpenSolaris.
PSARC 2010/106 DTrace TCP and UDP providers
"This case adds DTrace 'tcp' and 'udp' providers with probes
for send and receive events. These providers cover the TCP
and UDP protocol implementations in OpenSolaris respectively. In
addition the tcp provider contains probes for TCP state machine
transitions and significant events in connection processing
(connection request, acceptance, refusal etc). The udp provider
also contains probes which fire when a UDP socket is opened/closed.
This is intended for use by customers for network observability and
troubleshooting, and this work represents the second and third
components of a suite of planned providers for the network stack. The
first was described in PSARC/2008/302 DTrace IP Provider."
The tcp provider is described here:
http://wikis.sun.com/display/DTrace/tcp+Provider
...and the udp provider is described here:
http://wikis.sun.com/display/DTrace/udp+Provider
Tuesday, May 04, 2010
ZFS - synchronous vs. asynchronous IO
PSARC/2010/108 zil synchronicity
ZFS datasets now have a new 'sync' property to control synchronous behavior.
The zil_disable tunable, which turned synchronous requests into asynchronous requests (disabling the ZIL), has been removed. Systems that used that switch will now see a message on boot after upgrading:
sorry, variable 'zil_disable' is not defined in the 'zfs' module
Please update your system to use the new sync property.
Here is a summary of the property:
-------
The options and semantics for the zfs sync property:
sync=standard
This is the default option. Synchronous file system transactions
(fsync, O_DSYNC, O_SYNC, etc) are written out (to the intent log)
and then secondly all devices written are flushed to ensure
the data is stable (not cached by device controllers).
sync=always
For the ultra-cautious, every file system transaction is
written and flushed to stable storage by a system call return.
This obviously has a big performance penalty.
sync=disabled
Synchronous requests are disabled. File system transactions
only commit to stable storage on the next DMU transaction group
commit which can be many seconds. This option gives the
highest performance. However, it is very dangerous as ZFS
is ignoring the synchronous transaction demands of
applications such as databases or NFS.
Setting sync=disabled on the currently active root or /var
file system may result in out-of-spec behavior, application data
loss and increased vulnerability to replay attacks.
This option does *NOT* affect ZFS on-disk consistency.
Administrators should only use this when these risks are understood.
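To make the risk concrete, here is a minimal C sketch (mine, not part of the PSARC case; the path is hypothetical) of the classic pattern that sync=disabled silently breaks - an application treating a successful fsync() as a durable commit:

/*
 * With sync=standard or sync=always the fsync() below does not return
 * until the record is in the intent log and device caches are flushed;
 * with sync=disabled it returns immediately and the record is only
 * stable after the next transaction group commit, many seconds later.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
        const char rec[] = "COMMIT;\n";
        int fd;

        if ((fd = open("/whirlpool/milek/journal",
            O_WRONLY | O_CREAT | O_APPEND, 0600)) == -1) {
                perror("open");
                exit(1);
        }
        if (write(fd, rec, sizeof (rec) - 1) == -1 || fsync(fd) == -1) {
                perror("write/fsync");
                exit(1);
        }
        /* only now may the application acknowledge the commit */
        close(fd);
        return (0);
}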
The property can be set when the dataset is created, or dynamically, and will take effect immediately. To change the property, an administrator can use the standard 'zfs' command. For example:
# zfs create -o sync=disabled whirlpool/milek
# zfs set sync=always whirlpool/perrin
Have fun!
Thursday, April 29, 2010
Gartner on Oracle/Sun New Support Model
"Oracle announces its new Sun support policy that has the potential to radically change the way in which OEMs offer support and may make third-party maintenance offerings for Sun hardware unprofitable."Read more.
Tuesday, April 27, 2010
Saturday, April 24, 2010
Oracle Solaris on HP ProLiant Servers
HP Oracle Solaris 10 Subscriptions and Support:
"Certifying Oracle Solaris on ProLiant servers since 1996, HP is expanding its relationship with Oracle to include selling Oracle Solaris 10 Operating System Subscriptions and support from HP Technology Services on certified ProLiant servers.HP will provide the subscriptions and support for the Oracle Solaris 10 Operating System on certified ProLiant servers and Oracle will provide patches and updates directly to HP's customers through Oracle SunSolve.As part of this expanded relationship HP and Oracle will work together to enhance the customer experience for Oracle Solaris on ProLiant servers and HP increase its participation in the OpenSolaris community."
And of course you can also run OpenSolaris on any x86 hardware, including HP servers, entirely for free if you want. I wonder, though, if it would make sense for HP to also offer support for OpenSolaris - more and more customers are deploying OpenSolaris instead of Solaris 10 on their servers, and Oracle already offers support for it on their own servers.
Monday, March 29, 2010
ZFS diff
PSARC/2010/105:
There is a long-standing RFE for zfs to be able to describe
what has changed between the snapshots of a dataset.
To provide this capability, we propose a new 'zfs diff'
sub-command. When run with appropriate privilege the
sub-command describes what file system level changes have
occurred between the requested snapshots. A diff between the
current version of the file system and one of its snapshots is
also supported.
Five types of change are described:
o File/Directory modified
o File/Directory present in older snapshot but not newer
o File/Directory present in newer snapshot but not older
o File/Directory renamed
o File link count changed
zfs diff snapshot snapshot | filesystem
Gives a high level description of the differences between a
snapshot and a descendant dataset. The descendant may either
be a later snapshot of the dataset or the current dataset.
For each file system object that has undergone a change
between the original snapshot and the descendant, the type of
change is described along with the name of the file or
directory. In the case of a rename, both the old and new
names are shown.
The type of change is described with a single character:
+ Indicates the file/directory was added in the later dataset
- Indicates the file/directory was removed in the later dataset
M Indicates the file/directory was modified in the later dataset
R Indicates the file/directory was renamed in the later dataset
If the modification involved a change in the link count of a
file, the change will be expressed as a delta within
parentheses on the modification line. Example outputs are
below:
M /myfiles/
M /myfiles/link_to_me (+1)
R /myfiles/rename_me -> /myfiles/renamed
- /myfiles/delete_me
+ /myfiles/new_file
Saturday, March 27, 2010
Project Brussels Phase II
This project introduces a new CLI utility called ipadm(1M) that can be used to perform:
* IP interface management (creation/deletion)
* IP address management (add, delete, show) for static IPv4 & IPv6 addresses, DHCP, and stateless/stateful IPv6 address configuration
* protocol (IP/TCP/UDP/SCTP/ICMP) tunable management (set, get, reset) of global (ndd(1M)) tunables, as well as per-interface tunables
* persistence for all of the three features above, so that on reboot the configuration is reapplied
Please see the case materials of PSARC 2010/080 for the latest design document and read the ipadm(1M) man page for more information.
This has been integrated into OpenSolaris.
Friday, March 26, 2010
CPU/MEM HotPlug on x86 in OpenSolaris
The integration of:
PSARC/2009/104 Hot-Plug Support for ACPI-based Systems
PSARC/2009/550 PSMI extensions for CPU Hotplug
PSARC/2009/551 acpihpd ACPI Hotplug Daemon
PSARC/2009/591 Attachment Points for Hotpluggable x86 systems
6862510 provide support for cpu hot add on x86
6874842 provide support for memory hot add on x86
6883891 cmi interface needs to support dynamic reconfiguration
6884154 x2APIC and kmdb may not function properly during CPU hotplug event.
6904971 low priority acpi nexus code review feedback
6877301 lgrp should support memory hotplug flag in SRAT table
Introduces support for hot-adding CPUs and memory to a running Xeon 7500 platform.
Sunday, February 28, 2010
ReadyBoost
Thursday, February 25, 2010
ZVOLs - Write Cache
First let's check if the write cache is disabled for the zvol rpool/iscsi/vol1:
milek@r600:~/progs# ./zvol_wce /dev/zvol/rdsk/rpool/iscsi/vol1
Write Cache: disabled
Now let's issue 1000 writes:
milek@r600:~/progs# ptime ./sync_file_create_loop /dev/zvol/rdsk/rpool/iscsi/vol1 1000
real 12.013566363
user 0.003144874
sys 0.104826470
milek@r600:~/progs# ./zvol_wce /dev/zvol/rdsk/rpool/iscsi/vol1 1
milek@r600:~/progs# ./zvol_wce /dev/zvol/rdsk/rpool/iscsi/vol1
Write Cache: enabled
milek@r600:~/progs# ptime ./sync_file_create_loop /dev/zvol/rdsk/rpool/iscsi/vol1 1000
real 0.239360231
user 0.000949655
sys 0.019019552
Worked fine.
The zvol_wce program is not idiot-proof and it doesn't check if the operation succeeded or not. You should be able to compile it by issuing: gcc -o zvol_wce zvol_wce.c
milek@r600:~/progs# cat zvol_wce.c
/* Robert Milkowski
http://milek.blogspot.com
*/
#include <unistd.h>
#include <stropts.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stropts.h>
#include <sys/dkio.h>
int main(int argc, char **argv)
{
char *path;
int wce = 0;
int rc;
int fd;
path = argv[1];
if ((fd = open(path, O_RDONLY|O_LARGEFILE)) == -1)
exit(2);
if (argc>2) {
wce = atoi(argv[2]) ? 1 : 0;
rc = ioctl(fd, DKIOCSETWCE, &wce);
}
else {
rc = ioctl(fd, DKIOCGETWCE, &wce);
printf("Write Cache: %s\n", wce ? "enabled" : "disabled");
}
close(fd);
exit(0);
}
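The sync_file_create_loop program used in the timing runs above is not listed. As a rough sketch of what such a test can look like - this is my guess at its shape, not the original source - a loop of synchronous one-byte writes is enough:

/*
 * Hypothetical reconstruction of a synchronous-write micro-benchmark:
 * issue <count> one-byte O_DSYNC writes to the given device. With the
 * zvol write cache disabled every write waits for the media, which is
 * consistent with 1000 writes taking ~12s above vs ~0.24s with the
 * cache enabled.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        char byte = 0;
        int fd, i, n;

        if (argc < 3) {
                fprintf(stderr, "usage: %s <device> <count>\n", argv[0]);
                exit(1);
        }
        n = atoi(argv[2]);
        if ((fd = open(argv[1], O_WRONLY | O_DSYNC)) == -1) {
                perror("open");
                exit(2);
        }
        for (i = 0; i < n; i++) {
                if (pwrite(fd, &byte, 1, 0) != 1) {
                        perror("pwrite");
                        exit(3);
                }
        }
        close(fd);
        return (0);
}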
Tuesday, February 16, 2010
Friday, February 12, 2010
Wednesday, February 10, 2010
Dell - No 3rd Party Disk Drives Allowed
"[...]
Is Dell preventing the use of 3rd-party HDDs now?
[...]
Howard_Shoobe at Dell.com:
Thank you very much for your comments and feedback regarding exclusive use of Dell drives. It is common practice in enterprise storage solutions to limit drive support to only those drives which have been qualified by the vendor. In the case of Dell's PERC RAID controllers, we began informing customers when a non-Dell drive was detected with the introduction of PERC5 RAID controllers in early 2006. With the introduction of the PERC H700/H800 controllers, we began enabling only the use of Dell qualified drives. There are a number of benefits for using Dell qualified drives in particular ensuring a positive experience and protecting our data. While SAS and SATA are industry standards there are differences which occur in implementation. An analogy is that English is spoken in the UK, US and Australia. While the language is generally the same, there are subtle differences in word usage which can lead to confusion. This exists in storage subsystems as well. As these subsystems become more capable, faster and more complex, these differences in implementation can have greater impact. Benefits of Dell's Hard Disk and SSD drives are outlined in a white paper on Dell's web site at http://www.dell.com/downloads/global/products/pvaul/en/dell-hard-drives-pov.pdf"
IBM 2010: Customers in Revolt
Tuesday, February 09, 2010
Power 7
Thursday, February 04, 2010
Data Corruption - ZFS saves the day, again
XXX cl_runtime: [ID 856360 kern.warning] WARNING: QUORUM_GENERIC: quorum_read_keys error:
Reading the registration keys failed on quorum device /dev/did/rdsk/d7s2 with error 22.
This is the relevant source code:
int
quorum_scsi_sector_read(
[...]
        error = quorum_ioctl_with_retries(vnode_ptr, USCSICMD, (intptr_t)&ucmd,
            &retval);
        if (error != 0) {
                CMM_TRACE(("quorum_scsi_sector_read: ioctl USCSICMD "
                    "returned error (%d).\n", error));
                kmem_free(ucmd.uscsi_rqbuf, (size_t)SENSE_LENGTH);
                return (error);
        }

        //
        // Calculate and compare the checksum if check_data is true.
        // Also, validate the pgres_id string at the beg of the sector.
        //
        if (check_data) {
                PGRE_CALCCHKSUM(chksum, sector, iptr);

                // Compare the checksum.
                if (PGRE_GETCHKSUM(sector) != chksum) {
                        CMM_TRACE(("quorum_scsi_sector_read: "
                            "checksum mismatch.\n"));
                        kmem_free(ucmd.uscsi_rqbuf, (size_t)SENSE_LENGTH);
                        return (EINVAL);
                }

                //
                // Validate the PGRE string at the beg of the sector.
                // It should contain PGRE_ID_LEAD_STRING[1|2].
                //
                if ((os::strncmp((char *)sector->pgres_id, PGRE_ID_LEAD_STRING1,
                    strlen(PGRE_ID_LEAD_STRING1)) != 0) &&
                    (os::strncmp((char *)sector->pgres_id, PGRE_ID_LEAD_STRING2,
                    strlen(PGRE_ID_LEAD_STRING2)) != 0)) {
                        CMM_TRACE(("quorum_scsi_sector_read: pgre id "
                            "mismatch. The sector id is %s.\n",
                            sector->pgres_id));
                        kmem_free(ucmd.uscsi_rqbuf, (size_t)SENSE_LENGTH);
                        return (EINVAL);
                }

        }
        kmem_free(ucmd.uscsi_rqbuf, (size_t)SENSE_LENGTH);

        return (error);
}
With a simple DTrace script I could verify whether quorum_scsi_sector_read() does indeed return 22, and also print what else is going on within the function:
56 -> __1cXquorum_scsi_sector_read6FpnFvnode_LpnLpgre_sector_b_i_ 6308555744942019 enter
56 -> __1cZquorum_ioctl_with_retries6FpnFvnode_ilpi_i_ 6308555744957176 enter
56 <- __1cZquorum_ioctl_with_retries6FpnFvnode_ilpi_i_ 6308555745089857 rc: 0
56 -> __1cNdbg_print_bufIdbprintf6MpcE_v_ 6308555745108310 enter
56 -> __1cNdbg_print_bufLdbprintf_va6Mbpcrpv_v_ 6308555745120941 enter
56 -> __1cCosHsprintf6FpcpkcE_v_ 6308555745134231 enter
56 <- __1cCosHsprintf6FpcpkcE_v_ 6308555745148729 rc: 2890607504684
56 <- __1cNdbg_print_bufLdbprintf_va6Mbpcrpv_v_ 6308555745162898 rc: 1886718112
56 <- __1cNdbg_print_bufIdbprintf6MpcE_v_ 6308555745175529 rc: 1886718112
56 <- __1cXquorum_scsi_sector_read6FpnFvnode_LpnLpgre_sector_b_i_ 6308555745188599 rc: 22
21 -> __1cNdbg_print_bufIdbprintf6MpcE_v_ 6309628794339298 quorum_scsi_sector_read: checksum mismatch.
Thursday, January 21, 2010
The EC gives a green light
"The European Commission has approved under the EU Merger Regulation the proposed acquisition of US hardware and software vendor Sun Microsystems Inc. by Oracle Corporation, a US enterprise software company. After an in-depth examination, launched in September 2009 (see IP/09/1271 ), the Commission concluded that the transaction would not significantly impede effective competition in the European Economic Area (EEA) or any substantial part of it."
Friday, January 15, 2010
MySQL TOP
- added NCQRS, NCQRS/s columns
- fixed issue with dtrace dropping variables if the script was running for extended time periods
- cosmetic changes to the output
CLIENT IP CONN CONN/s QRS QRS/s NCQRS NCQRS/s TIME VTIME
10.10.10.35 10 0 61 0 32 0 0 0
10.10.10.30 17 0 73 0 73 0 0 0
10.10.10.100 52 0 90 0 90 0 0 0
xx-www-11.portal 92 0 249 0 48 0 0 0
xx-cms-1.portal 95 0 1795 5 1669 4 48 48
xx-www-9.portal 198 0 634 1 278 0 0 0
xx-www-13.portal 239 0 986 2 366 1 0 0
xx-www-3.portal 266 0 1028 2 455 1 1 0
xx-www-12.portal 266 0 1070 2 561 1 3 2
xx-www-5.portal 300 0 1431 3 593 1 2 2
xx-www-10.portal 333 0 1221 3 678 1 3 2
xx-www-6.portal 334 0 989 2 446 1 1 0
xx-www-8.portal 358 0 1271 3 497 1 1 0
xx-www-4.portal 395 1 1544 4 744 2 0 0
xx-www-2.portal 445 1 1729 4 764 2 3 2
xx-www-1.portal 962 2 3555 9 1670 4 22 21
xx-www-7.portal 1016 2 3107 8 1643 4 117 115
====== ===== ====== ===== ====== ===== ===== =====
5378 14 20833 58 10607 29 207 199
Running for 359 seconds.
CONN total number of connections
CONN/s average number of connections per second
QRS total number of queries
QRS/s average number of queries per second
NCQRS total number of executed queries
NCQRS/s average number of executed queries per second
TIME total clock time in seconds for all queries
VTIME total CPU time in seconds for all queries
./mysql_top.d 12345 10s
# cat mysql_top.d
#!/usr/sbin/dtrace -qCs
/*
Robert Milkowski
*/
#define CLIENTS self->client_ip == "10.10.10.11" ? "xx-www-1.portal" : \
self->client_ip == "10.10.10.12" ? "xx-www-2.portal" : \
self->client_ip == "10.10.10.13" ? "xx-www-3.portal" : \
self->client_ip == "10.10.10.14" ? "xx-www-4.portal" : \
self->client_ip == "10.10.10.15" ? "xx-www-5.portal" : \
self->client_ip == "10.10.10.16" ? "xx-www-6.portal" : \
self->client_ip == "10.10.10.17" ? "xx-www-7.portal" : \
self->client_ip == "10.10.10.18" ? "xx-www-8.portal" : \
self->client_ip == "10.10.10.19" ? "xx-www-9.portal" : \
self->client_ip == "10.10.10.20" ? "xx-www-10.portal" : \
self->client_ip == "10.10.10.21" ? "xx-www-11.portal" : \
self->client_ip == "10.10.10.22" ? "xx-www-12.portal" : \
self->client_ip == "10.10.10.23" ? "xx-www-13.portal" : \
self->client_ip == "10.10.10.29" ? "xx-cms-1.portal" : \
self->client_ip
#define LEGEND "\n \
CONN total number of connections \n \
CONN/s average number of connections per second \n \
QRS total number of queries \n \
QRS/s average number of queries per second \n \
NCQRS total number of executed queries \n \
NCQRS/s average number of executed queries per second \n \
TIME total clock time in seconds for all queries \n \
VTIME total CPU time in seconds for all queries\n"
BEGIN
{
start = timestamp;
total_queries = 0;
total_nc_queries = 0;
total_conn = 0;
total_time = 0;
total_vtime = 0;
}
syscall::getpeername:entry
/ pid == $1 /
{
self->in = 1;
self->arg0 = arg0; /* int s */
self->arg1 = arg1; /* struct sockaddr * */
self->arg2 = arg2; /* size_t len */
}
syscall::getpeername:return
/ self->in /
{
this->len = *(socklen_t *) copyin((uintptr_t)self->arg2, sizeof(socklen_t));
this->socks = (struct sockaddr *) copyin((uintptr_t)self->arg1, this->len);
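/* for AF_INET, sa_data[0..1] hold the peer port in network byte
   order and sa_data[2..5] hold the four octets of the IPv4 address */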
this->hport = (uint_t)(this->socks->sa_data[0]);
this->lport = (uint_t)(this->socks->sa_data[1]);
this->hport <<= 8; this->port = this->hport + this->lport;
this->a1 = lltostr((uint_t)this->socks->sa_data[2]);
this->a2 = lltostr((uint_t)this->socks->sa_data[3]);
this->a3 = lltostr((uint_t)this->socks->sa_data[4]);
this->a4 = lltostr((uint_t)this->socks->sa_data[5]);
this->s1 = strjoin(this->a1, ".");
this->s2 = strjoin(this->s1, this->a2);
this->s1 = strjoin(this->s2, ".");
this->s2 = strjoin(this->s1, this->a3);
this->s1 = strjoin(this->s2, ".");
self->client_ip = strjoin(this->s1, this->a4);
@conn[CLIENTS] = count();
@conn_ps[CLIENTS] = count();
total_conn++;
self->arg0 = 0;
self->arg1 = 0;
self->arg2 = 0;
}
pid$1::*close_connection*:entry
/ self->in /
{
self->in = 0;
self->client_ip = 0;
}
pid$1::*mysql_parse*:entry
/ self->in /
{
self->t = timestamp;
self->vt = vtimestamp;
@query[CLIENTS] = count();
@query_ps[CLIENTS] = count();
total_queries++;
}
pid$1::*mysql_parse*:return
/ self->in /
{
@time[CLIENTS] = sum(timestamp-self->t);
@vtime[CLIENTS] = sum(vtimestamp-self->vt);
total_time += (timestamp - self->t);
total_vtime += (vtimestamp - self->vt);
self->t = 0;
self->vt = 0;
}
pid$1::*mysql_execute_command*:entry
/ self->in /
{
@nc_query[CLIENTS] = count();
@nc_query_ps[CLIENTS] = count();
total_nc_queries++;
}
tick-$2
{
/* clear the screen and move cursor to top left corner */
printf("\033[H\033[J");
this->seconds = (timestamp - start) / 1000000000;
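/* turn the *_ps aggregations into per-second rates and convert
   the nanosecond sums in @time/@vtime to seconds */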
normalize(@conn_ps, this->seconds);
normalize(@query_ps, this->seconds);
normalize(@nc_query_ps, this->seconds);
normalize(@time, 1000000000);
normalize(@vtime, 1000000000);
printf("%-16s %s %s %s %s %s %s %s %s\n", \
"CLIENT IP", "CONN", "CONN/s", "QRS", "QRS/s", "NCQRS", "NCQRS/s", "TIME", "VTIME");
printa("%-16s %@6d %@5d %@6d %@5d %@6d %@5d %@5d %@5d\n", \
@conn, @conn_ps, @query, @query_ps, @nc_query, @nc_query_ps, @time, @vtime);
printf("%-16s %s %s %s %s %s %s %s %s\n", \
"", "======", "=====", "======", "=====", "======", "=====", "=====", "=====");
printf("%-16s %6d %5d %6d %5d %6d %5d %5d %5d\n", "", \
total_conn, total_conn/this->seconds, total_queries, total_queries/this->seconds, \
total_nc_queries, total_nc_queries/this->seconds, \
total_time/1000000000, total_vtime/1000000000);
/*
denormalize(@conn_ps);
denormalize(@query_ps);
denormalize(@nc_query_ps);
denormalize(@total_time);
denormalize(@total_vtime);
*/
printf("Running for %d seconds.\n", this->seconds);
printf(LEGEND);
}
Thursday, January 14, 2010
MySQL Query Time Histogram
1s resolution
value ------------- Distribution ------------- count
< 0 | 0
0 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 4700573
1 | 6366
2 | 35
3 | 23
4 | 39
5 | 8
6 | 6
7 | 5
8 | 7
9 | 4
>= 10 | 9
Running for 73344 seconds.
1s resolution
value ------------- Distribution ------------- count
< 0 | 0
0 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 4686051
1 | 2972
2 | 0
Running for 73024 seconds.
./mysql_query_time_distribution.d 12345 10s
# cat mysql_query_time_distribution.d
#!/usr/sbin/dtrace -qs
BEGIN
{
start=timestamp;
}
pid$1::*mysql_parse*:entry
{
self->t=timestamp;
}
pid$1::*mysql_parse*:return
/ self->t /
{
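/* bucket each query's latency in whole seconds, from <0 up to >=10 */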
@["1s resolution"]=lquantize((timestamp-self->t)/1000000000,0,10);
self->t=0;
}
tick-$2
{
printa(@);
printf("Running for %d seconds.\n", (timestamp-start)/1000000000);
}
Thursday, January 07, 2010
Identifying a Full Table Scan in MySQL with DTrace
# ./m2.d 60s 10000
### read() count: 64076 ###
SELECT * FROM clip WHERE parent_id=20967 AND type=4 ORDER BY quality ASC
### read() count: 64076 ###
SELECT * FROM clip WHERE parent_id=14319 AND type=4 ORDER BY quality ASC
### read() count: 64076 ###
SELECT * FROM clip WHERE parent_id=20968 AND type=4 ORDER BY quality ASC
There are about 60k entries for the parent_id column, which suggests that MySQL is doing a full table scan when executing the above queries. A quick check within MySQL revealed that there was no index on the parent_id column, so MySQL was indeed doing full table scans. After the index was created:
# ./m2.d 60s 1
[filtered out all unrelated queries]
### read() count: 6 ###
SELECT * FROM clip WHERE parent_id=22220 AND type=4 ORDER BY quality ASC
### read() count: 8 ###
SELECT * FROM clip WHERE parent_id=8264 AND type=4 ORDER BY quality ASC
### read() count: 4 ###
SELECT * FROM clip WHERE parent_id=21686 AND type=4 ORDER BY quality ASC
### read() count: 4 ###
SELECT * FROM clip WHERE parent_id=21687 AND type=4 ORDER BY quality ASC
# cat m2.d
#!/usr/sbin/dtrace -qs
#pragma D option strsize=8192
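/* the target mysqld pid (13550) is hardcoded in the probe names below;
   $1 is the total run time, $2 is the read() count threshold */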
pid13550::*mysql_parse*:entry
{
self->a=1;
self->query=copyinstr(arg1);
self->count=0;
}
pid13550::*mysql_parse*:return
/ self->a && self->count > $2 /
{
printf("### read() count: %d ###\n%s\n\n", self->count, self->query);
self->a=0;
self->query=0;
}
pid13550::*mysql_parse*:return
/ self->a /
{
self->a=0;
self->query=0;
}
syscall::*read*:entry
/ self->a /
{
self->count++;
}
tick-$1
{
exit(0);
}
Tuesday, January 05, 2010
zpool split
The new feature should be available in build 131.
OVERVIEW:
Some practices in data centers are built around the use of a volume
manager's ability to clone data. An administrator will attach a set of
disks to mirror an existing configuration, wait for the resilver to
complete, and then physically detach and remove those disks to a new
location.
Currently in zfs, the only way to achieve this is by using zpool offline
to disable a set of disks, zpool detach to permanently remove them after
they've been offlined, move the disks over to a new host, zpool
force-import of the moved disks, and then zpool detach the disks that were
left behind.
This is cumbersome and prone to error, and even then the new pool
cannot be imported on the same host as the original.
PROPOSED SOLUTION:
Introduce a "zpool split" command. This will allow an administrator to
extract one disk from each mirrored top-level vdev and use them to create
a new pool with an exact copy of the data. The new pool can then be
imported on any machine that supports that pool's version.
See implementation details.