But once we migrated to Solaris we quickly tried DTrace. The result was really surprising - of all the applications (by executable name), the one eating the most CPU was the ldapsearch utility. It was being called from some scripts, but no one expected it to be the top application. As many of those scripts are written in Perl, we switched them to a Perl LDAP module instead, and once we did that for some of the scripts their CPU usage dropped considerably, ending up somewhere in the noise of all the other applications. A sketch of what such a change might look like follows below.
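For illustration only, here is a minimal sketch of that kind of replacement, assuming the Net::LDAP module (the post just says "ldap perl module") and using hypothetical host, base DN and filter values:

#!/usr/bin/perl
# Sketch: query LDAP directly from Perl instead of forking ldapsearch
# for every lookup. Host, base and filter below are placeholders.
use strict;
use warnings;
use Net::LDAP;

my $ldap = Net::LDAP->new('ldap.example.com') or die "connect failed: $@";
my $mesg = $ldap->bind;                 # anonymous bind
die $mesg->error if $mesg->code;

$mesg = $ldap->search(
    base   => 'dc=example,dc=com',
    filter => '(uid=someuser)',
);
die $mesg->error if $mesg->code;

# One persistent connection serves many queries - no per-lookup fork/exec.
foreach my $entry ($mesg->entries) {
    $entry->dump;                       # print each entry to STDOUT
}

$ldap->unbind;

The win is mostly in avoiding a fork/exec and a fresh LDAP connection for every single lookup; a long-running script reuses one connection instead.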
How hard was it to reach that conclusion? Just this one-liner, which sums the on-CPU time per executable name (print the aggregation with Ctrl-C):
dtrace -n sched:::on-cpu'{self->t=timestamp;}' \
-n sched:::off-cpu'/self->t/{@[execname]=sum(timestamp-self->t);self->t=0;}'
Another interesting thing is that with ZFS's built-in compression (lzjb) we get a compression ratio of about 3.6 for all our collected data - yes, that means the data take up over three times less space on disk without changing any applications.
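For reference, something along these lines enables the compression and reports the achieved ratio (the dataset name is just a placeholder):

zfs set compression=lzjb tank/collected    # enable lzjb compression on the dataset
zfs get compressratio tank/collected       # show the compression ratio ZFS achieved

Only data written after compression is enabled gets compressed, so the reported ratio grows as old data is rewritten or new data arrives.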
Sweet DTrace script, I'm gonna use it on some of my production systems. DTrace rocks, just gotta take the time to learn how to use it more extensively than I'm currently doing...