Thursday, January 07, 2010

Identifying a Full Table Scan in MySQL with DTrace

Yesterday I was looking at some performance issues with a MySQL database. The database is version 5.1.x, so there are no built-in DTrace SDT probes, but much can still be done without them. What I quickly noticed is that mysql was issuing several hundred thousand syscalls per second, most of them pread()s and read()s. The databases use the MyISAM engine, so mysql does not have a data buffer cache and leaves all the data caching to the filesystem. I was interested in how many reads were performed per query, so I wrote a small DTrace script. The script takes two arguments: a time after which it will exit and a threshold representing the minimum number of [p]read()s a query has to issue for the query to be printed.

So let's see an example output where we are interested only in queries which cause more than 10000 reads to be issued:
# ./m2.d 60s 10000
### read() count: 64076 ###
SELECT * FROM clip WHERE parent_id=20967 AND type=4 ORDER BY quality ASC

### read() count: 64076 ###
SELECT * FROM clip WHERE parent_id=14319 AND type=4 ORDER BY quality ASC

### read() count: 64076 ###
SELECT * FROM clip WHERE parent_id=20968 AND type=4 ORDER BY quality ASC

There are about 60k entries for the parent_id column, which suggests that mysql is doing a full table scan when executing the above queries. A quick check within mysql revealed that there was no index on the parent_id column, so mysql was indeed doing full table scans.
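The check and the fix boil down to something along these lines (the table and column names come from the queries above; the index name and the abbreviated EXPLAIN output are mine, shown only to illustrate what a full scan typically looks like):

mysql> EXPLAIN SELECT * FROM clip WHERE parent_id=20967 AND type=4 ORDER BY quality ASC;
[ type: ALL, possible_keys: NULL, key: NULL - i.e. a full table scan ]

mysql> CREATE INDEX idx_clip_parent_id ON clip (parent_id);

After the index was created: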

# ./m2.d 60s 1
[filtered out all unrelated queries]
### read() count: 6 ###
SELECT * FROM clip WHERE parent_id=22220 AND type=4 ORDER BY quality ASC

### read() count: 8 ###
SELECT * FROM clip WHERE parent_id=8264 AND type=4 ORDER BY quality ASC

### read() count: 4 ###
SELECT * FROM clip WHERE parent_id=21686 AND type=4 ORDER BY quality ASC

### read() count: 4 ###
SELECT * FROM clip WHERE parent_id=21687 AND type=4 ORDER BY quality ASC

So now each query is issuing about four orders of magnitude fewer read()s!

Granted, all these reads were satisfied from the ZFS ARC cache, but it still saves hundreds of thousands of unnecessary context switches and memory copies, making the queries *much* quicker to execute and saving valuable CPU cycles. The real issue I was working on was a little bit more complicated, but you get the idea.

The point I'm trying to make here is that although MySQL lacks good tools to analyze its workload, you have a very powerful tool called DTrace which allows you to relatively quickly identify which queries are causing an issue and why - all of that on a live running service, without having to reconfigure or restart mysql. I know there is the MySQL Query Analyzer (or whatever it is called), but it requires a mysql proxy to be deployed... In this case it was much quicker and easier to use DTrace.

Below you will find the script. Please note that I hard-coded the PID of the database and the script could be cleaned up, etc. - it is the working copy I used. The script can easily be modified to provide lots of additional useful information, or it can be limited to only a specific MyISAM file, etc. (see the sketch after the listing).

# cat m2.d
#!/usr/sbin/dtrace -qs

#pragma D option strsize=8192

/*
 * A query is being parsed/executed - remember its text and reset the
 * read counter. The mysqld PID (13550) is hard-coded.
 */
pid13550::*mysql_parse*:entry
{
        self->a = 1;
        self->query = copyinstr(arg1);
        self->count = 0;
}

/* The query issued more than $2 [p]read()s - print the count and the query. */
pid13550::*mysql_parse*:return
/ self->a && self->count > $2 /
{
        printf("### read() count: %d ###\n%s\n\n", self->count, self->query);

        self->a = 0;
        self->query = 0;
}

/* Clean up the thread-local variables for queries below the threshold. */
pid13550::*mysql_parse*:return
/ self->a /
{
        self->a = 0;
        self->query = 0;
}

/* Count every read() and pread() issued while a query is in progress. */
syscall::*read*:entry
/ self->a /
{
        self->count++;
}

/* Exit after the time given as the first argument, e.g. 60s. */
tick-$1
{
        exit(0);
}
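As an example of the kind of modification mentioned above, the read counting can be restricted to a single MyISAM data file by looking at the file behind the descriptor. A rough sketch, assuming the built-in fds[] array is available and using a made-up path - only the counting probe needs to change:

/* Count only the reads hitting one particular MyISAM data file. */
syscall::*read*:entry
/ self->a && fds[arg0].fi_pathname == "/path/to/datadir/clip.MYD" /
{
        self->count++;
}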
