A Brief Introduction to MySQL Performance Tuning

Here are some common performance tuning concepts that I frequently run into. Please note that this really is only a basic introduction to performance tuning; more in-depth tuning depends heavily on your particular systems, data and usage.

Server Variables

For tuning InnoDB performance, your primary variable is innodb_buffer_pool_size. This is the chunk of memory that InnoDB uses for caching data, indexes and various pieces of information about your database. The bigger, the better. If you can cache all of your data in memory, you’ll see significant performance improvements.
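As a concrete sketch, you might set it in the [mysqld] section of my.cnf like this (the 4G figure is purely illustrative; size it to your own data set and available memory):

[mysqld]
# InnoDB cache for data, indexes and internal structures.
# On a dedicated database server, this is the variable to raise first.
innodb_buffer_pool_size = 4G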

For MyISAM, there is a similar buffer defined by key_buffer_size, though this is only used for indexes, not data. Again, the bigger, the better.

Other variables that are worth investigating for performance tuning are:

query_cache_size – This can be very useful if you have a small number of read queries that are repeated frequently, with no write queries in between. There have been problems with too large a query cache locking up the server, so you will need to experiment to find a value that’s right for you.

innodb_log_file_size – Don’t fall into the trap of setting this too large. A large InnoDB log file group is necessary if you have lots of large, concurrent transactions, but comes at the expense of slowing down InnoDB recovery in the event of a crash.

sort_buffer_size – Another one that shouldn’t be set too large. Peter Zaitsev did some testing a while back showing that increasing sort_buffer_size can in fact reduce the speed of the query.
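By way of illustration only, here is a my.cnf fragment touching all three (the values are assumptions to experiment from, not recommendations):

[mysqld]
# Helps only for frequently repeated reads; too large a cache can cause stalls
query_cache_size     = 32M
# Bigger log files suit large concurrent transactions, but slow crash recovery
innodb_log_file_size = 64M
# Per-connection sort buffer; bigger is not automatically faster
sort_buffer_size     = 2M

You can inspect the running values with SHOW VARIABLES LIKE '%buffer%'; before and after changing them.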

Server Hardware

There are a few solid recommendations for improving the performance of MySQL by upgrading your hardware:

  • Use a 64-bit processor, operating system and MySQL binary. This will allow you to address lots of RAM. At this point in time, InnoDB does have issues scaling past 8 cores, so you don’t need to go out of your way to have lots of processors.
  • Speaking of RAM, buy lots of it. Enough to fit all of your data and indexes, if you can.
  • If you can’t fit all of your data into RAM, you’ll need fast disks, RAID if you can. Have multiple disks, so you can separate your data files, OS files and log files onto different physical disks.

Query Tuning

Finally, and probably most importantly, we look at tuning queries. In particular, we make sure that they’re using indexes and running quickly. To do so, turn on the Slow Query Log for a day, with log_queries_not_using_indexes enabled as well. Run the resulting log through mysqldumpslow, which will produce a summary of the log. This will help you prioritize which queries to tackle first. Then you can use EXPLAIN to find out what they’re doing, and adjust your indexes accordingly.
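As a sketch of that workflow (the log path and the two-second threshold are just examples to adapt):

# my.cnf ([mysqld] section): enable the Slow Query Log
log-slow-queries              = /var/log/mysql/slow.log
long_query_time               = 2
log-queries-not-using-indexes

# After a day's traffic, summarize the log, sorted by total query time
shell> mysqldumpslow -s t /var/log/mysql/slow.log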

Have fun!

Don’t Quote Your Numbers

It’s a fairly simple rule, and something that should be obeyed for your health and sanity.

There are a couple of bugs which you could run into when quoting large numbers. First of all, Bug #34384, which concerns quoting large INTs in the WHERE condition of an UPDATE or DELETE. It seems that this causes a table scan, which is going to be slooooow on big tables.
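To illustrate (the table and column names here are made up), the only difference is the quotes:

-- Quoted: the comparison may be treated as a string, forcing a table scan
mysql> UPDATE t SET hits = hits + 1 WHERE id = '123456789012345678';

-- Unquoted: the index on id can be used as normal
mysql> UPDATE t SET hits = hits + 1 WHERE id = 123456789012345678;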

Similarly, there is the more recently discovered Bug #43319. You can run into this if you quote large INTs in the IN clause of a SELECT … WHERE. For example:

mysql> EXPLAIN SELECT * FROM a WHERE a IN('9999999999999999999999')\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: NULL
         type: NULL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: NULL
        Extra: Impossible WHERE noticed after reading const tables
1 row in set (0.00 sec)

mysql> EXPLAIN SELECT * FROM a WHERE a IN('9999999999999999999999', '9999999999999999999999')\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: a
         type: ALL
possible_keys: PRIMARY
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 655360
        Extra: Using where
1 row in set (0.00 sec)

Note that you only run into it when you quote multiple large numbers.

Anyway, the long and the short of this post is: if at all possible, don’t quote numbers. MySQL will love you for it.

The \G modifier in the MySQL command line client

A little-publicized but exceedingly useful feature of the MySQL command line client is the \G modifier. It formats the query output vertically, so you can read through it more easily. To use it, just replace the semi-colon at the end of the query with ‘\G’.

For example, checking the master status:

mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000193 |     7061 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

And the same query with \G:

mysql> SHOW MASTER STATUS\G
*************************** 1. row ***************************
            File: mysql-bin.000193
        Position: 7061
1 row in set (0.00 sec)

Now try this for the much larger SHOW SLAVE STATUS. Or for the enormous SHOW ENGINE INNODB STATUS.

As you can see, this is a handy option to make your console output much easier to read.

Our Valve Overlords

So, it seems that Valve Software are yet again trying to stop people from gaining new weapons in Team Fortress 2. The Scout update was released yesterday, and people want to try out the new weapons as quickly as possible. As per normal, the targets were those who used the Steam Achievement Manager.

Dear Valve, here’s a hint: if I wanted to grind away for hours to get new weapons, I’d be playing World of Warcraft.

As with last time, only a handful of people are reporting their weapons being taken away from them. This isn’t a deterrent, it barely rates as news.

Valve, I implore you, don’t go down this road. We know you’re trying to encourage people to play more, that you want to reward regular players. The fact is, not all of us have copious quantities of spare time to devote to playing each class. We just want to try out the new weapons, have a bit of a mess around, then go about our lives.

A recurring comment is that you want people to gain achievements through their regular game play, that it should come as a surprise. I ask you, then, which is more in line with your philosophy: unlocking just the weapons using the Steam Achievement Unlocker, or grinding them out on achievement servers, reducing the achievements to repetitive work rather than fun?

For reference, I used the unlocker, and I still have my weapons. Same as every other pack.

And for those wondering why I’m posting this on my blog that almost certainly isn’t being read by the TF2 team, it’s just a public copy of a similar email I’ve sent to Valve. They’ve been good about listening to public feedback in the past, I’m hoping that this time is no exception. If you feel the same, send an email to Gabe Newell.

Upgrading MySQL with minimal downtime through Replication


With the release of MySQL 5.1, many DBAs are going to be scheduling downtime to upgrade their MySQL Server. As with all upgrades between major version numbers, it requires one of two upgrade paths:

  • Dump/reload: The safest method of upgrading, but it takes out your server for quite some time, especially if you have a large data set.
  • mysql_upgrade: A much faster method, but it can still be slow for very large data sets.
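For reference, the latter is usually as simple as running the bundled tool against the upgraded server (your connection options may differ):

shell> mysql_upgrade -u root -p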

I’m here to present a third option. It requires minimal application downtime, and is reasonably simple to prepare for and perform.


First of all, you’re going to need a second server (which I’ll refer to as S2). It will act as a ‘stand-in’, while the main server (which I’ll refer to as S1) is upgraded. Once S2 is ready to go, you can begin the preparation:

  • If you haven’t already, enable Binary Logging on S1. We will need it to act as a replication Master.
  • Add an extra bit of functionality to your backup procedure. You will need to store the Binary Log position from when the backup was taken.
    • If you’re using mysqldump, simply add the --master-data option to your mysqldump call.
    • If you’re using InnoDB Hot Backup, there’s no need to make a change.  The Binary Log position is shown when you restore the backup.
    • For other backup methods, you will probably need to get the Binary Log position manually:
      mysql> FLUSH TABLES WITH READ LOCK;
      mysql> SHOW MASTER STATUS;
      (Perform backup now...)
      mysql> UNLOCK TABLES;

    Once you have a backup with the corresponding Binary Log position, you can set up S2:

    • Install MySQL 5.1 on S2.
    • Restore the backup from S1 to S2.
    • Create the Slave user on S1.
    • Enter the Slave settings on S2. You should familiarise yourself with the Replication documentation.
    • Enable Binary Logging on S2. We’ll need this during the upgrade process.
    • Set up S2 as a Slave of S1:
      • If you used mysqldump with --master-data for the backup, the Binary Log position is already embedded in the dump, so you only need to run:
        mysql> CHANGE MASTER TO MASTER_HOST='S1.ip.address', MASTER_USER='repl_user', MASTER_PASSWORD='repl_password';
      • For any other method, you’ll need to specify the Binary Log position as well:
        mysql> CHANGE MASTER TO MASTER_HOST='S1.ip.address', MASTER_USER='repl_user', MASTER_PASSWORD='repl_password', MASTER_LOG_FILE='mysql-bin.nnnnnnn', MASTER_LOG_POS=mmmmmmmm;
    • Start the Slave on S2:
      mysql> START SLAVE;
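    The Slave user mentioned above can be created on S1 with a GRANT along these lines (the host pattern and credentials are placeholders to replace with your own):

      mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'%' IDENTIFIED BY 'repl_password';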

    The major pre-upgrade work is now complete.


    Just before beginning the upgrade, take a backup of S2. For speed, I’d recommend running the following queries, then shutting down the MySQL server and copying the data files for the backup. Take note of the Binary Log position reported; you’ll need it later to point S1 at S2.

    mysql> STOP SLAVE;
    mysql> SHOW MASTER STATUS;

    Once the backup is complete, restart S2 and let it catch up with S1 again.

    When you’re ready to begin the upgrade, you will need a minor outage. Stop your application, and let S2 catch up with S1. Once it has caught up, they will have identical data. So, switch your application to using S2 instead of S1. Your application can continue running unaffected while you upgrade S1.
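    One way to check how far behind S2 is (the key field to watch is Seconds_Behind_Master, which should read 0 once it has caught up):

      mysql> SHOW SLAVE STATUS\G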

    • Stop the Slave process on S2:
      mysql> STOP SLAVE;
    • Stop S1.
    • Upgrade S1 to MySQL 5.1.
    • Move the S1 data files to a backup location.
    • Move the backup from S2 into S1’s data directory.
    • Start S1.
    • Set up S1 as a Slave of S2, same as when we made S2 a Slave of S1 (using the Binary Log position noted when the S2 backup was taken).
    • Let S1 catch up with S2. When it has caught up, stop your application, and make sure S1 is still caught up with S2.
    • Switch your application back to using S1.

    Complete! Hooray! You just need to run a couple of queries on S1 to clean up the Slave settings:

    mysql> STOP SLAVE;
    mysql> RESET SLAVE;


    You can keep the outage to only a few minutes while performing this upgrade, removing the need for potentially expensive downtime. If you need the downtime to be zero, you probably want to be looking at a Circular Replication setup, though that’s getting a little outside the scope of this blog post.