At Days of Wonder we are huge fans of MySQL (and, for about a year now, of the various Open Query, Percona, Google and other community patches), to the point where we use MySQL for just about everything in production.
But ever since we moved to 5.0, about 3 years ago, the production databases that hold our website and online game systems have had one peculiar issue: the mysqld process uses more and more RAM, up to the point where the kernel OOM killer decides to kill the process.
You’d certainly think we are complete morons for not doing anything in the last 3 years to fix the issue :-)
Unfortunately, I never could replicate the issue in the lab, mostly because it is difficult to reproduce the exact load the production servers see (largely due to the online games activity).
During those 3 years I tried everything I could: other allocators, valgrind, debug builds and so on, all without any success.
For instance, SHOW INNODB STATUS in the patched builds reports how much memory each InnoDB subsystem uses, among which a Dictionary cache figure split into a constant and a variable part.
Several months ago, I analyzed this output just to see which figures were growing, and found that the variable part of the Dictionary cache was increasing (slowly, but definitely).
Sure, seasoned MySQL experts would have been able to tell me exactly what, when and where the problem was; but since I’m not familiar with the code-base, I looked up what this number was and where it was increased (all in dict0dict.c) and added some logging each time it grew.
I then ran this instrumented version for quite a long time on a slave server (just to check it wouldn’t crash in production). But this server didn’t print anything interesting, because it doesn’t see the exact same load as the production masters.
A couple of months after that, I moved this code to one of the masters and bingo! The logs showed exactly which operation, and which tables, exhibited the increase.
As soon as I saw the operation and the table (i.e. rank), I knew what the culprit was. We have a daemon that recomputes the player ranks for our online games every 10 s.
To do this, we’re using the following pattern: the ranks are computed with a SELECT … INTO OUTFILE, reloaded into a scratch table with LOAD DATA, and that table is then swapped in with a RENAME TABLE, the old one being truncated for the next run.
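From memory, it looks roughly like this (the column names and file path are illustrative; only the SELECT … INTO OUTFILE, LOAD DATA, RENAME TABLE and TRUNCATE statements reflect what the daemon really does):

```sql
-- Recompute the ranks into a flat file (an INSERT ... SELECT
-- caused transaction aborts with the binlog enabled).
SELECT player_id, score
  INTO OUTFILE '/tmp/rank.txt'
  FROM game_scores
 ORDER BY score DESC;

-- Reload the file into a scratch table...
LOAD DATA INFILE '/tmp/rank.txt' INTO TABLE rank_new;

-- ...swap it in atomically...
RENAME TABLE rank TO rank_old, rank_new TO rank;
RENAME TABLE rank_old TO rank_new;

-- ...and empty the now-offline copy for the next run.
TRUNCATE TABLE rank_new;
```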
You might ask why I’m using such a convoluted scheme, especially the SELECT INTO OUTFILE and the LOAD DATA. It’s simply that INSERT … SELECT with InnoDB and the binlog enabled can produce transaction aborts (and we were getting tons of them).
Back to the original issue: apparently it lies in the RENAME part of the daemon.
Looking at the dict_table_rename_in_cache function in dict0dict.c, the interesting part is a pair of allocations performed on every rename: both the old table name and the new one are copied into a heap.
Digging into the mem_heap code, I discovered that each table has a heap associated with it, in which InnoDB allocates various things. This heap can only grow (in blocks of 8112 bytes, it seems), since the allocator is not a real one; this is done for performance reasons.
So each time we rename a table, the old name is duplicated into that heap (why? it is already allocated there), along with the new name. Each and every time.
The heap is freed when the table is dropped, so there is a way to reclaim the memory; strictly speaking, this issue is not a memory leak per se.
By the way, I’ve filed a bug about this in the MySQL bug system.
One work-around, besides fixing the code itself, would be to drop the rank table instead of truncating it. The problem with dropping and re-creating InnoDB tables at a fast pace is that the dictionary cache itself will grow: there is no way to purge old tables from it (short of running one of the Percona patches), so the more tables we create, the more memory we use. Back to square one, but worse.
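For reference, a sketch of that work-around (the rank_old/rank_new names are illustrative):

```sql
-- Swap in the freshly computed ranks, then drop the old table
-- instead of truncating it: DROP frees the table's heap...
RENAME TABLE rank TO rank_old, rank_new TO rank;
DROP TABLE rank_old;

-- ...but the matching CREATE adds a brand-new entry to the
-- dictionary cache, which never shrinks either.
CREATE TABLE rank_new LIKE rank;
```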
So right now, I don’t really have any idea how to properly fix the issue. If anyone has one, please don’t hesitate to comment on this blog post :-)
And please, don’t tell me to move to MyISAM…