Recent Posts

Nginx upload progress module v0.8!

less than 1 minute read

Yes, I know… I released v0.7 less than a month ago, but that release was crippled by a crash that could happen at start or reload.

Changes

Bonus features in this new version, brought to you by Tizoc:

  • JSONP support
  • Long-awaited fix allowing X-Progress-ID to be the last parameter in the request

If you wonder what JSONP is (as I did when I got the merge request), you can check the original blog post that led to it.

To activate JSONP you need to:

  1. use the upload_progress_jsonp_output directive in the progress probe location
  2. declare the JSONP callback parameter with the upload_progress_jsonp_parameter directive
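
For illustration, a probe location could look like this. This is only a minimal sketch, not part of the release notes: it assumes a tracking zone named proxied has been declared with the upload_progress directive, and it uses callback as the JSONP parameter name.

location ^~ /progress {
    # report uploads tracked in the 'proxied' upload_progress zone
    report_uploads proxied;
    # send the response as JSONP instead of the default JavaScript object
    upload_progress_jsonp_output;
    # name of the GET parameter that carries the callback function name
    upload_progress_jsonp_parameter callback;
}

With this in place, a probe request such as /progress?X-Progress-ID=...&callback=myHandler should come back wrapped in a call to myHandler.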

This version has been tested with Nginx 0.7.64 and 0.8.30.

How do you get it?

Easy: download the tarball from the Nginx upload progress module GitHub repository download section.

If you want to report a bug, please use the GitHub issue tracker.

Nginx upload progress module v0.7!

1 minute read

I’m proud to announce the release of the Nginx Upload Progress module v0.7.

This version brings a crash fix and various new features implemented by Valery Kholodkov (the author of the famous Nginx Upload Module).

This version has been tested with Nginx 0.7.64.

Changes

  • Fixed segfault when uploads are aborted (thanks to Markus Doppelbauer for his bug report)
  • Session ID header name is now configurable (thanks to Valery Kholodkov)
  • Added a directive to format the output as pure JSON (thanks to Valery Kholodkov)
  • Added a directive to format the output with a configurable template (thanks to Valery Kholodkov)
  • Added a directive to set the probe response content-type (thanks to Valery Kholodkov)
  • Added upload status variables (needs a status patch) (thanks to Valery Kholodkov)

What’s now cool!

What is cool is that now, with only one directive (upload_progress_json_output), the responses are sent as pure JSON and not as the JavaScript mix they used to be.
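
As a minimal sketch (the location block and the proxied zone name are assumptions, not taken from the release), enabling it in the progress probe location looks like this:

location ^~ /progress {
    # report uploads tracked in the 'proxied' upload_progress zone
    report_uploads proxied;
    # respond with plain JSON instead of the JavaScript object wrapper
    upload_progress_json_output;
}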

Another cool feature is the possibility to use templates to send progress information. That means with a simple configuration change nginx can now return XML:

upload_progress_content_type 'text/xml';
upload_progress_template starting '<upload><state>starting</state></upload>';
upload_progress_template uploading '<upload><state>uploading</state><size>$uploadprogress_length</size><uploaded>$uploadprogress_received</uploaded></upload>';
upload_progress_template done '<upload><state>done</state></upload>';
upload_progress_template error '<upload><state>error</state></upload>$uploadprogress_status';

Refer to the README in the distribution for more information.

How do you get it?

Easy: download the tarball from the Nginx upload progress module GitHub repository download section.

How can I use it?

Normally you have to use your own client code to display the progress bar and contact nginx to get the progress information.

But some nice people have created various JavaScript libraries that do this for you:

Happy uploads!

MySQL InnoDB and table renaming don’t play well…

4 minute read

At Days of Wonder we are huge fans of MySQL (and, for about a year now, of the various Open Query, Percona, Google and other community patches), up to the point that we're using MySQL for just about everything in production.

But since we moved to 5.0, about three years ago, our production databases, which hold our website and online game systems, have had a peculiar issue: the mysqld process uses more and more RAM, up to the point where the kernel OOM killer decides to kill the process.

You’d certainly think we are complete morons because we didn’t do anything in the last 3 years to fix the issue :-)

Unfortunately, I never could replicate the issue in the lab, mainly because it is difficult to reproduce the exact same load the production servers see (especially the online game activity).

During those 3 years, I tried everything I could, from using other allocators, valgrind, debug builds and so on, without any success.

What is nice is that we moved to an OurDelta build about a year ago, in which InnoDB is able to print more memory statistics than the stock MySQL version.

For instance it shows:

Internal hash tables (constant factor + variable factor)
    Adaptive hash index 1455381240      (118999688 + 1336381552)
    Page hash           7438328
    Dictionary cache    281544240       (89251896 + 192292344)
    File system         254712  (82672 + 172040)
    Lock system         18597112        (18594536 + 2576)
    Recovery system     0       (0 + 0)
    Threads             408056  (406936 + 1120)
    innodb_io_pattern   0       (0 + 0)
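
(For reference, this is part of the extended memory section those patched builds add to the InnoDB status report; you can pull the full report with SHOW ENGINE INNODB STATUS\G, assuming an account with the required privileges.)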

Several months ago, I analyzed this output just to see which figures were growing, and found that the variable part of the Dictionary cache was increasing (slowly but definitely).

Surely fine MySQL experts would have been able to tell me exactly what, when and where the problem was, but since I'm not familiar with the code-base, I looked up what this number was and where it was increased (all in dict0dict.c), and added some logging each time it was increased.

I then ran this version for quite a long time (just to check it wouldn't crash in production) on a slave server. But this server didn't print anything interesting, because it doesn't see the exact same load as the production masters.

A couple of months after that, I moved this code to one of the masters and bingo! I found the operation and the tables exhibiting the increase:

mysqld[8131]: InnoDB: dict_table_rename_in_cache production/rank_tmp2 193330680 + 8112
mysqld[8131]: InnoDB: dict_table_rename_in_cache production/rank 193338792 + 8112

As soon as I saw the operation and the table (i.e. rank), I knew what the culprit was. We have a daemon that computes the player ranks for our online games every 10 seconds.

To do this, we’re using the following pattern:

-- compute the ranks
SELECT NULL, playerID
FROM game_score as g
ORDER BY g.rankscore DESC
INTO OUTFILE "/tmp/rank_tmp.tmp"

-- load back the scores
LOAD DATA INFILE "/tmp/rank_tmp.tmp" INTO TABLE rank_tmp

-- swap tables so that clients see the new ranks atomically
RENAME TABLE rank TO rank_tmp2 , rank_tmp TO rank, rank_tmp2 TO rank_tmp

-- truncate the old ranks for a new pass
TRUNCATE TABLE rank_tmp

-- go back to the select above

You might ask why I'm using such a convoluted system, especially the SELECT INTO OUTFILE and the LOAD DATA. It's just because INSERT … SELECT with InnoDB and the binlog enabled can produce transaction aborts (which we were getting tons of).

Back to the original issue, apparently the issue lies in the RENAME part of the daemon.

Looking at the dict_table_rename_in_cache function in dict0dict.c, we see:


ibool
dict_table_rename_in_cache(...)
...
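  /* note: both the old and the new name are duplicated into the
     table's own heap, which only grows until the table is dropped */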
  old_name = mem_heap_strdup(table->heap, table->name);
  table->name = mem_heap_strdup(table->heap, new_name);
...
}

Looking at the mem_heap code, I discovered that each table has an associated heap in which InnoDB allocates various things. This heap can only grow (in blocks of 8112 bytes, it seems), since the allocator is not a real one; this is done for performance reasons.

So each time we rename a table, the old name (why? it is already allocated) is duplicated, along with the new name. Every single time.
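
To give an order of magnitude (rough arithmetic, assuming each rename of each table burns one 8112-byte heap block, as the log lines above suggest): the two renames logged above happen every 10 seconds, so they account for roughly 2 × 8112 bytes × 8640 cycles a day, i.e. on the order of 140 MB of dictionary cache growth per day.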

This heap is freed when the table is dropped, so there is a way to reclaim the used memory. That means this issue is not a memory leak per se.

By the way, I've filed this bug in the MySQL bug system.

One work-around, besides fixing the code itself, would be to drop the rank table instead of truncating it. The issue with dropping and creating InnoDB tables at a fast pace is that the dictionary cache itself will grow, because it too can only grow: there is no way to purge old tables from it (except by running one of the Percona patches). So the more tables we create, the more memory we'll use. Back to square one, but worse.

So right now, I don't really have any idea how to really fix the issue. If anyone has an idea, please do not hesitate to comment on this blog post :-)

And please, don’t tell me to move to MyISAM…