One of the most painful troubleshooting tasks with MySQL is troubleshooting memory usage. The problem usually starts like this: you have configured MySQL with reasonable global buffers, such as
innodb_buffer_pool_size, key_buffer_size etc., you have a reasonable number of connections, and yet MySQL takes much more memory than you would expect, causing swapping or other problems.
This problem, simple on the surface, becomes a challenge with MySQL because there are no clear resource usage metrics available, so in most cases you do not know where exactly memory is allocated. This was not much of a problem in MySQL 3.23, when there were only a handful of places where memory could be allocated, but it is a much larger problem in MySQL 5.5 with the addition of user variables, stored procedures, prepared statements etc., each of which can be a memory hog.
My intent with this post is twofold. I would encourage the MySQL team at Oracle, the MariaDB team and the Drizzle team to take a look at solving this problem. You will be thanked by a lot of people running MySQL in the wild. I also wanted to share some troubleshooting techniques I use.
Plot Memory Usage
First I would like to see MySQL memory consumption plotted. I use the “VSZ” column from “ps” output on Linux. It helps me to understand how the memory allocation happens – does it grow slowly and never stop, which would correspond to a memory or resource leak, or does it spike at certain times which I may be able to correlate to some events?
If you can’t get a fancy graph quickly, you can use this simple script:
while true
do
    date >> ps.log
    ps aux | grep mysqld >> ps.log
    sleep 60
done
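If a proper graphing tool is not at hand, a rough sketch like the following (assuming ps.log was produced by the loop above) pulls out each timestamp together with the VSZ column, so you can eyeball the trend or feed it to a plotting tool:

# Remember the last date line, then print it next to the VSZ value
# (5th field of "ps aux") for each mysqld line; skip the grep process itself.
awk '!/mysqld/ { ts = $0 }
     /mysqld/ && !/grep/ { print ts, $5 }' ps.log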
Check for Table Cache Related Allocations
There are cases when MySQL will allocate a lot of memory for the table cache, especially if you’re using large blobs. It is easy to check though: run “FLUSH TABLES” and see whether memory usage goes down. Note though that, because of how memory is allocated from the OS, you might not see “VSZ” going down. What you might see instead is that flushing tables regularly, or reducing the table cache, keeps memory consumption within reason.
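For example, one crude way to test this theory from the shell might look like this (connection options for the mysql client are omitted, and as noted above VSZ may not actually shrink even when the memory is freed):

# Record mysqld size, flush the table cache, then check the size again.
ps -o vsz,rss -C mysqld
mysql -e "FLUSH TABLES"
sleep 5
ps -o vsz,rss -C mysqld
# How many tables are currently held open in the table cache.
mysql -e "SHOW GLOBAL STATUS LIKE 'Open_tables'"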
Connection Related Allocations
Another set of buffers corresponds to connections – orphaned prepared statements, user variables, and a huge network buffer (which can grow up to max_allowed_packet per connection) are all connection buffers, and so if you close the connection MySQL can clean them up. Killing connections (or stopping the related applications) and observing whether memory usage shrinks can be very helpful in diagnostics. If you have many applications it might make sense to shut them down one by one, so it is easier to understand which of the applications is responsible for the memory allocation.
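If stopping the application itself is not convenient, you can kill its connections from the server side instead. A rough sketch only: 'app_user' is a placeholder for whatever account the suspect application uses, and the id passed to KILL has to come from the processlist output:

# List connections belonging to the suspect application, then kill them
# one by one while watching whether mysqld memory usage shrinks.
mysql -e "SELECT id, user, host, command, time
          FROM information_schema.processlist
          WHERE user = 'app_user'"
mysql -e "KILL 7234"   # repeat for each connection id from the list above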
If you have figured out it is related to connections, and have identified the application which causes the excessive memory usage, you can look at how it uses MySQL to identify potential causes. Is it working with large blobs? Using user variables? Prepared statements? MEMORY tables? In a lot of cases you have to guess and test, as you can’t get information on how much memory a connection uses and for which purposes it is allocated. Reducing the various per-connection variables is one way to test it, though it does not cover everything. For prepared statements you might want to look at Prepared_stmt_count to see how many prepared statements are allocated on the server, and at Com_stmt_send_long_data to see whether the “send long data” feature is used, as it can increase the amount of resources prepared statements take on the server. There are no comparable variables showing how many user variables are allocated (and how much memory they use).
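Checking those two counters is straightforward, for example:

# How many prepared statements currently exist server-wide, and whether the
# "send long data" protocol feature has been used.
mysql -e "SHOW GLOBAL STATUS LIKE 'Prepared_stmt_count'"
mysql -e "SHOW GLOBAL STATUS LIKE 'Com_stmt_send_long_data'"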
Memory Tables
MEMORY tables can take memory too. There are implicit MEMORY tables which are allocated for query execution, whose size can be controlled by tmp_table_size, and which also only exist for the duration of query execution, so they are usually easy to catch. There are also explicit MEMORY tables you can create, both permanent and temporary. There is a max_heap_table_size variable which allows you to limit the size of MEMORY tables (the limit applies both to implicit and explicit ones), but as there is no control over how many tables an application can create, it does not really allow you to restrict memory usage. For permanent tables it is easy: we can look at information_schema to see how much memory is being used by the current MEMORY tables:
mysql> select sum(data_length+index_length) from information_schema.tables where engine='memory';
+-------------------------------+
| sum(data_length+index_length) |
+-------------------------------+
|                        126984 |
+-------------------------------+
1 row in set (0.98 sec)

This however will not cover TEMPORARY tables some of your connections might have created and forgotten to remove (or still use for processing). Of course you will see these tables go away if you close the connection. In Percona Server you can do better, as you can query temporary tables too:
mysql> select sum(data_length+index_length) from information_schema.global_temporary_tables where engine='memory';
+-------------------------------+
| sum(data_length+index_length) |
+-------------------------------+
|                        126984 |
+-------------------------------+
1 row in set (0.00 sec)

You can even go deeper and check which sessions have created which temporary tables (both in memory and not):
mysql> select * from information_schema.global_temporary_tables \G
*************************** 1. row ***************************
    SESSION_ID: 7234
  TABLE_SCHEMA: test
    TABLE_NAME: my
        ENGINE: InnoDB
          NAME: #sql516_1c42_2
    TABLE_ROWS: 0
AVG_ROW_LENGTH: 0
   DATA_LENGTH: 16384
  INDEX_LENGTH: 0
   CREATE_TIME: NULL
   UPDATE_TIME: NULL
*************************** 2. row ***************************
    SESSION_ID: 7234
  TABLE_SCHEMA: test
    TABLE_NAME: tmp
        ENGINE: MEMORY
          NAME: #sql516_1c42_1
    TABLE_ROWS: 2
AVG_ROW_LENGTH: 257
   DATA_LENGTH: 126984
  INDEX_LENGTH: 0
   CREATE_TIME: 2012-03-21 05:52:02
   UPDATE_TIME: NULL
*************************** 3. row ***************************
    SESSION_ID: 7231
  TABLE_SCHEMA: test
    TABLE_NAME: z
        ENGINE: InnoDB
          NAME: #sql516_1c3f_0
    TABLE_ROWS: 0
AVG_ROW_LENGTH: 0
   DATA_LENGTH: 16384
  INDEX_LENGTH: 0
   CREATE_TIME: NULL
   UPDATE_TIME: NULL
3 rows in set (0.00 sec)
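For the implicit temporary tables created during query execution there is no direct memory figure either, but the status counters at least show how many are being created and how many spill to disk, and the variables show which size limits are in effect. A quick check might look like this:

# How many implicit temporary tables have been created, and how many of them
# ended up on disk instead of in memory.
mysql -e "SHOW GLOBAL STATUS LIKE 'Created_tmp%tables'"
# The per-table size limits currently in effect.
mysql -e "SHOW GLOBAL VARIABLES LIKE 'tmp_table_size'"
mysql -e "SHOW GLOBAL VARIABLES LIKE 'max_heap_table_size'"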
Innodb Memory Usage
Finally, it is often helpful to check how much memory Innodb has allocated. In fact this is often one of the first things I do, as it is the least intrusive. Run SHOW ENGINE INNODB STATUS and look for the memory information block, which can look like this:
----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 132183490560; in additional pool allocated 0
Internal hash tables (constant factor + variable factor)
    Adaptive hash index 4422068288  (2039977928 + 2382090360)
    Page hash           127499384
    Dictionary cache    512619219   (509995888 + 2623331)
    File system         294352      (82672 + 211680)
    Lock system         318875832   (318747272 + 128560)
    Recovery system     0           (0 + 0)
    Threads             425080      (406936 + 18144)
Dictionary memory allocated 2623331
Buffer pool size        7864319
Buffer pool size, bytes 128849002496
Free buffers            1
Database pages          8252672
Old database pages      3046376
Modified db pages       23419

I’m using the information from Percona Server again, which provides a lot more insight into how memory is used. I would note though that the output is a bit confusing, as “Total memory allocated” is really not the total any more: Innodb has moved to allocating memory directly from the operating system rather than from the additional memory pool (hence the 0). Such allocations are not seen in the “Total memory allocated” line, yet they are seen in the lines below it. The most likely cause of runaway Innodb memory allocation is the dictionary cache, though there could be other reasons too.
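The buffer pool part of that picture can be cross-checked against the global status counters, which are available on stock MySQL as well, for example:

# Buffer pool usage expressed in pages (multiply by the page size, 16KB by
# default, to get bytes); useful to see how much of the pool is actually used.
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%'"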
Memory Leaks
There are many kinds of memory leaks, and I would say they are rather rare in MySQL; most suspected memory leaks end up being some runaway resource usage, though real leaks can happen. Some memory leaks happen per connection, and they will be gone when the connection is closed; others correspond to global memory allocations and will result in increased memory usage until the server is restarted. I would suspect a memory leak when you see memory usage growing which can’t be connected to any known resource use. For example, for global memory leaks you would see memory usage continue to grow even if you close connections and flush tables regularly. Another common thing about memory leaks is that, because the leaked memory is allocated and then forgotten about, unless it is in very small blocks it should simply be swapped out and never needed again. So if you see swap file used space gradually growing, and there are “swap outs” but a lot fewer “swap ins”, chances are it is caused by a memory leak.
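A simple way to watch for that pattern on Linux is vmstat, where the “si” and “so” columns show pages swapped in and out:

# Report swap activity every 60 seconds; a steady stream of swap-outs ("so")
# with very few swap-ins ("si") while swap usage keeps growing is the
# pattern described above.
vmstat 60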
Dealing with memory leaks is rather complicated, as good tools for detecting memory leaks, such as valgrind, are too slow to run in production. So the best thing to do in this case is to see whether you can create an isolated, repeatable test case based on your application which illustrates the memory leak, so that it can be found and fixed. This is where your MySQL Support contract can be handy.
Conclusion
Understanding where MySQL can allocate memory helps us find the cause in most cases. It is not as straightforward as it should be, and I’m very hopeful future releases of MySQL, MariaDB or Drizzle will bring improvements in this space, allowing us to see directly for what purpose memory is allocated and so detect all kinds of memory usage problems more easily.