This morning I got a message that the bug I reported back in early 2006 (while still working at MySQL, running MySQL 4.1 at the time) has been fixed in MySQL 5.6.6. I honestly thought this issue had been fixed long ago, as it was indeed pretty annoying. I must say I'm very impressed with the Oracle team going back and cleaning up such very old bugs. Here is a description from the bug:
If you match a constant which is too large for the column,
instead of simply responding with an empty set MySQL truncates the
constant, performs the lookup, and only then discards the results:
CREATE TABLE `trunc` (
`i` int(11) NOT NULL default '0',
KEY `i` (`i`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
mysql> select i,count(*) cnt from trunc group by i order by cnt desc limit 10;
+------------+-------+
| i          | cnt   |
+------------+-------+
| 2147483647 | 76047 |
| 1421638051 |     3 |
|  985505567 |     3 |
| 1046160975 |     2 |
|  141017389 |     2 |
|  848130626 |     2 |
|  888665819 |     2 |
| 1001437915 |     2 |
|  118824892 |     2 |
| 2104712727 |     2 |
+------------+-------+
10 rows in set (0.34 sec)
(Just some random data. The only row we really need is the one with
2147483647.)
mysql> explain select count(*) from trunc where i=4147483647;
+----+-------------+-------+------+---------------+------+---------+-------+-------+--------------------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref   | rows  | Extra                    |
+----+-------------+-------+------+---------------+------+---------+-------+-------+--------------------------+
|  1 | SIMPLE      | trunc | ref  | i             | i    | 4       | const | 81602 | Using where; Using index |
+----+-------------+-------+------+---------------+------+---------+-------+-------+--------------------------+
1 row in set, 1 warning (0.01 sec)
4bil is out of range for the signed INT column, so I would expect "Impossible WHERE" here.
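The "1 warning" on that EXPLAIN is where the truncation shows up. A quick way to look at it (illustrative; the exact warning text varies between server versions):

mysql> show warnings;
# run right after the EXPLAIN above; expect a note about the out-of-range
# constant 4147483647 being adjusted for column `i`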
Let's look at the query execution, checking the Handler_read_next status counter before and after running the query:
| Handler_read_next | 1305742982 |
mysql> select count(*) from trunc where i=4147483647;
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set, 1 warning (0.04 sec)
| Handler_read_next | 1305819030 |
As you can see, roughly 76,000 row reads were performed (the difference between the two Handler_read_next values), even though the query returned no matching rows.
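If you want to reproduce the measurement yourself, a minimal sketch (assuming a dedicated session, so other traffic does not skew the counters) is to reset the session status counters and check Handler_read_next after the query:

mysql> flush status;
mysql> select count(*) from trunc where i=4147483647;
mysql> show session status like 'Handler_read_next';
# on an unfixed server the counter jumps by roughly the number of rows
# stored with the truncated value 2147483647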
I came across this bug in a real application which used multiple tables where the column type was inconsistent between them: inserts into one table would store the correct value, such as 3000000000, while inserting the same value into another table would truncate it to 2147483647. This caused a lot of rows to end up with the value 2147483647 very quickly, and select queries for values over 2bil became increasingly expensive. Because more and more queries targeted values over 2bil as data continued to be inserted, the system essentially collapsed in a matter of hours.
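Here is a minimal sketch of that failure mode (hypothetical table and column names, run with the default non-strict sql_mode): the same value is stored correctly in a BIGINT column but silently clipped in a signed INT column.

CREATE TABLE events_big   (user_id BIGINT UNSIGNED NOT NULL, KEY (user_id));
CREATE TABLE events_small (user_id INT NOT NULL, KEY (user_id));

INSERT INTO events_big   VALUES (3000000000);   -- stored as 3000000000
INSERT INTO events_small VALUES (3000000000);   -- clipped to 2147483647, only a warning

SELECT user_id FROM events_big;    -- returns 3000000000
SELECT user_id FROM events_small;  -- returns 2147483647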
Thank you, Oracle team, for finally having this bug fixed (even though you're about 6 years past my expectations when I reported it). Here are also a couple of pieces of advice to prevent problems like this from happening:
Use consistent data types. Make sure you're using consistent data types for the same values. Often it is better to standardize on a few types rather than make the decision in every single case; I, for example, use "int unsigned not null" for positive integers that are not overly large.
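For instance, one way to apply this (a sketch with hypothetical table and column names) is to use the exact same definition for the shared value in every table:

CREATE TABLE orders      (user_id INT UNSIGNED NOT NULL, KEY (user_id));
CREATE TABLE order_items (user_id INT UNSIGNED NOT NULL, KEY (user_id));
# same definition everywhere: both columns accept 0 to 4294967295,
# so any value accepted by one table is accepted by the other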
Consider using strict mode. Storing different data than what your application requested, without throwing an error, is not a good idea in most cases, yet MySQL continues to be very loose by default. Consider running with sql_mode=strict_all_tables, which will report an error when data is truncated, together with using transactional tables for any data you value.
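Using the hypothetical events_small table from the sketch above, here is the difference strict mode makes (the exact error text may differ slightly between versions):

mysql> set session sql_mode = 'STRICT_ALL_TABLES';
mysql> insert into events_small values (3000000000);
ERROR 1264 (22003): Out of range value for column 'user_id' at row 1
# without strict mode the same insert succeeds with only a warning
# and silently stores 2147483647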