You can use a variable; it's far faster than any join:
SELECT id, size, @total := @total + size AS cumulativeSize
FROM table1, (SELECT @total := 0) AS t;
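Conceptually, the variable just carries a running total from one row to the next. The same logic, written out procedurally (an illustrative Python sketch, not part of the MySQL answer; the function name and row tuples are invented for the example):

```python
def cumulative_sizes(rows):
    """Emulate @total := @total + size across an ordered result set.

    rows: iterable of (id, size) tuples, in the order MySQL scans them.
    Returns (id, size, cumulativeSize) tuples.
    """
    total = 0
    result = []
    for row_id, size in rows:
        total += size  # same effect as @total := @total + size
        result.append((row_id, size, total))
    return result

# First rows of the 20,000-row test case below:
print(cumulative_sizes([(1, 226), (2, 869), (3, 668), (4, 733)]))
# → [(1, 226, 226), (2, 869, 1095), (3, 668, 1763), (4, 733, 2496)]
```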
Here's a quick test case on a Pentium III with 128MB RAM running Debian 5.0:
Create the table:
DROP TABLE IF EXISTS `table1`;

CREATE TABLE `table1` (
    `id` int(11) NOT NULL auto_increment,
    `size` int(11) NOT NULL,
    PRIMARY KEY (`id`)
) ENGINE=InnoDB;
Fill it with 20,000 random numbers:
DELIMITER //
DROP PROCEDURE IF EXISTS autofill//
CREATE PROCEDURE autofill()
BEGIN
    DECLARE i INT DEFAULT 0;
    WHILE i < 20000 DO
        INSERT INTO table1 (size) VALUES (FLOOR((RAND() * 1000)));
        SET i = i + 1;
    END WHILE;
END;
//
DELIMITER ;

CALL autofill();
Check the row count:
SELECT COUNT(*) FROM table1;

+----------+
| COUNT(*) |
+----------+
|    20000 |
+----------+
Run the cumulative total query:
SELECT id, size, @total := @total + size AS cumulativeSize
FROM table1, (SELECT @total := 0) AS t;

+-------+------+----------------+
| id    | size | cumulativeSize |
+-------+------+----------------+
|     1 |  226 |            226 |
|     2 |  869 |           1095 |
|     3 |  668 |           1763 |
|     4 |  733 |           2496 |
...
| 19997 |  966 |       10004741 |
| 19998 |  522 |       10005263 |
| 19999 |  713 |       10005976 |
| 20000 |    0 |       10005976 |
+-------+------+----------------+
20000 rows in set (0.07 sec)
Update
I'd missed the grouping by groupId in the original question, and that certainly made things a little trickier. I then wrote a solution which used a temporary table, but I didn't like it: it was messy and overly complicated. I went away, did some more research, and came up with something far simpler and faster.
I can't claim all the credit for this; in fact, I can barely claim any at all, since it's just a modified version of Emulate row number from Common MySQL Queries.
It's beautifully simple, elegant, and very quick:
SELECT fileInfoId, groupId, name, size, cumulativeSize
FROM (
    SELECT
        fileInfoId, groupId, name, size,
        @cs := IF(@prev_groupId = groupId, @cs + size, size) AS cumulativeSize,
        @prev_groupId := groupId AS prev_groupId
    FROM fileInfo, (SELECT @prev_groupId := 0, @cs := 0) AS vars
    ORDER BY groupId
) AS tmp;
You can remove the outer SELECT ... AS tmp if you don't mind the prev_groupId column being returned. I found that it ran marginally faster without it.
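The key line is the IF(): while the current row's groupId matches the previous row's, the running total keeps accumulating; as soon as the group changes, it resets to the current size. The same logic written out procedurally (an illustrative Python sketch; the function name and sample tuples are invented for the example):

```python
def grouped_cumulative(rows):
    """Emulate @cs := IF(@prev_groupId = groupId, @cs + size, size).

    rows: (fileInfoId, groupId, name, size) tuples.
    """
    prev_group = None
    cs = 0
    result = []
    # ORDER BY groupId puts each group's rows next to each other,
    # which is what makes the previous-row comparison work.
    for file_id, group_id, name, size in sorted(rows, key=lambda r: r[1]):
        cs = cs + size if group_id == prev_group else size
        prev_group = group_id
        result.append((file_id, group_id, name, size, cs))
    return result

rows = [(1, 3, 'name0', 10), (8, 1, 'name3', 10), (9, 1, 'name4', 10)]
print(grouped_cumulative(rows))
# → [(8, 1, 'name3', 10, 10), (9, 1, 'name4', 10, 20), (1, 3, 'name0', 10, 10)]
```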
Here's a simple test case:
INSERT INTO `fileInfo` VALUES
    ( 1, 3, 'name0', '10'),
    ( 5, 3, 'name1', '10'),
    ( 7, 3, 'name2', '10'),
    ( 8, 1, 'name3', '10'),
    ( 9, 1, 'name4', '10'),
    (10, 2, 'name5', '10'),
    (12, 4, 'name6', '10'),
    (20, 4, 'name7', '10'),
    (21, 4, 'name8', '10'),
    (25, 5, 'name9', '10');

SELECT fileInfoId, groupId, name, size, cumulativeSize
FROM (
    SELECT
        fileInfoId, groupId, name, size,
        @cs := IF(@prev_groupId = groupId, @cs + size, size) AS cumulativeSize,
        @prev_groupId := groupId AS prev_groupId
    FROM fileInfo, (SELECT @prev_groupId := 0, @cs := 0) AS vars
    ORDER BY groupId
) AS tmp;

+------------+---------+-------+------+----------------+
| fileInfoId | groupId | name  | size | cumulativeSize |
+------------+---------+-------+------+----------------+
|          8 |       1 | name3 |   10 |             10 |
|          9 |       1 | name4 |   10 |             20 |
|         10 |       2 | name5 |   10 |             10 |
|          1 |       3 | name0 |   10 |             10 |
|          5 |       3 | name1 |   10 |             20 |
|          7 |       3 | name2 |   10 |             30 |
|         12 |       4 | name6 |   10 |             10 |
|         20 |       4 | name7 |   10 |             20 |
|         21 |       4 | name8 |   10 |             30 |
|         25 |       5 | name9 |   10 |             10 |
+------------+---------+-------+------+----------------+
And here are the last few rows from a 20,000-row table:
|      19481 |     248 | 8CSLJX22RCO | 1037469 |       51270389 |
|      19486 |     248 | 1IYGJ1UVCQE |  937150 |       52207539 |
|      19817 |     248 | 3FBU3EUSE1G |  616614 |       52824153 |
|      19871 |     248 | 4N19QB7PYT  |  153031 |       52977184 |
|        132 |     249 | 3NP9UGMTRTD |  828073 |         828073 |
|        275 |     249 | 86RJM39K72K |  860323 |        1688396 |
|        802 |     249 | 16Z9XADLBFI |  623030 |        2311426 |
...
|      19661 |     249 | ADZXKQUI0O3 |  837213 |       39856277 |
|      19870 |     249 | 9AVRTI3QK6I |  331342 |       40187619 |
|      19972 |     249 | 1MTAEE3LLEM | 1027714 |       41215333 |
+------------+---------+-------------+---------+----------------+
20000 rows in set (0.31 sec)
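As an aside, MySQL 8.0+ can produce the same result without user variables at all, using a window function: SUM(size) OVER (PARTITION BY groupId ORDER BY fileInfoId). A runnable sketch of that partitioned running total (using Python's sqlite3 here only because SQLite also supports SUM() OVER; the column names match the fileInfo table above, and the sample rows are from the test case):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fileInfo (fileInfoId INT, groupId INT, name TEXT, size INT)")
conn.executemany(
    "INSERT INTO fileInfo VALUES (?, ?, ?, ?)",
    [(8, 1, "name3", 10), (9, 1, "name4", 10), (10, 2, "name5", 10)],
)

# SUM() OVER (PARTITION BY ...) restarts the running total per group,
# just like the @prev_groupId comparison does with user variables.
rows = conn.execute("""
    SELECT fileInfoId, groupId, name, size,
           SUM(size) OVER (PARTITION BY groupId ORDER BY fileInfoId) AS cumulativeSize
    FROM fileInfo
    ORDER BY groupId, fileInfoId
""").fetchall()
for row in rows:
    print(row)
# → (8, 1, 'name3', 10, 10)
#   (9, 1, 'name4', 10, 20)
#   (10, 2, 'name5', 10, 10)
```

Unlike the user-variable approach, the window-function version has well-defined evaluation order, so it is the safer choice on servers that support it.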