PostgreSQL query runs faster with an index scan, but the engine chooses a hash join


Overview. The query:

SELECT "replays_game".*
FROM "replays_game"
INNER JOIN "replays_playeringame"
    ON "replays_game"."id" = "replays_playeringame"."game_id"
WHERE "replays_playeringame"."player_id" = 50027

If I set SET enable_seqscan = off, then it does the fast thing, which is this:

                                                                     QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=0.00..27349.80 rows=3395 width=72) (actual time=28.726..65.056 rows=3398 loops=1)
   ->  Index Scan using replays_playeringame_player_id on replays_playeringame  (cost=0.00..8934.43 rows=3395 width=4) (actual time=0.019..2.412 rows=3398 loops=1)
         Index Cond: (player_id = 50027)
   ->  Index Scan using replays_game_pkey on replays_game  (cost=0.00..5.41 rows=1 width=72) (actual time=0.017..0.017 rows=1 loops=3398)
         Index Cond: (id = replays_playeringame.game_id)
 Total runtime: 65.437 ms

But without the dreaded enable_seqscan hack, it chooses to do the slower thing:

                                                                     QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=7330.18..18145.24 rows=3395 width=72) (actual time=92.380..535.422 rows=3398 loops=1)
   Hash Cond: (replays_playeringame.game_id = replays_game.id)
   ->  Index Scan using replays_playeringame_player_id on replays_playeringame  (cost=0.00..8934.43 rows=3395 width=4) (actual time=0.020..2.899 rows=3398 loops=1)
         Index Cond: (player_id = 50027)
   ->  Hash  (cost=3668.08..3668.08 rows=151208 width=72) (actual time=90.842..90.842 rows=151208 loops=1)
         Buckets: 1024  Batches: 32 (originally 16)  Memory Usage: 1025kB
         ->  Seq Scan on replays_game  (cost=0.00..3668.08 rows=151208 width=72) (actual time=0.020..29.061 rows=151208 loops=1)
 Total runtime: 535.821 ms

Here are the relevant indexes:

Index "public.replays_game_pkey"
 Column |  Type   | Definition
--------+---------+------------
 id     | integer | id
primary key, btree, for table "public.replays_game"

Index "public.replays_playeringame_player_id"
  Column   |  Type   | Definition
-----------+---------+------------
 player_id | integer | player_id
btree, for table "public.replays_playeringame"

So my question is: what am I doing wrong, or why is Postgres mis-estimating the relative costs of the two join strategies? I can see from the cost estimates that it thinks the hash join will be cheaper, yet its estimate of the index join's cost is off by a factor of roughly 500 relative to the actual runtime.

How can I give Postgres better clues? I did run a VACUUM ANALYZE immediately before running all of the above.
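For context, one standard way to hand the planner more information (a sketch under assumed default settings; I have not confirmed it changes the plan for this schema) is to raise the per-column statistics target on the skewed column and re-analyze:

```sql
-- Hypothetical tuning sketch: collect more detailed statistics for player_id,
-- so unusually heavy players can appear in the most-common-values list.
ALTER TABLE replays_playeringame
    ALTER COLUMN player_id SET STATISTICS 1000;  -- default target is 100

-- Rebuild the statistics so the planner sees the new detail.
ANALYZE replays_playeringame;
```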

Interestingly, if I run this query for a player with a smaller game count, Postgres chooses the index-scan + nested-loop plan. So something about the large game count triggers this undesired behavior, where the relative estimated costs diverge from the relative actual costs.
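One way to inspect what the planner actually knows about that skew (a diagnostic sketch, assuming the standard pg_stats view and the table/column names used above):

```sql
-- Show the planner's column statistics for player_id: a heavy player should
-- appear in most_common_vals with its sampled frequency in most_common_freqs.
SELECT n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'replays_playeringame'
  AND attname = 'player_id';
```

If the heavy player is absent from most_common_vals, the planner is estimating its row count from the average frequency, which would explain plans tuned for a typical player rather than this one.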

Finally, should I even be using Postgres? I don't want to become an expert in database tuning, so I'm looking for a database that performs reasonably well with a conscientious developer's level of attention, as opposed to a dedicated DBA's. I'm afraid that if I stick with Postgres I'll face a steady stream of issues like this that will force me to become a Postgres expert; maybe another DB would be more forgiving of a more casual approach.

A Postgres expert (RhodiumToad) reviewed my full database settings (http://pastebin.com/77QuiQSp) and recommended set cpu_tuple_cost = 0.1. That gave a dramatic speedup: http://pastebin.com/nTHvSHVd
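For reference, that setting can be tried per-session before committing to it; a sketch (ALTER SYSTEM assumes PostgreSQL 9.4 or later and superuser rights):

```sql
-- Try the recommendation for the current session only:
SET cpu_tuple_cost = 0.1;

-- If it helps, persist it server-wide and reload the configuration:
ALTER SYSTEM SET cpu_tuple_cost = 0.1;
SELECT pg_reload_conf();
```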

Alternatively, switching to MySQL also solved the problem nicely. I have default installs of both MySQL and Postgres on my OS X box, and MySQL is 2x faster, comparing queries that have been "warmed up" by executing them repeatedly. On "cold" queries, i.e. the first execution of a given query, MySQL is 5 to 150 times faster. Cold-query performance is quite important for my particular application.

As far as I'm concerned, the big question is still open: will Postgres require more fiddling and configuration to run well than MySQL? For example, consider that none of the suggestions offered by the commenters here worked.

My guess is that you are using the default random_page_cost = 4, which is too high, making index scans look too expensive.

I tried to reconstruct the two tables with this script:

CREATE TABLE replays_game (
    id integer NOT NULL,
    PRIMARY KEY (id)
);

CREATE TABLE replays_playeringame (
    player_id integer NOT NULL,
    game_id integer NOT NULL,
    PRIMARY KEY (player_id, game_id),
    CONSTRAINT replays_playeringame_game_fkey
        FOREIGN KEY (game_id) REFERENCES replays_game (id)
);

CREATE INDEX ix_replays_playeringame_game_id
    ON replays_playeringame (game_id);

-- 150k games
INSERT INTO replays_game
SELECT generate_series(1, 150000);

-- ~150k players, ~2 games each
INSERT INTO replays_playeringame
SELECT trunc(random() * 149999 + 1), generate_series(1, 150000);

INSERT INTO replays_playeringame
SELECT *
FROM (
    SELECT
        trunc(random() * 149999 + 1) AS player_id,
        generate_series(1, 150000) AS game_id
) AS t
WHERE NOT EXISTS (
    SELECT 1
    FROM replays_playeringame
    WHERE t.player_id = replays_playeringame.player_id
      AND t.game_id = replays_playeringame.game_id
);

-- the heavy player with 3000 games
INSERT INTO replays_playeringame
SELECT 999999, generate_series(1, 3000);

With the default value of 4:

game=# set random_page_cost = 4;
SET
game=# explain analyse SELECT "replays_game".* FROM "replays_game" INNER JOIN "replays_playeringame" ON "replays_game"."id" = "replays_playeringame"."game_id" WHERE "replays_playeringame"."player_id" = 999999;
                                                                     QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=1483.54..4802.54 rows=3000 width=4) (actual time=3.640..110.212 rows=3000 loops=1)
   Hash Cond: (replays_game.id = replays_playeringame.game_id)
   ->  Seq Scan on replays_game  (cost=0.00..2164.00 rows=150000 width=4) (actual time=0.012..34.261 rows=150000 loops=1)
   ->  Hash  (cost=1446.04..1446.04 rows=3000 width=4) (actual time=3.598..3.598 rows=3000 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 106kB
         ->  Bitmap Heap Scan on replays_playeringame  (cost=67.54..1446.04 rows=3000 width=4) (actual time=0.586..2.041 rows=3000 loops=1)
               Recheck Cond: (player_id = 999999)
               ->  Bitmap Index Scan on replays_playeringame_pkey  (cost=0.00..66.79 rows=3000 width=0) (actual time=0.560..0.560 rows=3000 loops=1)
                     Index Cond: (player_id = 999999)
 Total runtime: 110.621 ms

After lowering it to 2:

game=# set random_page_cost = 2;
SET
game=# explain analyse SELECT "replays_game".* FROM "replays_game" INNER JOIN "replays_playeringame" ON "replays_game"."id" = "replays_playeringame"."game_id" WHERE "replays_playeringame"."player_id" = 999999;
                                                                  QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=45.52..4444.86 rows=3000 width=4) (actual time=0.418..27.741 rows=3000 loops=1)
   ->  Bitmap Heap Scan on replays_playeringame  (cost=45.52..1424.02 rows=3000 width=4) (actual time=0.406..1.502 rows=3000 loops=1)
         Recheck Cond: (player_id = 999999)
         ->  Bitmap Index Scan on replays_playeringame_pkey  (cost=0.00..44.77 rows=3000 width=0) (actual time=0.388..0.388 rows=3000 loops=1)
               Index Cond: (player_id = 999999)
   ->  Index Scan using replays_game_pkey on replays_game  (cost=0.00..0.99 rows=1 width=4) (actual time=0.006..0.006 rows=1 loops=3000)
         Index Cond: (id = replays_playeringame.game_id)
 Total runtime: 28.542 ms
(8 rows)

If using an SSD, I would lower it further, to 1.1.
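To make the change stick beyond the session shown above, it can also be scoped to a single database (a sketch; "game" is the database name from the examples above):

```sql
-- Apply the lower cost only to this database, leaving others at the default.
-- Takes effect for new connections to "game".
ALTER DATABASE game SET random_page_cost = 1.1;

-- In a fresh connection, confirm the effective value:
SHOW random_page_cost;
```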

As for your last question: I really do think you should stick with PostgreSQL. I have experience with both PostgreSQL and MSSQL, and I had to put in three times the effort with the latter for it to perform half as well as the former.
