PostgreSQL: the lower the LIMIT, the slower the query


I have the following query:

SELECT translation.id
FROM "TRANSLATION" translation
  INNER JOIN "UNIT" unit
    ON translation.fk_id_translation_unit = unit.id
  INNER JOIN "DOCUMENT" document
    ON unit.fk_id_document = document.id
WHERE document.fk_id_job = 3665
ORDER BY translation.id ASC
LIMIT 50

It takes a dreadful 110 seconds to run.

Table sizes:

+-------------+------------+
| Table       |    Records |
+-------------+------------+
| TRANSLATION |  6,906,679 |
| UNIT        |  6,906,679 |
| DOCUMENT    |     42,321 |
+-------------+------------+

However, when I change the LIMIT from 50 to 1000, the query finishes in under 2 seconds.

Here is the plan for the slow query:

Limit  (cost=0.00..146071.52 rows=50 width=8) (actual time=111916.180..111917.626 rows=50 loops=1)
  ->  Nested Loop  (cost=0.00..50748166.14 rows=17371 width=8) (actual time=111916.179..111917.624 rows=50 loops=1)
        Join Filter: (unit.fk_id_document = document.id)
        ->  Nested Loop  (cost=0.00..39720545.91 rows=5655119 width=16) (actual time=0.051..15292.943 rows=5624514 loops=1)
              ->  Index Scan using "TRANSLATION_pkey" on "TRANSLATION" translation  (cost=0.00..7052806.78 rows=5655119 width=16) (actual time=0.039..1887.757 rows=5624514 loops=1)
              ->  Index Scan using "UNIT_pkey" on "UNIT" unit  (cost=0.00..5.76 rows=1 width=16) (actual time=0.002..0.002 rows=1 loops=5624514)
                    Index Cond: (unit.id = translation.fk_id_translation_unit)
        ->  Materialize  (cost=0.00..138.51 rows=130 width=8) (actual time=0.000..0.006 rows=119 loops=5624514)
              ->  Index Scan using "DOCUMENT_idx_job" on "DOCUMENT" document  (cost=0.00..137.86 rows=130 width=8) (actual time=0.025..0.184 rows=119 loops=1)
                    Index Cond: (fk_id_job = 3665)

And here is the plan for the fast one:

Limit  (cost=523198.17..523200.67 rows=1000 width=8) (actual time=2274.830..2274.988 rows=1000 loops=1)
  ->  Sort  (cost=523198.17..523241.60 rows=17371 width=8) (actual time=2274.829..2274.895 rows=1000 loops=1)
        Sort Key: translation.id
        Sort Method:  top-N heapsort  Memory: 95kB
        ->  Nested Loop  (cost=139.48..522245.74 rows=17371 width=8) (actual time=0.095..2252.710 rows=97915 loops=1)
              ->  Hash Join  (cost=139.48..420861.93 rows=17551 width=8) (actual time=0.079..2005.238 rows=97915 loops=1)
                    Hash Cond: (unit.fk_id_document = document.id)
                    ->  Seq Scan on "UNIT" unit  (cost=0.00..399120.41 rows=5713741 width=16) (actual time=0.008..1200.547 rows=6908070 loops=1)
                    ->  Hash  (cost=137.86..137.86 rows=130 width=8) (actual time=0.065..0.065 rows=119 loops=1)
                          Buckets: 1024  Batches: 1  Memory Usage: 5kB
                          ->  Index Scan using "DOCUMENT_idx_job" on "DOCUMENT" document  (cost=0.00..137.86 rows=130 width=8) (actual time=0.009..0.041 rows=119 loops=1)
                                Index Cond: (fk_id_job = 3665)
              ->  Index Scan using "TRANSLATION_idx_unit" on "TRANSLATION" translation  (cost=0.00..5.76 rows=1 width=16) (actual time=0.002..0.002 rows=1 loops=97915)
                    Index Cond: (translation.fk_id_translation_unit = unit.id)

Clearly the two execution plans are very different, and the second one makes the query about 50 times faster.
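For completeness, a workaround that is commonly suggested for this kind of plan flip (it is not part of the original post, so treat it as a sketch): turn the ORDER BY key into an expression. The planner then cannot satisfy the sort by walking "TRANSLATION_pkey" in id order, so it falls back to the same join strategy as the fast plan:

```sql
SELECT translation.id
FROM "TRANSLATION" translation
  INNER JOIN "UNIT" unit
    ON translation.fk_id_translation_unit = unit.id
  INNER JOIN "DOCUMENT" document
    ON unit.fk_id_document = document.id
WHERE document.fk_id_job = 3665
ORDER BY translation.id + 0   -- expression defeats the index-order scan
LIMIT 50;
```

The result is the same (adding 0 does not change the sort order), but the sort now has to happen after the join, as in the LIMIT 1000 plan.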

All fields involved in the query are indexed, and I ran ANALYZE on all tables just before running the query.

Can anyone see what is wrong with the first query?

Update: table definitions

CREATE TABLE "public"."TRANSLATION" (
  "id" BIGINT NOT NULL,
  "fk_id_translation_unit" BIGINT NOT NULL,
  "translation" TEXT NOT NULL,
  "fk_id_language" INTEGER NOT NULL,
  "relevance" INTEGER,
  CONSTRAINT "TRANSLATION_pkey" PRIMARY KEY ("id"),
  CONSTRAINT "TRANSLATION_fk" FOREIGN KEY ("fk_id_translation_unit")
    REFERENCES "public"."UNIT" ("id")
    ON DELETE CASCADE
    ON UPDATE NO ACTION
    DEFERRABLE
    INITIALLY DEFERRED,
  CONSTRAINT "TRANSLATION_fk1" FOREIGN KEY ("fk_id_language")
    REFERENCES "public"."LANGUAGE" ("id")
    ON DELETE NO ACTION
    ON UPDATE NO ACTION
    NOT DEFERRABLE
) WITHOUT OIDS;

CREATE INDEX "TRANSLATION_idx_unit" ON "public"."TRANSLATION"
  USING btree ("fk_id_translation_unit");

CREATE INDEX "TRANSLATION_language_idx" ON "public"."TRANSLATION"
  USING hash ("translation");
CREATE TABLE "public"."UNIT" (
  "id" BIGINT NOT NULL,
  "text" TEXT NOT NULL,
  "fk_id_document" BIGINT NOT NULL,
  "word_count" INTEGER DEFAULT 0,
  CONSTRAINT "UNIT_pkey" PRIMARY KEY ("id"),
  CONSTRAINT "UNIT_fk" FOREIGN KEY ("fk_id_document")
    REFERENCES "public"."DOCUMENT" ("id")
    ON DELETE CASCADE
    ON UPDATE NO ACTION
    NOT DEFERRABLE,
  CONSTRAINT "UNIT_fk1" FOREIGN KEY ("fk_id_language")
    REFERENCES "public"."LANGUAGE" ("id")
    ON DELETE NO ACTION
    ON UPDATE NO ACTION
    NOT DEFERRABLE
) WITHOUT OIDS;

CREATE INDEX "UNIT_idx_document" ON "public"."UNIT"
  USING btree ("fk_id_document");

CREATE INDEX "UNIT_text_idx" ON "public"."UNIT"
  USING hash ("text");
CREATE TABLE "public"."DOCUMENT" (
  "id" BIGINT NOT NULL,
  "fk_id_job" BIGINT,
  CONSTRAINT "DOCUMENT_pkey" PRIMARY KEY ("id"),
  CONSTRAINT "DOCUMENT_fk" FOREIGN KEY ("fk_id_job")
    REFERENCES "public"."JOB" ("id")
    ON DELETE SET NULL
    ON UPDATE NO ACTION
    NOT DEFERRABLE
) WITHOUT OIDS;

Update: database parameters

shared_buffers = 2048MB
effective_cache_size = 4096MB
work_mem = 32MB

Total memory: 32GB
CPU: Intel Xeon X3470 @ 2.93 GHz, 8MB cache
Here is an interesting passage from the official documentation for ANALYZE:

For large tables, ANALYZE takes a random sample of the table contents, rather than examining every row.
[…]
The extent of analysis can be controlled by adjusting the default_statistics_target configuration variable, or on a column-by-column basis by setting the per-column statistics target with ALTER TABLE … ALTER COLUMN … SET STATISTICS.

Raising the statistics target is the usual way to improve a bad query plan. ANALYZE takes a little longer, but the planner's row estimates, and therefore its plans, can be much better.
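As a sketch of that advice (the column choices and the target value 1000 are illustrative, not from the original post), the statistics target could be raised for the join and filter columns and the tables re-analyzed:

```sql
-- Illustrative: raise the per-column statistics target for the columns
-- the planner misestimates, then refresh the statistics with ANALYZE.
ALTER TABLE "UNIT" ALTER COLUMN "fk_id_document" SET STATISTICS 1000;
ALTER TABLE "DOCUMENT" ALTER COLUMN "fk_id_job" SET STATISTICS 1000;
ANALYZE "UNIT";
ANALYZE "DOCUMENT";

-- Alternatively, raise the target globally before re-analyzing:
SET default_statistics_target = 500;
ANALYZE;
```

The per-column form is cheaper, since only the listed columns get the larger sample; the global setting affects every column of every analyzed table.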


Source: http://outofmemory.cn/sjk/1166294.html