The problem I'm facing is that a seq scan is really not the best choice here. For some reason the planner gives it a very good cost estimate, but in reality the seq scan is only fast when the query has run before and everything can be read from the buffer cache.
An index scan may be marginally slower than the seq scan when both are fully cached, but that rarely happens; when both queries are cold, the index scan is still far faster (milliseconds vs. seconds).
Note that the index scan should win here because I'm using a LIMIT clause, so it only needs to fetch those few rows.
I have set the statistics target to 1000 (the default is 100) and ran VACUUM ANALYZE just in case, but same story.
TL;DR: seq scan vs. index scan on a low-selectivity index. The planner prefers the seq scan, but the planner is wrong: the seq scan is only better when the data is cached, otherwise it is much worse.
Query and plans below; note that the index scan reads from the buffer cache while the seq scan largely does not.
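For reference, this is roughly what those two steps look like (a sketch; that the target was raised specifically on `email_domain` is an assumption based on the description above):

```sql
-- Raise the per-column statistics target from the default of 100
ALTER TABLE identities_identity
    ALTER COLUMN email_domain SET STATISTICS 1000;

-- Rebuild statistics (and the visibility map) so the planner sees them
VACUUM ANALYZE identities_identity;
```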
```sql
explain (analyze, buffers)
select * from identities_identity where email_domain = 'live.com' limit 100;
```

```
Limit  (cost=0.00..63.50 rows=100 width=573) (actual time=75215.573..75215.640 rows=100 loops=1)
  Buffers: shared hit=75113 read=588870
  ->  Seq Scan on identities_identity  (cost=0.00..2980008.00 rows=4692733 width=573) (actual time=75215.571..75215.604 rows=100 loops=1)
        Filter: ((email_domain)::text = 'live.com'::text)
        Rows Removed by Filter: 54464136
        Buffers: shared hit=75113 read=588870
Planning time: 0.097 ms
Execution time: 75215.675 ms
```

```
Limit  (cost=0.57..187.26 rows=100 width=573) (actual time=0.027..0.090 rows=100 loops=1)
  Buffers: shared hit=6
  ->  Index Scan using identities_identity_email_domain_9056bd28 on identities_identity  (cost=0.57..8760978.66 rows=4692733 width=573) (actual time=0.026..0.057 rows=100 loops=1)
        Index Cond: ((email_domain)::text = 'live.com'::text)
        Buffers: shared hit=6
Planning time: 0.078 ms
Execution time: 0.124 ms
```
Update:
Table definition (indexes on email and email_domain, both standard and varchar_pattern_ops):
```sql
CREATE TABLE public.identities_identity
(
    id bigint NOT NULL DEFAULT nextval('identities_identity_id_seq'::regclass),
    email character varying(1000) COLLATE pg_catalog."default",
    email_domain character varying(1000) COLLATE pg_catalog."default",
    leak_id bigint NOT NULL,
    CONSTRAINT identities_identity_pkey PRIMARY KEY (id),
    CONSTRAINT identities_identity_leak_id_87e1ae4e_fk_identities_leak_id
        FOREIGN KEY (leak_id) REFERENCES public.identities_leak (id) MATCH SIMPLE
        ON UPDATE NO ACTION ON DELETE NO ACTION
        DEFERRABLE INITIALLY DEFERRED
)
```
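The index definitions themselves are not in the dump above. Going by the description (standard and varchar_pattern_ops indexes on email and email_domain) and the index name visible in the plan, they presumably look roughly like this; apart from `identities_identity_email_domain_9056bd28`, which appears in the plan, the index names here are hypothetical:

```sql
-- Standard btree index, used by the equality query above (name from the plan)
CREATE INDEX identities_identity_email_domain_9056bd28
    ON public.identities_identity (email_domain);

-- varchar_pattern_ops variant for LIKE 'prefix%' queries (hypothetical name)
CREATE INDEX identities_identity_email_domain_like
    ON public.identities_identity (email_domain varchar_pattern_ops);
```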
Table statistics (after VACUUM ANALYZE):
```
attname        avg_width  n_distinct  correlation
id             8          -1          0.999988
email          23         -0.636853   -0.020479
email_domain   10         3876        0.696452
leak_id        1          1
```

You can use a mean trick to force an index scan:
```sql
SELECT *
FROM identities_identity
WHERE email_domain IN ('live.com', NULL)
ORDER BY email_domain
LIMIT 100;
```
If PostgreSQL has to sort, using the index will always be cheaper.
If you just have WHERE email_domain = 'live.com', PostgreSQL is smart enough to know that it doesn't need a sort, which is why I added the second, useless IN item to trick it.
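As a side note (this is a standard PostgreSQL planner setting, not part of the original answer): for diagnosis you can also see the index-scan plan without rewriting the query by discouraging seq scans for the current session only:

```sql
-- Diagnosis only: make seq scans look prohibitively expensive in this session
SET enable_seqscan = off;

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM identities_identity
WHERE email_domain = 'live.com'
LIMIT 100;

-- Restore the default behaviour
RESET enable_seqscan;
```

This is useful for comparing plans, but should not be left on in production, since it affects every query in the session.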