Mailing list: pgsql-admin.postgresql.org
Summary from: Johann Spies
Loaded about 4,900,000,000 rows into one of two tables (with 7,200,684 rows in the second table) in a database called 'firewall', built one index on a date field (which took a few days), used that index to copy about 3,800,000 of those records from the first table to a third table, deleted the copied records from the first table, and dropped the third table.
This took about a week on a 2-CPU quad-core server with 8 GB RAM.
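The workflow described above can be sketched in SQL. This is only an illustration of the shape of the job, not the poster's actual statements; the table names (`log_a`, `log_c`) and the date column (`logged_at`) are assumptions.

```sql
-- Build the date index without blocking writes (can take days at this scale).
CREATE INDEX CONCURRENTLY log_a_logged_at_idx ON log_a (logged_at);

-- Copy the matching rows into a scratch table, using the new index.
CREATE TABLE log_c AS
    SELECT * FROM log_a
    WHERE logged_at < DATE '2009-01-01';

-- Delete the copied rows from the source, then discard the scratch table.
DELETE FROM log_a WHERE logged_at < DATE '2009-01-01';
DROP TABLE log_c;
```

At billions of rows, the `DELETE` step is what dominates; the partitioning advice below exists precisely to avoid it.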
Table partitioning is needed.
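A minimal sketch of what partitioning this table might look like, using declarative range partitioning (available from PostgreSQL 10; older releases used table inheritance with CHECK constraints instead). All names here are illustrative:

```sql
-- Parent table partitioned by month on the date field.
CREATE TABLE firewall_log (
    logged_at  timestamptz NOT NULL,
    src_ip     inet,
    message    text
) PARTITION BY RANGE (logged_at);

-- One child partition per month.
CREATE TABLE firewall_log_2009_01 PARTITION OF firewall_log
    FOR VALUES FROM ('2009-01-01') TO ('2009-02-01');
CREATE TABLE firewall_log_2009_02 PARTITION OF firewall_log
    FOR VALUES FROM ('2009-02-01') TO ('2009-03-01');

-- Removing an old month is a cheap metadata operation,
-- instead of deleting millions of individual rows:
DROP TABLE firewall_log_2009_01;
```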
Distribute tables across different disks through tablespaces. Tweak the shared_buffers and work_mem settings.
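For example, a tablespace on a second disk, with settings sized for the 8 GB server mentioned above. The path and the exact values are assumptions to adjust to the actual hardware:

```sql
-- Tablespace on a separate spindle (directory must exist and be
-- owned by the postgres user).
CREATE TABLESPACE fastdisk LOCATION '/mnt/disk2/pgdata';

-- Place a large table (or an index) on it.
ALTER TABLE firewall_log SET TABLESPACE fastdisk;

-- In postgresql.conf (shared_buffers needs a restart):
--   shared_buffers = 2GB    -- a common rule of thumb is ~25% of RAM
--   work_mem = 64MB         -- per sort/hash operation, per connection
```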
RAID 5/6 are very, very slow when it comes to small disk *writes*.
At least a hardware RAID controller with RAID 0 or 10 should be used, with 10krpm or 15krpm drives; SAS is preferred, as on SATA the only quick disks are the Western Digital Raptors.
Look at a view called pg_stat_activity. Do: SELECT * FROM pg_stat_activity;
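A slightly more targeted variant of that query, useful for spotting long-running statements. This assumes the PostgreSQL 9.2+ column names (`state`, `query`); older releases exposed a single `current_query` column instead:

```sql
-- Show non-idle sessions, longest-running first.
SELECT pid,
       now() - query_start AS runtime,
       state,
       query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC;
```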
(Original thread title: "How to handle large volumes of data on PostgreSQL?")