– A mistaken operation left a CSV of over 1 TB on disk. To reprocess it, calling read_csv() with no arguments would try to load everything at once and exhaust the RAM.
– The fix is the chunksize parameter: instead of reading the whole file into memory (RAM) in one go, read it in several passes.
Official example: https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#io-chunking
import pandas as pd
import time

start = time.perf_counter()  # measure total running time

chunksize = 5000000
# on_bad_lines='skip' avoids: "Error tokenizing data. C error:
# Expected 1 fields in line 3, saw 2"
reader = pd.read_csv('absolute_path/filename.csv',
                     encoding='utf-8',
                     on_bad_lines='skip',
                     chunksize=chunksize,
                     iterator=True)

for i, ck in enumerate(reader):
    print(i, '', len(ck))
    # write each chunk to its own CSV, labeling the columns
    ck.to_csv('../data/a_' + str(i) + '.csv', index=False, mode='a+',
              header=['Source', 'URL', 'Source_title'])

end = time.perf_counter()
print('using time: %s seconds' % (end - start))
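If the goal is one filtered output rather than many split files, the same chunked reader can append matches to a single CSV as it goes, so peak memory stays bounded by chunksize. A minimal sketch, assuming the input file's columns are named Source, URL, Source_title (the header used above) and using a hypothetical filter value 'some_source':

import pandas as pd

chunksize = 5000000
first = True
# the context manager closes the underlying file handle when done
with pd.read_csv('absolute_path/filename.csv',
                 encoding='utf-8',
                 on_bad_lines='skip',
                 dtype=str,  # assumption: all columns are text; skips type inference per chunk
                 chunksize=chunksize) as reader:
    for ck in reader:
        # hypothetical filter: keep only rows from one source
        matched = ck[ck['Source'] == 'some_source']
        # write the header once, then append each chunk's matches
        matched.to_csv('../data/filtered.csv', index=False,
                       mode='w' if first else 'a', header=first)
        first = False

Only one chunk of rows is ever resident in memory at a time; the output grows on disk instead.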