It is quite fast and, for smaller arrays (<2GB), also quite easy to use. On easily compressible data like your example, it is often faster to compress the data for IO operations (SATA SSD: about 500 MB/s, PCIe SSD: up to 3500 MB/s). In the decompression step, the array allocation is the most costly part. If your images are of similar shape, you can avoid repeated memory allocation.
Example
A contiguous array is assumed for the following example.
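If your array might not be contiguous (for example a slice or a transposed view), you can normalize it first. A minimal sketch using NumPy's standard API; arr here is assumed to be an existing array:

import numpy as np

# np.ascontiguousarray returns the input unchanged if it is already
# C-contiguous, otherwise it makes a contiguous copy.
arr = np.ascontiguousarray(arr)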
import blosc
import pickle
import numpy as np

def compress(arr, Path):
    #c = blosc.compress_ptr(arr.__array_interface__['data'][0], arr.size, arr.dtype.itemsize, clevel=3, cname='lz4', shuffle=blosc.SHUFFLE)
    c = blosc.compress_ptr(arr.__array_interface__['data'][0], arr.size, arr.dtype.itemsize, clevel=3, cname='zstd', shuffle=blosc.SHUFFLE)
    f = open(Path, "wb")
    pickle.dump((arr.shape, arr.dtype), f)
    f.write(c)
    f.close()
    return c, arr.shape, arr.dtype

def decompress(Path):
    f = open(Path, "rb")
    shape, dtype = pickle.load(f)
    c = f.read()
    #array allocation takes most of the time
    arr = np.empty(shape, dtype)
    blosc.decompress_ptr(c, arr.__array_interface__['data'][0])
    return arr

#Pass a preallocated array if you have many similar images
def decompress_pre(Path, arr):
    f = open(Path, "rb")
    shape, dtype = pickle.load(f)
    c = f.read()
    blosc.decompress_ptr(c, arr.__array_interface__['data'][0])
    return arr
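To make the calling convention concrete, here is a small usage sketch building on the functions above; the image shape and dtype are assumptions for illustration, not the data used in the benchmarks below:

import numpy as np

# Hypothetical input: an easily compressible 8-bit image.
image = np.zeros((720, 1280, 3), dtype=np.uint8)

compress(image, "Test.dat")

# One-off load: allocates a fresh array on every call.
restored = decompress("Test.dat")

# Many similar images: reuse one preallocated buffer instead.
buf = np.empty(image.shape, image.dtype)
restored = decompress_pre("Test.dat", buf)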
Benchmarks
#blosc.SHUFFLE, cname='zstd' -> 4728KB
%timeit compress(arr,"Test.dat")
1.03 s ± 12.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
#611 MB/s
%timeit decompress("Test.dat")
146 ms ± 481 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
#4310 MB/s
%timeit decompress_pre("Test.dat",arr)
50.9 ms ± 438 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
#12362 MB/s

#blosc.SHUFFLE, cname='lz4' -> 9118KB
%timeit compress(arr,"Test.dat")
32.1 ms ± 437 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
#19602 MB/s
%timeit decompress("Test.dat")
146 ms ± 332 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
#4310 MB/s
%timeit decompress_pre("Test.dat",arr)
53.6 ms ± 82.9 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
#11740 MB/s
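A note on reading these numbers: the %timeit lines are IPython magics, and the test array itself is not shown. From the reported compression throughput (about 611 MB/s over 1.03 s), the input was roughly 600 MB of easily compressible data. A purely hypothetical setup of matching size:

import numpy as np

# Assumed stand-in for the benchmark input: ~622 MB of low-entropy
# uint8 data (100 Full-HD RGB frames). Not the author's actual data.
arr = np.random.randint(0, 100, (100, 1080, 1920, 3), dtype=np.uint8)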