Reason: the if __name__ == '__main__': guard keeps the parent and its worker processes from falling into an endless spawn/pickle loop (each newly spawned child re-imports the module, and without the guard the Pool-creating code would run again in every child).
Reference: python - multiprocessing pool example does not work and freeze the kernel - Stack Overflow (https://stackoverflow.com/questions/52693216/multiprocessing-pool-example-does-not-work-and-freeze-the-kernel/52693952#52693952)
Example:
from multiprocessing import Pool, Manager

if __name__ == '__main__':
    po = Pool(10)
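To make the re-import behaviour visible, here is a minimal sketch (the square helper and the explicit start-method call are illustrative additions, not from the referenced post) that forces the spawn start method: the module-level print runs once in the parent and once again in every worker, which is exactly why any Pool-creating code has to sit under the __main__ guard.

from multiprocessing import Pool, set_start_method
import os

print(f"module imported in pid {os.getpid()}")   # runs in the parent and again in every spawned worker

def square(x):                                   # top-level function, safe to pickle
    return x * x

if __name__ == '__main__':
    set_start_method('spawn')                    # default on Windows/macOS; made explicit here
    with Pool(2) as po:
        print(po.map(square, range(4)))          # [0, 1, 4, 9]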
2. The function handed to the pool must be defined beforehand, and it must be defined at the top level of the module.
Reasons:
1. The function is pickled (serialized) when work is handed to the pool, so its definition must already be complete; in practice, define it before the pool is created.
2. Pickle works by serializing the function's name and sending it to the worker process, which then imports the function by that name. The function therefore has to live at the top level of the module for that import to succeed.
Reference: parallel processing - Python multiprocessing.Pool: AttributeError - Stack Overflow (https://stackoverflow.com/questions/52265120/python-multiprocessing-pool-attributeerror)
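A quick sketch of the rule described above (the helper names top_level and make_local are illustrative): pickle records only the module and qualified name of a function, so a top-level function round-trips while a locally defined one cannot be pickled at all.

import pickle

def top_level(x):                 # importable as <module>.top_level, so picklable
    return x + 1

def make_local():
    def local(x):                 # lives inside make_local, not importable by name
        return x + 1
    return local

print(pickle.loads(pickle.dumps(top_level))(41))          # 42: restored by name

try:
    pickle.dumps(make_local())
except (pickle.PicklingError, AttributeError) as e:
    print("cannot pickle a non-top-level function:", e)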
Correct example (the function is defined at module top level, before the pool is instantiated):
from multiprocessing import Pool, Manager

class Test:                              # defined at module top level
    def pop_com_list(self, com_list):
        l0 = com_list.pop(0)
        print(l0)

if __name__ == '__main__':
    test = Test()
    common_list = Manager().list([])     # shared list that worker processes can mutate
    po = Pool(10)
    for i in range(10):
        common_list.append(i)
    for i in range(len(common_list)):
        po.apply_async(test.pop_com_list, (common_list,))
    po.close()
    po.join()
Incorrect example (the function is defined inside the __main__ block rather than at the top level):
from multiprocessing import Pool, Manager

if __name__ == '__main__':
    class Test:                          # defined inside __main__, so workers cannot import it by name
        def pop_com_list(self, com_list):
            l0 = com_list.pop(0)
            print(l0)

    test = Test()
    common_list = Manager().list([])
    po = Pool(10)
    for i in range(10):
        common_list.append(i)
    for i in range(len(common_list)):
        po.apply_async(test.pop_com_list, (common_list,))
    po.close()
    po.join()
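One practical note on diagnosing this: apply_async never raises in the caller on its own, which is why the broken script above can appear to finish silently. The worker-side exception (the AttributeError described in the referenced Stack Overflow question, seen under the spawn start method) only surfaces when .get() is called on the returned AsyncResult. A minimal sketch of that pattern, using an illustrative top-level pop_com_list function:

from multiprocessing import Pool, Manager

def pop_com_list(com_list):                       # top-level function: picklable
    return com_list.pop(0)

if __name__ == '__main__':
    common_list = Manager().list(range(10))
    po = Pool(4)
    results = [po.apply_async(pop_com_list, (common_list,)) for _ in range(10)]
    po.close()
    po.join()
    for r in results:
        print(r.get())                            # .get() re-raises any worker-side exception here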