python – n_jobs does not work in sklearn classes


Question: Has anyone used the "n_jobs" parameter of sklearn classes? I am using sklearn with Anaconda 3.4 (64-bit); my Spyder version is 2.3.8. After setting the "n_jobs" parameter of some sklearn class to a non-zero value, my script never finishes executing. Why does this happen?

Answer: Some scikit-learn tools such as GridSearchCV and cross_val_score rely internally on Python's multiprocessing module to parallelize execution over several Python processes when n_jobs > 1 is passed as an argument.
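A minimal sketch of the kind of call the question describes (assuming scikit-learn is installed; the dataset and estimator are chosen for illustration only):

```python
# n_jobs controls how many worker processes scikit-learn spawns:
# n_jobs=1 runs serially, n_jobs=2 uses two workers, n_jobs=-1 uses all cores.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000)

# The 5 cross-validation folds are evaluated in parallel across 2 processes.
scores = cross_val_score(clf, X, y, cv=5, n_jobs=2)
print(scores.mean())
```

With n_jobs=1 the same call runs in a single process, which is the usual first step when diagnosing a hang like the one reported here.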

From the scikit-learn documentation:

The problem is that Python multiprocessing does a fork system call without following it with an exec system call for performance reasons. Many libraries like (some versions of) Accelerate / vecLib under OSX, (some versions of) MKL, the OpenMP runtime of GCC, nvidia's Cuda (and probably many others), manage their own internal thread pool. Upon a call to fork, the thread pool state in the child process is corrupted: the thread pool believes it has many threads while only the main thread state has been forked. It is possible to change the libraries to make them detect when a fork happens and reinitialize the thread pool in that case: we did that for OpenBLAS (merged upstream in master since 0.2.10) and we contributed a patch to GCC's OpenMP runtime (not yet reviewed).
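A common workaround sketch (an assumption on my part, not stated in the original answer): protect the parallel call with an `if __name__ == "__main__"` guard, so that worker processes which re-import the script do not re-enter the parallel section. The estimator and parameter grid below are illustrative placeholders.

```python
# Guarding the entry point is required on platforms where multiprocessing
# spawns workers by re-importing the main module (e.g. Windows), and it is
# good practice everywhere when using n_jobs > 1.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def main():
    X, y = load_iris(return_X_y=True)
    param_grid = {"C": [0.1, 1, 10]}
    # The grid points are evaluated across 2 worker processes.
    search = GridSearchCV(SVC(), param_grid, cv=3, n_jobs=2)
    search.fit(X, y)
    print(search.best_params_)
    return search.best_params_

if __name__ == "__main__":
    main()
```

If the hang persists, dropping back to n_jobs=1 confirms whether the multiprocessing/fork interaction described in the quoted FAQ entry is the culprit.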

Summary

Setting n_jobs > 1 makes scikit-learn parallelize work across multiple processes via Python's multiprocessing module; because multiprocessing forks without exec, the forked children can inherit a corrupted thread-pool state from native libraries (Accelerate/vecLib, MKL, GCC's OpenMP runtime, CUDA), which is why a script can hang and never finish.

Original source: http://outofmemory.cn/langs/1196167.html
