Setting OMP_NUM_THREADS

Running evaluation through runx produced the following warning and output:

(pytorch1.3-cuda10.2) [huanghaiyang@dgx02 semantic-segmentation-main]$ python -m runx.runx scripts/eval_cityscapes.yml -i
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. 
*****************************************
None
None
NoneNone

None
None
Global Rank: 5 Local Rank: 5
Global Rank: 1 Local Rank: 1
Global Rank: 2 Local Rank: 2
Global Rank: 3 Local Rank: 3
Global Rank: 0 Local Rank: 0
Global Rank: 6 Local Rank: 6
None
Global Rank: 7 Local Rank: 7
None
Global Rank: 4 Local Rank: 4
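
The OMP_NUM_THREADS banner itself is only a performance notice: the PyTorch launcher defaults the variable to 1 per process so that OpenMP threads multiplied by the number of processes do not overload the CPU. You can override it by setting the variable before launching; the value 4 below is just an illustrative choice, not a recommendation from the original post:

OMP_NUM_THREADS=4 python -m runx.runx scripts/eval_cityscapes.yml -i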

The problem above is caused by PyTorch distributed training: someone else's code was configured to run on multiple GPUs in parallel, while you are running it with only one or two, so this parameter needs to be specified explicitly:

python -m torch.distributed.launch --nproc_per_node=1 --master_port 29500 train.py

--nproc_per_node=1 — the 1 here is the number of GPUs you are actually using.
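
If you are not sure how many GPUs are actually visible, a quick check with plain PyTorch (independent of this repo) is:

python -c "import torch; print(torch.cuda.device_count())"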

--master_port 29500 sets the TCP port the processes use to coordinate. Normally you don't need to set it; if you do, any free port between 1024 and 65535 works. When you get

RuntimeError: Address already in use

just pass a different free port, e.g. --master_port 12345.
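
Putting both flags together, a single-GPU run of the same train.py would look like the sketch below (OMP_NUM_THREADS=4 is again just an illustrative value):

OMP_NUM_THREADS=4 python -m torch.distributed.launch --nproc_per_node=1 --master_port 12345 train.py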
