After installing PyTorch inside the deepstream:5.1-21.02-triton Docker image, importing torch fails with:
libtorch_cuda_cpp.so: undefined symbol
The fix below follows the thread "Unable to import PyTorch - #4 by mchi" in the DeepStream SDK section of the NVIDIA Developer Forums:
It's caused by PyTorch version incompatibilities.
After installing torch, remove "/opt/tritonserver/lib/pytorch/" from LD_LIBRARY_PATH; torch then works. Otherwise it links against the libraries under /opt/tritonserver/lib/pytorch/ and fails due to the incompatibility. However, after changing LD_LIBRARY_PATH, the nvinferserver plugin no longer works.
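The LD_LIBRARY_PATH change above can be sketched as a small shell snippet. This is a minimal illustration, not from the forum thread: it assumes a colon-separated LD_LIBRARY_PATH and simply filters out the /opt/tritonserver/lib/pytorch component so the dynamic linker resolves libtorch from the pip-installed package instead.

```shell
# Example starting value (hypothetical; your container's actual path will differ)
LD_LIBRARY_PATH="/usr/src/tensorrt/lib:/opt/tritonserver/lib/pytorch:/usr/local/cuda/compat/lib"

# Split on ':', drop the Triton PyTorch lib dir, rejoin with ':'
LD_LIBRARY_PATH=$(printf '%s' "$LD_LIBRARY_PATH" | tr ':' '\n' \
    | grep -v '^/opt/tritonserver/lib/pytorch/\?$' | paste -sd: -)
export LD_LIBRARY_PATH
echo "$LD_LIBRARY_PATH"
```

Note that this must happen in the shell (or entrypoint) before Python starts, since the dynamic linker reads LD_LIBRARY_PATH at process launch; and, per the thread, reverting it is needed again before running pipelines that use nvinferserver.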
According to the Release Notes in the NVIDIA Triton Inference Server documentation, Triton uses a dedicated PyTorch backend repo (triton-inference-server/pytorch_backend), so the incompatibility may be expected.
May I know why you need torch in DS docker?
# pip3 install torch==1.8.1+cpu torchvision==0.9.1+cpu torchaudio===0.8.1 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
# export LD_LIBRARY_PATH=/usr/src/tensorrt/lib:/opt/jarvis/lib/:/opt/kenlm/lib/:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
# python3 -c "import torch; print(torch.__version__)"
1.8.1+cpu