# 1. Environment setup
OpenCV: see https://blog.csdn.net/xiao13mm/article/details/106165477

Build prerequisites:

```shell
sudo apt-get install autoconf automake libtool curl make g++ unzip
```
# Install protobuf

```shell
sudo apt install protobuf-compiler libprotobuf-dev
```
Protobuf version issues

Error:

```
undefined reference to `google::protobuf:
```

Cause: the version of protoc does not match the version of the libprotobuf library that the linker finds.

Fix: change either one so that the two match.
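As a first diagnostic step you can compare the two versions directly. A minimal sketch; the version strings below are placeholders — in practice take them from the output of `protoc --version` and from the file name of your installed `libprotobuf.so`:

```shell
# Compare the protoc compiler version against the libprotobuf runtime version.
# In practice:
#   compiler output comes from:  protoc --version           (e.g. "libprotoc 3.6.1")
#   library_ver comes from:      ls $path/lib/libprotobuf.so.*
compiler_ver=$(echo "libprotoc 3.6.1" | awk '{print $2}')   # strip the "libprotoc " prefix
library_ver="3.6.1"

if [ "$compiler_ver" = "$library_ver" ]; then
  echo "protobuf versions match: $compiler_ver"
else
  echo "mismatch: protoc $compiler_ver vs libprotobuf $library_ver"
fi
```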
Option 1: point CMake at the matching protobuf install.

```shell
# find where protoc lives
which protoc          # outputs $path/bin/protoc

# point cmake at that install: matching headers, library, and prefix;
# BUILD_SHARED_LIBS=ON builds shared libraries,
# NCNN_VULKAN=ON / NCNN_SYSTEM_GLSLANG=ON enable GPU support
cmake -D Protobuf_INCLUDE_DIR=$path/include \
      -D Protobuf_LIBRARY=$path/lib/libprotobuf.so \
      -D CMAKE_PREFIX_PATH=$path \
      -D BUILD_SHARED_LIBS=ON \
      -DCMAKE_BUILD_TYPE=Release \
      -DNCNN_VULKAN=ON \
      -DNCNN_SYSTEM_GLSLANG=ON \
      -DNCNN_BUILD_EXAMPLES=ON \
      -D CMAKE_INSTALL_PREFIX=./install \
      $cmakepath
```
For example:

```shell
cmake -D Protobuf_INCLUDE_DIR=/media/hao/CODE/LIB/CPP/protobuf/include \
      -D Protobuf_LIBRARY=/media/hao/CODE/LIB/CPP/protobuf/lib/libprotobuf.so \
      -D CMAKE_PREFIX_PATH="/media/hao/CODE/LIB/CPP/protobuf;/media/hao/CODE/LIB/opencv/debug420/install" \
      -D BUILD_SHARED_LIBS=ON \
      -DCMAKE_BUILD_TYPE=Release \
      -DNCNN_VULKAN=ON \
      -DNCNN_SYSTEM_GLSLANG=ON \
      -DNCNN_BUILD_EXAMPLES=ON \
      -D CMAKE_INSTALL_PREFIX=./install \
      /media/hao/CODE/SOURCE/MOBILE_INFERENCE/ncnn/
```

Note: `CMAKE_PREFIX_PATH` takes a semicolon-separated list; passing `-D CMAKE_PREFIX_PATH` twice would make the second value override the first, so the protobuf and OpenCV prefixes are combined into one list here.
# Install the Vulkan SDK
```shell
wget https://sdk.lunarg.com/sdk/download/1.2.189.0/linux/vulkansdk-linux-x86_64-1.2.189.0.tar.gz?Human=true -O vulkansdk-linux-x86_64-1.2.189.0.tar.gz
tar -xf vulkansdk-linux-x86_64-1.2.189.0.tar.gz
export VULKAN_SDK=$(pwd)/1.2.189.0/x86_64
```
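Before building ncnn it is worth checking that `VULKAN_SDK` actually points at the extracted SDK tree; a small sketch (the path below assumes the extraction step above was run in the current directory):

```shell
# Fail early if VULKAN_SDK is unset or does not point at a real SDK directory.
export VULKAN_SDK=${VULKAN_SDK:-$(pwd)/1.2.189.0/x86_64}
if [ -d "$VULKAN_SDK" ]; then
  echo "VULKAN_SDK ok: $VULKAN_SDK"
else
  echo "VULKAN_SDK not found: $VULKAN_SDK (extract the SDK first)"
fi
```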
# Build ncnn

```shell
git clone https://github.com/Tencent/ncnn.git
cd ncnn
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DNCNN_VULKAN=ON -DNCNN_BUILD_EXAMPLES=ON ..   # -DNCNN_VULKAN=ON is required for the GPU path
make
make install
```

# 2. ncnn model conversion and quantization (YOLOv3)
Input model format: .weights files trained with darknet (https://github.com/AlexeyAB/darknet/).
After the build finishes, go into tools/darknet; darknet2ncnn converts the darknet .weights into the ncnn format:

```shell
cd tools/darknet
./darknet2ncnn model.cfg model.weights model.param model.bin
```
At this point the model is in fp32 format.
Quantize to fp16:

```shell
# 65536 is the ncnnoptimize flag that stores weights as fp16
./ncnnoptimize /media/fandong/sunhao/weights/ziji/mobilenetv2_yolov3.param \
               /media/fandong/sunhao/weights/ziji/mobilenetv2_yolov3.bin \
               yolov3-fp16.param yolov3-fp16.bin 65536
```
Quantize to int8:

```shell
# 1) generate the calibration table
./ncnn2table ../../examples/mobilenetv2_yolov3.param ../../examples/mobilenetv2_yolov3.bin /home/fandong/images/sub.txt yolov3.table mean=[0.0,0.0,0.0] norm=[0.003921569,0.003921569,0.003921569] shape=[416,416,3] pixel=BGR
# 2) quantize fp32 to int8 using the table
./ncnn2int8 ../../examples/mobilenetv2_yolov3.param ../../examples/mobilenetv2_yolov3.bin yolov3-int8.param yolov3-int8.bin yolov3.table
```
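The calibration list passed to ncnn2table (here `/home/fandong/images/sub.txt`) is a plain text file with one image path per line. A sketch of generating such a list, assuming a directory of `.jpg` calibration images (the directory name below is illustrative):

```shell
# Build the calibration list: one absolute image path per line.
mkdir -p calib_images
touch calib_images/a.jpg calib_images/b.jpg   # stand-ins for real calibration images
find "$(pwd)/calib_images" -name '*.jpg' | sort > sub.txt
wc -l < sub.txt    # number of calibration images found
```

A few hundred images that match the deployment distribution are typically used for calibration.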
Test in build/examples:

```shell
./yolov3 *.jpg
```

# 3. ncnn model conversion and quantization (YOLOv5)
Model conversion pipeline: pt → onnx → ncnn.

pt → onnx (https://github.com/ultralytics/yolov5):

```shell
python models/export.py --weights yolov5s.pt --img 640 --batch 1   # pt to onnx
python -m onnxsim yolov5s.onnx yolov5s-sim.onnx                    # strip the redundant memory data nodes
```
Converting the simplified onnx model to ncnn then errors with:

```
Unsupported slice step !
Unsupported slice step !
Unsupported slice step !
Unsupported slice step !
Unsupported slice step !
```
Cause: these slices implement yolov5s's column-major space-to-depth (Focus) operation, which has no corresponding high-level op in the PyTorch export that the converter understands. Replace them with the custom YoloV5Focus op by editing the generated .param file: the slice layers are swapped for the single custom layer, and the layer and blob counts at the top of the .param file must be updated to match.
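For reference, a sketch of the commonly used .param edit: the run of Split/Crop/Concat layers implementing the slice is collapsed into one custom layer. The blob names `images` and `167` here are illustrative and depend on your export; check the names in your own .param file:

```
YoloV5Focus              focus                    1 1 images 167
```

After this edit, remember to decrease the layer and blob counts on the second line of the .param file by the number of layers and blobs removed.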
The quantization steps are the same as for YOLOv3.
Test in build/examples:

```shell
./yolov5 *.jpg
```

# 4. Project deployment
GitHub project: https://github.com/liujiaxing7/ncnn-Android-Yolov5
Project contents:

- build environment: cmake, ndk, ncnn, opencv
- jni: Java calling C++
- the C++ engineering part
To swap in your own model, replace the model files and update the blob names passed to the network's extract calls accordingly.