[38] Installing and Using the Open-Source Algorithm Library MMDetection



If you spot any errors, please point them out.


Table of Contents
  • 1. Installing MMDetection
  • 2. Using MMDetection
    • 2.1 Official demos
      • Image inference
      • Video inference
      • Webcam inference
    • 2.2 Hands-on test

OpenMMLab maintains a series of open-source algorithm libraries covering computer vision tasks such as classification, detection and segmentation. This post is a brief record of installing and using its open-source detection library, MMDetection.


1. Installing MMDetection

Installing mmdetection is not particularly complicated; the main point is to use openmim, the installation toolkit developed by OpenMMLab, which resolves and installs dependencies automatically. Creating the virtual environment itself is not covered here; for creating one with Anaconda, see my earlier post on the PyTorch installation process and troubleshooting.

After the virtual environment is created, the installation boils down to three steps:

  1. pip install openmim
  2. mim install mmcv-full
installing mmcv-full from wheel.
Looking in links: https://download.openmmlab.com/mmcv/dist/cu102/torch1.9.0/index.html
Collecting mmcv-full==1.5.0
  Downloading https://download.openmmlab.com/mmcv/dist/cu102/torch1.9.0/mmcv_full-1.5.0-cp39-cp39-manylinux1_x86_64.whl (42.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 42.7/42.7 MB 8.8 MB/s eta 0:00:00
Requirement already satisfied: numpy in /home/fs/anaconda3/envs/yolox/lib/python3.9/site-packages (from mmcv-full==1.5.0) (1.21.5)
Collecting yapf
  Downloading yapf-0.32.0-py2.py3-none-any.whl (190 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 190.2/190.2 KB 1.2 MB/s eta 0:00:00
Collecting addict
  Downloading addict-2.4.0-py3-none-any.whl (3.8 kB)
Requirement already satisfied: pyyaml in /home/fs/anaconda3/envs/yolox/lib/python3.9/site-packages (from mmcv-full==1.5.0) (5.4.1)
Requirement already satisfied: packaging in /home/fs/anaconda3/envs/yolox/lib/python3.9/site-packages (from mmcv-full==1.5.0) (21.3)
Requirement already satisfied: Pillow in /home/fs/anaconda3/envs/yolox/lib/python3.9/site-packages (from mmcv-full==1.5.0) (8.4.0)
Requirement already satisfied: opencv-python>=3 in /home/fs/anaconda3/envs/yolox/lib/python3.9/site-packages (from mmcv-full==1.5.0) (4.5.4.58)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /home/fs/anaconda3/envs/yolox/lib/python3.9/site-packages (from packaging->mmcv-full==1.5.0) (2.4.7)
Installing collected packages: yapf, addict, mmcv-full
Successfully installed addict-2.4.0 mmcv-full-1.5.0 yapf-0.32.0
Successfully installed mmcv-full.
  3. mim install mmdet==2.24.0
installing mmdet from https://github.com/open-mmlab/mmdetection.git.
Cloning into '/tmp/tmpo0kldwjc/mmdetection'...
remote: Enumerating objects: 24460, done.
remote: Counting objects: 100% (22/22), done.
remote: Compressing objects: 100% (19/19), done.
remote: Total 24460 (delta 3), reused 12 (delta 3), pack-reused 24438
Receiving objects: 100% (24460/24460), 37.54 MiB | 227.00 KiB/s, done.
Resolving deltas: 100% (17115/17115), done.
Note: checking out '73b4e65a6a30435ef6a35f405e3474a4d9cfb234'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b new_branch_name

Successfully installed dependencies.
Requirement already satisfied: cython in /home/fs/anaconda3/envs/yolox/lib/python3.9/site-packages (from -r /tmp/tmpo0kldwjc/mmdetection/requirements/build.txt (line 2)) (0.29.24)
Requirement already satisfied: numpy in /home/fs/anaconda3/envs/yolox/lib/python3.9/site-packages (from -r /tmp/tmpo0kldwjc/mmdetection/requirements/build.txt (line 3)) (1.21.5)
DEPRECATION: In-tree builds are now the default. pip 22.1 will enforce this behaviour change. A possible replacement is to remove the --use-feature=in-tree-build flag.
Processing /tmp/tmpo0kldwjc/mmdetection
  Preparing metadata (setup.py) ... done
Requirement already satisfied: matplotlib in /home/fs/anaconda3/envs/yolox/lib/python3.9/site-packages (from mmdet==2.24.1) (3.4.3)
Requirement already satisfied: numpy in /home/fs/anaconda3/envs/yolox/lib/python3.9/site-packages (from mmdet==2.24.1) (1.21.5)
Requirement already satisfied: pycocotools in /home/fs/anaconda3/envs/yolox/lib/python3.9/site-packages (from mmdet==2.24.1) (2.0.2)
Requirement already satisfied: six in /home/fs/anaconda3/envs/yolox/lib/python3.9/site-packages (from mmdet==2.24.1) (1.16.0)
Requirement already satisfied: terminaltables in /home/fs/anaconda3/envs/yolox/lib/python3.9/site-packages (from mmdet==2.24.1) (3.1.10)
Requirement already satisfied: python-dateutil>=2.7 in /home/fs/anaconda3/envs/yolox/lib/python3.9/site-packages (from matplotlib->mmdet==2.24.1) (2.8.2)
Requirement already satisfied: pillow>=6.2.0 in /home/fs/anaconda3/envs/yolox/lib/python3.9/site-packages (from matplotlib->mmdet==2.24.1) (8.4.0)
Requirement already satisfied: pyparsing>=2.2.1 in /home/fs/anaconda3/envs/yolox/lib/python3.9/site-packages (from matplotlib->mmdet==2.24.1) (2.4.7)
Requirement already satisfied: cycler>=0.10 in /home/fs/anaconda3/envs/yolox/lib/python3.9/site-packages (from matplotlib->mmdet==2.24.1) (0.10.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /home/fs/anaconda3/envs/yolox/lib/python3.9/site-packages (from matplotlib->mmdet==2.24.1) (1.3.2)
Requirement already satisfied: setuptools>=18.0 in /home/fs/anaconda3/envs/yolox/lib/python3.9/site-packages (from pycocotools->mmdet==2.24.1) (58.0.4)
Requirement already satisfied: cython>=0.27.3 in /home/fs/anaconda3/envs/yolox/lib/python3.9/site-packages (from pycocotools->mmdet==2.24.1) (0.29.24)
Building wheels for collected packages: mmdet
  Building wheel for mmdet (setup.py) ... done
  Created wheel for mmdet: filename=mmdet-2.24.1-py3-none-any.whl size=1388704 sha256=9a81611a9fbaad39d02b2020a1e909eabd4e707b0295defbfa8631741d995e8b
  Stored in directory: /tmp/pip-ephem-wheel-cache-7v2yeye2/wheels/d3/f1/67/f4b9c1d2a9647a900c8b6e1c18b44caa436906f7674dd9aa07
Successfully built mmdet
Installing collected packages: mmdet
Successfully installed mmdet-2.24.1
Successfully installed mmdet.

With that, the open-source object detection library mmdetection has been installed successfully and can be imported and used as normal.
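
To double-check that everything is importable, a minimal sanity check along these lines can be run (just a sketch; the printed versions depend on your own environment):

import torch
import mmcv
import mmdet

# PyTorch version and the CUDA version it was built with (should match the mmcv-full wheel, here cu102/torch1.9.0)
print(torch.__version__, torch.version.cuda)
# mmcv-full version, e.g. 1.5.0
print(mmcv.__version__)
# mmdet version, e.g. 2.24.x
print(mmdet.__version__)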


2. Using MMDetection

The official repository provides several inference demo scripts that use existing pretrained weights: image inference, video inference, and webcam inference.
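
The demos need a config file and a matching pretrained checkpoint. As far as I know, openmim can download both in one step, for example (the destination directory ./checkpoints is my own choice):

mim download mmdet --config faster_rcnn_r50_fpn_1x_coco --dest ./checkpoints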

2.1 Official demos

Image inference

Reference code for image_demo.py:

import asyncio
from argparse import ArgumentParser

from mmdet.apis import (async_inference_detector, inference_detector,
                        init_detector, show_result_pyplot)


def parse_args():
    parser = ArgumentParser()
    parser.add_argument('img', help='Image file')
    parser.add_argument('config', help='Config file')
    parser.add_argument('checkpoint', help='Checkpoint file')
    parser.add_argument('--out-file', default=None, help='Path to output file')
    parser.add_argument(
        '--device', default='cuda:0', help='Device used for inference')
    parser.add_argument(
        '--palette',
        default='coco',
        choices=['coco', 'voc', 'citys', 'random'],
        help='Color palette used for visualization')
    parser.add_argument(
        '--score-thr', type=float, default=0.3, help='bbox score threshold')
    parser.add_argument(
        '--async-test',
        action='store_true',
        help='whether to set async options for async inference.')
    args = parser.parse_args()
    return args


def main(args):
    # build the model from a config file and a checkpoint file
    model = init_detector(args.config, args.checkpoint, device=args.device)
    # test a single image
    result = inference_detector(model, args.img)
    # show the results
    show_result_pyplot(
        model,
        args.img,
        result,
        palette=args.palette,
        score_thr=args.score_thr,
        out_file=args.out_file)


async def async_main(args):
    # build the model from a config file and a checkpoint file
    model = init_detector(args.config, args.checkpoint, device=args.device)
    # test a single image
    tasks = asyncio.create_task(async_inference_detector(model, args.img))
    result = await asyncio.gather(tasks)
    # show the results
    show_result_pyplot(
        model,
        args.img,
        result[0],
        palette=args.palette,
        score_thr=args.score_thr,
        out_file=args.out_file)


if __name__ == '__main__':
    args = parse_args()
    if args.async_test:
        asyncio.run(async_main(args))
    else:
        main(args)
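
As a usage example (the config and checkpoint paths below are only placeholders for my local files and should be adjusted accordingly):

python demo/image_demo.py demo/demo.jpg configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth --out-file result.jpg
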
Video inference

Reference code for video_demo.py:

import argparse

import cv2
import mmcv

from mmdet.apis import inference_detector, init_detector


def parse_args():
    parser = argparse.ArgumentParser(description='MMDetection video demo')
    parser.add_argument('video', help='Video file')
    parser.add_argument('config', help='Config file')
    parser.add_argument('checkpoint', help='Checkpoint file')
    parser.add_argument(
        '--device', default='cuda:0', help='Device used for inference')
    parser.add_argument(
        '--score-thr', type=float, default=0.3, help='Bbox score threshold')
    parser.add_argument('--out', type=str, help='Output video file')
    parser.add_argument('--show', action='store_true', help='Show video')
    parser.add_argument(
        '--wait-time',
        type=float,
        default=1,
        help='The interval of show (s), 0 is block')
    args = parser.parse_args()
    return args


def main():
    args = parse_args()
    assert args.out or args.show, \
        ('Please specify at least one operation (save/show the '
         'video) with the argument "--out" or "--show"')

    model = init_detector(args.config, args.checkpoint, device=args.device)

    video_reader = mmcv.VideoReader(args.video)
    video_writer = None
    if args.out:
        fourcc = cv2.VideoWriter_fourcc(*'mp4v')
        video_writer = cv2.VideoWriter(
            args.out, fourcc, video_reader.fps,
            (video_reader.width, video_reader.height))

    for frame in mmcv.track_iter_progress(video_reader):
        result = inference_detector(model, frame)
        frame = model.show_result(frame, result, score_thr=args.score_thr)
        if args.show:
            cv2.namedWindow('video', 0)
            mmcv.imshow(frame, 'video', args.wait_time)
        if args.out:
            video_writer.write(frame)

    if video_writer:
        video_writer.release()
    cv2.destroyAllWindows()


if __name__ == '__main__':
    main()
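
Usage is analogous; for example (paths again placeholders for local files):

python demo/video_demo.py demo/demo.mp4 configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth --out video_result.mp4
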
Webcam inference

Reference code for webcam_demo.py:

import argparse

import cv2
import torch

from mmdet.apis import inference_detector, init_detector

def parse_args():
    parser = argparse.ArgumentParser(description='MMDetection webcam demo')
    parser.add_argument('config', help='test config file path')
    parser.add_argument('checkpoint', help='checkpoint file')
    parser.add_argument(
        '--device', type=str, default='cuda:0', help='CPU/CUDA device option')
    parser.add_argument(
        '--camera-id', type=int, default=0, help='camera device id')
    parser.add_argument(
        '--score-thr', type=float, default=0.5, help='bbox score threshold')
    args = parser.parse_args()
    return args


def main():
    args = parse_args()
    device = torch.device(args.device)

    model = init_detector(args.config, args.checkpoint, device=device)
    camera = cv2.VideoCapture(args.camera_id)

    print('Press "Esc", "q" or "Q" to exit.')
    while True:
        ret_val, img = camera.read()
        result = inference_detector(model, img)

        ch = cv2.waitKey(1)
        if ch == 27 or ch == ord('q') or ch == ord('Q'):
            break

        model.show_result(
            img, result, score_thr=args.score_thr, wait_time=1, show=True)


if __name__ == '__main__':
    main()
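
On a machine with a camera attached, this can be run as, for example (paths are placeholders):

python demo/webcam_demo.py configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth --camera-id 0
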
2.2 Hands-on test

My test code:

from mmdet.apis import init_detector, inference_detector, show_result_pyplot
import mmcv
import numpy as np
from PIL import Image
import cv2

# Build the model
def get_model():
    # Select the config file and the model weights
    config_file = './mmdetection-2.24.0/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
    checkpoint_file = './checkpoints/faster_rcnn/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'

    # Initialize the model from the config and checkpoint
    model = init_detector(config_file, checkpoint_file, device='cuda:1')
    return model


# Image inference test
def image_infer():
    model = get_model()
    image_path = './mmdetection-2.24.0/demo/demo.jpg'
    result = inference_detector(model, image_path)

    # Save the visualized detection result to a file and dump the raw result array
    model.show_result(image_path, result, out_file='result.jpg')
    np.save('result_array', result)

    # Visualize with the show_result_pyplot API
    # palette choices=['coco', 'voc', 'citys', 'random']
    show_result_pyplot(model, image_path, result,
                       palette='coco',
                       score_thr=0.3,
                       out_file='image_result.jpg')


# Video inference test
def video_infer():
    model = get_model()
    video_path = './mmdetection-2.24.0/demo/demo.mp4'
    fourcc = cv2.VideoWriter_fourcc(*'mp4v')

    video_reader = mmcv.VideoReader(video_path)
    video_writer = cv2.VideoWriter(
            "video_result.mp4", fourcc, video_reader.fps,
            (video_reader.width, video_reader.height))

    # Run inference on each frame and write the rendered result
    for frame in mmcv.track_iter_progress(video_reader):
        result = inference_detector(model, frame)
        frame = model.show_result(frame, result, score_thr=0.3)

        # Display the video live; this runs on a server, so it is not possible here
        # cv2.namedWindow('video', 0)
        # mmcv.imshow(frame, 'video', args.wait_time)

        # Write the rendered frame to the output video
        video_writer.write(frame)

    # Release the writer and close any open windows
    video_writer.release()
    cv2.destroyAllWindows()


# Load and display an image with PIL
def PIL_show(image_path):
    image = Image.open(image_path)
    image.show()


# Load and display an image with OpenCV
def OpenCV_show(image_path):
    image = cv2.imread(image_path)
    cv2.imshow('image', image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()


if __name__ == '__main__':

    image_infer()
    video_infer()
    # OpenCV_show('result.jpg')
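
For reference, in mmdet 2.x the result returned by inference_detector for a detector without a mask head is a list with one (N, 5) array per class, each row being [x1, y1, x2, y2, score]. A small sketch of flattening it into boxes, scores and class indices (parse_result is my own helper, not part of the official API):

import numpy as np

def parse_result(result, score_thr=0.3):
    # Flatten an mmdet 2.x bbox result into boxes, scores and class indices
    boxes, scores, labels = [], [], []
    for class_id, dets in enumerate(result):  # one (N, 5) array per class
        for x1, y1, x2, y2, score in dets:
            if score >= score_thr:
                boxes.append([x1, y1, x2, y2])
                scores.append(score)
                labels.append(class_id)
    return np.array(boxes), np.array(scores), np.array(labels)

# For example, with the array saved above by np.save:
# result = np.load('result_array.npy', allow_pickle=True)
# boxes, scores, labels = parse_result(result)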

However, since this runs on a server through PyCharm, the images and the detected video cannot be displayed or played directly, so I can only save the results and check the effect locally.

  • Image detection result: image_result.jpg

  • One frame of the video detection result: video_result.mp4


References:

1. MMDetection project repository: https://github.com/open-mmlab/mmdetection

2. MMDetection official documentation: https://mmdetection.readthedocs.io
