[Autonomous Driving from KITTI] -- 6.2 Full-Scene 2D Detection


"Nothing stirs the heart like youth in flight"
  • 1. Refining the Python program
    • 1.1 Multi-object tracking
    • 1.2 Distinguishing object classes by color
  • 2. Porting to ROS
    • 2.1 Reading the tracking data
    • 2.2 Extracting tracked objects and colors
    • 2.3 Rewriting the publish function
  • 3. Result preview
  • 4. Source code
    • 4.1 data_utils.py
    • 4.2 publish_utils.py
    • 4.3 kitti_all.py

This section first labels every object in a single image in a Jupyter notebook, then ports the code to ROS so the result can be visualized in RViz.

The tracking data lives in /kitti_folder/tracking.
The Jupyter project is 2D_tracking_label.ipynb under /test3_autodrive_ws/src/jupyter_prj.
The label files are under /home/qinsir/kitti_folder/tracking/data_tracking_label_2/training/label_02; as you can see, they are plain txt files.
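Each line of these txt files describes one labelled object in one frame, as space-separated fields. A minimal stdlib-only sketch of the format, using the column list defined later in data_utils.py (the sample line below is invented for illustration, not taken from a real label file):

```python
# Column names of the KITTI tracking label format, as used in data_utils.py.
COLUMNS = ['frame', 'track_id', 'type', 'truncated', 'occluded', 'alpha',
           'bbox_left', 'bbox_top', 'bbox_right', 'bbox_bottom',
           'height', 'width', 'length', 'pos_x', 'pos_y', 'pos_z', 'rot_y']

# Invented sample line for illustration only.
sample = "0 1 Car 0 0 -1.57 296.7 161.7 455.5 292.2 1.5 1.6 3.7 -4.6 1.8 13.6 -1.9"

def parse_line(line):
    """Split one space-separated label line into a dict keyed by column name."""
    return dict(zip(COLUMNS, line.split(' ')))

row = parse_line(sample)
print(row['type'], row['bbox_left'])   # Car 296.7
```

In the actual code this parsing is done in one shot by pandas, which is what the next sections use.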

1. Refining the Python program

1.1 Multi-object tracking

boxes = np.array(df[df.frame==frame][["bbox_left", "bbox_top", "bbox_right", "bbox_bottom"]])  # all bounding boxes of the current frame, converted to an array
  • Extract the columns we need and convert them into a single array
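What the pandas expression does is filter the DataFrame down to one frame and project it onto the four bbox columns. The same selection can be sketched in plain Python (toy values, invented for illustration):

```python
# Toy stand-in for the tracking DataFrame: one dict per labelled object.
rows = [
    {'frame': 0, 'bbox_left': 296.7, 'bbox_top': 161.7, 'bbox_right': 455.5, 'bbox_bottom': 292.2},
    {'frame': 0, 'bbox_left': 737.6, 'bbox_top': 161.0, 'bbox_right': 931.1, 'bbox_bottom': 374.0},
    {'frame': 1, 'bbox_left': 294.9, 'bbox_top': 162.0, 'bbox_right': 452.5, 'bbox_bottom': 292.0},
]

frame = 0
# Equivalent of df[df.frame==frame][[...]]: keep the rows of this frame,
# then keep only the four corner columns, as a list of 4-element rows.
boxes = [[r['bbox_left'], r['bbox_top'], r['bbox_right'], r['bbox_bottom']]
         for r in rows if r['frame'] == frame]
print(len(boxes))   # 2 boxes in frame 0
```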

#draw every box
for box in boxes:    # iterating over a 2D array yields one row at a time
    # top-left and bottom-right corners; pixel coordinates must be integers
    top_left = int(box[0]), int(box[1])
    right_down = int(box[2]), int(box[3])
    # draw the rectangle
    cv2.rectangle(image, top_left, right_down, (255, 255, 0), 2)
  • Label every object in the frame

The result looks like this:

1.2 Distinguishing object classes by color
DETECTION_COLOR_DICT = {'Car':(255, 255, 0), 'Pedestrian':(0, 226, 255), 'Cyclist':(141, 40, 255)}
# one box color per object class
  • A dictionary assigning each class its own box color (looked up later)
types = np.array(df[df.frame==0]["type"])
  • Extract the type column of frame 0 from df and convert it to an array
for typ, box in zip(types, boxes):    # iterate over types and boxes in lockstep
    cv2.rectangle(image, top_left, right_down, DETECTION_COLOR_DICT[typ], 2)
  • Zip the class array and the coordinate array together
  • Draw each box in the color of its class
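Stripped of the OpenCV call, the pairing of classes and boxes looks like this (toy values, invented for illustration):

```python
DETECTION_COLOR_DICT = {'Car': (255, 255, 0), 'Pedestrian': (0, 226, 255), 'Cyclist': (141, 40, 255)}

types = ['Car', 'Pedestrian']
boxes = [[296.7, 161.7, 455.5, 292.2], [737.6, 161.0, 931.1, 374.0]]

# zip() walks both arrays in lockstep, so each box is paired with the
# color of its own class.
drawn = []
for typ, box in zip(types, boxes):
    top_left = int(box[0]), int(box[1])      # pixel coordinates must be integers
    right_down = int(box[2]), int(box[3])
    drawn.append((top_left, right_down, DETECTION_COLOR_DICT[typ]))

print(drawn[0])   # ((296, 161), (455, 292), (255, 255, 0))
```

Each tuple in `drawn` is exactly the argument set a `cv2.rectangle` call would receive.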

The result:

2. Porting to ROS

As shown earlier, reading an image and publishing it is enough to view it in RViz. So here we simply read the image, run the tracking annotation on it, and publish the annotated image; no new publisher is needed.

2.1 Reading the tracking data

In data_utils.py:
Port the tracking-file reader straight from the Jupyter notebook:

def read_tracking(path):
    # load the data and name the columns
    df = pd.read_csv(path, header=None, sep=' ')
    df.columns = TRACK_COLUMN_NAMES

    # preprocessing
    df.loc[df.type.isin(['Truck', 'Van', 'Tram']), 'type'] = 'Car'   # fold these three vehicle classes into 'Car'
    df = df[df.type.isin(['Car', 'Pedestrian', 'Cyclist'])]  # keep only these three classes and update df

    return df
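The two pandas lines are a remap followed by a filter. A stdlib-only sketch of the same logic, operating on a plain list of class names:

```python
def remap_and_filter(types):
    """Fold Truck/Van/Tram into Car, then keep only the three classes of interest."""
    folded = ['Car' if t in ('Truck', 'Van', 'Tram') else t for t in types]
    return [t for t in folded if t in ('Car', 'Pedestrian', 'Cyclist')]

print(remap_and_filter(['Van', 'Pedestrian', 'Misc', 'Cyclist', 'Truck']))
# → ['Car', 'Pedestrian', 'Cyclist', 'Car']
```

Note the order matters: remapping first means vans and trucks survive the filter as cars, while classes like 'Misc' are dropped.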

In kitti_all.py:

    df_tracking = read_tracking('/home/qinsir/kitti_folder/tracking/data_tracking_label_2/training/label_02/0000.txt')
  • Read sequence 0000 (with vans and similar vehicles folded into Car, as above) as the tracking data
2.2 Extracting tracked objects and colors
boxes = np.array(df_tracking[df_tracking.frame==frame][["bbox_left", "bbox_top", "bbox_right", "bbox_bottom"]])  # all bounding boxes of the current frame
types = np.array(df_tracking[df_tracking.frame==frame]["type"])
  • boxes: the four corner coordinates needed to draw each box
  • types: the class of each object in the current frame
2.3 Rewriting the publish function

The original publish_camera only published one image per frame:

def publish_camera(cam_pub, bridge, image):
    cam_pub.publish(bridge.cv2_to_imgmsg(image, 'bgr8'))

It now needs the two variables from 2.2 as extra arguments, so it is rewritten as:

def publish_camera(cam_pub, bridge, image, boxes, types):
  • boxes (the corner coordinates) and types (for the colors) are the two new parameters

for typ, box in zip(types, boxes):    # iterate over types and boxes in lockstep
        # top-left and bottom-right corners; pixel coordinates must be integers
        top_left = int(box[0]), int(box[1])
        right_down = int(box[2]), int(box[3])
  • Iterate over the current frame's data
  • Compute the top-left and bottom-right corner of each box

 cv2.rectangle(image, top_left, right_down, DETECTION_COLOR_DICT[typ], 2)
  • Draw each tracking box on the image in its class color
  • Finally, publish from the main function
3. Result preview
  1. roscore
  2. rosrun demo1_kitti_pub_photo kitti_all.py
  3. rviz
    After adding the correct displays in RViz you can see the result.

4. Source code

4.1 data_utils.py
#!/usr/bin/env python3
#coding:utf-8
import os
import cv2
import numpy as np
import pandas as pd

# Column names for each field of the IMU data
IMU_COLUMN_NAMES = ['lat', 'lon', 'alt', 'roll', 'pitch', 'yaw', 'vn', 've', 'vf', 'vl', 'vu', 'ax', 'ay', 'az', 'af',
                    'al', 'au', 'wx', 'wy', 'wz', 'wf', 'wl', 'wu', 'posacc', 'velacc', 'navstat', 'numsats', 'posmode', 
                    'velmode', 'orimode']

# Column names for the tracking label files
TRACK_COLUMN_NAMES = ['frame', 'track_id', 'type', 'truncated', 'occluded', 'alpha', 'bbox_left', 'bbox_top', 
               'bbox_right', 'bbox_bottom', 'height', 'width', 'length', 'pos_x', 'pos_y', 'pos_z', 'rot_y']
def read_camera(path):
    return cv2.imread(path)

def read_point_cloud(path):
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)

def read_imu(path):
    df = pd.read_csv(path, header=None, sep=" ")
    df.columns = IMU_COLUMN_NAMES
    return df

def read_tracking(path):
    # load the data and name the columns
    df = pd.read_csv(path, header=None, sep=' ')
    df.columns = TRACK_COLUMN_NAMES

    # preprocessing
    df.loc[df.type.isin(['Truck', 'Van', 'Tram']), 'type'] = 'Car'   # fold these three vehicle classes into 'Car'
    df = df[df.type.isin(['Car', 'Pedestrian', 'Cyclist'])]  # keep only these three classes and update df

    return df
4.2 publish_utils.py
#!/usr/bin/env python3
#coding:utf-8

import rospy
from std_msgs.msg import Header
from sensor_msgs.msg import Image, PointCloud2, Imu, NavSatFix
from visualization_msgs.msg import Marker, MarkerArray
import sensor_msgs.point_cloud2 as pcl2
from geometry_msgs.msg import Point
from cv_bridge import CvBridge
import numpy as np
import tf
import cv2

FRAME_ID = "map"
DETECTION_COLOR_DICT = {'Car':(255, 255, 0), 'Pedestrian':(0, 226, 255), 'Cyclist':(141, 40, 255)}
# one box color per object class

'''
def publish_camera(cam_pub, bridge, image):
    cam_pub.publish(bridge.cv2_to_imgmsg(image, 'bgr8'))
'''
def publish_camera(cam_pub, bridge, image, boxes, types):
    # draw every box
    for typ, box in zip(types, boxes):    # iterate over types and boxes in lockstep
        # top-left and bottom-right corners; pixel coordinates must be integers
        top_left = int(box[0]), int(box[1])
        right_down = int(box[2]), int(box[3])
        # draw the rectangle
        cv2.rectangle(image, top_left, right_down, DETECTION_COLOR_DICT[typ], 2)
    cam_pub.publish(bridge.cv2_to_imgmsg(image, 'bgr8'))

def publish_point_cloud(pcl_pub, point_cloud):
    header = Header()
    header.frame_id = FRAME_ID
    header.stamp = rospy.Time.now()
    pcl_pub.publish(pcl2.create_cloud_xyz32(header, point_cloud[:, :3]))


def publish_ego_car(ego_car_pub):
    'publish left and right 45 degree FOV and ego car model mesh'
    # header section
    marker = Marker()
    marker.header.frame_id = FRAME_ID
    marker.header.stamp = rospy.Time.now()
    # marker id
    marker.id = 0
    marker.action = Marker.ADD      # add this marker
    marker.lifetime = rospy.Duration()  # lifetime; no argument means it never expires
    marker.type = Marker.LINE_STRIP     # of the many marker types, LINE_STRIP draws a polyline

    marker.color.r = 0.0
    marker.color.g = 1.0
    marker.color.b = 0.0            # line color
    marker.color.a = 1.0            # alpha: 1 = opaque
    marker.scale.x = 0.2            # line width

    # marker payload
    marker.points = []
    # two lines need only three points
    # origin is (0, 0, 0); the other two points follow the KITTI coordinate convention
    marker.points.append(Point(10, -10, 0))
    marker.points.append(Point(0, 0, 0))
    marker.points.append(Point(10, 10, 0))

    ego_car_pub.publish(marker) # done, publish

def publish_car_model(model):
    # header section
    mesh_marker = Marker()
    mesh_marker.header.frame_id = FRAME_ID
    mesh_marker.header.stamp = rospy.Time.now()
    # marker id
    mesh_marker.id = -1
    mesh_marker.lifetime = rospy.Duration()  # lifetime; no argument means it never expires
    mesh_marker.type = Marker.MESH_RESOURCE     # of the many marker types, MESH_RESOURCE loads a mesh
    mesh_marker.mesh_resource = "package://demo1_kitti_pub_photo/mesh/car_model/car.DAE"

    # translation
    mesh_marker.pose.position.x = 0.0
    mesh_marker.pose.position.y = 0.0
    # (0, 0, 0) is the velodyne frame (on the roof); the model sits at ground level, hence the negative z
    mesh_marker.pose.position.z = -1.73

    # rotation
    q = tf.transformations.quaternion_from_euler(np.pi/2, 0, np.pi/2)
    # these angles depend on the downloaded model's default orientation; tuned by eye and converted to a quaternion q
    mesh_marker.pose.orientation.x = q[0]
    mesh_marker.pose.orientation.y = q[1]
    mesh_marker.pose.orientation.z = q[2]
    mesh_marker.pose.orientation.w = q[3]

    # color (white)
    mesh_marker.color.r = 1.0
    mesh_marker.color.g = 1.0
    mesh_marker.color.b = 1.0
    mesh_marker.color.a = 1.0

    # scale factors along x, y, z
    mesh_marker.scale.x = 0.4
    mesh_marker.scale.y = 0.4
    mesh_marker.scale.z = 0.4

    model.publish(mesh_marker) # done, publish

def publish_two_marker(kitti_two_marker):
    # build the MarkerArray
    markerarray = MarkerArray()
    # green FOV lines
    marker = Marker()
    marker.header.frame_id = FRAME_ID
    marker.header.stamp = rospy.Time.now()
    # marker id
    marker.id = 0
    marker.action = Marker.ADD      # add this marker
    marker.lifetime = rospy.Duration()  # lifetime; no argument means it never expires
    marker.type = Marker.LINE_STRIP     # of the many marker types, LINE_STRIP draws a polyline

    marker.color.r = 0.0
    marker.color.g = 1.0
    marker.color.b = 0.0            # line color
    marker.color.a = 1.0            # alpha: 1 = opaque
    marker.scale.x = 0.2            # line width

    # marker payload
    marker.points = []
    # two lines need only three points
    # origin is (0, 0, 0); the other two points follow the KITTI coordinate convention
    marker.points.append(Point(10, -10, 0))
    marker.points.append(Point(0, 0, 0))
    marker.points.append(Point(10, 10, 0))
    # append the first marker
    markerarray.markers.append(marker)

    mesh_marker = Marker()
    mesh_marker.header.frame_id = FRAME_ID
    mesh_marker.header.stamp = rospy.Time.now()
    # marker id
    mesh_marker.id = -1
    mesh_marker.lifetime = rospy.Duration()  # lifetime; no argument means it never expires
    mesh_marker.type = Marker.MESH_RESOURCE     # of the many marker types, MESH_RESOURCE loads a mesh
    mesh_marker.mesh_resource = "package://demo1_kitti_pub_photo/mesh/car_model/car.DAE"

    # translation
    mesh_marker.pose.position.x = 0.0
    mesh_marker.pose.position.y = 0.0
    # (0, 0, 0) is the velodyne frame (on the roof); the model sits at ground level, hence the negative z
    mesh_marker.pose.position.z = -1.73

    # rotation
    q = tf.transformations.quaternion_from_euler(np.pi/2, 0, np.pi/2)
    # these angles depend on the downloaded model's default orientation; tuned by eye and converted to a quaternion q
    mesh_marker.pose.orientation.x = q[0]
    mesh_marker.pose.orientation.y = q[1]
    mesh_marker.pose.orientation.z = q[2]
    mesh_marker.pose.orientation.w = q[3]

    # color (white)
    mesh_marker.color.r = 1.0
    mesh_marker.color.g = 1.0
    mesh_marker.color.b = 1.0
    mesh_marker.color.a = 1.0

    # scale factors along x, y, z
    mesh_marker.scale.x = 0.4
    mesh_marker.scale.y = 0.4
    mesh_marker.scale.z = 0.4
    # append the second marker: the car model
    markerarray.markers.append(mesh_marker)

    # publish
    kitti_two_marker.publish(markerarray)

def publish_imu(imu_pub, imu_data):
    # build the message
    imu = Imu()

    # fill the header
    imu.header.frame_id = FRAME_ID
    imu.header.stamp = rospy.Time.now()

    # orientation
    q = tf.transformations.quaternion_from_euler(float(imu_data.roll), float(imu_data.pitch), float(imu_data.yaw))
    # convert roll, pitch, yaw into the quaternion q the message expects
    imu.orientation.x = q[0]
    imu.orientation.y = q[1]
    imu.orientation.z = q[2]
    imu.orientation.w = q[3]
    # linear acceleration
    imu.linear_acceleration.x = imu_data.af
    imu.linear_acceleration.y = imu_data.al
    imu.linear_acceleration.z = imu_data.au 
    # angular velocity
    imu.angular_velocity.x = imu_data.wf
    imu.angular_velocity.y = imu_data.wl
    imu.angular_velocity.z = imu_data.wu

    # publish
    imu_pub.publish(imu)

def publish_gps(gps_pub, imu_data):
    gps = NavSatFix()

    # fill the header
    gps.header.frame_id = FRAME_ID
    gps.header.stamp = rospy.Time.now()

    # latitude, longitude and altitude
    gps.latitude = imu_data.lat
    gps.longitude = imu_data.lon
    gps.altitude = imu_data.alt    # note: 'alt', not 'al' (which is a linear-acceleration field)

    gps_pub.publish(gps)
4.3 kitti_all.py
#!/usr/bin/env python3
#coding:utf-8
from data_utils import *
from publish_utils import *


DATA_PATH = '/home/qinsir/kitti_folder/2011_09_26/2011_09_26_drive_0005_sync/'

if __name__ == '__main__':
    frame = 0
    rospy.init_node('kitti_node', anonymous=True)   # anonymous=True allows duplicate node names
    cam_pub = rospy.Publisher('kitti_cam', Image, queue_size=10)
    pcl_pub = rospy.Publisher("kitti_point_cloud", PointCloud2, queue_size=10)
    #ego_pub = rospy.Publisher('kitti_ego_car', Marker, queue_size=10)
    #model_pub = rospy.Publisher("kitti_car_model", Marker, queue_size=10)
    two_marker_pub = rospy.Publisher("kitti_two_mark", MarkerArray, queue_size=10)
    imu_pub = rospy.Publisher("kitti_imu", Imu, queue_size=10)  # IMU publisher
    gps_pub = rospy.Publisher("kitti_gps", NavSatFix, queue_size=10)

    bridge = CvBridge()      # bridge between OpenCV images and ROS image messages

    rate = rospy.Rate(10)
    # read sequence 0000 (with vans and similar vehicles folded into Car) as the tracking data
    df_tracking = read_tracking('/home/qinsir/kitti_folder/tracking/data_tracking_label_2/training/label_02/0000.txt')

    while not rospy.is_shutdown():
        boxes = np.array(df_tracking[df_tracking.frame==frame][["bbox_left", "bbox_top", "bbox_right", "bbox_bottom"]])  # all bounding boxes of the current frame
        types = np.array(df_tracking[df_tracking.frame==frame]["type"])
        
        # os.path.join builds the path; %010d zero-pads the frame index to 10 digits (e.g. 0000000001.png)
        img = read_camera(os.path.join(DATA_PATH, 'image_02/data/%010d.png'%frame)) 
        point_cloud = read_point_cloud(os.path.join(DATA_PATH, "velodyne_points/data/%010d.bin"%frame))
        imu_data = read_imu(os.path.join(DATA_PATH, "oxts/data/%010d.txt"%frame))

        publish_camera(cam_pub, bridge, img, boxes, types)
        #publish_camera(cam_pub, bridge, img)
        publish_point_cloud(pcl_pub, point_cloud)
        #publish_ego_car(ego_pub)
        #publish_car_model(model_pub)
        publish_two_marker(two_marker_pub)
        publish_imu(imu_pub, imu_data)
        publish_gps(gps_pub, imu_data)

        rospy.loginfo('new file publish')
        rate.sleep()
        frame += 1
        frame %= 154
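Two small details of the main loop can be checked in isolation: the `%010d` filename pattern and the frame wraparound (154 is the frame count of sequence 0005 used above, indices 0 through 153):

```python
# %010d zero-pads the frame index to ten digits, matching KITTI file names.
frame = 7
name = 'image_02/data/%010d.png' % frame
print(name)   # image_02/data/0000000007.png

# Frame cycling at the end of the loop: after the last frame (153),
# incrementing and taking the modulus wraps the index back to 0.
frame = 153
frame += 1
frame %= 154
print(frame)   # 0
```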
