- Use the OpenCV function cv::filter2D to perform some Laplacian filtering for image sharpening
- Use the OpenCV function cv::distanceTransform to obtain a derived representation of a binary image, where the value of each pixel is replaced by its distance to the nearest background pixel
- Use the OpenCV function cv::watershed to isolate objects in the image from the background
Load the source image, check that it loaded without any problems, and then show it:
```python
# Imports needed by the whole sample
import cv2 as cv
import numpy as np
import argparse
import random as rng

rng.seed(12345)  # fix the seed so the random colors are reproducible

# Load the image
parser = argparse.ArgumentParser(description='Code for Image Segmentation with Distance Transform and Watershed Algorithm. Sample code showing how to segment overlapping objects using Laplacian filtering, in addition to Watershed and Distance Transformation')
parser.add_argument('--input', help='Path to input image.', default='cards.png')
args = parser.parse_args()
src = cv.imread(cv.samples.findFile(args.input))
if src is None:
    print('Could not open or find the image:', args.input)
    exit(0)
# Show source image
cv.imshow('Source Image', src)
```
Source image
Change the background from white to black, since this will help us later to extract better results when using the Distance Transform:
```python
src[np.all(src == 255, axis=2)] = 0
```
If you are not quite sure how numpy.all works, see the Numpy All reference listed at the end of this article.
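As a quick illustration of what that one-liner does (a minimal self-contained sketch with a made-up 2x2 image, not part of the tutorial code): `np.all(src == 255, axis=2)` produces a 2-D boolean mask that is True only where all three BGR channels equal 255, i.e. where the pixel is pure white.

```python
import numpy as np

# A tiny 2x2 BGR "image": one pure-white pixel, one near-white, two others
img = np.array([[[255, 255, 255], [254, 255, 255]],
                [[  0,   0, 255], [ 10,  20,  30]]], dtype=np.uint8)

mask = np.all(img == 255, axis=2)   # True only where B, G and R are all 255
print(mask)
# [[ True False]
#  [False False]]

img[mask] = 0                       # set the pure-white pixel to black
print(img[0, 0])                    # -> [0 0 0]
```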
After that, we sharpen our image in order to sharpen the edges of the foreground objects. We apply a Laplacian filter with a quite strong kernel (an approximation of the second derivative):
```python
# Create a kernel that we will use to sharpen our image:
# an approximation of the second derivative, a quite strong kernel
kernel = np.array([[1, 1, 1], [1, -8, 1], [1, 1, 1]], dtype=np.float32)
# Do the Laplacian filtering as it is.
# We need to convert everything to something deeper than CV_8U,
# because the kernel has some negative values,
# and we can expect in general to have a Laplacian image with negative values.
# BUT an 8-bit unsigned int (the one we are working with) can only hold values from 0 to 255,
# so the possible negative numbers would be truncated.
imgLaplacian = cv.filter2D(src, cv.CV_32F, kernel)
sharp = np.float32(src)
imgResult = sharp - imgLaplacian
# convert back to 8 bits
imgResult = np.clip(imgResult, 0, 255)
imgResult = imgResult.astype('uint8')
imgLaplacian = np.clip(imgLaplacian, 0, 255)
imgLaplacian = np.uint8(imgLaplacian)
#cv.imshow('Laplace Filtered Image', imgLaplacian)
cv.imshow('New Sharped Image', imgResult)
```
The main purpose of sharpening is to highlight transitions in intensity. Since the Laplacian is a derivative operator, if the definition used has a negative center coefficient, the Laplacian-filtered image must be subtracted from (rather than added to) the original image to obtain the sharpened result. -- from Digital Image Processing (3rd Edition)
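To see why the destination depth must be CV_32F rather than CV_8U, here is a minimal hedged sketch (a tiny synthetic image of my own, not tutorial data): filtering into an 8-bit destination saturates the negative Laplacian responses to 0, while a float destination keeps them.

```python
import numpy as np
import cv2 as cv

kernel = np.array([[1, 1, 1], [1, -8, 1], [1, 1, 1]], dtype=np.float32)

# A tiny synthetic image: a bright square on a dark background
img = np.zeros((5, 5), dtype=np.uint8)
img[1:4, 1:4] = 200

lap_u8  = cv.filter2D(img, -1, kernel)         # -1: same depth as source (CV_8U),
                                               # negative responses are clipped to 0
lap_f32 = cv.filter2D(img, cv.CV_32F, kernel)  # keeps the negative responses

print(lap_f32.min(), lap_u8.min())             # a large negative value vs. 0
```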
Now we convert the new sharpened source image to grayscale and then to a binary image:
```python
# Create binary image from source image
bw = cv.cvtColor(imgResult, cv.COLOR_BGR2GRAY)
_, bw = cv.threshold(bw, 40, 255, cv.THRESH_BINARY | cv.THRESH_OTSU)
cv.imshow('Binary Image', bw)
```
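Note that when THRESH_OTSU is combined with THRESH_BINARY, the fixed threshold value passed in (40 here) is ignored and the threshold is computed automatically from the image histogram; the first return value of cv.threshold is the value it actually used. A small sanity-check sketch (my own addition, recomputing the grayscale image just for the check):

```python
# With THRESH_OTSU the threshold comes from the histogram, so the 40 is ignored;
# printing the first return value shows what Otsu actually picked.
gray = cv.cvtColor(imgResult, cv.COLOR_BGR2GRAY)
otsu_thresh, _ = cv.threshold(gray, 40, 255, cv.THRESH_BINARY | cv.THRESH_OTSU)
print('Otsu picked threshold:', otsu_thresh)
```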
We are now ready to apply the distance transform to the binary image. In addition, we normalize the output image so that we can visualize and threshold the result:
```python
# Perform the distance transform algorithm
dist = cv.distanceTransform(bw, cv.DIST_L2, 3)
# Normalize the distance image for range = {0.0, 1.0}
# so we can visualize and threshold it
cv.normalize(dist, dist, 0, 1.0, cv.NORM_MINMAX)
cv.imshow('Distance Transform Image', dist)
```
Usage of cv.distanceTransform (a minimal example follows the parameter list below):

cv.distanceTransform(src, distanceType, maskSize[, dst[, dstType]])
- src: input image; a single-channel image of type CV_8U.
- dst: output image of the same size as the input; a single-channel image of type CV_8U or CV_32F.
- distanceType: flag selecting how the distance between two pixels is computed. The commonly used metrics are DIST_L1 (distance = |x1-x2| + |y1-y2|, city-block distance) and DIST_L2 (Euclidean distance).
- maskSize: size of the distance transform mask; the available sizes are DIST_MASK_3 (3×3) and DIST_MASK_5 (5×5).
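To make these parameters concrete, here is the promised minimal sketch (a made-up 5x5 binary image, not part of the tutorial) showing that every non-zero pixel is replaced by its distance to the nearest zero pixel:

```python
import numpy as np
import cv2 as cv

# A 5x5 binary image: a 3x3 block of foreground (255) surrounded by background (0)
bw_demo = np.zeros((5, 5), dtype=np.uint8)
bw_demo[1:4, 1:4] = 255

dist_demo = cv.distanceTransform(bw_demo, cv.DIST_L2, 3)
print(np.round(dist_demo, 2))
# Each foreground pixel now holds its (approximate) Euclidean distance to the
# nearest background pixel; the centre pixel ends up with the largest value.
```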
We threshold the dist image and then perform some morphological operations (namely dilation) in order to extract the peaks from the image above:
```python
# Threshold to obtain the peaks
# This will be the markers for the foreground objects
_, dist = cv.threshold(dist, 0.4, 1.0, cv.THRESH_BINARY)
# Dilate a bit the dist image
kernel1 = np.ones((3,3), dtype=np.uint8)
dist = cv.dilate(dist, kernel1)
cv.imshow('Peaks', dist)
```
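Dilation simply grows the white regions a little, which makes each peak easier to pick up as a separate blob in the next step. A toy hedged sketch (my own array, not tutorial data) of what cv.dilate with a 3x3 all-ones kernel does:

```python
import numpy as np
import cv2 as cv

peak = np.zeros((5, 5), dtype=np.uint8)
peak[2, 2] = 255                        # a single-pixel "peak"

kernel1 = np.ones((3, 3), dtype=np.uint8)
grown = cv.dilate(peak, kernel1)        # the single pixel grows into a 3x3 block

print(np.count_nonzero(peak), '->', np.count_nonzero(grown))   # 1 -> 9
```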
From each blob we then create a seed/marker for the watershed algorithm, with the help of the cv::findContours function:
```python
# Create the CV_8U version of the distance image
# It is needed for findContours()
dist_8u = dist.astype('uint8')
# Find total markers
contours, _ = cv.findContours(dist_8u, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
# Create the marker image for the watershed algorithm
markers = np.zeros(dist.shape, dtype=np.int32)
# Draw the foreground markers
for i in range(len(contours)):
    cv.drawContours(markers, contours, i, (i+1), -1)
# Draw the background marker
cv.circle(markers, (5,5), 3, (255,255,255), -1)
markers_8u = (markers * 10).astype('uint8')
cv.imshow('Markers', markers_8u)
```
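A quick way to check that each object really got its own label before running watershed (a small sketch of my own, not part of the original sample): the marker image should contain 0 for still-unlabelled pixels, 255 for the background seed drawn at (5,5), and one positive label per contour.

```python
# Sanity check: each foreground contour should have its own label (1..N),
# plus 255 for the background seed and 0 for the unlabelled pixels.
print('labels present in markers:', np.unique(markers))
print('number of foreground seeds:', len(contours))
```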
Finally, we can apply the watershed algorithm and visualize the result:
```python
# Perform the watershed algorithm
cv.watershed(imgResult, markers)

#mark = np.zeros(markers.shape, dtype=np.uint8)
mark = markers.astype('uint8')
mark = cv.bitwise_not(mark)
# uncomment this if you want to see what the mark
# image looks like at that point
#cv.imshow('Markers_v2', mark)

# Generate random colors
colors = []
for contour in contours:
    colors.append((rng.randint(0,255), rng.randint(0,255), rng.randint(0,255)))

# Create the result image
dst = np.zeros((markers.shape[0], markers.shape[1], 3), dtype=np.uint8)

# Fill labeled objects with random colors
for i in range(markers.shape[0]):
    for j in range(markers.shape[1]):
        index = markers[i,j]
        if index > 0 and index <= len(contours):
            dst[i,j,:] = colors[index-1]

# Visualize the final image
cv.imshow('Final Result', dst)
cv.waitKey()
```
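After cv.watershed returns, the pixels on the boundaries between regions are set to -1 in markers. If you also want to overlay those watershed lines on the original image, a minimal sketch of my own (not part of the sample; place it just before the final cv.waitKey() call) could look like:

```python
# Pixels labelled -1 by cv.watershed are the region boundaries.
boundaries = markers == -1
overlay = src.copy()
overlay[boundaries] = (0, 0, 255)        # draw the watershed lines in red (BGR)
cv.imshow('Watershed boundaries', overlay)
```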
Code: https://gitee.com/carlzhangweiwen/python-opencv-learn
or: https://github.com/opencv/opencv/blob/master/samples/python/tutorial_code/ImgTrans/distance_transformation/imageSegmentation.py
Pixellib (https://github.com/ayoolaolafenwa/PixelLib) is a library for segmenting objects in images and videos. It supports two main types of image segmentation:

- Semantic segmentation
- Instance segmentation

PixelLib supports two deep learning libraries for image segmentation, namely PyTorch and TensorFlow.
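As a rough sketch of how PixelLib is typically used for instance segmentation (based on my recollection of the project's README; the class name instance_segmentation, the load_model/segmentImage methods and the Mask R-CNN weights file below are assumptions to verify against the current PixelLib documentation, since the API differs between the TensorFlow and PyTorch backends):

```python
# Hedged sketch only -- method names and the weights file are taken from the
# PixelLib README from memory and may differ in newer (PyTorch-based) releases.
from pixellib.instance import instance_segmentation

segment_image = instance_segmentation()
segment_image.load_model("mask_rcnn_coco.h5")   # pretrained Mask R-CNN weights
segment_image.segmentImage("sample.jpg",
                           show_bboxes=True,
                           output_image_name="sample_segmented.jpg")
```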
References:
- Image Segmentation with Distance Transform and Watershed Algorithm: https://docs.opencv.org/4.x/d2/dbd/tutorial_distance_transform.html
- Numpy All, Explained: https://www.sharpsightlabs.com/blog/numpy-all/
- Book: Digital Image Processing (3rd Edition)