
How to detect lines in OpenCV?

I am trying to detect the lines in a parking lot, as shown below.

[Image: empty parking lot]

What I hope to get is the clear lines and the (x, y) positions where the lines cross. However, the result is not very promising.

[Image: parking lot with Hough lines drawn]

I guess it is due to two main reasons:

  1. Some lines are very broken or missing, even though human eyes can clearly identify them. HoughLinesP can help connect some of the missing segments, but it sometimes joins unrelated lines together, so I'd rather close the gaps myself.

  2. There are some repeated lines.

The general pipeline for the work is shown below:

1. Select some specific colors (white or yellow)

import cv2
import numpy as np

# white color mask (work in HLS space: high lightness = white paint)
img = cv2.imread(filein)            # filein: path to the parking-lot image
image = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
lower = np.uint8([0, 200, 0])
upper = np.uint8([255, 255, 255])
white_mask = cv2.inRange(image, lower, upper)
# yellow color mask (hue roughly 10-40)
lower = np.uint8([10, 0, 100])
upper = np.uint8([40, 255, 255])
yellow_mask = cv2.inRange(image, lower, upper)
# combine the two masks
mask = cv2.bitwise_or(white_mask, yellow_mask)
result = img.copy()
cv2.imshow("mask", mask)

[Image: binary color mask]

2. Repeat erosion and dilation until the image no longer changes, i.e. morphological skeletonization (reference)

height, width = mask.shape
skel = np.zeros([height, width], dtype=np.uint8)
kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))

# iteratively erode the mask and collect the pixels that an opening would
# remove; this accumulates a one-pixel-wide skeleton of the stripes
while np.count_nonzero(mask) != 0:
    eroded = cv2.erode(mask, kernel)
    temp = cv2.dilate(eroded, kernel)
    temp = cv2.subtract(mask, temp)
    skel = cv2.bitwise_or(skel, temp)
    mask = eroded.copy()

cv2.imshow("skel", skel)

[Image: result after erosion and dilation (skeleton)]
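Alternatively, if the opencv-contrib-python package is available, a similar one-pixel-wide skeleton can reportedly be obtained in a single call. A minimal sketch, assuming `mask` is the color mask from step 1 (taken before the loop above overwrites it):

# Sketch (requires opencv-contrib-python): Zhang-Suen thinning of the color mask
thinned = cv2.ximgproc.thinning(mask, thinningType=cv2.ximgproc.THINNING_ZHANGSUEN)
cv2.imshow("thinned", thinned)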

3. Apply Canny edge detection and use HoughLinesP to get the lines

edges = cv2.Canny(skel, 50, 150)
cv2.imshow("edges", edges)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 40, minLineLength=30, maxLineGap=30)
i = 0
for line in lines:                  # each entry has shape (1, 4): x1, y1, x2, y2
    x1, y1, x2, y2 = line[0]
    i += 1
    cv2.line(result, (x1, y1), (x2, y2), (255, 0, 0), 1)
print(i)

cv2.imshow("res", result)
cv2.waitKey(0)

[Image: result after Canny and Hough lines]

I wonder why, after the first step of selecting certain colors, the lines are broken and noisy. I would think that in this step we should do something to make the broken lines complete and less noisy, and then apply Canny and the Hough transform. Any ideas?
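One idea for bridging the small gaps before the skeleton/Canny/Hough steps is a morphological closing on the combined color mask. A minimal sketch, assuming `mask` is the binary mask from step 1 and that a 9x9 rectangular kernel roughly matches the gap size (both would need tuning):

# Sketch: bridge small breaks in the lane markings with a morphological
# closing (dilate then erode) before skeletonizing; the kernel size is a
# guess that depends on the actual stripe width and gap length.
close_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 9))
closed_mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, close_kernel)
cv2.imshow("closed_mask", closed_mask)
cv2.waitKey(0)

The closed mask could then be fed into the skeletonization loop in place of `mask`.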

Here is my pipeline; maybe it will give you some help.

First, convert the image to grayscale and apply a Gaussian blur.

img = cv2.imread('src.png')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

kernel_size = 5
blur_gray = cv2.GaussianBlur(gray,(kernel_size, kernel_size),0)

Second, perform edge detection with Canny.

low_threshold = 50
high_threshold = 150
edges = cv2.Canny(blur_gray, low_threshold, high_threshold)

Then, use HoughLinesP to get the lines. You can adjust the parameters for better performance.

rho = 1  # distance resolution in pixels of the Hough grid
theta = np.pi / 180  # angular resolution in radians of the Hough grid
threshold = 15  # minimum number of votes (intersections in Hough grid cell)
min_line_length = 50  # minimum number of pixels making up a line
max_line_gap = 20  # maximum gap in pixels between connectable line segments
line_image = np.copy(img) * 0  # creating a blank to draw lines on

# Run Hough on edge detected image
# Output "lines" is an array containing endpoints of detected line segments
lines = cv2.HoughLinesP(edges, rho, theta, threshold, np.array([]),
                    min_line_length, max_line_gap)

for line in lines:
    for x1, y1, x2, y2 in line:
        cv2.line(line_image, (x1, y1), (x2, y2), (255, 0, 0), 5)

Finally, draw the lines onto your source image.

# Draw the lines on the  image
lines_edges = cv2.addWeighted(img, 0.8, line_image, 1, 0)
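To inspect or keep the blended result, it can be displayed or written to disk (the file name here is just an example):

cv2.imshow("lines_edges", lines_edges)    # display the overlay
cv2.waitKey(0)
cv2.imwrite("lines_edges.png", lines_edges)    # example output file name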

Here is my final result.

Final Image:

[Image: final result with detected lines]

I'm not sure what exactly you are asking, since there is no question in your posting.

One nice and robust technique to detect line segments is LSD (Line Segment Detector), available in OpenCV since version 3.

Here's some simple, basic C++ code, which can probably be converted to Python easily:

#include <opencv2/opencv.hpp>

int main(int argc, char* argv[])
{
    cv::Mat input = cv::imread("C:/StackOverflow/Input/parking.png");
    cv::Mat gray;
    cv::cvtColor(input, gray, cv::COLOR_BGR2GRAY);

    // create the line segment detector and run it on the grayscale image
    cv::Ptr<cv::LineSegmentDetector> det = cv::createLineSegmentDetector();

    cv::Mat lines;
    det->detect(gray, lines);

    // draw the detected segments on the input image and display it
    det->drawSegments(input, lines);

    cv::imshow("input", input);
    cv::waitKey(0);
    return 0;
}

Giving this result:

[Image: LSD line segments drawn on the parking lot]

This looks better for further processing than your image (no duplicate lines, etc.).
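For reference, a rough Python sketch of the same idea (assuming an OpenCV build that still ships the LSD bindings; they are missing from some 3.4.x/4.x releases for licensing reasons; the image path is just an example):

import cv2

img = cv2.imread("parking.png")                 # example path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

lsd = cv2.createLineSegmentDetector()
lines = lsd.detect(gray)[0]                     # segments as an (N, 1, 4) array
drawn = lsd.drawSegments(img, lines)            # draw them on the color image

cv2.imshow("LSD segments", drawn)
cv2.waitKey(0)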

There are some great answers here to the first part of your question, but as for the second part (finding the line intersections) I'm not seeing a whole lot.

I'd suggest you take a look at the Bentley-Ottmann algorithm.

There are some Python implementations of the algorithm here and here.

Edit: Using VeraPoseidon's Hough lines implementation and the second library linked here, I've managed to get the following result for intersection detection. Credit to Vera and the library author for their good work. The green squares represent detected intersections. There are a few errors, but this seems like a really good starting place to me. It seems as though most of the locations where you actually want to detect an intersection have multiple intersections detected, so you could probably run an appropriately sized window over the image that looks for multiple intersections and deem a true intersection to be one where that window activates.

[Image: Bentley-Ottmann intersections applied to the Hough lines]

Here is the code I used to produce that result:

import cv2
import numpy as np
import isect_segments_bentley_ottmann.poly_point_isect as bot


img = cv2.imread('parking.png')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

kernel_size = 5
blur_gray = cv2.GaussianBlur(gray,(kernel_size, kernel_size),0)

low_threshold = 50
high_threshold = 150
edges = cv2.Canny(blur_gray, low_threshold, high_threshold)

rho = 1  # distance resolution in pixels of the Hough grid
theta = np.pi / 180  # angular resolution in radians of the Hough grid
threshold = 15  # minimum number of votes (intersections in Hough grid cell)
min_line_length = 50  # minimum number of pixels making up a line
max_line_gap = 20  # maximum gap in pixels between connectable line segments
line_image = np.copy(img) * 0  # creating a blank to draw lines on

# Run Hough on edge detected image
# Output "lines" is an array containing endpoints of detected line segments
lines = cv2.HoughLinesP(edges, rho, theta, threshold, np.array([]),
                    min_line_length, max_line_gap)
print(lines)
points = []
for line in lines:
    for x1, y1, x2, y2 in line:
        points.append(((x1 + 0.0, y1 + 0.0), (x2 + 0.0, y2 + 0.0)))
        cv2.line(line_image, (x1, y1), (x2, y2), (255, 0, 0), 5)

lines_edges = cv2.addWeighted(img, 0.8, line_image, 1, 0)
print(lines_edges.shape)
#cv2.imwrite('line_parking.png', lines_edges)

print(points)
intersections = bot.isect_segments(points)
print(intersections)

for inter in intersections:
    a, b = inter
    # mark each intersection with a 3x3 green square
    for i in range(3):
        for j in range(3):
            lines_edges[int(b) + i, int(a) + j] = [0, 255, 0]

cv2.imwrite('line_parking.png', lines_edges)

You can use something like this block of code for a strategy to remove multiple intersections in a small area:

for idx, inter in enumerate(intersections):
    a, b = inter
    match = 0
    # compare against the remaining intersections; any that lie within
    # 15 px in both x and y are merged into their midpoint
    for other_inter in intersections[idx:]:
        if other_inter == inter:
            continue
        c, d = other_inter
        if abs(c - a) < 15 and abs(d - b) < 15:
            match = 1
            intersections[idx] = ((c + a) / 2, (d + b) / 2)
            intersections.remove(other_inter)

    # drop points that had no nearby partner at all
    if match == 0:
        intersections.remove(inter)

Output image: [Image: cleaned-up output]

You'll have to play with the windowing function though.
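As one possible sketch of that windowing idea (the helper name and the 15 px cell size are made up and would need tuning): bin the intersection points into fixed-size cells and keep the mean of every cell that collected more than one point.

from collections import defaultdict

def cluster_intersections(points, cell=15, min_hits=2):
    # bin each (x, y) intersection into a cell x cell window
    bins = defaultdict(list)
    for x, y in points:
        bins[(int(x) // cell, int(y) // cell)].append((x, y))
    # keep one averaged point per window that "activated" (>= min_hits points)
    clustered = []
    for pts in bins.values():
        if len(pts) >= min_hits:
            clustered.append((sum(p[0] for p in pts) / len(pts),
                              sum(p[1] for p in pts) / len(pts)))
    return clustered

# e.g. corners = cluster_intersections(intersections)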

What happens if you adjust maxLineGap or the size of your erosion kernel? Alternatively, you could find the distance between lines. You would have to go through pairs of lines, say ax1,ay1 to ax2,ay2 vs. bx1,by1 to bx2,by2: you can find the point where the line at right angles to a (gradient of -1 over the gradient of a) crosses line b. Basic school geometry and simultaneous equations, something like:

x = (ay1 - by1) / ((by2 - by1) / (bx2 - bx1) + (ax2 - ax1) / (ay2 - ay1))
# then
y = by1 + x * (by2 - by1) / (bx2 - bx1)

and compare (x, y) with (ax1, ay1).

P.S. You might need to add a check for the distance between (ax1, ay1) and (bx1, by1), as some of your lines look to be continuations of other lines, and these might be wrongly eliminated by the closest-point technique.
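A minimal sketch of that closest-point check, using the standard cross-product distance formula rather than solving the simultaneous equations above (the helper name is made up):

import math

def point_to_line_distance(px, py, bx1, by1, bx2, by2):
    # perpendicular distance from (px, py) to the infinite line through
    # (bx1, by1)-(bx2, by2): |cross(b2 - b1, p - b1)| / |b2 - b1|
    dx, dy = bx2 - bx1, by2 - by1
    return abs(dx * (py - by1) - dy * (px - bx1)) / math.hypot(dx, dy)

# e.g. treat line a as a duplicate of line b if both of its endpoints are
# within a few pixels of the line through b (plus the endpoint-distance
# check mentioned above for collinear continuations).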

I am a beginner, but I got something that may be helpful for this question.

Here is a simple way to detect the lines in the image.

[Image: output with contours drawn]

Below is the code, run in Google Colab:

import cv2
import numpy as np
from google.colab.patches import cv2_imshow

!wget https://i.stack.imgur.com/sDQLM.png

# read image
image = cv2.imread("/content/sDQLM.png")

# convert to gray
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# perform binary thresholding
ret, thresh = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)

# find contours (handles both the OpenCV 3.x and 4.x return signatures)
cnts = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]

# draw the contours
thickness = 2
color = (30, 255, 50)
cv2.drawContours(image, cnts, -1, color, thickness)

cv2_imshow(image)  # cv2.imshow() is not supported in Colab
