Python AI Vision: Finding Lanes for Autonomous Driving
Hello everyone, I am the CSDN blogger lqj_本人.
My personal blog homepage (I mainly write about WeChat mini programs, front-end development, and Python):
https://blog.csdn.net/lbcyllqj?spm=1011.2415.3001.5343
On Bilibili, you are welcome to follow me: 小淼前端 (小淼前端的個人空間_嗶哩嗶哩_bilibili).
This article explains how autonomous driving finds lanes with Python's AI vision modules; it has been added to our Python column:
https://blog.csdn.net/lbcyllqj/category_12089557.html
Preface
This program demonstrates an AI vision application in Python: finding lanes for autonomous driving.
Recommended reading
(high-quality articles that have made CSDN's trending top 5!)
1. If you do not know how to install or use OpenCV, see this article of mine (it once reached #1 on CSDN's overall trending list):
python進(jìn)階——人工智能視覺識別_lqj_本人的博客-CSDN博客
2. Real-time object tracking with OpenCV-based AI vision (once in the top 5 of CSDN's overall trending list):
python進(jìn)階——人工智能實時目標(biāo)跟蹤_lqj_本人的博客-CSDN博客
3. Mask detection with a real-time voice alarm, built on the PaddleHub and playsound modules (once #1 on CSDN's overall trending list):
python進(jìn)階——AI視覺實現(xiàn)口罩檢測實時語音報警系統(tǒng)_lqj_本人的博客-CSDN博客
Before you start
1. OpenCV grayscale conversion:
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
2. OpenCV edge detection:
Gaussian-blur the image:
blur = cv2.GaussianBlur(gray, (5, 5), 0)
Get the Canny edge image:
canny = cv2.Canny(blur, 50, 150)
3. Basic use of the matplotlib plotting library (a minimal sketch follows right after this list).
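For item 3, here is a minimal sketch of reading an image with OpenCV and displaying it with matplotlib; 1.jpg is the same placeholder test image used later in this article:

import cv2
import matplotlib.pyplot as plt

# Load a test frame from disk (placeholder file name)
image = cv2.imread('1.jpg')
# Convert to grayscale and display; without cmap, matplotlib applies a colour map
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
plt.imshow(gray, cmap='gray')
plt.show()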
Project details
First, we grab a single frame from the live camera feed.
Import the libraries
import cv2
import numpy as np
import matplotlib.pyplot as plt
Edge detection
Convert the frame to grayscale and detect its edges.
def canny(image):
    """1. Convert the image to grayscale"""
    # Convert the frame to a grayscale image
    gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
    """2. Detect the image edges"""
    # Gaussian-blur the image to suppress noise
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    # Get the Canny edge image
    canny = cv2.Canny(blur, 50, 150)
    return canny
image = cv2.imread('1.jpg')
lane_image = np.copy(image)
canny_image = canny(lane_image)
plt.imshow(canny_image)
plt.show()
The plotted result
Because traffic in China drives on the right, we can read the right-lane region straight off the X and Y axis values in the plot: along the X axis, the right lane spans from about (400, 0) to about (1100, 0) at the bottom of the frame, and along the Y axis the visible right lane ends at roughly y = 150. Since (400, 0) to (1100, 0) is about 700 px wide and the lane's end point sits roughly midway between the two bottom corners, we take the end point of the visible right lane to be (700, 150).
Based on the positions worked out above, we can define a triangle that covers the right lane:
def region_of_interest(image):
    height = image.shape[0]
    polygons = np.array([[(400, height), (1100, height), (700, 150)]])
    mask = np.zeros_like(image)
    cv2.fillPoly(mask, polygons, 255)
    return mask

image = cv2.imread('1.jpg')
lane_image = np.copy(image)
canny_image = canny(lane_image)
cv2.imshow('result', region_of_interest(canny_image))
cv2.waitKey(0)
The resulting detection triangle
Generating the mask
The detected region is filled with 255 (white) and the surrounding area with 0 (black).
Sometimes the triangle does not line up exactly with the apex and the two bottom corners we actually see, so we may need to fine-tune it ourselves:
polygons = np.array([[(400,height),(1200,height),(800,200)]])
Then we can crop the image down to the right-lane triangle by applying the mask inside region_of_interest and returning the masked image instead of the bare mask:
masked_image = cv2.bitwise_and(image, mask)  # add this inside region_of_interest and return it
cropped_image = region_of_interest(canny_image)
cv2.imshow('result', cropped_image)
Effect of edge detection plus the mask
The cropped image
Define the lane end-point coordinates
def make_coordinates(image, line_parameters):
    slope, intercept = line_parameters
    print(image.shape)
    y1 = image.shape[0]          # bottom of the frame
    y2 = int(y1 * (3 / 5))       # draw the line up to 3/5 of the frame height
    # invert y = slope * x + intercept to recover the x coordinates
    x1 = int((y1 - intercept) / slope)
    x2 = int((y2 - intercept) / slope)
    return np.array([x1, y1, x2, y2])
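As a quick sanity check with made-up numbers (not values from the article): for a 720-pixel-high frame and a fitted line with slope -1.2 and intercept 900, the same arithmetic gives the following endpoints:

import numpy as np

slope, intercept = -1.2, 900.0        # hypothetical fitted left-lane line
y1 = 720                              # bottom of a 720-pixel-high frame
y2 = int(y1 * (3 / 5))                # 432
x1 = int((y1 - intercept) / slope)    # (720 - 900) / -1.2 = 150
x2 = int((y2 - intercept) / slope)    # (432 - 900) / -1.2 = 390
print(np.array([x1, y1, x2, y2]))     # [150 720 390 432]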
Line detection with the Hough transform
We use OpenCV's built-in cv2.HoughLinesP function with the following parameters:
image: the input image, usually the result of Canny edge detection
rho: distance resolution of the accumulator, in pixels
theta: angle resolution of the accumulator, in radians (np.pi/180 works well)
threshold: accumulator threshold in the Hough plane
minLineLength: minimum accepted line length, in pixels
maxLineGap: maximum allowed gap between segments for them to be linked into one line
lines = cv2.HoughLinesP(cropped_image, 2, np.pi/180, 100, np.array([]), minLineLength=40, maxLineGap=5)
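If you want to inspect what HoughLinesP returns before drawing anything, a minimal sketch like this (assuming cropped_image already holds the masked edge image from above) prints the shape and the first detected segment; each entry has the form [[x1, y1, x2, y2]]:

lines = cv2.HoughLinesP(cropped_image, 2, np.pi / 180, 100, np.array([]),
                        minLineLength=40, maxLineGap=5)
if lines is None:
    print('no line segments detected')
else:
    print(lines.shape)   # (N, 1, 4): N segments, each [[x1, y1, x2, y2]]
    print(lines[0])      # first detected segment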
Drawing the lanes

def display_lines(image, lines):
    line_image = np.zeros_like(image)
    if lines is not None:
        for line in lines:
            # print(line)
            x1, y1, x2, y2 = line.reshape(4)
            cv2.line(line_image, (x1, y1), (x2, y2), (255, 100, 10), 10)
    return line_image
Result image
Blending the drawn lanes with the original frame
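The blend itself only appears in the final loop; a minimal sketch, assuming frame is the current colour frame and line_image comes from display_lines, looks like this:

# Weighted blend: keep the original frame at 80% intensity and overlay the lane lines on top
combo_image = cv2.addWeighted(frame, 0.8, line_image, 1, 1)
cv2.imshow('result', combo_image)
cv2.waitKey(0)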
Position detection in the video stream
def average_slope_intercept(image, lines):
    left_fit = []
    right_fit = []
    if lines is None:
        return None
    for line in lines:
        x1, y1, x2, y2 = line.reshape(4)
        # Fit a straight line (degree-1 polynomial) through the two end points
        parameters = np.polyfit((x1, x2), (y1, y2), 1)
        # print(parameters)
        slope = parameters[0]
        intercept = parameters[1]
        # Negative slope -> left lane line, positive slope -> right lane line
        if slope < 0:
            left_fit.append((slope, intercept))
        else:
            right_fit.append((slope, intercept))
    print(left_fit)
    print(right_fit)
Printing the left and right results
The detected left and right results for each frame
    # (continuing average_slope_intercept)
    left_fit_average = np.average(left_fit, axis=0)
    right_fit_average = np.average(right_fit, axis=0)
    print(left_fit_average, 'left')
    print(right_fit_average, 'right')
    left_line = make_coordinates(image, left_fit_average)
    right_line = make_coordinates(image, right_fit_average)
    return np.array([left_line, right_line])
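Note that if a frame produces no segments on one side, left_fit or right_fit stays empty, np.average then yields nan, and make_coordinates fails when it tries to unpack it. One possible guard, a hedged sketch that is not part of the original code, is to end the function like this instead:

    # Hedged alternative ending for average_slope_intercept (an assumption, not the
    # author's code): only build a lane line for sides that had detections this frame
    lanes = []
    if len(left_fit) > 0:
        lanes.append(make_coordinates(image, np.average(left_fit, axis=0)))
    if len(right_fit) > 0:
        lanes.append(make_coordinates(image, np.average(right_fit, axis=0)))
    return np.array(lanes)   # an empty array simply draws nothing in display_lines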
Load the video stream for the final processing

cap = cv2.VideoCapture('3.mp4')
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:  # stop when the video ends
        break
    canny_image = canny(frame)
    cropped_image = region_of_interest(canny_image)
    lines = cv2.HoughLinesP(cropped_image, 2, np.pi / 180, 100, np.array([]),
                            minLineLength=40, maxLineGap=5)
    averaged_lines = average_slope_intercept(frame, lines)
    line_image = display_lines(frame, averaged_lines)
    combo_image = cv2.addWeighted(frame, 0.8, line_image, 1, 1)
    # cv2.resizeWindow("result", 1080, 960)
    cv2.imshow('result', line_image)  # the complete code below displays combo_image instead
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
Complete code
import cv2
import numpy as np
import matplotlib.pyplot as plt


def make_coordinates(image, line_parameters):
    slope, intercept = line_parameters
    print(image.shape)
    y1 = image.shape[0]
    y2 = int(y1 * (3 / 5))
    x1 = int((y1 - intercept) / slope)
    x2 = int((y2 - intercept) / slope)
    return np.array([x1, y1, x2, y2])


def average_slope_intercept(image, lines):
    left_fit = []
    right_fit = []
    if lines is None:
        return None
    for line in lines:
        x1, y1, x2, y2 = line.reshape(4)
        parameters = np.polyfit((x1, x2), (y1, y2), 1)
        # print(parameters)
        slope = parameters[0]
        intercept = parameters[1]
        if slope < 0:
            left_fit.append((slope, intercept))
        else:
            right_fit.append((slope, intercept))
    # print(left_fit)
    # print(right_fit)
    left_fit_average = np.average(left_fit, axis=0)
    right_fit_average = np.average(right_fit, axis=0)
    print(left_fit_average, 'left')
    print(right_fit_average, 'right')
    left_line = make_coordinates(image, left_fit_average)
    right_line = make_coordinates(image, right_fit_average)
    return np.array([left_line, right_line])


def canny(image):
    """1. Convert the image to grayscale"""
    # Convert the frame to a grayscale image
    gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
    """2. Detect the image edges"""
    # Gaussian-blur the image
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    # Get the Canny edge image
    canny = cv2.Canny(blur, 50, 150)
    return canny


# Each row is a 2D array containing our line coordinates in the form [[x1, y1, x2, y2]].
# These coordinates specify the line's parameters as well as its position in image space,
# making sure each line is drawn in the right place.
def display_lines(image, lines):
    line_image = np.zeros_like(image)
    if lines is not None:
        for line in lines:
            # print(line)
            x1, y1, x2, y2 = line.reshape(4)
            cv2.line(line_image, (x1, y1), (x2, y2), (255, 100, 10), 10)
    return line_image


def region_of_interest(image):
    height = image.shape[0]
    polygons = np.array([[(300, height), (650, height), (500, 150)]])
    mask = np.zeros_like(image)
    cv2.fillPoly(mask, polygons, 255)
    masked_image = cv2.bitwise_and(image, mask)
    return masked_image


# image = cv2.imread('1.png')
# lane_image = np.copy(image)
# canny_image = canny(lane_image)
# cropped_image = region_of_interest(canny_image)
# lines = cv2.HoughLinesP(cropped_image,2,np.pi/180,100,np.array([]),minLineLength=40,maxLineGap=5)
# averaged_lines = average_slope_intercept(lane_image,lines)
# line_image = display_lines(lane_image,averaged_lines)
# combo_image = cv2.addWeighted(lane_image,0.8,line_image,1,1)
# cv2.imshow('result',combo_image)
# cv2.waitKey(0)

cap = cv2.VideoCapture('3.mp4')
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    canny_image = canny(frame)
    cropped_image = region_of_interest(canny_image)
    lines = cv2.HoughLinesP(cropped_image, 2, np.pi / 180, 100, np.array([]),
                            minLineLength=40, maxLineGap=5)
    averaged_lines = average_slope_intercept(frame, lines)
    line_image = display_lines(frame, averaged_lines)
    combo_image = cv2.addWeighted(frame, 0.8, line_image, 1, 1)
    # cv2.resizeWindow("result", 1080, 960)
    cv2.imshow('result', combo_image)
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
Before you use it
Fine-tune the parameters below to fit your own footage:
def region_of_interest(image):
    height = image.shape[0]
    # Adjust these three vertices so the triangle matches the lane region in your video
    polygons = np.array([[(300, height), (650, height), (500, 150)]])
    mask = np.zeros_like(image)
    cv2.fillPoly(mask, polygons, 255)
    masked_image = cv2.bitwise_and(image, mask)
    return masked_image
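To make that tuning easier, one option (an extra helper, not part of the original article's code) is to preview the mask on top of a single frame from your video and adjust the three vertices until the triangle covers your lane:

import cv2
import numpy as np

# Hypothetical tuning helper: overlay the region-of-interest mask on one frame of 3.mp4
cap = cv2.VideoCapture('3.mp4')
ret, frame = cap.read()
cap.release()
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    height = gray.shape[0]
    mask = np.zeros_like(gray)
    polygons = np.array([[(300, height), (650, height), (500, 150)]])
    cv2.fillPoly(mask, polygons, 255)
    # Brighter area in the preview = region kept by the mask
    preview = cv2.addWeighted(gray, 0.7, mask, 0.3, 0)
    cv2.imshow('roi preview', preview)
    cv2.waitKey(0)
    cv2.destroyAllWindows()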