Answer to Q1:
You are not using map_1 and map_2 correctly.
The maps generated by cv2.fisheye.initUndistortRectifyMap map pixel locations in the destination image to pixel locations in the source image, i.e. dst(x, y) = src(mapx(x, y), mapy(x, y)). See the Remapping tutorial in the OpenCV documentation.
In your code, map_1 is used for the x-direction pixel mapping and map_2 for the y-direction pixel mapping. For example, (X_undistorted, Y_undistorted) is a pixel location in the undistorted image; map_1[Y_undistorted, X_undistorted] tells you the x coordinate this pixel maps to in the distorted image, and map_2 gives you the corresponding y coordinate.
Therefore, map_1 and map_2 are useful for constructing an undistorted image from a distorted one, and they are not really suited for the reverse process.
remapped_points = []
for corner in corners2:
    remapped_points.append(
        (map_1[int(corner[0][1]), int(corner[0][0])], map_2[int(corner[0][1]), int(corner[0][0])]))
So this code, which looks up the undistorted pixel locations of the corners, is incorrect. You will need to use the undistortPoints function instead.
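As a minimal sketch (assuming corners2 is the (N, 1, 2) float32 array from cv2.cornerSubPix, and K, d, final_K carry the meanings from your question), passing the new camera matrix as the P argument makes cv2.fisheye.undistortPoints return pixel coordinates in the undistorted image in one call:
import cv2
# With P set, the output is in pixel coordinates of the undistorted
# image rather than normalized coordinates.
undistorted_corners = cv2.fisheye.undistortPoints(corners2, K, d, P=final_K)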
Answer to Q2:
Remapping and undistorting points are different things.
You can think of remapping as constructing the undistorted image from the pixel locations in the undistorted image together with the pixel maps, whereas undistorting points uses the lens distortion model to find the undistorted pixel locations from the original pixel locations.
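For intuition, this is roughly the forward (distorting) direction of OpenCV's fisheye model, which undistortPoints inverts iteratively; a minimal sketch (the function name is mine, not an OpenCV API):
import numpy as np
def fisheye_distort_normalized(x, y, k1, k2, k3, k4):
    # OpenCV fisheye model: theta_d = theta * (1 + k1*theta^2 + k2*theta^4 + k3*theta^6 + k4*theta^8)
    r = np.sqrt(x * x + y * y)
    theta = np.arctan(r)
    theta_d = theta * (1 + k1 * theta**2 + k2 * theta**4 + k3 * theta**6 + k4 * theta**8)
    scale = theta_d / r if r > 1e-8 else 1.0  # theta_d / r -> 1 as r -> 0
    return x * scale, y * scale  # distorted normalized coordinates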
To find the correct pixel locations of the corners in the undistorted image, you need to convert the normalized coordinates of the undistorted points back to pixel coordinates using the newly estimated K. In your case that is final_K, because the undistorted image can be seen as taken by a camera with intrinsics final_K and no distortion (with a small zooming effect).
Here is the modified undistort function:
def undistort_list_of_points(point_list, in_K, in_d, in_K_new):
    K = np.asarray(in_K)
    d = np.asarray(in_d)
    # Input can be list of bbox coords, poly coords, etc.
    # TODO -- Check if point behind camera?
    points_2d = np.asarray(point_list)
    points_2d = points_2d[:, 0:2].astype('float32')
    points2d_undist = np.empty_like(points_2d)
    points_2d = np.expand_dims(points_2d, axis=1)

    # Undistort to normalized coordinates.
    result = np.squeeze(cv2.fisheye.undistortPoints(points_2d, K, d))

    # Project the normalized coordinates back to pixel coordinates
    # with the newly estimated camera matrix.
    K_new = np.asarray(in_K_new)
    fx = K_new[0, 0]
    fy = K_new[1, 1]
    cx = K_new[0, 2]
    cy = K_new[1, 2]
    for i, (px, py) in enumerate(result):
        points2d_undist[i, 0] = px * fx + cx
        points2d_undist[i, 1] = py * fy + cy

    return points2d_undist
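A hypothetical usage, with corners_subpix, K, D, and K_new as defined in the full script below (findChessboardCorners returns corners with shape (N, 1, 2), so reshape them first):
corners_px = corners_subpix.reshape(-1, 2)
corners_undist_px = undistort_list_of_points(corners_px, K, D, K_new)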
Here is my code doing the same thing:
import cv2
import numpy as np
import matplotlib.pyplot as plt
K = np.asarray([[556.3834638575809, 0, 955.3259939726225], [0, 556.2366649196925, 547.3011305411478], [0, 0, 1]])
D = np.asarray([[-0.05165940570900624], [0.0031093602070252167], [-0.0034036648250202746], [0.0003390345044343793]])
print("K:\n", K)
print("D:\n", D.ravel())
# read image and get the original image on the left
image_path = "sample.jpg"
image = cv2.imread(image_path)
image = image[:, :image.shape[1]//2, :]
image_gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
fig = plt.figure()
plt.imshow(image_gray, "gray")
H_in, W_in = image_gray.shape
print("Grayscale Image Dimension:\n", (W_in, H_in))
scale_factor = 1.0
balance = 1.0
img_dim_out = (int(W_in * scale_factor), int(H_in * scale_factor))
# K_out must be defined even when scale_factor == 1.0.
K_out = K.copy()
if scale_factor != 1.0:
    K_out = K * scale_factor
    K_out[2, 2] = 1.0
K_new = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(K_out, D, img_dim_out, np.eye(3), balance=balance)
print("Newly estimated K:\n", K_new)
map1, map2 = cv2.fisheye.initUndistortRectifyMap(K, D, np.eye(3), K_new, img_dim_out, cv2.CV_32FC1)
print("Rectify Map1 Dimension:\n", map1.shape)
print("Rectify Map2 Dimension:\n", map2.shape)
undistorted_image_gray = cv2.remap(image_gray, map1, map2, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
fig = plt.figure()
plt.imshow(undistorted_image_gray, "gray")
ret, corners = cv2.findChessboardCorners(image_gray, (6, 8), cv2.CALIB_CB_ADAPTIVE_THRESH + cv2.CALIB_CB_FAST_CHECK + cv2.CALIB_CB_NORMALIZE_IMAGE)
corners_subpix = cv2.cornerSubPix(image_gray, corners, (3, 3), (-1, -1), (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.1))
# Undistort corner points to normalized coordinates, then project them
# back to pixel coordinates with the newly estimated K.
undistorted_corners = cv2.fisheye.undistortPoints(corners_subpix, K, D)
undistorted_corners = undistorted_corners.reshape(-1, 2)
fx = K_new[0, 0]
fy = K_new[1, 1]
cx = K_new[0, 2]
cy = K_new[1, 2]
undistorted_corners_pixel = np.zeros_like(undistorted_corners)
for i, (x, y) in enumerate(undistorted_corners):
    px = x * fx + cx
    py = y * fy + cy
    undistorted_corners_pixel[i, 0] = px
    undistorted_corners_pixel[i, 1] = py
# Overlay the undistorted corner locations on the undistorted image.
undistorted_image_show = cv2.cvtColor(undistorted_image_gray, cv2.COLOR_GRAY2BGR)
for corner in undistorted_corners_pixel:
    image_corners = cv2.circle(np.zeros_like(undistorted_image_show), (int(corner[0]), int(corner[1])), 15, [0, 255, 0], -1)
    undistorted_image_show = cv2.add(undistorted_image_show, image_corners)
fig = plt.figure()
plt.imshow(undistorted_image_show)
plt.show()
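As a side note, the circle-drawing loop above is just one way to visualize the result; cv2.drawChessboardCorners can do something similar in one call (a sketch, assuming the same 6x8 pattern and the variables above):
corners_draw = undistorted_corners_pixel.reshape(-1, 1, 2).astype(np.float32)
cv2.drawChessboardCorners(undistorted_image_show, (6, 8), corners_draw, ret)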