Sensor fusion of a camera and 2D LIDAR for lane detection and tracking

Yeniaydın, Yasin
This thesis proposes a novel lane detection and tracking algorithm based on the sensor fusion of a camera and a 2D LIDAR. The proposed method operates on the top-down view of a grayscale image, in which lane pixels are enhanced by convolution with a 1D top-hat kernel. The convolved image is horizontally divided into a predetermined number of regions, and the histogram of each region is computed. Next, the highest-valued local maxima within a predefined ratio of each histogram are selected as candidate lane pixels. In addition, the 2D LIDAR data are segmented to detect objects on the road, which are then mapped to the top-down view to determine object pixels. Pixels occluded by the detected objects are turned into background pixels to obtain a modified top-down view. The Hough Transform is then applied to the modified top-down view to detect lines. The detected lines are merged based on their slopes and their intersection points with the bottom and top borders of the image frame. After the merging step, the best lane pair is selected based on the length, slope, and border intersection points of the lines. Lastly, the selected pair is modeled using second-order polynomials with similar curvatures for the left and right lane markings. The polynomial coefficients are determined via the least-squares method and tracked by a Kalman Filter. The thesis also provides methods for reference trajectory generation and for computing the lateral error and heading error of a vehicle for lane keeping. Computational and experimental evaluations show that the proposed method significantly increases lane detection accuracy.
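The joint polynomial fit described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the function name `fit_lane_pair` is hypothetical, and it assumes the "similar curvatures" constraint is realized as a fully shared quadratic coefficient between the left and right markings, with all five coefficients solved jointly by ordinary least squares.

```python
import numpy as np

def fit_lane_pair(y_left, x_left, y_right, x_right):
    """Jointly fit two second-order polynomials with a shared curvature
    coefficient a (an assumption; the abstract only says the curvatures
    are similar):

        left:  x = a*y**2 + b_l*y + c_l
        right: x = a*y**2 + b_r*y + c_r

    Returns the coefficient vector [a, b_l, c_l, b_r, c_r].
    """
    yl = np.asarray(y_left, dtype=float)
    xl = np.asarray(x_left, dtype=float)
    yr = np.asarray(y_right, dtype=float)
    xr = np.asarray(x_right, dtype=float)

    # Stack both markings into one linear system. The first column (y^2)
    # is shared; the linear and constant terms are per-marking.
    ones_l, zeros_l = np.ones_like(yl), np.zeros_like(yl)
    ones_r, zeros_r = np.ones_like(yr), np.zeros_like(yr)
    A_left = np.column_stack([yl**2, yl, ones_l, zeros_l, zeros_l])
    A_right = np.column_stack([yr**2, zeros_r, zeros_r, yr, ones_r])
    A = np.vstack([A_left, A_right])
    b = np.concatenate([xl, xr])

    # Ordinary least-squares solution of A @ coeffs = b.
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs
```

Solving both markings in one system, rather than fitting each independently, is what enforces the shared curvature; the per-marking intercepts still let the two polynomials sit on opposite sides of the lane.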