Sensor fusion of a camera and 2D LIDAR for lane detection and tracking

2019
Yeniaydın, Yasin
This thesis proposes a novel lane detection and tracking algorithm based on sensor fusion of a camera and 2D LIDAR. The proposed method operates on the top-down view of a grayscale image, in which lane pixels are enhanced by convolution with a 1D top-hat kernel. The convolved image is horizontally divided into a predetermined number of regions, and the histogram of each region is computed. Next, the local maxima whose values lie within a predefined ratio of the highest peak of each histogram are selected as candidate lane pixels. In addition, the 2D LIDAR data are segmented to detect objects on the road, and the detected objects are mapped to the top-down view to determine object pixels. Pixels occluded by the detected objects are then turned into background pixels to obtain a modified top-down view. Next, the Hough Transform is applied to the modified top-down view to detect lines. The detected lines are merged based on their slopes and on the intersection points between the lines and the bottom and top borders of the image frame. After the merging process, the best lane pair is selected based on the length, slope, and intersection points of the lines. Lastly, lane detection is carried out on the selected pair using second-order polynomials with similar curvatures for the left and right lane markings. The polynomial coefficients are determined via the least squares method and tracked by a Kalman Filter. In addition, the thesis provides methods for reference trajectory generation and for computing the lateral error and heading error of a vehicle for lane keeping. Computational and experimental evaluations show that the proposed method significantly increases lane detection accuracy.
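The enhancement and histogram steps of the abstract can be sketched as follows. This is a minimal illustration, not the thesis implementation: the kernel width, number of regions, and peak ratio are illustrative assumptions, and the top-down (bird's-eye view) warp is assumed to have already been applied.

```python
import numpy as np

def enhance_lanes(gray, half_width=5):
    """Convolve each image row with a 1D top-hat kernel that responds to
    bright, roughly vertical ridges (lane markings) on a darker background.
    The kernel shape and width are assumptions for this sketch."""
    k = np.concatenate([-np.ones(half_width),
                        2.0 * np.ones(half_width),
                        -np.ones(half_width)])
    out = np.apply_along_axis(
        lambda row: np.convolve(row, k, mode="same"), 1, gray.astype(float))
    # Keep only positive responses (bright ridges).
    return np.clip(out, 0.0, None)

def candidate_columns(enhanced, n_regions=4, ratio=0.5):
    """Split the enhanced image into horizontal bands, build a column-sum
    histogram per band, and keep the local maxima whose value lies within
    `ratio` of the band's highest peak, as candidate lane-pixel columns."""
    bands = np.array_split(enhanced, n_regions, axis=0)
    peaks_per_band = []
    for band in bands:
        hist = band.sum(axis=0)
        thresh = ratio * hist.max()
        is_peak = ((hist >= thresh)
                   & (hist >= np.roll(hist, 1))
                   & (hist >= np.roll(hist, -1)))
        peaks_per_band.append(np.flatnonzero(is_peak))
    return peaks_per_band
```

On a synthetic top-down image with two bright vertical stripes, `candidate_columns(enhance_lanes(img, half_width=2), n_regions=2)` returns the stripe columns in every band.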
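The final fitting step, where the left and right markings share a similar curvature, might be posed as a single joint least-squares problem. The sketch below is an assumption about the model structure (sharing both the quadratic and linear terms across the pair, with separate lateral offsets), and it omits the Kalman filter that tracks the coefficients over time.

```python
import numpy as np

def fit_lane_pair(left_pts, right_pts):
    """Jointly fit x = a*y**2 + b*y + c to both lane markings, sharing the
    curvature term a and heading term b while allowing separate lateral
    offsets c_left and c_right (a parallel-lane assumption, not necessarily
    the exact model used in the thesis).

    left_pts, right_pts: (N, 2) arrays of (y, x) pixel coordinates.
    Returns (a, b, c_left, c_right)."""
    yl, xl = left_pts[:, 0], left_pts[:, 1]
    yr, xr = right_pts[:, 0], right_pts[:, 1]
    # Design matrix columns: [y^2, y, 1 (left only), 1 (right only)].
    A_left = np.column_stack([yl**2, yl, np.ones_like(yl), np.zeros_like(yl)])
    A_right = np.column_stack([yr**2, yr, np.zeros_like(yr), np.ones_like(yr)])
    A = np.vstack([A_left, A_right])
    x = np.concatenate([xl, xr])
    coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)
    return coeffs
```

In a tracking loop, the returned coefficient vector would serve as the measurement for a Kalman filter whose state holds the polynomial coefficients.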

Suggestions

Sensor Fusion of a Camera and 2D LIDAR for Lane Detection
Schmidt, Klaus Verner (2019-04-26)
This paper presents a novel lane detection algorithm based on fusion of camera and 2D LIDAR data. On the one hand, objects on the road are detected via 2D LIDAR. On the other hand, binary bird’s eye view (BEV) images are acquired from the camera data and the locations of objects detected by LIDAR are estimated on the BEV image. In order to remove the noise generated by objects on the BEV, a modified BEV image is obtained, where pixels occluded by the detected objects are turned into background pixels. Then,...
Analysis of vision aided inertial navigation systems
Yuksel, Yigiter; Kaygisiz, H. Burak (2006-04-19)
We propose in this paper a method to integrate inertial navigation systems with electro-optic imaging devices. Our method is based on updating the inertial navigation system in a Kalman filter structure using line-of-sight measurements obtained from a camera. The proposed method is analyzed based on a UAV scenario generated by our trajectory simulator and the results are provided here. The results show that even a single vision aid can improve the performance of the inertial navigation system.
Robot end-effector based sensor integration for tracking moving parts
Konukseven, Erhan İlhan (2000-08-31)
This paper presents a cost-efficient end-effector based infrared proximity sensor integration system and the implementation of a fuzzy-logic control algorithm.
Measurement correction of a set of analog sun sensors via neural network
Sozen, Semsettin Numan; Gokce, Murat; Yavuzyilmaz, Cagatay; Gulmammadov, Farid; Söken, Halil Ersin (2021-06-23)
A Neural Network (NN) based method to improve the accuracy of a set of analog Sun sensors is presented. Analog Sun Sensors are commonly used on satellites due to their reduced cost, small size and low power consumption. However, especially in Earth imaging satellites, they are prone to the Earth albedo effects. Magnitude and direction of albedo change depending on the reflection characteristics of the Earth's surface, position and attitude of the satellite and position of the Sun. The albedo may deteriorate...
Radar target classification method with reduced aspect dependency and improved noise performance using multiple signal classification algorithm
SEÇMEN, MUSTAFA; Sayan, Gönül (Institution of Engineering and Technology (IET), 2009-12-01)
This study introduces a novel aspect and polarisation invariant radar target classification method based on the use of multiple signal classification (MUSIC) algorithm for feature extraction. In the suggested method, for each candidate target at each designated reference aspect, feature matrices called 'MUSIC spectrum matrices (MSMs)' are constructed using the target's scattered data at different late-time intervals. An individual MSM corresponds to a map of a target's natural resonance-related power distri...
Citation Formats
Y. Yeniaydın, “Sensor fusion of a camera and 2D LIDAR for lane detection and tracking,” Thesis (M.S.) -- Graduate School of Natural and Applied Sciences. Electrical and Electronics Engineering., Middle East Technical University, 2019.