Camera motion blur and its effect on feature detectors

Üzer, Ferit
Perception, and hence the use of visual sensors, is indispensable in mobile and autonomous robotics. Visual sensors such as cameras rigidly mounted on a robot frame constitute the most common usage scenario. In this case, the motion of the camera due to the motion of the moving platform, as well as the resulting shocks and vibrations, causes a number of distortions in video frame sequences. The two most important ones are the frame-to-frame changes of the line-of-sight (LOS) and the presence of motion blur in individual frames. The latter of these two, namely motion blur, plays a particularly dominant role in determining the performance of many vision algorithms used in mobile robotics. It is caused by the relative motion between the vision sensor and the scene during the exposure time of the frame. Motion blur is clearly an undesirable phenomenon in computer vision not only because it degrades image quality but also because it causes feature extraction procedures to degrade or fail. Although there are many studies on feature-based tracking, navigation, and object recognition algorithms in the computer vision and robotics literature, there is no comprehensive work on the effects of motion blur on different image features and their extraction. In this thesis, a survey of existing models of motion blur and approaches to motion deblurring is presented. We review recent literature on motion blur and deblurring, and we focus our attention on motion blur induced degradation of a number of popular feature detectors. We investigate and characterize this degradation using video sequences captured by the vision system of a mobile legged robot platform. The Harris corner detector, the Canny edge detector, and the Scale Invariant Feature Transform (SIFT) are chosen as popular feature detectors that are most commonly used in mobile robotics applications. 
The performance degradation of these feature detectors due to motion blur is categorized to analyze the effect of legged locomotion on feature performance for perception. These analysis results are obtained as a first step towards the stabilization and restoration of video sequences captured by our experimental legged robotic platform and towards the development of a motion blur robust vision system.
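The abstract describes motion blur as arising from relative camera-scene motion during the exposure time of a frame. A common simplification in the deblurring literature models this as convolution of the sharp image with a linear motion point-spread function (PSF). The sketch below (Python with NumPy; all function names are illustrative, not taken from the thesis) blurs a synthetic checkerboard with a horizontal motion PSF and measures the drop in image gradient energy, the quantity that corner and edge detectors such as Harris and Canny fundamentally rely on.

```python
import numpy as np

def motion_blur_psf(length):
    """1-D horizontal motion PSF: uniform averaging over `length` pixels,
    modelling constant-velocity camera motion during the exposure
    (a simplifying assumption; real blur kernels can be curved)."""
    return np.ones(length) / length

def blur_rows(img, psf):
    """Convolve each image row with the PSF (same-size output)."""
    return np.apply_along_axis(
        lambda row: np.convolve(row, psf, mode="same"), 1, img)

def gradient_energy(img):
    """Sum of squared image gradients; a rough proxy for how much
    'feature content' corner/edge detectors have to work with."""
    gy, gx = np.gradient(img)
    return float(np.sum(gx ** 2 + gy ** 2))

# Synthetic 64x64 checkerboard: strong corners and edges everywhere.
tile = np.kron([[0, 1] * 4, [1, 0] * 4] * 4, np.ones((8, 8)))

sharp_energy = gradient_energy(tile)
blurred_energy = gradient_energy(blur_rows(tile, motion_blur_psf(9)))
print(blurred_energy < sharp_energy)  # blur attenuates the gradients
```

Increasing the PSF length (i.e., faster camera motion or longer exposure) attenuates the gradients further, which is one intuitive way to see why detector performance degrades monotonically with blur magnitude.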


Object Recognition via Local Patch Labelling
Ulusoy, İlkay (2005-03-01)
In recent years the problem of object recognition has received considerable attention from both the machine learning and computer vision communities. The key challenge of this problem is to be able to recognize any member of a category of objects in spite of wide variations in visual appearance due to variations in the form and colour of the object, occlusions, geometrical transformations (such as scaling and rotation), changes in illumination, and potentially non-rigid deformations of the object itself. In...
Multi-Frame motion deblurring of video using the natural oscillatory motion of dexterous legged robots
Gultekin, Gokhan Koray; Saranlı, Afşar (2019-07-18)
Motion blur is a common problem for machine vision applications in legged mobile robotic platforms. Due to the oscillatory walking nature of these platforms, the cameras on them experience disturbances so that most of the captured frames are motion blurred. Motion blur results in loss of information in individual image frames and therefore makes single-frame deblurring an ill-posed problem. The variation in the magnitude of motion blur in consecutive video frames can be exploited to better restore these fra...
Platform motion disturbances decoupling by means of inertial sensors for a motion stabilized gimbal
Mutlu, Deniz; Balkan, Raif Tuna; Platin, Bülent Emre; Department of Mechanical Engineering (2015)
In this study, a method is developed to overcome platform motion based disturbances resulting from kinematic and dynamic interactions between the platform and the gimbal system. The method is confined to using the underlying non-linear relations in order to increase the performance of the system in nearly all of its motion envelope. Sensor requirements and measurement methods are also stated for the developed method. In order to simulate real system conditions, an identification procedure is applied on the system whose ou...
3-D Rigid Body Tracking Using Vision and Depth Sensors
Gedik, O. Serdar; Alatan, Abdullah Aydın (Institute of Electrical and Electronics Engineers (IEEE), 2013-10-01)
In robotics and augmented reality applications, model-based 3-D tracking of rigid objects is generally required, and accurate pose estimates are needed to increase reliability and decrease jitter overall. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages. On the other hand, trackers relying on pure depth sensors are not suitable for AR applications. An automated 3-D tracking algorithm,...
AKSOY, Yagiz; Alatan, Abdullah Aydın (2014-10-30)
Most of the mobile applications require efficient and precise computation of the device pose, and almost every mobile device has inertial sensors already equipped together with a camera. This fact makes sensor fusion quite attractive for increasing efficiency during pose tracking. However, the state-of-the-art fusion algorithms have a major shortcoming: lack of well-defined uncertainty introduced to the system during the prediction stage of the fusion filters. Such a drawback results in determining covarian...
Citation Formats
F. Üzer, “Camera motion blur and its effect on feature detectors,” M.S. - Master of Science, Middle East Technical University, 2010.