Feature Detection Performance Based Benchmarking of Motion Deblurring Methods: Applications to Vision for Legged Robots

2019-02-01
Gultekin, Gokhan Koray
Saranlı, Afşar
Dexterous legged robots can move over variable terrain at high speeds. Locomotion on such terrain causes severe oscillations of the robot body, with severity depending on the surface and the locomotion speed. Camera sensors mounted on such platforms experience the same disturbances, which results in motion blur. Motion blur corrupts the image and destroys information, degrading or eliminating important image features. Although it is a significant problem for legged mobile robots, it is of broader interest since it also affects many handheld and mobile camera applications. Deblurring methods exist in the literature to compensate for this blur; however, most proposed performance metrics focus on the visual quality of the compensated images. From the perspective of computer vision algorithms, feature detection performance is an essential factor in overall vision performance. In this study, we argue that existing image-quality-based metrics are not suitable for assessing deblurring algorithms when their output feeds computer vision in general and legged robotics in particular. To comparatively evaluate deblurring algorithms, we define a novel performance metric based on feature detection accuracy on sharp and deblurred images. We rank the algorithms according to the new metric as well as image-quality-based metrics from the literature, and we experimentally demonstrate that the existing metrics may be poor indicators of algorithm performance and hence poor selection criteria for computer vision applications. Additionally, noting that the literature lacks a suitable data set for evaluating motion blur and its compensation on legged platforms, we develop a comprehensive multi-sensor data set for this purpose. It consists of monocular image sequences collected in synchronization with a low-cost MEMS gyroscope, an accurate fiber-optic gyroscope, and externally measured ground-truth motion data. We use this data set for an extensive benchmarking of prominent motion deblurring methods from the literature in terms of both the existing metrics and the proposed feature-based metric.
IMAGE AND VISION COMPUTING
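
The abstract describes a metric built on feature detection accuracy compared between sharp and deblurred images. As a rough illustration of that idea only (not the paper's actual metric), the sketch below scores a deblurred frame by how many of its detected keypoints coincide, within a small pixel tolerance, with keypoints found in a sharp reference frame; the ORB detector, the 2-pixel tolerance, and the file names are assumptions for the example.

```python
# Hypothetical repeatability-style score: NOT the metric defined in the paper.
# Detector choice (ORB), pixel tolerance, and file names are illustrative.
import cv2
import numpy as np


def detection_agreement(sharp_gray, deblurred_gray, tol_px=2.0):
    """Fraction of keypoints detected in the deblurred image that lie
    within tol_px pixels of a keypoint detected in the sharp image."""
    detector = cv2.ORB_create(nfeatures=1000)
    kp_sharp = detector.detect(sharp_gray, None)
    kp_deblur = detector.detect(deblurred_gray, None)
    if not kp_sharp or not kp_deblur:
        return 0.0
    pts_sharp = np.array([k.pt for k in kp_sharp])    # shape (N, 2)
    pts_deblur = np.array([k.pt for k in kp_deblur])  # shape (M, 2)
    # Distance from each deblurred keypoint to its nearest sharp keypoint.
    d = np.linalg.norm(pts_deblur[:, None, :] - pts_sharp[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return float((nearest <= tol_px).mean())


if __name__ == "__main__":
    # Any grayscale sharp/deblurred pair of the same scene would do.
    sharp = cv2.imread("sharp.png", cv2.IMREAD_GRAYSCALE)
    deblurred = cv2.imread("deblurred.png", cv2.IMREAD_GRAYSCALE)
    print("detection agreement:", detection_agreement(sharp, deblurred))
```

A score of this kind could be computed per frame and aggregated over a sequence to rank deblurring methods; the paper's own feature-based metric should be consulted for the actual definition used in the benchmark.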

Suggestions

Single and multi-frame motion deblurring for legged robots: characterization using a novel FD-AROC performance metric and a comprehensive motion-blur dataset
Gültekin, Gökhan Koray; Saranlı, Afşar; Department of Electrical and Electronics Engineering (2016)
Dexterous legged robots are agile platforms that can move on variable terrain at high speeds. The locomotion of these legged platforms causes oscillations of the robot body which become more severe depending on the surface and locomotion speed. Camera sensors mounted on such platforms experience the same disturbances, hence resulting in motion blur. This is a corruption of the image and results in loss of information which in turn causes degradation or loss of important image features. Most of the studies i...
Learning to Navigate Endoscopic Capsule Robots
Turan, Mehmet; Almalioglu, Yasin; Gilbert, Hunter B.; Mahmood, Faisal; Durr, Nicholas J.; Araujo, Helder; Sari, Alp Eren; Ajay, Anurag; Sitti, Metin (Institute of Electrical and Electronics Engineers (IEEE), 2019-07-01)
Deep reinforcement learning (DRL) techniques have been successful in several domains, such as physical simulations, computer games, and simulated robotic tasks, yet transferring these successful learning concepts from simulation into real-world scenarios still remains a challenge. In this letter, a DRL approach is proposed to learn the continuous control of a magnetically actuated soft capsule endoscope (MASCE). The proposed controller approach can alleviate the need for tedious modeling of complex and ...
Stability and control of planar compass gait walking with series-elastic ankle actuation
KERIMOGLU, Deniz; MORGUL, Omer; Saranlı, Uluç (2017-03-01)
Passive dynamic walking models are capable of capturing basic properties of walking behaviours and can generate stable human-like walking on inclined surfaces without any actuation. The passive compass gait model is among the simplest of such models, consisting of a planar point mass and two stick legs. A number of different actuation methods have been proposed both for this model and its more complex extensions to eliminate the need for a sloped ground, balancing collision losses using gravitational potent...
Reactive Planning and Control of Planar Spring-Mass Running on Rough Terrain
Arslan, Omur; Saranlı, Uluç (2012-06-01)
An important motivation for work on legged robots has always been their potential for high-performance locomotion on rough terrain. Nevertheless, most existing control algorithms for such robots either make rigid assumptions about their environments or rely on kinematic planning at low speeds. Moreover, the traditional separation of planning from control often has a negative impact on the robustness of the system. In this paper, we introduce a new method for dynamic, fully reactive footstep planning for a pla...
Data-driven image captioning via salient region discovery
Kilickaya, Mert; Akkuş, Burak Kerim; Çakıcı, Ruket; Erdem, Aykut; Erdem, Erkut; İKİZLER CİNBİŞ, NAZLI (Institution of Engineering and Technology (IET), 2017-09-01)
In the past few years, automatically generating descriptions for images has attracted a lot of attention in computer vision and natural language processing research. Among the existing approaches, data-driven methods have been proven to be highly effective. These methods compare the given image against a large set of training images to determine a set of relevant images, then generate a description using the associated captions. In this study, the authors propose to integrate an object-based semantic image r...
Citation Formats
G. K. Gultekin and A. Saranlı, “Feature Detection Performance Based Benchmarking of Motion Deblurring Methods: Applications to Vision for Legged Robots,” IMAGE AND VISION COMPUTING, pp. 26–38, 2019, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/39885.