A comparative study of deep reinforcement learning methods and conventional controllers for aerial manipulation
Download
12626135.pdf
Date
2021-02-26
Author
Ünal, Kazım Burak
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats: 554 views, 1210 downloads
Aerial manipulation with unmanned aerial vehicles is increasingly becoming a necessity in many applications. In this thesis, we analyze controller approaches for a bi-rotor aerial manipulator performing a pick-and-place operation. First, we compare a classical control approach with minimum snap trajectory generation against deep reinforcement learning actor-critic algorithms for control of the aerial manipulator. Furthermore, we examine the effect of the manipulator's degrees of freedom on the deep reinforcement learning approaches, and analyze how changing the goal position of the object the aerial manipulator must carry affects training. Moreover, to obtain faster convergence for the learning approaches, we add informative initial states in which the aerial manipulator starts with the object it needs to carry already grasped. Our results in a 2D simulation environment for aerial manipulation suggest that all of the actor-critic algorithms yield valuable results, with off-policy algorithms being more sample efficient; still, these algorithms have stability issues and fail the task in some cases. The classical controller approach does not have this problem, but completes the task more slowly than the deep reinforcement learning approaches.
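As a rough illustration of the actor-critic family the abstract refers to (not the thesis's own implementation, which uses a bi-rotor aerial manipulator simulation), the sketch below trains a tabular one-step actor-critic on a hypothetical 1-D "reach the goal" toy task. All names and the environment are stand-ins invented for this example; the key idea shown is that a single TD error drives both the critic (value) update and the actor (policy) update.

```python
import math
import random

# Toy 1-D task: states 0..4, start at 0, goal at 4.
# Hypothetical stand-in for the aerial-manipulator environment.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # move left / move right

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + a))
    done = (s2 == GOAL)
    return s2, (1.0 if done else -0.01), done  # small step cost, goal reward

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    z = sum(e)
    return [x / z for x in e]

prefs = [[0.0, 0.0] for _ in range(N_STATES)]   # actor: action preferences
values = [0.0] * N_STATES                       # critic: state values
alpha, beta, gamma = 0.2, 0.2, 0.95
random.seed(0)

for _ in range(300):
    s, done = 0, False
    while not done:
        pi = softmax(prefs[s])
        a = 0 if random.random() < pi[0] else 1
        s2, r, done = step(s, ACTIONS[a])
        # One-step TD error: the shared learning signal.
        td = r + (0.0 if done else gamma * values[s2]) - values[s]
        values[s] += beta * td
        # Softmax policy-gradient update on the actor's preferences.
        for b in range(2):
            prefs[s][b] += alpha * td * ((1.0 if b == a else 0.0) - pi[b])
        s = s2

# After training, the actor should prefer moving right in every non-goal state.
print(all(p[1] > p[0] for p in prefs[:GOAL]))
```

The thesis's off-policy variants (e.g. replay-buffer-based actor-critic methods) differ mainly in how transitions are stored and reused, which is what buys the sample efficiency the abstract mentions.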
Subject Keywords
Aerial manipulation, Deep reinforcement learning, Classical control
URI
https://hdl.handle.net/11511/89658
Collections
Graduate School of Natural and Applied Sciences, Thesis
Suggestions
A simulation study of ad hoc networking of UAVs with opportunistic resource utilization networks
Lilien, Leszek T.; BEN OTHMANE, Lotfi; Angın, Pelin; DECARLO, Andrew; Salih, Raed M.; BHARGAVA, Bharat (Elsevier BV, 2014-02-01)
Specialized ad hoc networks of unmanned aerial vehicles (UAVs) have been playing increasingly important roles in applications for homeland defense and security. Common resource virtualization techniques are mainly designed for stable networks; they fall short in providing optimal performance in more dynamic networks such as mobile ad hoc networks (MANETs), due to their highly dynamic and unstable nature. We propose application of Opportunistic Resource Utilization Networks (Oppnets), a novel type of MANETs, ...
An analytical approach for modelling unmanned aerial vehicles and base station interaction for disaster recovery scenarios
Owilla, Eugene; Ever, Enver; Computer Engineering (2022-8)
Unmanned Aerial Vehicles (UAVs) are an emerging technology with the potential to be used in various sectors for a wide array of applications and services. In wireless networking, UAVs are a vital part of supplementary infrastructure aimed at improving coverage principally during public safety crises. Due to their relatively low cost and scalability of use, there has been a mushrooming focus into the roles that UAVs can play in ameliorating service provided to stranded ground devices. Following a public sa...
A flexible reference point-based multi-objective evolutionary algorithm: An application to the UAV route planning problem
DAŞDEMİR, ERDİ; Köksalan, Mustafa Murat; TEZCANER ÖZTÜRK, DİCLEHAN (Elsevier BV, 2020-02-01)
We study the multi-objective route planning problem of an unmanned air vehicle (UAV) moving in a continuous terrain. In this problem, the UAV starts from a base, visits all targets and returns to the base in a continuous terrain that is monitored by radars. We consider two objectives: minimizing total distance and minimizing radar detection threat. This problem has infinitely many Pareto-optimal points and generating all those points is not possible. We develop a general preference-based multi-objective evo...
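The Pareto-optimality notion in the route-planning abstract above can be made concrete with a small sketch. This is a hypothetical illustration (the objective values and route tuples below are invented, not from the cited paper): a route dominates another if it is no worse in both minimized objectives (total distance, radar detection threat) and strictly better in at least one, and the Pareto front is the set of non-dominated routes.

```python
def dominates(a, b):
    """Route a dominates route b when a is no worse in every objective
    and strictly better in at least one (both objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical routes as (total distance, radar detection threat) pairs.
routes = [(10.0, 0.9), (12.0, 0.4), (9.0, 1.2), (12.5, 0.4), (11.0, 0.7)]
print(sorted(pareto_front(routes)))
# → [(9.0, 1.2), (10.0, 0.9), (11.0, 0.7), (12.0, 0.4)]
```

Since a continuous terrain yields infinitely many such non-dominated points, the cited work evolves only routes near a decision-maker's reference point rather than enumerating the whole front.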
Integrating navigation surveillance of unmanned air vehicles into the civilian national airspaces by using ADS-B applications
Pahsa, Alper; Kaya, Pınar; Alat, Gökçen; Baykal, Buyurman (2011-05-12)
Autonomous Fruit Picking With a Team of Aerial Manipulators
Köse, Tahsincan; Ertekin Bolelli, Şeyda; Department of Computer Engineering (2021-9-7)
Manipulation is the ultimate capability for autonomous micro unmanned aerial vehicles (MAVs), which would enable a substantial number of novel use-cases. Precision agriculture is such a domain with plenty of practical problems that could utilize aerial manipulation, which is faster with respect to ground manipulation. Apple harvesting is the most prominent use case with ever-growing percentages in the overall apple production costs due to increasing imbalance between labor supply and demand. Moreover, conte...
Citation Formats
IEEE
K. B. Ünal, “A comparative study of deep reinforcement learning methods and conventional controllers for aerial manipulation,” M.S. - Master of Science, Middle East Technical University, 2021.