A Transformer-Based Network for Full Object Pose Estimation with Depth Refinement
Download: Advanced Intelligent Systems - 2024 - Abdulsalam - A Transformer‐Based Network for Full Object Pose Estimation with Depth.pdf
Date: 2024-10-01
Authors: Abdulsalam, Mahmoud; Ahıska, Kenan; Aouf, Nabil
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
In response to the increasing demand for robotic manipulation, accurate vision-based full pose estimation is essential. While convolutional neural network-based approaches have been introduced, the quest for higher performance continues, especially for precise robotic manipulation, including in the agri-robotics domain. This article proposes an improved transformer-based pipeline for full pose estimation that incorporates a Depth Refinement Module. Operating solely on monocular images, the architecture features an innovative Lighter Depth Estimation Network using a feature pyramid with an up-sampling method for depth prediction. A transformer-based detection network with additional prediction heads directly regresses object centers and predicts the full poses of the target objects. A novel Depth Refinement Module is then applied to the predicted centers, full poses, and depth patches to refine the accuracy of the estimated poses. The pipeline's performance is extensively compared with other state-of-the-art methods, and the results are analyzed for fruit-picking applications. The results demonstrate that the pipeline improves pose estimation accuracy by up to 90.79% compared to other methods in the literature.
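The abstract describes a Depth Refinement Module that combines predicted object centers, full poses, and depth patches to correct the estimated pose. As a rough intuition only (this is not the paper's method; the function name, the median statistic, and the pinhole-model rescaling are illustrative assumptions), refining the translation with a measured depth patch might look like:

```python
from statistics import median

def refine_translation(t_pred, depth_patch):
    """Replace the predicted depth z with a robust estimate from a depth
    patch around the predicted object center, then rescale x and y.

    t_pred      -- predicted translation (x, y, z) in the camera frame
    depth_patch -- 2D list of depth values sampled near the object center
    """
    # Robust depth estimate from the patch (median resists outliers).
    z_ref = median(v for row in depth_patch for v in row)
    # Under a pinhole model, x = z * (u - cx) / f, so x and y scale with z.
    s = z_ref / t_pred[2]
    return (t_pred[0] * s, t_pred[1] * s, z_ref)

# Example: predicted depth of 2.0 m corrected to the patch median of 1.0 m.
refined = refine_translation((0.1, 0.2, 2.0), [[1.0, 1.0], [1.0, 3.0]])
```

Any real implementation would also have to fuse the refined translation back with the predicted rotation and handle invalid or missing depth values in the patch.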
URI: https://hdl.handle.net/11511/117811
Journal: ADVANCED INTELLIGENT SYSTEMS
DOI: https://doi.org/10.1002/aisy.202400110
Collections: Department of Electrical and Electronics Engineering, Article
Citation (IEEE):
M. Abdulsalam, K. Ahıska, and N. Aouf, “A Transformer-Based Network for Full Object Pose Estimation with Depth Refinement,” ADVANCED INTELLIGENT SYSTEMS, vol. 6, no. 10, pp. 0–0, 2024, Accessed: 00, 2025. [Online]. Available: https://hdl.handle.net/11511/117811.