Real time color blending of rendered and captured video
Date
2004-12-04
Author
Reinhard, Erik
Akyüz, Ahmet Oğuz
Colbert, Mark
Hughes, Charles
Oconnor, Matthew
Item Usage Stats: 159 views, 0 downloads
Augmented reality involves mixing captured video with rendered elements in real time. For augmented reality to be effective in training and simulation applications, the computer-generated components need to blend in well with the captured video. Straightforward compositing is not sufficient, since the chromatic content of the video and the rendered data may be so different that it is immediately obvious which parts of the composited image were rendered and which were captured. We propose a simple and effective method to color-correct the computer-generated imagery. The method relies on the computation of simple statistics such as mean and variance, but does so in an appropriately chosen color space, which is key to the effectiveness of our approach. By shifting and scaling the pixel data in the rendered stream to take on the mean and variance of the captured video stream, the rendered elements blend in very well. Our implementation currently reads, color-corrects and composites video and rendered streams at a rate of more than 22 frames per second for a 720x480 pixel format. Without color correction, our implementation generates around 30 frames per second, indicating that our approach comes at a reasonably small computational cost.
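The abstract pins down the core operation: match the per-channel mean and variance of the rendered frame to those of the captured frame in a suitable color space, then composite. Below is a minimal sketch of that shift-and-scale step in Python; it is an illustration rather than the paper's implementation, and the use of CIELAB (via scikit-image) as the working color space, along with the function names, are assumptions, since the abstract only says the space is "appropriately chosen".

    import numpy as np
    from skimage import color  # RGB <-> Lab conversion; Lab is an assumed stand-in for the paper's color space

    def match_mean_variance(rendered, captured, eps=1e-6):
        # Shift and scale each channel of `rendered` so its mean and standard
        # deviation match those of `captured`. Both arrays are (H, W, 3) floats
        # already expressed in the working color space.
        out = np.empty_like(rendered)
        for ch in range(3):
            r = rendered[..., ch]
            c = captured[..., ch]
            # Normalize the rendered channel, then impose the captured statistics.
            out[..., ch] = (r - r.mean()) * (c.std() / (r.std() + eps)) + c.mean()
        return out

    def color_correct(rendered_rgb, captured_rgb):
        # Color-correct a rendered RGB frame against a captured RGB frame,
        # using CIELAB as a stand-in for the "appropriately chosen" space.
        lab_rendered = color.rgb2lab(rendered_rgb)
        lab_captured = color.rgb2lab(captured_rgb)
        corrected = match_mean_variance(lab_rendered, lab_captured)
        return np.clip(color.lab2rgb(corrected), 0.0, 1.0)

In the full pipeline the corrected rendered frame would then be composited over the captured frame; the statistics pass itself is cheap, consistent with the abstract's reported drop from roughly 30 to just over 22 frames per second when color correction is enabled at 720x480.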
URI
http://www.ceng.metu.edu.tr/~akyuz/files/blend.pdf
https://hdl.handle.net/11511/73057
Conference Name
Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC) 2004 (1 - 4 December 2004)
Collections
Department of Computer Engineering, Conference / Seminar
Suggestions
Real time panoramic background subtraction on GPU
Büyüksaraç, Serdar; Akar, Gözde; Temizel, Alptekin (2016-05-19)
In this study, we propose a method for real-time panoramic background subtraction using pan-tilt cameras. The proposed method is based on parallelizing the image registration, panorama generation and background subtraction operations to run on a Graphics Processing Unit (GPU). Experimental results showed that GPU usage speeds up the algorithm by a factor of 33 without considerable performance loss and makes real-time operation possible.
Real time FPGA implementation of Full Search video stabilization method
Özsaraç, İsmail; Ulusoy, İlkay (2012-04-20)
The Full Search video stabilization method is implemented on an FPGA to realize real-time performance. The method is also implemented and tested in MATLAB, and the FPGA results are compared against MATLAB's to assess accuracy. The input video is PAL, whose frame period is 40 ms. The FPGA implementation is capable of producing new stabilization data at every PAL frame, which allows the implementation to be classified as real time. Simulation and hardware tests show that FPGA implementation can reach the MATLAB...
Multimodal Stereo Vision Using Mutual Information with Adaptive Windowing
Yaman, Mustafa; Kalkan, Sinan (2013-05-23)
This paper proposes a method for computing disparity maps from a multimodal stereo vision system composed of an infrared and a visible camera pair. The method uses mutual information (MI) as the basic similarity measure, with a segmentation-based adaptive windowing mechanism proposed to greatly enhance the results. On several datasets, we show that (i) our proposal improves the quality of the existing MI formulation, and (ii) our method can provide depth comparable in quality to Kinect depth data.
Prioritized 3D scene reconstruction and rate-distortion efficient representation for video sequences
İmre, Evren; Alatan, Abdullah Aydın; Department of Electrical and Electronics Engineering (2007)
In this dissertation, a novel scheme performing 3D reconstruction of a scene from a 2D video sequence is presented. To this aim, first, the trajectories of the salient features in the scene are determined as a sequence of displacements via a Kanade-Lucas-Tomasi tracker and a Kalman filter. Then, a tentative camera trajectory with respect to a metric reference reconstruction is estimated. All frame pairs are ordered with respect to their amenability to 3D reconstruction by a metric that utilizes the baseline dis...
Metric scale and 6dof pose estimation using a color camera and distance sensors
Ölmez, Burhan; Tuncer, Temel Engin; Department of Electrical and Electronics Engineering (2021-2-26)
Monocular color cameras have been widely used for decades for 6DoF pose estimation and sparse 3D point cloud creation of the environment with SfM, VO, and V-SLAM algorithms. In this thesis, a novel algorithm is presented to estimate the metric scale of a monocular visual odometry algorithm using a distance sensor. The method uses the state-of-the-art visual odometry algorithm Semi-Direct Visual Odometry (SVO) [1] to obtain a sparse 3D point cloud and then matches these points with the measurement...
Citation Formats
E. Reinhard, A. O. Akyüz, M. Colbert, C. Hughes, and M. Oconnor, “Real time color blending of rendered and captured video,” Orlando, United States, 2004, p. 15021, Accessed: 00, 2021. [Online]. Available: http://www.ceng.metu.edu.tr/~akyuz/files/blend.pdf.