Spatial synchronization of audiovisual objects by 3D audio object coding
Date: 2010-12-01
Authors: Günel Kılıç, Banu; Kondoz, Ahmet M.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Free viewpoint video enables the visualisation of a scene from arbitrary viewpoints and directions. However, this flexibility in video rendering poses a challenge for 3D media: achieving spatial synchronicity between the audio and video objects. When the viewpoint is changed, its effect on the perceived audio scene should be considered to avoid mismatches in the perceived positions of audiovisual objects. Spatial audio coding with such flexibility requires first decomposing the sound scene into audio objects, and then synthesising the new scene according to the geometric relations between the A/V capturing setup, the selected viewpoint, and the rendering system. This paper proposes a free viewpoint audio coding framework for 3D media systems utilising multiview cameras and a microphone array. A real-time source separation technique is used for object decomposition, followed by spatial audio coding. Binaural, multichannel sound systems and wave field synthesis systems are addressed. Subjective test results show that the method achieves spatial synchronicity consistently across various viewpoints, which is not possible with conventional recording techniques.
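The geometric relation the abstract describes — re-deriving each audio object's direction whenever the virtual viewpoint moves or rotates — can be sketched in two dimensions as follows. This is an illustrative sketch only, not the paper's method; the function name and coordinates are assumptions, and the real framework operates on separated audio objects and full 3D rendering setups.

```python
import math

def relative_azimuth(source_xy, listener_xy, listener_yaw):
    """Azimuth of a world-fixed sound source in the listener's frame
    (radians, measured CCW from the listener's facing direction)."""
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    return math.atan2(dy, dx) - listener_yaw

# A world-fixed audio object (hypothetical coordinates):
src = (2.0, 2.0)

# Heard from the original viewpoint at the origin, facing +x:
a0 = relative_azimuth(src, (0.0, 0.0), 0.0)          # pi/4: 45 deg to the left

# After a free-viewpoint change (translate and rotate the virtual camera),
# the object must be re-panned to this new azimuth to stay
# spatially synchronized with its visual counterpart:
a1 = relative_azimuth(src, (2.0, 0.0), math.pi / 2)  # 0: straight ahead
```

The same recomputed azimuth would then drive whichever renderer is in use (binaural HRTF selection, multichannel panning, or wave field synthesis driving functions).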
URI: https://hdl.handle.net/11511/30257
DOI: https://doi.org/10.1109/mmsp.2010.5662065
Collections: Graduate School of Informatics, Conference / Seminar
Suggestions
Temporal watermarking of digital video
Koz, A; Alatan, Abdullah Aydın (2003-04-11)
A video watermarking method is presented, based on the temporal sensitivity of Human Visual System (HVS). The method exploits the temporal contrast thresholds of HVS to determine the spatio-temporal locations, where the watermark should be embedded, and the maximum strength of watermark, which still gives imperceptible distortion after watermark insertion. The robustness results indicate that the proposed scheme survives video distortions, such as additive Gaussian noise, ITU H.263+ coding at medium bit rat...
Depth assisted object segmentation in multi-view video
Cigla, Cevahir; Alatan, Abdullah Aydın (2008-01-01)
In this work, a novel and unified approach for multi-view video (MVV) object segmentation is presented. In the first stage, a region-based graph-theoretic color segmentation algorithm is proposed, in which the popular Normalized Cuts segmentation method is improved with some modifications on its graph structure. Segmentation is obtained by recursive bi-partitioning of a weighted graph of an initial over-segmentation mask. The available segmentation mask is also utilized during dense depth map estimation ste...
Region-based parametric motion segmentation using color information
Altunbasak, Y; Eren, Pekin Erhan; Tekalp, AM (1998-01-01)
This paper presents pixel-based and region-based parametric motion segmentation methods for robust motion segmentation with the goal of aligning motion boundaries with those of real objects in a scene. We first describe a two-step iterative procedure for parametric motion segmentation by either motion-vector or motion-compensated intensity matching. We next present a region-based extension of this method, whereby all pixels within a predefined spatial region are assigned the same motion label. These predefi...
Oblivious video watermarking using temporal sensitivity of HVS
Koz, A; Alatan, Abdullah Aydın (2004-04-30)
An oblivious video watermarking method is presented based on the temporal sensitivity of Human Visual System (HVS). The method exploits the temporal contrast thresholds of HVS to determine the maximum strength of watermark, which still gives imperceptible distortion after watermark insertion. Compared to other approaches in the literature, the method guarantees to avoid flickering problem in the watermarked video and gives better robustness results to video distortions, such as additive Gaussian noise, H.26...
Spatial Correlation in Single-Carrier Massive MIMO Systems
Beigiparast, Nader; Güvensen, Gökhan Muzaffer; Ayanoglu, Ender (2020-02-02)
© 2020 IEEE. We present the analysis of a single-carrier massive MIMO system for the frequency selective Gaussian multi-user channel, in both uplink and downlink directions. We develop expressions for the achievable sum rate when there is spatial correlation among antennas at the base station. It is known that the channel matched filter precoder (CMFP) performs the best in a spatially uncorrelated downlink channel. However, we show that, in a spatially correlated downlink channel with two different correlati...
Citation Formats
B. Günel Kılıç and A. M. Kondoz, “Spatial synchronization of audiovisual objects by 3D audio object coding,” 2010, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/30257.