A complexity-utility framework for optimizing quality of experience for visual content in mobile devices
Date
2012
Author
Önür, Özgür Deniz
Item Usage Stats: 187 views, 80 downloads
Subjective video quality and video decoding complexity are jointly optimized in order to determine the video encoding parameters that result in the best Quality of Experience (QoE) for an end user watching a video clip on a mobile device. Subjective video quality is estimated with an objective criterion, the Video Quality Metric (VQM), and a method is presented for predicting the video quality of a test sequence from available training sequences with similar content characteristics. Standardized spatial index and temporal index metrics are used to measure content similarity. A statistical approach for modeling decoding complexity on a hardware platform using content features extracted from video clips is presented. The overall decoding complexity is modeled as the sum of component complexities associated with the computation-intensive code blocks present in state-of-the-art hybrid video decoders. The content features and decoding complexities are treated as random variables, and their joint probability density function is modeled with Gaussian Mixture Models (GMMs). These GMMs are obtained off-line using a large training set of video clips. Subsequently, the decoding complexity of a new video clip is estimated using the trained GMM and the content features extracted in real time. A novel method is also proposed for determining the video decoding capacity of mobile terminals through a set of subjective decodability experiments performed once for each device. Finally, the estimated video quality of a content item and the decoding capacity of a device are combined in a utility-complexity framework that optimizes the complexity-quality trade-off to determine the video coding parameters that yield the highest video quality without exceeding the hardware capabilities of the client device. The simulation results indicate that this approach is capable of predicting user viewing satisfaction on a mobile device.
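As a concrete illustration of the GMM-based step described in the abstract, the sketch below shows how the decoding complexity of a new clip could be estimated as the conditional expectation of complexity given its content features, using a mixture model trained offline. This is a minimal sketch under assumed conventions, not the thesis implementation: the function name, the joint-vector layout (content features first, complexity terms last), and the use of NumPy/SciPy are illustrative assumptions.

```python
# Minimal sketch (assumed, not the thesis implementation): estimating decoding
# complexity from content features with a pre-trained Gaussian Mixture Model.
# The GMM is assumed to model the joint density of [content_features, complexity];
# complexity is predicted as the conditional expectation E[complexity | features].
import numpy as np
from scipy.stats import multivariate_normal

def estimate_complexity(features, weights, means, covs, n_feat):
    """Conditional-mean prediction from a GMM over [features, complexity terms].

    features : (n_feat,) content features of the new clip (e.g. SI/TI-style measures).
    weights  : (K,) mixture weights; means : (K, D); covs : (K, D, D),
               where D = n_feat + number of complexity terms.
    Returns the (D - n_feat,) vector of estimated component complexities.
    """
    resp = np.empty(len(weights))
    cond_means = []
    for k, (w, mu, S) in enumerate(zip(weights, means, covs)):
        mu_f, mu_c = mu[:n_feat], mu[n_feat:]
        S_ff = S[:n_feat, :n_feat]
        S_cf = S[n_feat:, :n_feat]
        # Responsibility of component k given only the observed features.
        resp[k] = w * multivariate_normal.pdf(features, mean=mu_f, cov=S_ff)
        # Gaussian conditioning: mean of the complexity terms given the features.
        cond_means.append(mu_c + S_cf @ np.linalg.solve(S_ff, features - mu_f))
    resp /= resp.sum()
    # Mixture of component-wise conditional means, weighted by responsibilities.
    return resp @ np.array(cond_means)
```

Per the abstract, the per-term estimates returned above would then be summed into an overall decoding complexity and compared against the device's measured decoding capacity.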
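The final utility-complexity selection can likewise be sketched: among candidate encoding parameter sets, keep only those whose predicted decoding complexity fits within the device's decoding capacity, and pick the one with the highest predicted quality. The names (`Candidate`, `select_encoding`) and the example parameters are assumptions for illustration, not the thesis's actual interface.

```python
# Minimal sketch (assumed): complexity-constrained quality maximization.
# Each candidate carries a predicted quality (utility) and a predicted decoding
# complexity; we keep the best-quality candidate the device can actually decode.
from dataclasses import dataclass

@dataclass
class Candidate:
    params: dict        # e.g. {"bitrate_kbps": 512, "resolution": "640x360"} (illustrative)
    quality: float      # predicted subjective quality (e.g. a VQM-based estimate)
    complexity: float   # predicted decoding complexity (e.g. cycles per frame)

def select_encoding(candidates, decoding_capacity):
    """Return the feasible candidate with the highest predicted quality, or None."""
    feasible = [c for c in candidates if c.complexity <= decoding_capacity]
    if not feasible:
        return None  # no parameter set fits this device's decoding capacity
    return max(feasible, key=lambda c: c.quality)
```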
Subject Keywords
Computer input-output equipment, Digital electronics, Coding theory, Image processing, Three-dimensional imaging
URI
http://etd.lib.metu.edu.tr/upload/12614088/index.pdf
https://hdl.handle.net/11511/21447
Collections
Graduate School of Natural and Applied Sciences, Thesis
Suggestions
A knapsack model for bandwidth management of prerecorded multiple MPEG video sources
Erten, YM; Gullu, R; Süral, Haldun (2005-01-01)
In this article we provide a framework for controlling the bit rate of multiple prerecorded MPEG video sequences by choosing the quantization factors assigned to individual sources in a way that the total mean square error at the output of the encoder is minimized. We propose and test a knapsack model for the selection of the quantization factors. Our computations based on a set of relatively diverse video sequences reveal that the proposed model achieves a high utilization of the available bandwidth and ac...
A multi-view video codec based on H.264
Bilen, Cagdas; Aksay, Anil; Akar, Gözde (2006-10-11)
H.264 is the current state-of-the-art monoscopic video codec, providing almost twice the coding efficiency of previous codecs at the same quality. With the increasing interest in 3D TV, multi-view video sequences, provided by multiple cameras capturing three-dimensional objects and/or scenes, are more widely used. Compressing multi-view sequences independently with H.264 (simulcast) is not efficient since the redundancy between the closer cameras is not exploited. In order to reduce th...
Intra prediction with 3-tap filters for lossless and lossy video coding
Ranjbar Alvar, Saeed; Kamışlı, Fatih; Department of Electrical and Electronics Engineering (2016)
Video coders are primarily designed for lossy compression. The basic steps in modern lossy video compression are block-based spatial or temporal prediction, transformation of the prediction error block, quantization of the transform coefficients and entropy coding of the quantized coefficients together with other side information. In some cases, this lossy coding architecture may not be efficient for compression. For example, when lossless video compression is desirable, the transform and quantization steps...
Implementation of a distributed video codec
Işık, Cem Vedat; Akar, Gözde; Department of Electrical and Electronics Engineering (2008)
Current interframe video compression standards such as the MPEG4 and H.264, require a high-complexity encoder for predictive coding to exploit the similarities among successive video frames. This requirement is acceptable for cases where the video sequence to be transmitted is encoded once and decoded many times. However, some emerging applications such as video-based sensor networks, power-aware surveillance and mobile video communication systems require computational complexity to be shifted from encoder ...
A real time, low latency, hardware implementation of the 2-D discrete wavelet transformation for streaming image applications
Benderli, O; Tekmen, YC; Ismailoglu, N (2003-08-29)
In this paper, we present a 2-D Discrete Wavelet Transformation (DWT) hardware for applications where row-based raw image data is streamed in at high bandwidths and local buffering of the entire image is not feasible. The latency that is introduced as the images stream through the DWT filter and the amount of locally stored image data is a function of the image and tile size. For an n1 × n2 image processed using (n1/k1) × (n2/k2) sized tiles, the latency is equal to the time elapsed to accum...
Citation Formats
IEEE
Ö. D. Önür, “A complexity-utility framework for optimizing quality of experience for visual content in mobile devices,” Ph.D. - Doctoral Program, Middle East Technical University, 2012.