Video Content Analysis Method for Audiovisual Quality Assessment

2016-06-08
Konuk, Baris
Zerman, Emin
Nur Yılmaz, Gökçe
Akar, Gözde
In this study, a novel video content analysis method based on spatio-temporal characteristics is presented. The proposed method has been evaluated on several video quality assessment databases, which include videos with different characteristics and distortion types. Test results obtained on these databases demonstrate the robustness and accuracy of the proposed content analysis method. Moreover, the analysis method is employed to examine the performance improvement obtained in audiovisual quality assessment when the video content is taken into consideration.
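
The abstract does not spell out which spatio-temporal features are used. As a rough illustration of how video content is commonly characterized along these two axes, the sketch below computes the spatial information (SI) and temporal information (TI) indicators defined in ITU-T Rec. P.910; the function name and the use of OpenCV and NumPy are assumptions here, not the authors' implementation.

```python
# Minimal SI/TI sketch per ITU-T P.910: SI is the max over frames of the
# std of the Sobel gradient magnitude, TI the max std of frame differences.
# This is a generic content descriptor, not the paper's exact method.
import cv2
import numpy as np

def si_ti(video_path):
    """Return (SI, TI) for a video file; a coarse content descriptor."""
    cap = cv2.VideoCapture(video_path)
    si, ti, prev = [], [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float64)
        gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)   # horizontal gradient
        gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)   # vertical gradient
        si.append(np.hypot(gx, gy).std())        # spatial detail of the frame
        if prev is not None:
            ti.append((gray - prev).std())       # frame-to-frame motion
        prev = gray
    cap.release()
    return (max(si) if si else 0.0), (max(ti) if ti else 0.0)
```

In such a scheme, a static but detailed scene (high SI, low TI) would land in a different content class than fast panning over flat regions (low SI, high TI).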

Suggestions

Content Aware Audiovisual Quality Assessment
Konuk, Baris; Zerman, Emin; Akar, Gözde; Nur Yılmaz, Gökçe (2015-05-19)
In this study, a novel content-aware audiovisual quality assessment (AVQA) method, using a video classification method based on spatio-temporal characteristics, has been proposed and evaluated on the AVQA database created by the University of Plymouth. The proposed AVQA method is evaluated using subjective audio mean opinion scores (MOS) and subjective video MOS. Results indicate that both the classification method and the proposed content-dependent AVQA method are quite satisfactory.
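
The abstract does not give the fusion model itself. A common form in the AVQA literature combines audio MOS and video MOS linearly plus a multiplicative cross term, with coefficients selected per content class; the sketch below is a hypothetical illustration of that idea, and the class names and coefficient values are invented, not taken from the paper.

```python
# Hypothetical content-dependent audiovisual MOS fusion:
#   MOS_av = a + b*MOS_a + c*MOS_v + d*MOS_a*MOS_v
# with (a, b, c, d) chosen by content class. All values are placeholders.
COEFFS = {
    "low_motion":  (0.10, 0.15, 0.45, 0.08),
    "high_motion": (0.10, 0.10, 0.35, 0.12),
}

def audiovisual_mos(mos_audio, mos_video, content_class):
    a, b, c, d = COEFFS[content_class]
    return a + b * mos_audio + c * mos_video + d * mos_audio * mos_video
```

Classifying the content first lets such a predictor weight video quality differently for, say, high-motion material, where visual degradations tend to dominate the overall experience.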
Video segmentation based on audio feature extraction
Atar, Neriman; Akar, Gözde; Department of Electrical and Electronics Engineering (2009)
In this study, an automatic video segmentation and classification system based on audio features is presented. Video sequences are classified into segments such as “speech”, “music”, “crowd”, and “silence”; segments that do not belong to any of these classes are left as “unclassified”. For silence segment detection, a simple threshold comparison is applied to the short-time energy feature of the embedded audio sequence. For the “speech”, “music” and “crowd” segment detection, a multiclass class...
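
The silence detector described above is simple enough to sketch directly: compute the short-time energy of successive audio frames and compare it against a threshold. The frame length, hop size, and threshold value below are assumptions for illustration, not the thesis's parameters.

```python
# Short-time energy silence detection, as the abstract describes: a frame
# is marked silent when its mean squared amplitude falls below a fixed
# threshold. Parameter values here are illustrative.
import numpy as np

def silence_mask(audio, frame_len=1024, hop=512, threshold=1e-4):
    """Return a boolean array, True for frames classified as silence."""
    frames = [audio[i:i + frame_len]
              for i in range(0, len(audio) - frame_len + 1, hop)]
    energy = np.array([np.mean(f.astype(np.float64) ** 2) for f in frames])
    return energy < threshold
```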
Semantic video analysis for surveillance systems
Kardaş, Karani; Coşar, Ahmet; Çiçekli, Fehime Nihan; Department of Computer Engineering (2018)
This thesis presents novel studies on the semantic inference of video events. In this respect, a surveillance video analysis system called SVAS is introduced for the surveillance domain, in which semantic rules and the definitions of event models can be learned or defined by the user for the automatic detection and inference of complex video events. Within the scope of SVAS, an event model method named Interval-Based Spatio-Temporal Model (IBSTM) is proposed. SVAS can learn action models and event models without any pre...
A Spatiotemporal No-Reference Video Quality Assessment Model
Konuk, Baris; Zerman, Emin; Nur Yılmaz, Gökçe; Akar, Gözde (2013-09-18)
Many researchers have been developing objective video quality assessment methods, driven by the increasing demand for measuring the video quality perceived by end users and by the need to speed up advancements in multimedia services. However, most of these methods are either Full-Reference (FR) metrics, which require the original video, or Reduced-Reference (RR) metrics, which need some features extracted from the original video. No-Reference (NR) metrics, on the other hand, do not require any information about the original v...
Silhouette Orientation Volumes for Efficient Fall Detection in Depth Videos
Akagündüz, Erdem; Sengur, Abdulkadir; Wang, Haibo; Ince, Melih Cevdet (2017-05-01)
A novel method to detect human falls in depth videos is presented in this paper. A fast and robust shape sequence descriptor, namely the Silhouette Orientation Volume (SOV), is used to represent actions and classify falls. The SOV descriptor provides high classification accuracy even with a combination of simple associated models, such as Bag-of-Words and the Naive Bayes classifier. Experiments on the public SDU-Fall dataset show that this new approach achieves up to 91.89% fall detection accuracy with a si...
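
The SOV descriptor itself is beyond a short sketch, but the classification stage the abstract names, Bag-of-Words encoding followed by a Naive Bayes classifier, can be illustrated generically. The pipeline below stubs out the descriptors and uses scikit-learn; it is a sketch of the technique under those assumptions, not the authors' code.

```python
# Generic Bag-of-Words + Naive Bayes action classification: local shape
# descriptors (assumed here to be SOV features) are quantized against a
# learned codebook, and each sequence is classified from its histogram.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import MultinomialNB

def bow_histograms(descriptor_sets, kmeans):
    """Quantize each sequence's descriptors into a codeword histogram."""
    k = kmeans.n_clusters
    return np.array([np.bincount(kmeans.predict(d), minlength=k)
                     for d in descriptor_sets])

def train_fall_detector(descriptor_sets, labels, k=64):
    """descriptor_sets: list of (n_i, dim) arrays; labels: 1 = fall, 0 = other."""
    kmeans = KMeans(n_clusters=k, n_init=10).fit(np.vstack(descriptor_sets))
    clf = MultinomialNB().fit(bow_histograms(descriptor_sets, kmeans), labels)
    return kmeans, clf
```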
Citation Formats
B. Konuk, E. Zerman, G. Nur Yılmaz, and G. Akar, “Video Content Analysis Method for Audiovisual Quality Assessment,” 2016, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/53859.