A novel optical flow-based representation for temporal video segmentation
Date: 2017-01-01
Author: Akpınar, Samet; Alpaslan, Ferda Nur
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats: 200 views, 93 downloads
Temporal video segmentation is a field of multimedia research that splits video data along the time axis into semantically coherent scenes. Among methods addressing temporal video segmentation, detecting scene boundaries is one of the most widely used approaches, which makes the representation of temporal information important. We propose a new temporal video segment representation that formalizes video scenes as sequences of temporal motion-change information. The underlying idea is that a change in the character of the optical flow signals a motion change and a cut between consecutive scenes. The problem is thus reduced to an optical flow-based cut detection problem, from which the concept of the average motion vector is derived. This concept is used to build a pixel-based representation enriched with a novel motion-based approach. Temporal video segment points are then classified as cuts or non-cuts according to the proposed video segment representation. Finally, the proposed method and representation are applied to benchmark data sets, and the results are compared with other state-of-the-art methods.
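The article itself does not include code. As a rough, hypothetical sketch of the average-motion-vector idea the abstract describes, the Python snippet below computes a mean optical-flow vector per frame transition and flags abrupt jumps as cuts. The choice of OpenCV's Farneback dense flow, the function names, and the `threshold` parameter are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: the paper's exact average-motion-vector
# formulation and cut/non-cut classifier are not reproduced here.
# Farneback dense optical flow stands in for whichever flow estimator
# the authors used; `threshold` is a hypothetical parameter.
import cv2
import numpy as np

def average_motion_vectors(video_path):
    """Yield the mean optical-flow vector between consecutive frames."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        return
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        # Average motion vector: mean of per-pixel (dx, dy) displacements.
        yield flow.reshape(-1, 2).mean(axis=0)
        prev_gray = gray
    cap.release()

def detect_cuts(video_path, threshold=4.0):
    """Flag transitions where the average motion vector changes abruptly."""
    cuts, prev_amv = [], None
    for i, amv in enumerate(average_motion_vectors(video_path)):
        if prev_amv is not None:
            # A large jump in the average motion vector suggests a cut
            # between scenes rather than continuous motion within one.
            if np.linalg.norm(amv - prev_amv) > threshold:
                cuts.append(i + 1)  # frame index after the transition
        prev_amv = amv
    return cuts
```

In this reading, within a scene the average motion vector drifts smoothly from frame to frame, while a cut produces a discontinuity; the threshold trades missed cuts against false alarms and would in practice be tuned on the benchmark data sets the paper evaluates on.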
Subject Keywords: Temporal video segmentation, Optical flow, Temporal video segment representation, Average motion vector, Cut detection
URI: https://hdl.handle.net/11511/33193
Journal: TURKISH JOURNAL OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCES
DOI: https://doi.org/10.3906/elk-1608-308
Collections: Department of Computer Engineering, Article
Suggestions
A Graph-Based Approach for Video Scene Detection
Sakarya, Ufuk; Telatar, Ziya (2008-04-22)
In this paper, a graph-based method for video scene detection is proposed. The method is based on a weighted undirected graph. Each shot is a vertex on the graph. Edge weights among the vertices are evaluated by using spatial and temporal similarities of shots. By using the complete information of the graph, a set of the vertices mostly similar to each other and dissimilar to the others is detected. Temporal continuity constraint is achieved on this set. This set is the first detected video scene. The verti...
A modular scheme for 2D/3D conversion of TV broadcast
Knorr, Sebastian; Imre, Evren; Oezkalayci, Burak; Alatan, Abdullah Aydın; Sikora, Thomas (2006-06-16)
The 3D reconstruction from 2D broadcast video is a challenging problem with many potential applications, such as 3DTV, free-viewpoint video or augmented reality. In this paper, a modular system capable of efficiently reconstructing 3D scenes from broadcast video is proposed. The system consists of four constitutive modules: tracking and segmentation, self-calibration, sparse reconstruction and, finally, dense reconstruction. This paper also introduces some novel approaches for moving object segmentation and...
Summarizing video: Content, features, and HMM topologies
Yasaroglu, Y; Alatan, Abdullah Aydın (2003-01-01)
An algorithm is proposed for automatic summarization of multimedia content by segmenting digital video into semantic scenes using HMMs. Various multi-modal low-level features are extracted to determine state transitions in HMMs for summarization. Advantage of using different model topologies and observation sets in order to segment different content types is emphasized and verified by simulations. Performance of the proposed algorithm is also compared with a deterministic scene segmentation method. A better...
Recursive Prediction for Joint Spatial and Temporal Prediction in Video Coding
Kamışlı, Fatih (2014-06-01)
Video compression systems use prediction to reduce redundancies present in video sequences along the temporal and spatial dimensions. Standard video coding systems use either temporal or spatial prediction on a per block basis. If temporal prediction is used, spatial information is ignored. If spatial prediction is used, temporal information is ignored. This may be a computationally efficient approach, but it does not effectively combine temporal and spatial information. In this letter, we provide a framewo...
An embedding technique to determine tau tau backgrounds in proton-proton collision data
Sirunyan, A. M.; et al. (IOP Publishing, 2019-06-01)
An embedding technique is presented to estimate standard model ττ backgrounds from data with minimal simulation input. In the data, the muons are removed from reconstructed μμ events and replaced with simulated tau leptons with the same kinematic properties. In this way, a set of hybrid events is obtained that does not rely on simulation except for the decay of the tau leptons. The challenges in describing the underlying event or the production of associated jets in the simulation are avoided. The technique...
Citation Formats
IEEE
S. Akpınar and F. N. Alpaslan, “A novel optical flow-based representation for temporal video segmentation,” TURKISH JOURNAL OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCES, pp. 3983–3993, 2017, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/33193.