UTILIZATION OF EVENT BASED CAMERAS FOR VIDEO FRAME INTERPOLATION
Date
2022-08-25
Author
Kılıç, Onur Selim
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Item Usage Stats: 227 views, 364 downloads
Abstract
Video Frame Interpolation (VFI) aims to synthesize several frames between two adjacent original video frames. State-of-the-art frame interpolation techniques create intermediate frames by modeling the motion of objects within the frames. However, these approaches rely on a first-order approximation that fails when no information is available between the keyframes. Event cameras are novel sensors that provide additional information during the dead time between frames: they measure per-pixel brightness changes asynchronously, with high temporal resolution and low latency. Algorithms that use both event-based information and the original frames overcome the first-order approximation problem, but they still suffer from ghosting and from regions with insufficient events. This thesis aims to utilize visual transformers to efficiently combine event-based information and RGB frames and produce higher-quality intermediate frames. The results show that the proposed video frame interpolation technique surpasses state-of-the-art methods.
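As a rough illustration of the general idea of fusing event data with RGB keyframes through a transformer (not the architecture developed in this thesis), the sketch below embeds two keyframes and an event voxel grid as tokens, passes them through a small transformer encoder, and decodes an intermediate frame. The voxel-grid input, patch size, channel counts, and token-concatenation fusion are assumptions made only for this example.

```python
# Illustrative sketch only: fuse two RGB keyframes with an event voxel grid
# using a small transformer encoder, then decode an intermediate frame.
# All design choices here are hypothetical and do not reproduce the thesis method.
import torch
import torch.nn as nn

class EventRGBFusionInterpolator(nn.Module):
    def __init__(self, event_bins=5, patch=8, dim=128, depth=4, heads=4):
        super().__init__()
        # Separate patch embeddings for the RGB keyframe pair and the event voxel grid.
        self.rgb_embed = nn.Conv2d(6, dim, kernel_size=patch, stride=patch)      # 2 frames x 3 channels
        self.evt_embed = nn.Conv2d(event_bins, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # Decode the fused RGB-aligned tokens back to a full-resolution frame.
        self.decode = nn.ConvTranspose2d(dim, 3, kernel_size=patch, stride=patch)

    def forward(self, frame0, frame1, events):
        # frame0, frame1: (B, 3, H, W); events: (B, event_bins, H, W) voxel grid
        rgb_tokens = self.rgb_embed(torch.cat([frame0, frame1], dim=1))  # (B, dim, H/p, W/p)
        evt_tokens = self.evt_embed(events)
        b, d, h, w = rgb_tokens.shape
        tokens = torch.cat([rgb_tokens, evt_tokens], dim=2)              # stack token grids
        tokens = tokens.flatten(2).transpose(1, 2)                       # (B, N, dim)
        fused = self.encoder(tokens)                                     # joint RGB/event attention
        # Keep only the RGB-aligned tokens and reshape for decoding.
        fused = fused.transpose(1, 2).reshape(b, d, 2 * h, w)[:, :, :h, :]
        return torch.sigmoid(self.decode(fused))                         # predicted mid frame in [0, 1]

# Usage example with random tensors standing in for real data.
if __name__ == "__main__":
    model = EventRGBFusionInterpolator()
    f0 = torch.rand(1, 3, 64, 64)
    f1 = torch.rand(1, 3, 64, 64)
    ev = torch.rand(1, 5, 64, 64)
    print(model(f0, f1, ev).shape)  # torch.Size([1, 3, 64, 64])
```

In practice, the event stream would first be accumulated into the voxel grid and the network trained with a reconstruction loss against ground-truth intermediate frames; those steps are omitted from this sketch.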
Subject Keywords
Event Based Cameras, Video Frame Interpolation
URI
https://hdl.handle.net/11511/99447
Collections
Graduate School of Natural and Applied Sciences, Thesis
Suggestions
Recursive Prediction for Joint Spatial and Temporal Prediction in Video Coding
Kamışlı, Fatih (2014-06-01)
Video compression systems use prediction to reduce redundancies present in video sequences along the temporal and spatial dimensions. Standard video coding systems use either temporal or spatial prediction on a per block basis. If temporal prediction is used, spatial information is ignored. If spatial prediction is used, temporal information is ignored. This may be a computationally efficient approach, but it does not effectively combine temporal and spatial information. In this letter, we provide a framewo...
FRAME-RATE CONVERSION FOR MULTIVIEW VIDEO EXPLOITING 3D MOTION MODELS
Gedik, O. Serdar; Alatan, Abdullah Aydın (2010-09-29)
A frame-rate conversion (FRC) scheme for increasing the frame-rate of multiview video for reduction of motion blur in hold-type displays is proposed. In order to obtain high quality inter-frames, the proposed method utilizes 3D motion models relying on the 3D scene information extractable from multiview video. First of all, independently moving objects (IMOs) are segmented by using a depth-based object segmentation method. Then, interest points on IMOs are obtained via scale invariant feature transform (SIF...
Error analysis and testing of DRM for frame cameras
Bettemir, Ö. H.; Karslıoğlu, Mahmut Onur; Friedrich, J. (2007-06-16)
A new Differential Rectification Method (DRM) for frame cameras is proposed by Karslioglu and Friedrich [1] for generating orthoimages from monoscopic digital images, mapping every pixel onto a curved reference surface, i.e., WGS84. The ellipsoidal geodetic coordinates of each pixel are calculated directly, provided that a sufficiently precise elevation model is available, avoiding additional earth curvature corrections after the rectification. In this paper, convergence of the differenti...
GIBBS RANDOM FIELD MODEL BASED 3-D MOTION ESTIMATION BY WEAKENED RIGIDITY
Alatan, Abdullah Aydın (1994-01-01)
3-D motion estimation from a video sequence remains a challenging problem. Modelling the local interactions between the 3-D motion parameters is possible by using Gibbs random fields. An energy function which gives the joint probability distribution of the motion vectors, is constructed. The most probable motion vector set is found by maximizing the probability, represented by this distribution. Since the 3-D motion estimation problem is ill-posed, the regularization is achieved by an initial rigidity assum...
Estimation of depth fields suitable for video compression based on 3-D structure and motion of objects
Alatan, Abdullah Aydın (Institute of Electrical and Electronics Engineers (IEEE), 1998-6)
Intensity prediction along motion trajectories removes temporal redundancy considerably in video compression algorithms. In three-dimensional (3-D) object-based video coding, both 3-D motion and depth values are required for temporal prediction. The required 3-D motion parameters for each object are found by the correspondence-based E-matrix method. The estimation of the correspondences, i.e., the two-dimensional (2-D) motion field between the frames, and the segmentation of the scene into objects are achieved simultaneous...
Citation Formats
IEEE
O. S. Kılıç, “UTILIZATION OF EVENT BASED CAMERAS FOR VIDEO FRAME INTERPOLATION,” M.S. - Master of Science, Middle East Technical University, 2022.