Recursive Prediction for Joint Spatial and Temporal Prediction in Video Coding

Video compression systems use prediction to reduce the temporal and spatial redundancies present in video sequences. Standard video coding systems apply either temporal or spatial prediction on a per-block basis: if temporal prediction is used, spatial information is ignored, and if spatial prediction is used, temporal information is ignored. While computationally efficient, this approach does not effectively combine temporal and spatial information. In this letter, we provide a framework in which available temporal and spatial information can be combined effectively to perform joint spatial and temporal prediction in video coding. Experimental results obtained from one sample realization of this framework show its potential.
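As a rough illustration of the idea, the sketch below blends a causal spatial estimate (from already-predicted left and top neighbors) with a motion-compensated temporal estimate for each pixel of a block. The function name, the fixed blending weight `w_spatial`, and the simple neighbor average are illustrative assumptions, not the recursion actually derived in the letter.

```python
import numpy as np

def joint_predict_block(mc_block, top_row, left_col, w_spatial=0.5):
    """Predict an n x n block pixel by pixel, blending a causal spatial
    estimate (average of the left and top already-predicted neighbors)
    with the motion-compensated temporal estimate. Illustrative sketch;
    the weights and recursion here are assumed, not the letter's model.

    mc_block : (n, n) motion-compensated prediction from the previous frame
    top_row  : n reconstructed pixels directly above the block
    left_col : n reconstructed pixels directly left of the block
    """
    n = mc_block.shape[0]
    # Work on an (n+1) x (n+1) grid so every pixel has causal neighbors.
    pred = np.zeros((n + 1, n + 1))
    pred[0, 1:] = top_row
    pred[1:, 0] = left_col
    pred[0, 0] = 0.5 * (top_row[0] + left_col[0])
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            spatial = 0.5 * (pred[i - 1, j] + pred[i, j - 1])  # causal spatial estimate
            temporal = mc_block[i - 1, j - 1]                  # temporal estimate
            pred[i, j] = w_spatial * spatial + (1.0 - w_spatial) * temporal
    return pred[1:, 1:]
```

A real codec would derive the blending weights from a model of the source signal rather than fixing them a priori.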


Estimation of depth fields suitable for video compression based on 3-D structure and motion of objects
Alatan, Abdullah Aydın (Institute of Electrical and Electronics Engineers (IEEE), 1998-06)
Intensity prediction along motion trajectories considerably reduces temporal redundancy in video compression algorithms. In three-dimensional (3-D) object-based video coding, both 3-D motion and depth values are required for temporal prediction. The required 3-D motion parameters for each object are found by the correspondence-based E-matrix method. The estimation of the correspondences (the two-dimensional (2-D) motion field) between the frames and the segmentation of the scene into objects are achieved simultaneous...
Günyel, Bertan; Alatan, Abdullah Aydın (2010-09-29)
A multi-resolution motion estimation scheme is proposed for tracking of the true 2D motion in video sequences for motion compensated image interpolation. The proposed algorithm utilizes frames with different resolutions and adaptive block dimensions for efficient representation of motion. Firstly, motion vectors for each block are assigned as a result of predictive search in each pass. Then, the outlier motion vectors are detected and corrected at the end of each pass. Simulation results with respect to dif...
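For background on the block-based motion search that such a multi-resolution scheme refines, the following is a baseline single-resolution exhaustive search using the sum of absolute differences (SAD). The function name and parameters are hypothetical; the cited work adds multi-resolution passes, adaptive block sizes, and outlier correction on top of this kind of search.

```python
import numpy as np

def full_search_mv(ref, cur, bx, by, bsize=8, srange=4):
    """Exhaustive block matching: find the displacement (dy, dx) minimizing
    the SAD between the current block and candidate blocks in the reference
    frame. A baseline sketch, not the multi-resolution scheme cited above."""
    block = cur[by:by + bsize, bx:bx + bsize]
    best, best_sad = (0, 0), np.inf
    h, w = ref.shape
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > h or x + bsize > w:
                continue  # candidate block falls outside the reference frame
            sad = np.abs(ref[y:y + bsize, x:x + bsize] - block).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad
```

Hierarchical schemes run such a search on coarse, downsampled frames first and refine the result at finer resolutions, which is far cheaper than a full search at full resolution.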
Summarizing video: Content, features, and HMM topologies
Yasaroglu, Y; Alatan, Abdullah Aydın (2003-01-01)
An algorithm is proposed for automatic summarization of multimedia content by segmenting digital video into semantic scenes using HMMs. Various multi-modal low-level features are extracted to determine state transitions in HMMs for summarization. The advantage of using different model topologies and observation sets to segment different content types is emphasized and verified by simulations. Performance of the proposed algorithm is also compared with a deterministic scene segmentation method. A better...
Gedik, O. Serdar; Alatan, Abdullah Aydın (2010-09-29)
A frame-rate conversion (FRC) scheme for increasing the frame-rate of multiview video for reduction of motion blur in hold-type displays is proposed. In order to obtain high quality inter-frames, the proposed method utilizes 3D motion models relying on the 3D scene information extractable from multiview video. First of all, independently moving objects (IMOs) are segmented by using a depth-based object segmentation method. Then, interest points on IMOs are obtained via scale invariant feature transform (SIF...
A novel optical flow-based representation for temporal video segmentation
Akpınar, Samet; Alpaslan, Ferda Nur (2017-01-01)
Temporal video segmentation is a field of multimedia research that enables video data to be split temporally into semantically coherent scenes. Detecting scene boundaries is one of the most widely used approaches to temporal video segmentation, so the representation of temporal information becomes important. We propose a new temporal video segment representation that formalizes video scenes as a sequence of temporal motion change information. The idea here is that some s...
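A minimal stand-in for boundary detection over a motion-based representation: per-frame motion feature vectors (e.g., histograms of optical-flow directions) are compared, and a boundary is flagged where consecutive features change abruptly. The feature choice, distance measure, and threshold are assumptions for illustration, not the representation proposed in the work above.

```python
import numpy as np

def detect_boundaries(motion_feats, thresh=0.5):
    """Flag candidate scene boundaries where consecutive per-frame motion
    feature vectors change abruptly (Euclidean distance above a threshold).
    Simplified sketch; the cited work uses a richer optical-flow-based
    representation of temporal motion change."""
    boundaries = []
    for t in range(1, len(motion_feats)):
        dist = np.linalg.norm(motion_feats[t] - motion_feats[t - 1])
        if dist > thresh:
            boundaries.append(t)  # scene change between frames t-1 and t
    return boundaries
```

In practice the threshold would be tuned (or learned) per content type, since motion statistics differ widely between, say, sports and news footage.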
Citation Formats
F. Kamışlı, “Recursive Prediction for Joint Spatial and Temporal Prediction in Video Coding,” IEEE SIGNAL PROCESSING LETTERS, pp. 732–736, 2014, Accessed: 00, 2020. [Online]. Available: