Making linear prediction perform like maximum likelihood in Gaussian autoregressive model parameter estimation

2020-01-01
A two-stage method for the parameter estimation of Gaussian autoregressive models is proposed. The proposed first stage is an improved version of the conventional forward-backward prediction method and can be interpreted as its weighted version, with the weights derived from the arithmetic mean of the log-likelihood functions for different conditioning cases. The weighted version is observed to perform better than the conventional forward-backward prediction method and other linear-prediction-based methods (the correlation method, the covariance method, Burg's method, etc.) in terms of the attained likelihood value. The proposed second stage uses the estimate of the first stage as the initial condition and approximates the highly non-linear log-likelihood function with a quadratic function around this initial estimate. The optimization of the quadratic cost function yields the optimal perturbation vector that locally maximizes the likelihood in the vicinity of the initial condition. The proposed method is compared with other methods, and the likelihood value attained at the end of the two stages is observed to be almost identical to the value attained by higher-complexity numerical-search-based optimization tools in a wide range of experiments. The maximum-likelihood-like performance at a significantly lower implementation cost makes the proposed method especially valuable for applications with short data records and limited computational resources.
SIGNAL PROCESSING
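The two-stage procedure described in the abstract can be illustrated with a short sketch. The snippet below is only an approximation under stated assumptions: stage one is stood in for by the conventional (unweighted) forward-backward linear prediction estimate rather than the paper's weighted variant, and stage two by a generic local quadratic (Newton-type) refinement of the exact Gaussian AR log-likelihood using finite-difference derivatives instead of the paper's closed-form perturbation; all function names and numbers are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, toeplitz

def fb_linear_prediction(x, p):
    """Stage-1 proxy: conventional (unweighted) forward-backward linear
    prediction (modified covariance) estimate of AR(p) parameters."""
    N = len(x)
    # Forward rows predict x[n] from x[n-1], ..., x[n-p].
    Xf = np.column_stack([x[p - k - 1:N - k - 1] for k in range(p)])
    yf = x[p:]
    # Backward rows predict x[n] from x[n+1], ..., x[n+p].
    Xb = np.column_stack([x[k + 1:N - p + k + 1] for k in range(p)])
    yb = x[:N - p]
    X, y = np.vstack([Xf, Xb]), np.concatenate([yf, yb])
    a, *_ = np.linalg.lstsq(X, y, rcond=None)     # x[n] ~ sum_k a[k] x[n-k-1]
    s2 = np.mean((y - X @ a) ** 2)                # innovation variance estimate
    return np.concatenate([a, [s2]])

def ar_loglik(theta, x):
    """Exact log-likelihood of zero-mean stationary Gaussian AR(p) data,
    theta = [a_1, ..., a_p, sigma^2]."""
    a, s2 = theta[:-1], theta[-1]
    p, N = len(a), len(x)
    A = np.zeros((p, p)); A[0, :] = a             # companion (state) matrix
    A[1:, :-1] = np.eye(p - 1)
    Q = np.zeros((p, p)); Q[0, 0] = s2
    P = solve_discrete_lyapunov(A, Q)             # stationary state covariance
    gam, M = np.empty(N), np.eye(p)               # gamma[k] = (A^k P)[0, 0]
    for k in range(N):
        gam[k] = (M @ P)[0, 0]
        M = A @ M
    Sigma = toeplitz(gam)                         # data covariance matrix
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * (N * np.log(2 * np.pi) + logdet + x @ np.linalg.solve(Sigma, x))

def quadratic_refinement(theta0, x, eps=1e-4):
    """Stage-2 sketch: fit a local quadratic model to the log-likelihood at
    theta0 (finite-difference gradient/Hessian) and jump to its maximizer."""
    d = len(theta0)
    g, H = np.zeros(d), np.zeros((d, d))
    E = eps * np.eye(d)
    for i in range(d):
        g[i] = (ar_loglik(theta0 + E[i], x) - ar_loglik(theta0 - E[i], x)) / (2 * eps)
        for j in range(i, d):
            H[i, j] = H[j, i] = (ar_loglik(theta0 + E[i] + E[j], x)
                                 - ar_loglik(theta0 + E[i] - E[j], x)
                                 - ar_loglik(theta0 - E[i] + E[j], x)
                                 + ar_loglik(theta0 - E[i] - E[j], x)) / (4 * eps ** 2)
    delta = np.linalg.solve(H, -g)                # maximizer of the quadratic model
    return theta0 + delta

# Usage on a short synthetic record (values are illustrative only).
rng = np.random.default_rng(0)
a_true, N = np.array([1.3, -0.4]), 64
x = np.zeros(N)
for n in range(2, N):
    x[n] = a_true[0] * x[n - 1] + a_true[1] * x[n - 2] + rng.standard_normal()
theta1 = fb_linear_prediction(x, p=2)             # stage 1
theta2 = quadratic_refinement(theta1, x)          # stage 2
print(ar_loglik(theta1, x), ar_loglik(theta2, x))
```

On well-conditioned records the single quadratic step typically raises the likelihood above the stage-one value, in the spirit of the behaviour reported in the abstract; a practical implementation would additionally guard against steps that leave the stationarity region or make the variance estimate negative.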

Suggestions

Low-level multiscale image segmentation and a benchmark for its evaluation
Akbaş, Emre (Elsevier BV, 2020-10-01)
In this paper, we present a segmentation algorithm to detect low-level structure present in images. The algorithm is designed to partition a given image into regions, corresponding to image structures, regardless of their shapes, sizes, and levels of interior homogeneity. We model a region as a connected set of pixels that is surrounded by ramp edge discontinuities where the magnitude of these discontinuities is large compared to the variation inside the region. Each region is associated with a scale that d...
A unified framework for derivation and implementation of Savitzky-Golay filters
Candan, Çağatay (Elsevier BV, 2014-11-01)
The Savitzky-Golay (SG) filter design problem is posed as the minimum norm solution of an underdetermined equation system. A unified SG filter design framework encompassing several important applications such as smoothing, differentiation, integration and fractional delay is developed. In addition to the generality and flexibility of the framework, an efficient SG filter implementation structure, naturally emerging from the framework, is proposed. The structure is shown to reduce the number of multipliers i...
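As a point of reference for this suggestion, here is a minimal Python sketch of classical Savitzky-Golay coefficient design via a local polynomial least-squares fit; it also checks numerically that the same filter is the minimum-norm solution of an underdetermined moment-constraint system, which is one way to realize the framing mentioned in the abstract. The paper's unified framework and its efficient implementation structure are not reproduced, and all names are illustrative.

```python
import numpy as np
from math import factorial

def sg_coefficients(window, order, deriv=0):
    """Savitzky-Golay coefficients from a local polynomial least-squares fit;
    equivalently, the minimum-norm solution of the underdetermined moment
    constraints V.T @ h = deriv! * e_deriv (checked below)."""
    half = window // 2
    n = np.arange(-half, half + 1)                    # symmetric sample grid
    V = np.vander(n, order + 1, increasing=True)      # V[i, k] = n[i] ** k
    h_ls = np.linalg.pinv(V)[deriv] * factorial(deriv)
    rhs = np.zeros(order + 1); rhs[deriv] = factorial(deriv)
    h_mn, *_ = np.linalg.lstsq(V.T, rhs, rcond=None)  # minimum-norm solution
    assert np.allclose(h_ls, h_mn)                    # the two views coincide
    return h_ls

# Classical 5-point quadratic smoother: [-3, 12, 17, 12, -3] / 35.
print(sg_coefficients(5, 2) * 35)
```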
Deconvolution and preequalization with best delay LS inverse filters
Tuncer, Temel Engin (Elsevier BV, 2004-11-01)
A new method for finding the best delay for the design of least-squares (LS) inverse filters is introduced. It is shown that there is a considerable difference between the LS errors of a best delay filter and an arbitrary LS inverse filter. The proposed method is an effective and computationally efficient approach for the design of LS optimum filters. The deconvolution problem is considered and the MSE performances of pseudoinverse, preequalization and LS inverse filtering are investigated. In this respect, the th...
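For context, a hedged sketch of the underlying best-delay LS inverse filtering idea: for a channel h and an inverse filter of length Lg, the delay d minimizing the residual of the LS problem ||C g - e_d||^2 (C being the convolution matrix of h) is found here by exhaustive search. The paper's computationally efficient delay-selection method is not reproduced, and all names are illustrative.

```python
import numpy as np
from scipy.linalg import toeplitz

def best_delay_ls_inverse(h, Lg):
    """Brute-force best-delay LS inverse filter design: for each candidate
    delay d, solve min_g ||C g - e_d||^2 and keep the smallest residual."""
    Lh = len(h)
    L = Lh + Lg - 1                                   # length of h * g
    col = np.r_[h, np.zeros(Lg - 1)]
    C = toeplitz(col, np.r_[h[0], np.zeros(Lg - 1)])  # (L x Lg) convolution matrix
    best = None
    for d in range(L):
        e_d = np.zeros(L); e_d[d] = 1.0               # desired delayed impulse
        g, *_ = np.linalg.lstsq(C, e_d, rcond=None)
        err = np.sum((C @ g - e_d) ** 2)              # LS equalization error
        if best is None or err < best[0]:
            best = (err, d, g)
    return best                                       # (error, best delay, filter)

# Example: a simple non-minimum-phase channel.
err, d, g = best_delay_ls_inverse(np.array([0.5, 1.0, -0.3]), Lg=16)
print(d, err)
```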
Properties of the momentum LMS algorithm
Tugay, Mehmet Ali; Tanik, Yalçin (Elsevier BV, 1989-10)
One of the most recent modifications to Widrow and Hoff's LMS algorithm has been the inclusion of a momentum term in the weight update equation. The resulting algorithm is referred to as “The Momentum LMS (MLMS) algorithm”. This paper revises the basic properties of the MLMS algorithm for stationary inputs. As a result, new convergence bounds on the parameters of the algorithm are found, and it is shown that, under slow convergence conditions, this new algorithm is equivalent to the usual LMS algori...
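For context, a minimal Python sketch of an adaptive FIR filter driven by a momentum-type LMS update, w[n+1] = w[n] + mu*e[n]*u[n] + alpha*(w[n] - w[n-1]); this is a common textbook form of the update, not necessarily the exact convention analyzed in the paper, and the parameter values below are illustrative.

```python
import numpy as np

def momentum_lms(x, d, num_taps, mu, alpha):
    """Adaptive FIR filtering with a momentum LMS weight update:
    plain LMS plus a momentum term on the previous weight change."""
    w = np.zeros(num_taps)
    w_prev = np.zeros(num_taps)
    e = np.zeros(len(x))
    for n in range(num_taps, len(x)):
        u = x[n:n - num_taps:-1]                  # regressor [x[n], ..., x[n-M+1]]
        e[n] = d[n] - w @ u                       # a priori error
        w_new = w + mu * e[n] * u + alpha * (w - w_prev)
        w_prev, w = w, w_new
    return w, e

# Example: identify a short FIR system from noisy observations.
rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
h_true = np.array([1.0, -0.5, 0.25])
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, e = momentum_lms(x, d, num_taps=3, mu=0.01, alpha=0.5)
print(np.round(w, 3))                             # should approach h_true
```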
One-dimensional representation of two-dimensional information for HMM based handwriting recognition
Arica, N; Yarman Vural, Fatoş Tunay (Elsevier BV, 2000-06-01)
In this study, we introduce a one-dimensional feature set, which embeds two-dimensional information into an observation sequence of one-dimensional strings selected from a codebook. It provides a consistent normalization among distinct classes of shapes, which is very convenient for Hidden Markov Model (HMM) based shape recognition schemes. The normalization parameters, which maximize the recognition rate, are dynamically estimated in the training stage of HMM. The proposed recognition system is tested on ...
Citation Formats
Ç. Candan, “Making linear prediction perform like maximum likelihood in Gaussian autoregressive model parameter estimation,” SIGNAL PROCESSING, pp. 0–0, 2020, Accessed: 00, 2020. [Online]. Available: https://hdl.handle.net/11511/38021.