Making linear prediction perform like maximum likelihood in Gaussian autoregressive model parameter estimation

A two-stage method for the parameter estimation of Gaussian autoregressive models is proposed. The proposed first stage is an improved version of the conventional forward-backward prediction method and can be interpreted as its weighted version, with the weights derived from the arithmetic mean of the log-likelihood functions for different conditioning cases. The weighted version is observed to perform better than the conventional forward-backward prediction method and other linear-prediction-based methods (the correlation method, the covariance method, Burg's method, etc.) in terms of the attained likelihood value. The proposed second stage uses the estimate of the first stage as the initial condition and approximates the highly non-linear log-likelihood function with a quadratic function around the initial estimate. The optimization of this quadratic cost function yields the optimal perturbation vector that locally maximizes the likelihood in the vicinity of the initial condition. The proposed method is compared with other methods, and in a wide range of experiments the likelihood value attained at the end of the two stages is observed to be almost identical to that attained by higher-complexity, numerical-search-based optimization tools. The maximum-likelihood-like performance at a significantly lower implementation cost makes the proposed method especially valuable for applications with short data records and limited computational resources. (C) 2019 Elsevier B.V. All rights reserved.
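The first-stage baseline referred to above, the conventional forward-backward (modified covariance) prediction method, can be sketched as a least-squares problem that stacks forward and backward prediction errors. The following numpy illustration is our own minimal version of that conventional baseline, not the paper's weighted variant; the function name and interface are hypothetical:

```python
import numpy as np

def fb_ar_estimate(x, p):
    """Estimate AR(p) coefficients a[0..p-1] by the conventional
    forward-backward (modified covariance) least-squares method."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Forward regressors: predict x[n] from x[n-1], ..., x[n-p], n = p..N-1
    Xf = np.column_stack([x[p - k - 1:N - k - 1] for k in range(p)])
    yf = x[p:]
    # Backward regressors: predict x[n] from x[n+1], ..., x[n+p], n = 0..N-p-1
    Xb = np.column_stack([x[k + 1:N - p + k + 1] for k in range(p)])
    yb = x[:N - p]
    # Stack both error sets and solve the joint least-squares problem
    X = np.vstack([Xf, Xb])
    y = np.concatenate([yf, yb])
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ a
    sigma2 = resid @ resid / len(y)  # driving-noise variance estimate
    return a, sigma2
```

For example, on data simulated from x[n] = 0.8 x[n-1] + e[n] with unit-variance Gaussian e[n], `fb_ar_estimate(x, 1)` returns a coefficient close to 0.8 for moderately long records.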
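The second stage's idea, replacing the non-linear log-likelihood with a local quadratic model around the first-stage estimate and jumping to the peak of that quadratic, is in spirit a Newton-type refinement. Below is a minimal, hedged illustration for the scalar AR(1) case with a known noise variance, using finite differences to build the quadratic model; this is our simplified sketch of the quadratic-approximation concept, not the paper's exact multivariate procedure:

```python
import numpy as np

def ar1_loglik(a, sigma2, x):
    # Exact Gaussian AR(1) log-likelihood with a stationary initial sample
    N = len(x)
    ll = -0.5 * N * np.log(2 * np.pi * sigma2)
    ll += 0.5 * np.log(1 - a**2) - (1 - a**2) * x[0]**2 / (2 * sigma2)
    r = x[1:] - a * x[:-1]
    ll -= r @ r / (2 * sigma2)
    return ll

def quadratic_refine(a0, sigma2, x, h=1e-4):
    # Fit a local quadratic model L(a0 + d) ~ L(a0) + g*d + 0.5*H*d^2
    # via finite differences, then take the maximizing perturbation d = -g/H.
    f = lambda a: ar1_loglik(a, sigma2, x)
    g = (f(a0 + h) - f(a0 - h)) / (2 * h)            # local gradient
    H = (f(a0 + h) - 2 * f(a0) + f(a0 - h)) / h**2   # local curvature
    # Only step if the quadratic is concave (has a maximum)
    return a0 - g / H if H < 0 else a0
```

Starting from a deliberately perturbed initial estimate, the refined coefficient attains a log-likelihood at least as high as the starting point, which mirrors the role of the paper's second stage.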