github.com/RhysU/ar
Autoregressive process modeling tools in header-only C++. More...

Classes
struct AIC: Represents the Akaike information criterion (AIC).
struct AICC: Represents the asymptotically-corrected Akaike information criterion (AICC).
struct best_model_function: A template typedef and helper method returning a best_model implementation matching a model selection criterion provided at runtime.
struct BIC: Represents the consistent criterion BIC.
class Burg: Represents estimation using Burg's recursive method.
struct CIC: Represents the combined information criterion (CIC) as applied to a particular estimation_method.
struct criterion: Criteria for autoregressive model order selection following Broersen.
struct empirical_variance_function: An STL-ready binary_function for a given method's empirical variance.
class empirical_variance_generator: An STL AdaptableGenerator for a given method's empirical variance.
class empirical_variance_iterator: An immutable RandomAccessIterator over a method's empirical variance sequence.
struct estimation_method: A parent type for autoregressive process parameter estimation techniques.
struct FIC: Represents the finite information criterion (FIC) as applied to a particular estimation_method.
struct FIC< YuleWalker< MeanHandling >, AlphaNumerator, AlphaDenominator >: Represents the finite information criterion (FIC) as applied to the YuleWalker estimation_method.
struct FSIC: Represents the finite sample information criterion (FSIC) as applied to a particular estimation_method.
struct GIC: Represents the generalized information criterion (GIC).
class LSF: Represents forward prediction least squares minimization.
class LSFB: Represents forward and backward prediction least squares minimization.
struct MCC: Represents the minimally consistent criterion (MCC).
struct mean_retained: Denotes the sample mean was retained in a signal during estimation.
struct mean_subtracted: Method-specific estimation variance routines following Broersen.
class predictor: Simulate an autoregressive model process with an InputIterator interface.
class strided_adaptor: An adapter to add striding over another (usually random access) iterator.
class YuleWalker: Represents estimation by solving the Yule–Walker equations.
Autoregressive process modeling tools in header-only C++.
All routines estimate and/or evaluate autoregressive models of the form
\begin{align} x_n + a_1 x_{n - 1} + \dots + a_p x_{n - p} &= \epsilon_n & \epsilon_n &\sim{} N\left(0, \sigma^2_\epsilon\right) \\ \sigma^2_x \left( \rho_0 + a_1 \rho_{1} + \dots + a_p \rho_{p} \right) &= \sigma^2_\epsilon & \rho_0 &= 1 \\ \rho_k + a_1 \rho_{k-1} + \dots + a_p \rho_{k-p} &= 0 & k &\geq{} p \end{align}
where \(x_i\) are the process values, \(a_i\) are the model parameters, and \(\rho_i\) are the lag \(i\) autocorrelations. The white noise input process \(\epsilon_n\) has variance \(\sigma^2_\epsilon\). The model has output variance \(\sigma^2_x\) and therefore a gain equal to \(\sigma^2_x / \sigma^2_\epsilon\).
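To make the model form concrete, the following standalone sketch (illustration only; it uses no library facilities and all names are made up) simulates the AR(p) recursion driven by Gaussian noise:

    #include <cmath>
    #include <cstddef>
    #include <random>
    #include <vector>

    // Illustration only: simulate the AR(p) recursion
    //     x_n = -a_1 x_{n-1} - ... - a_p x_{n-p} + eps_n,   eps_n ~ N(0, sigma2e)
    // in plain C++; the library's ar::predictor offers an iterator-based equivalent.
    std::vector<double> simulate_ar(const std::vector<double>& a,  // a_1, ..., a_p
                                    double sigma2e,
                                    std::size_t n,
                                    unsigned seed = 0)
    {
        std::mt19937 gen(seed);
        std::normal_distribution<double> eps(0.0, std::sqrt(sigma2e));
        std::vector<double> x(n, 0.0);
        for (std::size_t i = 0; i < n; ++i) {
            double value = eps(gen);
            for (std::size_t j = 0; j < a.size() && j < i; ++j)
                value -= a[j] * x[i - j - 1];    // only the available history contributes
            x[i] = value;
        }
        return x;
    }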
predictor<typename std::iterator_traits<RandomAccessIterator>::value_type>
ar::autocorrelation(RandomAccessIterator params_first,
                    RandomAccessIterator params_last,
                    Value                gain,
                    InputIterator        autocor_first)
Construct an iterator over the autocorrelation function \(\rho_k\) given process parameters and initial conditions.
| params_first | Beginning of range containing \(a_1,\dots,a_p\). |
| params_last | Exclusive ending of the parameter range. |
| gain | The model gain \(\sigma^2_x / \sigma^2_\epsilon\). |
| autocor_first | Beginning of range containing \(\rho_0,\dots,\rho_p\). |
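A minimal usage sketch, assuming the returned predictor supports the usual InputIterator operations (dereference and pre-increment); container types and the helper name are illustrative:

    #include <cstddef>
    #include <vector>
    #include "ar.hpp"

    // Illustrative sketch: sum the first `count` autocorrelations produced by
    // the iterator, given parameters, gain, and initial autocorrelations
    // previously obtained, for example, from ar::burg_method.
    double sum_autocorrelations(const std::vector<double>& params,   // a_1, ..., a_p
                                const std::vector<double>& autocor,  // rho_0, ..., rho_p
                                double gain,                         // sigma^2_x / sigma^2_epsilon
                                std::size_t count)
    {
        ar::predictor<double> rho = ar::autocorrelation(
                params.begin(), params.end(), gain, autocor.begin());
        double sum = 0;
        for (std::size_t k = 0; k < count; ++k, ++rho)
            sum += *rho;                          // InputIterator interface of predictor
        return sum;
    }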
Sequence1::difference_type
ar::best_model(Integer1   N,
               Integer2   minorder,
               Sequence1& params,
               Sequence2& sigma2e,
               Sequence3& gain,
               Sequence4& autocor)
Obtain the best model according to criterion applied to \(\sigma^2_\epsilon\) given a hierarchy of candidates.
On input, params, sigma2e, gain, and autocor should be Sequences which were populated by burg_method when hierarchy is true (or in some other equivalent manner). On output, these arguments will contain only values relevant to the best model.
| [in] | N | Sample count used to compute \(\sigma^2_\epsilon\). For example, the return value of burg_method. |
| [in] | minorder | Constrain the best model to be at least this order. Supplying zero specifies no constraint. |
| [in,out] | params | Model parameters |
| [in,out] | sigma2e | \(\sigma^2_\epsilon\) |
| [in,out] | gain | Model gain |
| [in,out] | autocor | Model autocorrelations |
Sequence1::difference_type
ar::best_model(Integer1       N,
               Integer2       minorder,
               Sequence1&     params,
               Sequence2&     sigma2e,
               Sequence3&     gain,
               Sequence4&     autocor,
               OutputIterator crit)
Obtain the best model according to criterion applied to \(\sigma^2_\epsilon\) given a hierarchy of candidates.
On input, params, sigma2e, gain, and autocor should be Sequences which were populated by burg_method when hierarchy is true (or in some other equivalent manner). On output, these arguments will contain only values relevant to the best model.
| [in] | N | Sample count used to compute \(\sigma^2_\epsilon\). For example, the return value of burg_method. |
| [in] | minorder | Constrain the best model to be at least this order. Supplying zero specifies no constraint. |
| [in,out] | params | Model parameters |
| [in,out] | sigma2e | \(\sigma^2_\epsilon\) |
| [in,out] | gain | Model gain |
| [in,out] | autocor | Model autocorrelations |
| [out] | crit | Value assigned to each model by the criterion. |
Returns the index of the best model; the criterion value for each candidate is written to crit. Definition at line 2310 of file ar.hpp.
References AR_ENSURE_ARG.
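A hedged usage sketch; supplying the selection criterion (AIC here) as a template argument is an assumption based on the "according to criterion" wording above, and the container types are illustrative:

    #include <cstddef>
    #include <vector>
    #include "ar.hpp"

    // Hedged sketch: trim a hierarchy produced by ar::burg_method (hierarchy ==
    // true) down to the best model.  Passing the criterion (AIC) as a template
    // argument is an assumption; consult ar.hpp for the exact invocation.
    void keep_best(std::size_t N,                 // samples used by burg_method
                   std::vector<double>& params,   // hierarchy of a_i
                   std::vector<double>& sigma2e,  // hierarchy of sigma^2_epsilon
                   std::vector<double>& gain,     // hierarchy of gains
                   std::vector<double>& autocor)  // hierarchy of autocorrelations
    {
        // On return the four sequences contain only the selected model's values.
        ar::best_model<ar::AIC>(N, /* minorder = */ 0, params, sigma2e, gain, autocor);
    }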
std::size_t
ar::burg_method(InputIterator   data_first,
                InputIterator   data_last,
                Value&          mean,
                std::size_t&    maxorder,
                OutputIterator1 params_first,
                OutputIterator2 sigma2e_first,
                OutputIterator3 gain_first,
                OutputIterator4 autocor_first,
                const bool      subtract_mean,
                const bool      hierarchy,
                Vector&         f,
                Vector&         b,
                Vector&         Ak,
                Vector&         ac)
Fit an autoregressive model to stationary time series data using Burg's method.
That is, find coefficients \(a_i\) such that the sums of the squared errors in the forward predictions \(x_n = -a_1 x_{n-1} - \dots - a_p x_{n-p}\) and backward predictions \(x_n = -a_1 x_{n+1} - \dots - a_p x_{n+p}\) are both minimized. Either a single model of given order or a hierarchy of models up to and including a maximum order may be fit.
The input data \(\vec{x}\) are read from [data_first, data_last) in a single pass. The mean is computed, returned in mean, and removed from further consideration whenever subtract_mean is true. The estimated model parameters \(a_i\) are output using params_first with the behavior determined by the amount of data read, maxorder, and the hierarchy flag:
When hierarchy is false, only the \(a_1, \dots, a_\text{maxorder}\) parameters for an AR(maxorder) process are output. When hierarchy is true, the maxorder*(maxorder+1)/2 parameters \(a_1, \dots, a_m\) for models AR(0), AR(1), AR(2), ..., AR(maxorder) are output; notice AR(0) has no parameters. The latter case is always computed; the hierarchy flag merely controls what is output. In both cases, the maximum order is limited by the number of data samples provided and is output to maxorder.
One mean squared discrepancy \(\sigma^2_\epsilon\), also called the innovation variance, and gain, defined as \(\sigma^2_x / \sigma^2_\epsilon\), are output for each model, including the trivial zeroth order model when maxorder is zero or hierarchy is true, using sigma2e_first and gain_first. The autocorrelations for lags [0,k] are output using autocor_first. When hierarchy is true, only lags [0,m] should be applied for some AR(m) model. Outputting the lag k autocorrelation is technically redundant as it may be computed from \(a_i\) and lags 0, ..., k-1. Autocovariances may be computed by multiplying the autocorrelations by the gain times \(\sigma^2_\epsilon\).
The software aspects of the implementation differ from many other sources. In particular, behavior is controlled through the mean and hierarchy arguments described below, and gain and autocorrelation calculations have been added based on sections 5.2 and 5.3 of Broersen, P. M. T. Automatic autocorrelation and spectral analysis. Springer, 2006. http://dx.doi.org/10.1007/1-84628-329-9. The classical algorithm, rather than the variant using denominator recursion due to Andersen (http://dx.doi.org/10.1109/PROC.1978.11160), has been chosen as the latter can be numerically unstable.
| [in] | data_first | Beginning of the input data range. |
| [in] | data_last | Exclusive end of the input data range. |
| [out] | mean | Mean of data. |
| [in,out] | maxorder | On input, the maximum model order desired. On output, the maximum model order computed. |
| [out] | params_first | Model parameters for a single model or for an entire hierarchy of models. At most !hierarchy ? maxorder : maxorder*(maxorder+1)/2 values will be output. |
| [out] | sigma2e_first | The mean squared discrepancy for only AR(maxorder) or for an entire hierarchy. Either one or at most maxorder + 1 values will be output. |
| [out] | gain_first | The model gain for only AR(maxorder) or an entire hierarchy. Either one or at most maxorder + 1 values will be output. |
| [out] | autocor_first | Lag one through lag maxorder autocorrelations. At most maxorder + 1 values will be output. |
| [in] | subtract_mean | Should mean be subtracted from the data? |
| [in] | hierarchy | Should the entire hierarchy of estimated models be output? |
| [in] | f | Working storage. Reuse across invocations may speed execution by avoiding allocations. |
| [in] | b | Working storage similar to f. |
| [in] | Ak | Working storage similar to f. |
| [in] | ac | Working storage similar to f. |
Returns the number of samples processed from [data_first, data_last).
Definition at line 542 of file ar.hpp.
References welford_variance_population().
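A hedged usage sketch fitting several signals while reusing the working storage f, b, Ak, and ac to avoid repeated allocation, as the parameter documentation suggests; container choices and the helper name are illustrative:

    #include <cstddef>
    #include <vector>
    #include "ar.hpp"

    // Hedged sketch: fit one AR(maxorder) model per signal, reusing working storage.
    void fit_many(const std::vector<std::vector<double> >& signals,
                  std::size_t maxorder)
    {
        std::vector<double> f, b, Ak, ac;                 // reused across invocations
        for (std::size_t i = 0; i < signals.size(); ++i) {
            double mean;
            std::size_t order = maxorder;                 // may be reduced on output
            std::vector<double> params(order), sigma2e(1), gain(1), autocor(order + 1);
            ar::burg_method(signals[i].begin(), signals[i].end(), mean, order,
                            params.begin(), sigma2e.begin(), gain.begin(),
                            autocor.begin(),
                            true,    // subtract_mean
                            false,   // hierarchy: output a single AR(order) model
                            f, b, Ak, ac);
            // params, sigma2e, gain, and autocor now describe signal i's model.
        }
    }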
std::size_t
ar::burg_method(InputIterator   data_first,
                InputIterator   data_last,
                Value&          mean,
                std::size_t&    maxorder,
                OutputIterator1 params_first,
                OutputIterator2 sigma2e_first,
                OutputIterator3 gain_first,
                OutputIterator4 autocor_first,
                const bool      subtract_mean = false,
                const bool      hierarchy = false)
Fit an autoregressive model to stationary time series data using Burg's method.
That is, find coefficients \(a_i\) such that the sums of the squared errors in the forward predictions \(x_n = -a_1 x_{n-1} - \dots - a_p x_{n-p}\) and backward predictions \(x_n = -a_1 x_{n+1} - \dots - a_p x_{n+p}\) are both minimized. Either a single model of given order or a hierarchy of models up to and including a maximum order may be fit.
The input data \(\vec{x}\) are read from [data_first, data_last) in a single pass. The mean is computed, returned in mean, and removed from further consideration whenever subtract_mean is true. The estimated model parameters \(a_i\) are output using params_first with the behavior determined by the amount of data read, maxorder, and the hierarchy flag:
When hierarchy is false, only the \(a_1, \dots, a_\text{maxorder}\) parameters for an AR(maxorder) process are output. When hierarchy is true, the maxorder*(maxorder+1)/2 parameters \(a_1, \dots, a_m\) for models AR(0), AR(1), AR(2), ..., AR(maxorder) are output; notice AR(0) has no parameters. The latter case is always computed; the hierarchy flag merely controls what is output. In both cases, the maximum order is limited by the number of data samples provided and is output to maxorder.
One mean squared discrepancy \(\sigma^2_\epsilon\), also called the innovation variance, and gain, defined as \(\sigma^2_x / \sigma^2_\epsilon\), are output for each model, including the trivial zeroth order model when maxorder is zero or hierarchy is true, using sigma2e_first and gain_first. The autocorrelations for lags [0,k] are output using autocor_first. When hierarchy is true, only lags [0,m] should be applied for some AR(m) model. Outputting the lag k autocorrelation is technically redundant as it may be computed from \(a_i\) and lags 0, ..., k-1. Autocovariances may be computed by multiplying the autocorrelations by the gain times \(\sigma^2_\epsilon\).
The software aspects of the implementation differ from many other sources. In particular, behavior is controlled through the mean and hierarchy arguments described below, and gain and autocorrelation calculations have been added based on sections 5.2 and 5.3 of Broersen, P. M. T. Automatic autocorrelation and spectral analysis. Springer, 2006. http://dx.doi.org/10.1007/1-84628-329-9. The classical algorithm, rather than the variant using denominator recursion due to Andersen (http://dx.doi.org/10.1109/PROC.1978.11160), has been chosen as the latter can be numerically unstable.
| [in] | data_first | Beginning of the input data range. |
| [in] | data_last | Exclusive end of the input data range. |
| [out] | mean | Mean of data. |
| [in,out] | maxorder | On input, the maximum model order desired. On output, the maximum model order computed. |
| [out] | params_first | Model parameters for a single model or for an entire hierarchy of models. At most !hierarchy ? maxorder : maxorder*(maxorder+1)/2 values will be output. |
| [out] | sigma2e_first | The mean squared discrepancy for only AR(maxorder) or for an entire hierarchy. Either one or at most maxorder + 1 values will be output. |
| [out] | gain_first | The model gain for only AR(maxorder) or an entire hierarchy. Either one or at most maxorder + 1 values will be output. |
| [out] | autocor_first | Lag one through lag maxorder autocorrelations. At most maxorder + 1 values will be output. |
| [in] | subtract_mean | Should mean be subtracted from the data? |
| [in] | hierarchy | Should the entire hierarchy of estimated models be output? |
Returns the number of samples processed from [data_first, data_last). Definition at line 752 of file ar.hpp.
References burg_method().
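A hedged usage sketch of this convenience overload estimating an entire hierarchy of models; back-inserters and the helper name are illustrative:

    #include <cstddef>
    #include <iterator>
    #include <vector>
    #include "ar.hpp"

    // Hedged sketch: estimate the hierarchy AR(0), ..., AR(maxorder); the
    // back-inserters let the output sequences size themselves, and the results
    // are suitable inputs for ar::best_model.
    void fit_hierarchy(const std::vector<double>& x, std::size_t maxorder)
    {
        double mean;
        std::vector<double> params, sigma2e, gain, autocor;
        const std::size_t N = ar::burg_method(x.begin(), x.end(), mean, maxorder,
                                              std::back_inserter(params),
                                              std::back_inserter(sigma2e),
                                              std::back_inserter(gain),
                                              std::back_inserter(autocor),
                                              true,   // subtract_mean
                                              true);  // hierarchy
        (void) N;  // N, params, sigma2e, gain, and autocor feed ar::best_model
    }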
Value
ar::decorrelation_time(const std::size_t N,
                       predictor<Value>  rho,
                       const bool        abs_rho = false)
Compute the decorrelation time for variance of the mean given autocorrelation details.
That is, compute
\begin{align} T_0 &= 1 + 2 \sum_{i=1}^{N} \left(1 - \frac{i}{N}\right) \rho_i \end{align}
following Trenberth, K. E. "Some effects of finite sample size and persistence on meteorological statistics. Part I: Autocorrelations." Monthly Weather Review 112 (1984). http://dx.doi.org/10.1175/1520-0493(1984)112%3C2359:SEOFSS%3E2.0.CO;2
Rather than \(\rho\), \(\left|\rho\right|\) may be used in the definition of \(T_0\) to better approximate the "decay of the correlation envelope" according to section 17.1.5 of Hans von Storch and Francis W. Zwiers. Statistical analysis in climate research. Cambridge University Press, March 2001. ISBN 978-0521012300. Doing so is more robust for oscillatory processes and always provides a larger, more conservative estimate of \(T_0\).
| N | Maximum lag used to compute the autocorrelation. |
| rho | A predictor iterating over the autocorrelation. |
| abs_rho | Use \(\left|\rho\right|\) when calculating \(T_0\)? |
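A hedged sketch composing ar::autocorrelation with ar::decorrelation_time to deflate a nominal sample count N by the decorrelation time; the helper name and containers are illustrative:

    #include <cstddef>
    #include <vector>
    #include "ar.hpp"

    // Hedged sketch: estimate the decorrelation time T0 of a fitted model and
    // use it to obtain an effective number of independent samples.
    double effective_samples(std::size_t N,
                             const std::vector<double>& params,   // a_1, ..., a_p
                             const std::vector<double>& autocor,  // rho_0, ..., rho_p
                             double gain)
    {
        ar::predictor<double> rho = ar::autocorrelation(
                params.begin(), params.end(), gain, autocor.begin());
        const double T0 = ar::decorrelation_time(N, rho, /* abs_rho = */ true);
        return N / T0;   // the variance of the sample mean scales like T0 / N
    }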
Value
ar::decorrelation_time(const std::size_t N,
                       predictor<Value>  rho1,
                       predictor<Value>  rho2,
                       const bool        abs_rho = false)
Compute the decorrelation time for a covariance given autocorrelation details from two processes.
That is, compute
\begin{align} T_0 &= 1 + 2 \sum_{i=1}^{N} \left(1 - \frac{i}{N}\right) \rho_{1,i} \rho_{2,i} \end{align}
following Trenberth, K. E. "Some effects of finite sample size and persistence on meteorological statistics. Part I: Autocorrelations." Monthly Weather Review 112 (1984). http://dx.doi.org/10.1175/1520-0493(1984)112%3C2359:SEOFSS%3E2.0.CO;2
Rather than \(\rho\), \(\left|\rho\right|\) may be used in the definition of \(T_0\) to better approximate the "decay of the correlation envelope" according to section 17.1.5 of Hans von Storch and Francis W. Zwiers. Statistical analysis in climate research. Cambridge University Press, March 2001. ISBN 978-0521012300. Doing so is more robust for oscillatory processes and always provides a larger, more conservative estimate of \(T_0\).
| N | Maximum lag used to compute the autocorrelation. |
| rho1 | A predictor iterating over the autocorrelation for the first process. |
| rho2 | A predictor iterating over the autocorrelation for the second process. |
| abs_rho | Use \(\left|\rho\right|\) when calculating \(T_0\)? |
Result
ar::evaluate(Result   sigma2e,
             Integer1 N,
             Integer2 p)
Evaluate a given criterion for N samples and model order p.
| [in] | sigma2e | The residual \(\sigma^2_\epsilon\) |
| [in] | N | Sample count used to compute \(\sigma^2_\epsilon\) |
| [in] | p | The model order used to compute \(\sigma^2_\epsilon\) |
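For orientation, a minimal sketch of the kind of quantity such a criterion computes; the formula is one common form of the AIC applied to an AR(p) residual variance and is shown for illustration only, not as the library's exact definition:

    #include <cmath>
    #include <cstddef>

    // Illustration only: a common AIC form for an AR(p) fit whose residual
    // variance sigma2e was estimated from N samples.  Smaller values indicate
    // better models; the library's criteria (AIC, AICC, GIC, FIC, FSIC, CIC,
    // MCC) refine this basic trade-off between fit and model order.
    double aic_like(double sigma2e, std::size_t N, std::size_t p)
    {
        return std::log(sigma2e) + 2.0 * p / N;  // goodness of fit + complexity penalty
    }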
std::iterator_traits<InputIterator>::difference_type
ar::evaluate_models(Integer1      N,
                    Integer2      ordfirst,
                    InputIterator first,
                    InputIterator last)
Find the index of the best model from a hierarchy of candidates according to a criterion given \(\sigma^2_\epsilon\) for each model.
| [in] | N | Sample count used to compute \(\sigma^2_\epsilon\). |
| [in] | ordfirst | The model order corresponding to first. When \(\sigma^2_\epsilon\) is produced entirely by burg_method, this should be 0u. |
| [in] | first | Beginning of the range holding \(\sigma^2_\epsilon\) |
| [in] | last | Exclusive end of input range. |
Returns the distance from first to the best model.
std::iterator_traits<InputIterator>::difference_type
ar::evaluate_models(Integer1       N,
                    Integer2       ordfirst,
                    InputIterator  first,
                    InputIterator  last,
                    OutputIterator crit)
Algorithmic helpers for autoregressive model order selection.
Evaluate a criterion on a hierarchy of models given \(\sigma^2_\epsilon\) for each model. The index of the best model, i.e. the one with minimum criterion value, is returned.
| [in] | N | Sample count used to compute \(\sigma^2_\epsilon\). |
| [in] | ordfirst | The model order corresponding to first. When \(\sigma^2_\epsilon\) is produced entirely by burg_method with hierarchy == true, this should be 0. |
| [in] | first | Beginning of the range holding \(\sigma^2_\epsilon\) |
| [in] | last | Exclusive end of input range. |
| [out] | crit | Value assigned to each model by the criterion. |
Returns the distance from first to the best model. An obscenely negative value is returned on error.
ValueType
ar::negative_half_reflection_coefficient(InputIterator1 a_first,
                                         InputIterator1 a_last,
                                         InputIterator2 b_first)
Algorithms for autoregressive parameter estimation and manipulation.
Robustly compute negative one half the reflection coefficient assuming \(\vec{a}\) and \(\vec{b}\) contain real-valued backward and forward prediction error sequences, respectively. Zero is returned whenever the reflection coefficient numerator is identically zero, as otherwise constant zero signals produce undesired NaN reflection coefficients. The constant zero special case does not defeat NaN detection as any data introducing NaN into the denominator must introduce NaN into the numerator.
| [in] | a_first | Beginning of the first input range \(\vec{a}\). |
| [in] | a_last | Exclusive end of first input range \(\vec{a}\). |
| [in] | b_first | Beginning of the second input range \(\vec{b}\). |
std::size_t
ar::welford_covariance_population(InputIterator1 first1,
                                  InputIterator1 last1,
                                  InputIterator2 first2,
                                  OutputType1&   mean1,
                                  OutputType2&   mean2,
                                  OutputType3&   covar)
Compute means and the population covariance using Welford's algorithm.
| [in] | first1 | Beginning of the first input range. |
| [in] | last1 | Exclusive end of first input range. |
| [in] | first2 | Beginning of the second input range. |
| [out] | mean1 | Mean of the first data set. |
| [out] | mean2 | Mean of the second data set. |
| [out] | covar | The covariance of the two sets. |
Returns the number of samples processed from [first1, last1). Definition at line 326 of file ar.hpp.
References welford_ncovariance().
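For background, a minimal standalone sketch of the one-pass Welford-style covariance recurrence; it is not the library's implementation, which additionally reports both means and offers population and sample variants:

    #include <cstddef>
    #include <vector>

    // Background sketch of the one-pass Welford-style covariance update.
    double population_covariance(const std::vector<double>& x,
                                 const std::vector<double>& y)
    {
        double mean_x = 0, mean_y = 0, c = 0;    // c accumulates N times the covariance
        for (std::size_t n = 0; n < x.size(); ++n) {
            const double dx = x[n] - mean_x;     // deviation from the previous mean
            mean_x += dx / (n + 1);
            mean_y += (y[n] - mean_y) / (n + 1);
            c      += dx * (y[n] - mean_y);      // second factor uses the updated mean
        }
        return x.empty() ? 0.0 : c / x.size();   // divide by N for the population value
    }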
std::size_t
ar::welford_covariance_sample(InputIterator1 first1,
                              InputIterator1 last1,
                              InputIterator2 first2,
                              OutputType1&   mean1,
                              OutputType2&   mean2,
                              OutputType3&   covar)
Compute means and the sample covariance using Welford's algorithm.
| [in] | first1 | Beginning of the first input range. |
| [in] | last1 | Exclusive end of first input range. |
| [in] | first2 | Beginning of the second input range. |
| [out] | mean1 | Mean of the first data set. |
| [out] | mean2 | Mean of the second data set. |
| [out] | covar | The covariance of the two sets. |
Returns the number of samples processed from [first1, last1). Definition at line 356 of file ar.hpp.
References welford_ncovariance().
ValueType
ar::welford_inner_product(InputIterator first,
                          InputIterator last,
                          ValueType     init)
Compute the inner product of [first, last) with itself using welford_nvariance.
Welford's algorithm is combined with the linearity of the expectation operator to compute a more expensive but also more numerically stable result than can be had using std::inner_product.
| [in] | first | Beginning of the input data range. |
| [in] | last | Exclusive end of the input data range. |
| [in] | init | Initial value, often zero, establishing the result type. |
Returns the inner product of [first, last) with itself. Definition at line 383 of file ar.hpp.
References welford_nvariance().
ValueType
ar::welford_inner_product(InputIterator1 first1,
                          InputIterator1 last1,
                          InputIterator2 first2,
                          ValueType      init)
Compute the inner product of [first1, last1) against [first2, ...) using welford_ncovariance.
Welford's algorithm is combined with the linearity of the expectation operator to compute a more expensive but also numerically stable result than can be had using std::inner_product.
| [in] | first1 | Beginning of the first input range. |
| [in] | last1 | Exclusive end of first input range. |
| [in] | first2 | Beginning of the second input range. |
| [in] | init | Initial value, often zero, establishing the result type. |
Returns the inner product of [first1, last1) against [first2, ...). Definition at line 411 of file ar.hpp.
References welford_ncovariance().
std::size_t
ar::welford_ncovariance(InputIterator1 first1,
                        InputIterator1 last1,
                        InputIterator2 first2,
                        OutputType1&   mean1,
                        OutputType2&   mean2,
                        OutputType3&   ncovar)
Compute means and the number of samples, N, times the population covariance using Welford's algorithm.
The implementation follows the covariance section of http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance.
| [in] | first1 | Beginning of the first input range. |
| [in] | last1 | Exclusive end of first input range. |
| [in] | first2 | Beginning of the second input range. |
| [out] | mean1 | Mean of the first data set. |
| [out] | mean2 | Mean of the second data set. |
| [out] | ncovar | N times the covariance of the two sets. |
Returns the number of samples processed from [first1, last1).
std::size_t
ar::welford_nvariance(InputIterator first,
                      InputIterator last,
                      OutputType1&  mean,
                      OutputType2&  nvar)
Stable, one-pass algorithms for computing variances and covariances.
Compute the mean and the number of samples, N, times the population variance using Welford's algorithm. The latter quantity is effectively the centered sum of squares. The algorithm is found in Knuth's TAOCP volume 2 section 4.2.2.A on page 232. The implementation follows http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance.
| [in] | first | Beginning of the input data range. |
| [in] | last | Exclusive end of the input data range. |
| [out] | mean | Mean of the data in [first, last). |
| [out] | nvar | N times the variance of the data. |
Returns the number of samples processed from [first, last).
std::size_t
ar::welford_variance_population(InputIterator first,
                                InputIterator last,
                                OutputType1&  mean,
                                OutputType2&  var)
Compute the mean and population variance using Welford's algorithm.
| [in] | first | Beginning of the input data range. |
| [in] | last | Exclusive end of the input data range. |
| [out] | mean | Mean of the data in [first, last). |
| [out] | var | The population variance of the data. |
Returns the number of samples processed from [first, last). Definition at line 217 of file ar.hpp.
References welford_nvariance().
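For background, a minimal standalone sketch of Welford's one-pass mean and variance recurrence; it is not the library's implementation, which accumulates the same quantities through iterators:

    #include <cstddef>
    #include <vector>

    // Background sketch of Welford's recurrence (Knuth, TAOCP vol. 2, sec. 4.2.2).
    double population_variance(const std::vector<double>& x, double& mean)
    {
        mean = 0;
        double m2 = 0;                           // N times the variance so far
        for (std::size_t n = 0; n < x.size(); ++n) {
            const double delta = x[n] - mean;    // deviation from the previous mean
            mean += delta / (n + 1);
            m2   += delta * (x[n] - mean);       // second factor uses the updated mean
        }
        return x.empty() ? 0.0 : m2 / x.size();  // divide by N for the population value
    }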
std::size_t
ar::welford_variance_sample(InputIterator first,
                            InputIterator last,
                            OutputType1&  mean,
                            OutputType2&  var)
Compute the mean and sample variance using Welford's algorithm.
| [in] | first | Beginning of the input data range. |
| [in] | last | Exclusive end of the input data range. |
| [out] | mean | Mean of the data in [first, last). |
| [out] | var | The sample variance of the data. |
Returns the number of samples processed from [first, last). Definition at line 241 of file ar.hpp.
References welford_nvariance().
void
ar::zohar_linear_solve(RandomAccessIterator a_first,
                       RandomAccessIterator a_last,
                       ForwardIterator      d_first)
Solve a real-valued, symmetric Toeplitz set of linear equations in-place.
That is, compute
\[ L_{n+1}^{-1} d_{n+1} \mbox{ for } L_{n+1} = \bigl(\begin{smallmatrix} 1 & \tilde{a}_n \\ a_n & L_n \end{smallmatrix}\bigr) \]
given \(\vec{a}\) and \(\vec{d}\). The dimension of the problem is fixed by n = distance(a_first, a_last). The working precision is fixed by the value_type of d_first.
| [in] | a_first | Beginning of the range containing \(\vec{a}\). |
| [in] | a_last | End of the range containing \(\vec{a}\). |
| [in,out] | d_first | Beginning of the range containing \(\vec{d}\). Also the beginning of the output range to which n+1 entries will be written. |
Definition at line 1422 of file ar.hpp.
References zohar_linear_solve().
void
ar::zohar_linear_solve(RandomAccessIterator a_first,
                       RandomAccessIterator a_last,
                       RandomAccessIterator r_first,
                       ForwardIterator      d_first)
Solve a Toeplitz set of linear equations in-place.
That is, compute
\[ L_{n+1}^{-1} d_{n+1} \mbox{ for } L_{n+1} = \bigl(\begin{smallmatrix} 1 & \tilde{a}_n \\ r_n & L_n \end{smallmatrix}\bigr) \]
given \(\vec{a}\), \(\vec{r}\), and \(\vec{d}\). The dimension of the problem is fixed by n = distance(a_first, a_last). A symmetric Toeplitz solve can be performed by having \(\vec{a}\) and \(\vec{r}\) iterate over the same data. The Hermitian case requires two buffers with \(\vec{r}\) being the conjugate of \(\vec{a}\). The working precision is fixed by the value_type of d_first.
| [in] | a_first | Beginning of the range containing \(\vec{a}\). |
| [in] | a_last | End of the range containing \(\vec{a}\). |
| [in] | r_first | Beginning of the range containing \(\vec{r}\). |
| [in,out] | d_first | Beginning of the range containing \(\vec{d}\). Also the beginning of the output range to which n+1 entries will be written. |
Definition at line 1389 of file ar.hpp.
References zohar_linear_solve().
void
ar::zohar_linear_solve(RandomAccessIterator a_first,
                       RandomAccessIterator a_last,
                       RandomAccessIterator r_first,
                       InputIterator        d_first,
                       OutputIterator       s_first)
Solve a Toeplitz set of linear equations.
That is, find \(s_{n+1}\) satisfying
\[ L_{n+1} s_{n+1} = d_{n+1} \mbox{ where } L_{n+1} = \bigl(\begin{smallmatrix} 1 & \tilde{a}_n \\ r_n & L_n \end{smallmatrix}\bigr) \]
given \(\vec{a}\), \(\vec{r}\), and \(\vec{d}\). The dimension of the problem is fixed by n = distance(a_first, a_last). A symmetric Toeplitz solve can be performed by having \(\vec{a}\) and \(\vec{r}\) iterate over the same data. The Hermitian case requires two buffers with \(\vec{r}\) being the conjugate of \(\vec{a}\). The working precision is fixed by the value_type of d_first.
The algorithm is from Zohar, Shalhav. "The Solution of a Toeplitz Set of Linear Equations." J. ACM 21 (April 1974): 272-276. http://dx.doi.org/10.1145/321812.321822. It has complexity like O(2*(n+1)^2). Zohar improved upon earlier work from page 1504 of Trench, William F. "Weighting Coefficients for the Prediction of Stationary Time Series from the Finite Past." SIAM Journal on Applied Mathematics 15 (November 1967): 1502-1510. http://www.jstor.org/stable/2099503. See Bunch, James R. "Stability of Methods for Solving Toeplitz Systems of Equations." SIAM Journal on Scientific and Statistical Computing 6 (1985): 349-364. http://dx.doi.org/10.1137/0906025 for a discussion of the algorithm's stability characteristics.
| [in] | a_first | Beginning of the range containing \(\vec{a}\). |
| [in] | a_last | End of the range containing \(\vec{a}\). |
| [in] | r_first | Beginning of the range containing \(\vec{r}\). |
| [in] | d_first | Beginning of the range containing \(\vec{d}\) which should have n+1 entries available. |
| [out] | s_first | Beginning of the output range to which n+1 entries will be written. |
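A hedged usage sketch of this overload solving a small symmetric Toeplitz system; the numeric values are illustrative and the ordering of \(\vec{a}\) follows the \(\tilde{a}_n\) block definition above:

    #include <vector>
    #include "ar.hpp"

    // Hedged sketch: solve a symmetric Toeplitz system of dimension n + 1 = 3
    // whose unit diagonal is implied and whose off-diagonal values are 0.5 and
    // 0.25.  For the symmetric case, a and r iterate over the same data.
    void toeplitz_example()
    {
        const std::vector<double> a = { 0.5, 0.25 };      // \vec{a} (= \vec{r} here)
        const std::vector<double> d = { 1.0, 2.0, 3.0 };  // n + 1 right-hand-side entries
        std::vector<double> s(d.size());                  // receives n + 1 solution entries
        ar::zohar_linear_solve(a.begin(), a.end(),        // \vec{a}
                               a.begin(),                 // \vec{r}: same data => symmetric
                               d.begin(), s.begin());
    }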