Minimum Variance Estimation Without Regularity Assumptions And Critical Thinking


In estimation theory and statistics, the Cramér–Rao bound (CRB), Cramér–Rao lower bound (CRLB), Cramér–Rao inequality, Fréchet–Darmois–Cramér–Rao inequality, or information inequality expresses a lower bound on the variance of estimators of a deterministic (fixed, though unknown) parameter. The term is named in honor of Harald Cramér, Calyampudi Radhakrishna Rao, Maurice Fréchet and Georges Darmois, all of whom independently derived this limit to statistical precision in the 1940s.[1][2][3][4][5][6][7]

In its simplest form, the bound states that the variance of any unbiased estimator is at least as high as the inverse of the Fisher information. An unbiased estimator which achieves this lower bound is said to be (fully) efficient. Such an estimator achieves the lowest possible mean squared error among all unbiased estimators, and is therefore the minimum variance unbiased (MVU) estimator. However, in some cases, no unbiased estimator exists which achieves the bound. This may occur even when an MVU estimator exists.

The Cramér–Rao bound can also be used to bound the variance of biased estimators of given bias. In some cases, a biased estimator can have both a variance and a mean squared error that are below the unbiased Cramér–Rao lower bound; see the bound on biased estimators below.

Statement

The Cramér–Rao bound is stated in this section for several increasingly general cases, beginning with the case in which the parameter is a scalar and its estimator is unbiased. All versions of the bound require certain regularity conditions, which hold for most well-behaved distributions. These conditions are listed later in this section.

Scalar unbiased case

Suppose $\theta$ is an unknown deterministic parameter which is to be estimated from measurements $x$, distributed according to some probability density function $f(x;\theta)$. The variance of any unbiased estimator $\hat{\theta}$ of $\theta$ is then bounded by the reciprocal of the Fisher information $I(\theta)$:

$$\operatorname{var}(\hat{\theta}) \;\geq\; \frac{1}{I(\theta)},$$

where the Fisher information $I(\theta)$ is defined by

$$I(\theta) \;=\; \operatorname{E}_x\!\left[\left(\frac{\partial \ell(x;\theta)}{\partial \theta}\right)^{2}\right]$$

and $\ell(x;\theta) = \log f(x;\theta)$ is the natural logarithm of the likelihood function and $\operatorname{E}_x$ denotes the expected value (over $x$).

The efficiency of an unbiased estimator $\hat{\theta}$ measures how close this estimator's variance comes to this lower bound; estimator efficiency is defined as

$$e(\hat{\theta}) \;=\; \frac{I(\theta)^{-1}}{\operatorname{var}(\hat{\theta})},$$

or the minimum possible variance for an unbiased estimator divided by its actual variance. The Cramér–Rao lower bound thus gives

$$e(\hat{\theta}) \;\leq\; 1.$$
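To make the bound concrete, here is a minimal numerical sketch (not part of the original text): for $n$ i.i.d. Bernoulli($p$) observations the Fisher information is $n/(p(1-p))$, so the bound for an unbiased estimator of $p$ is $p(1-p)/n$, and the sample mean is unbiased and attains it. The parameter values below are arbitrary illustrations.

    # Monte Carlo check of the Cramer-Rao bound for a Bernoulli(p) sample.
    # Fisher information for n observations: I(p) = n / (p * (1 - p)),
    # so the bound for an unbiased estimator of p is p * (1 - p) / n.
    import numpy as np

    rng = np.random.default_rng(0)
    p, n, trials = 0.3, 50, 200_000

    crb = p * (1 - p) / n                     # 1 / I(p) for the whole sample
    samples = rng.binomial(1, p, size=(trials, n))
    p_hat = samples.mean(axis=1)              # unbiased estimator of p

    print("CRB        :", crb)
    print("var(p_hat) :", p_hat.var())        # should be close to the bound

The empirical variance of the sample mean matches the bound here, which is consistent with the sample mean being an efficient estimator of $p$.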

General scalar case

A more general form of the bound can be obtained by considering an unbiased estimator $T(X)$ of a function $\psi(\theta)$ of the parameter $\theta$. Here, unbiasedness is understood as stating that $\operatorname{E}[T(X)] = \psi(\theta)$. In this case, the bound is given by

$$\operatorname{var}(T) \;\geq\; \frac{[\psi'(\theta)]^{2}}{I(\theta)},$$

where $\psi'(\theta)$ is the derivative of $\psi(\theta)$ (with respect to $\theta$), and $I(\theta)$ is the Fisher information defined above.
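As a small worked illustration (not from the original text), suppose $X_1,\ldots,X_n$ are i.i.d. $N(\theta,\sigma^2)$ with $\sigma^2$ known and we wish to estimate $\psi(\theta)=\theta^2$. Since $I(\theta)=n/\sigma^2$ and $\psi'(\theta)=2\theta$, the bound reads

$$\operatorname{var}(T) \;\geq\; \frac{(2\theta)^{2}}{n/\sigma^{2}} \;=\; \frac{4\theta^{2}\sigma^{2}}{n}.$$

The unbiased estimator $T=\bar{X}^{2}-\sigma^{2}/n$ has variance $4\theta^{2}\sigma^{2}/n + 2\sigma^{4}/n^{2}$, so it comes within a term of order $1/n^{2}$ of the bound without attaining it exactly.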

Bound on the variance of biased estimators

Apart from being a bound on estimators of functions of the parameter, this approach can be used to derive a bound on the variance of biased estimators with a given bias, as follows. Consider an estimator $\hat{\theta}$ with bias $b(\theta) = \operatorname{E}[\hat{\theta}] - \theta$, and let $\psi(\theta) = b(\theta) + \theta$. By the result above, any unbiased estimator whose expectation is $\psi(\theta)$ has variance greater than or equal to $[\psi'(\theta)]^{2}/I(\theta)$. Thus, any estimator $\hat{\theta}$ whose bias is given by a function $b(\theta)$ satisfies

$$\operatorname{var}(\hat{\theta}) \;\geq\; \frac{[1 + b'(\theta)]^{2}}{I(\theta)}.$$

The unbiased version of the bound is a special case of this result, with $b(\theta) = 0$.

It is trivial to have a small variance: an "estimator" that is constant has a variance of zero. But from the above inequality we find that the mean squared error of a biased estimator is bounded by

$$\operatorname{E}\!\left[(\hat{\theta} - \theta)^{2}\right] \;\geq\; \frac{[1 + b'(\theta)]^{2}}{I(\theta)} + b(\theta)^{2},$$

using the standard decomposition of the MSE into variance plus squared bias. Note, however, that if $1 + b'(\theta) < 1$, this bound might be less than the unbiased Cramér–Rao bound $1/I(\theta)$. For instance, in the standard example of estimating the variance of a normal distribution with known mean, $1 + b'(\theta) = \tfrac{n}{n+2} < 1$.
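As a hedged numerical sketch (not part of the original text), the shrunken sample mean $c\bar{X}$ of a normal mean with known variance has bias $b(\theta) = (c-1)\theta$, so $1 + b'(\theta) = c < 1$; for $\theta$ near zero its mean squared error falls below the unbiased bound $\sigma^{2}/n$. The constants below are arbitrary.

    # Shrinkage estimator c * xbar for a normal mean (sigma known):
    # bias b(theta) = (c - 1) * theta, so 1 + b'(theta) = c, and the biased
    # bound is c^2 * sigma^2 / n + ((c - 1) * theta)^2, which can be smaller
    # than the unbiased bound sigma^2 / n when theta is small.
    import numpy as np

    rng = np.random.default_rng(1)
    theta, sigma, n, c, trials = 0.1, 1.0, 20, 0.8, 200_000

    x = rng.normal(theta, sigma, size=(trials, n))
    est = c * x.mean(axis=1)

    unbiased_crb = sigma**2 / n
    biased_bound = c**2 * sigma**2 / n + ((c - 1) * theta)**2
    mse = np.mean((est - theta)**2)

    print("unbiased CRB :", unbiased_crb)   # 0.05
    print("biased bound :", biased_bound)   # 0.0324
    print("observed MSE :", mse)            # close to the biased bound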

Multivariate case

Extending the Cramér–Rao bound to multiple parameters, define a parameter column vector

$$\boldsymbol{\theta} = [\theta_1, \theta_2, \ldots, \theta_d]^{T} \in \mathbb{R}^{d}$$

with probability density function $f(x;\boldsymbol{\theta})$ which satisfies the two regularity conditions below.

The Fisher information matrix is a $d \times d$ matrix with element $I_{m,k}$ defined as

$$I_{m,k} \;=\; \operatorname{E}\!\left[\frac{\partial \log f(x;\boldsymbol{\theta})}{\partial \theta_m}\,\frac{\partial \log f(x;\boldsymbol{\theta})}{\partial \theta_k}\right].$$

Let $\boldsymbol{T}(X) = (T_1(X), \ldots, T_d(X))^{T}$ be an estimator of any vector function of the parameters, and denote its expectation vector $\operatorname{E}[\boldsymbol{T}(X)]$ by $\boldsymbol{\psi}(\boldsymbol{\theta})$. The Cramér–Rao bound then states that the covariance matrix of $\boldsymbol{T}(X)$ satisfies

$$\operatorname{cov}_{\boldsymbol{\theta}}\!\left(\boldsymbol{T}(X)\right) \;\geq\; \frac{\partial \boldsymbol{\psi}(\boldsymbol{\theta})}{\partial \boldsymbol{\theta}}\,[I(\boldsymbol{\theta})]^{-1}\left(\frac{\partial \boldsymbol{\psi}(\boldsymbol{\theta})}{\partial \boldsymbol{\theta}}\right)^{T},$$

where the matrix inequality $A \geq B$ means that $A - B$ is positive semidefinite, and $\partial \boldsymbol{\psi}(\boldsymbol{\theta})/\partial \boldsymbol{\theta}$ is the Jacobian matrix whose $(i,j)$ element is $\partial \psi_i(\boldsymbol{\theta})/\partial \theta_j$.

If $\boldsymbol{T}(X)$ is an unbiased estimator of $\boldsymbol{\theta}$ (i.e., $\boldsymbol{\psi}(\boldsymbol{\theta}) = \boldsymbol{\theta}$), then the Cramér–Rao bound reduces to

$$\operatorname{cov}_{\boldsymbol{\theta}}\!\left(\boldsymbol{T}(X)\right) \;\geq\; [I(\boldsymbol{\theta})]^{-1}.$$

If it is inconvenient to compute the inverse of the Fisher information matrix, then one can simply take the reciprocal of the corresponding diagonal element to find a (possibly loose) lower bound.[8]
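As a small illustration of this shortcut (not part of the original text), the sketch below uses the standard Fisher information matrix of a Gamma model with shape $k$ and scale $\theta$, whose off-diagonal terms are nonzero, and compares a diagonal element of the inverse matrix with the reciprocal of the corresponding diagonal element; the parameter values are arbitrary.

    # Fisher information matrix for a single Gamma(k, theta) observation
    # (shape k, scale theta): [[trigamma(k), 1/theta], [1/theta, k/theta^2]].
    # The reciprocal of a diagonal element, 1 / I_jj, never exceeds the
    # corresponding diagonal element of the inverse, so it is a (possibly
    # loose) lower bound on the variance of an unbiased estimator.
    import numpy as np
    from scipy.special import polygamma

    k, theta = 2.5, 1.3
    I = np.array([[polygamma(1, k), 1.0 / theta],
                  [1.0 / theta,     k / theta**2]])

    I_inv = np.linalg.inv(I)
    print("[I^-1]_11 :", I_inv[0, 0])    # tighter bound for var of k-hat
    print("1 / I_11  :", 1.0 / I[0, 0])  # looser (smaller) value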

Regularity conditions

The bound relies on two weak regularity conditions on the probability density function, $f(x;\theta)$, and the estimator $T(x)$:

  • The Fisher information is always defined; equivalently, for all $x$ such that $f(x;\theta) > 0$, the derivative $\frac{\partial}{\partial\theta}\log f(x;\theta)$ exists and is finite.
  • The operations of integration with respect to $x$ and differentiation with respect to $\theta$ can be interchanged in the expectation of $T$; that is,

$$\frac{\partial}{\partial\theta}\left[\int T(x)\, f(x;\theta)\,dx\right] \;=\; \int T(x)\left[\frac{\partial}{\partial\theta} f(x;\theta)\right]dx$$

whenever the right-hand side is finite. This second condition typically fails when the support of the distribution depends on the parameter, as for the uniform distribution on $[0,\theta]$.


Minimum Variance Estimation

The Theorem below gives a lower bound for the variance (or mse) of an estimator.


Theorem 8.1
Let $T = T(X)$, based on a sample $X$ from $f(x;\theta)$, be an estimator of $\theta$ (assumed to be one-dimensional). Then

$$\operatorname{Var}_\theta(T) \;\geq\; \frac{[\tau'(\theta)]^{2}}{I(\theta)} \qquad\qquad (8.7)$$

and

$$\operatorname{E}_\theta\!\left[(T - \theta)^{2}\right] \;\geq\; \frac{[\tau'(\theta)]^{2}}{I(\theta)} + [\tau(\theta) - \theta]^{2} \qquad\qquad (8.8)$$

where $\tau(\theta) = \operatorname{E}_\theta(T)$ is given in (8.1) and $I(\theta)$ is defined in (7.6).

Outline of Proof. [The validity depends on regularity conditions under which the interchange of integration and differentiation operations is permitted, and on the existence and integrability of various partial derivatives.]

Define the score $V = \frac{\partial}{\partial\theta}\log L(\theta; X)$ as in (7.7), and note that $\operatorname{E}_\theta(V) = 0$, so $\operatorname{Var}(V) = \operatorname{E}_\theta(V^{2}) = I(\theta)$, and

$$\operatorname{Cov}(T, V) \;=\; \operatorname{E}_\theta(TV) \;=\; \int T(x)\,\frac{\partial \log L(\theta;x)}{\partial\theta}\, L(\theta;x)\,dx \;=\; \frac{\partial}{\partial\theta}\int T(x)\, L(\theta;x)\,dx \;=\; \tau'(\theta).$$
Recall that the absolute value of the correlation coefficient between any two random variables is less than or equal to 1, and that

$$\rho^{2}_{T,V} \;=\; \frac{[\operatorname{Cov}(T, V)]^{2}}{\operatorname{Var}(T)\operatorname{Var}(V)},$$

so that we have

$$[\operatorname{Cov}(T, V)]^{2} \;\leq\; \operatorname{Var}(T)\operatorname{Var}(V),$$

or

$$[\tau'(\theta)]^{2} \;\leq\; \operatorname{Var}(T)\, I(\theta),$$
thus proving (8.7). Now (8.8) follows using (8.3).
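A small Monte Carlo sketch (not part of the original notes) can check the quantities used in the proof for $X_i \sim N(\theta, \sigma^2)$ with $T = \bar{X}$: the score $V$ has mean zero and variance $I(\theta) = n/\sigma^2$, and $\operatorname{Cov}(T, V) = \tau'(\theta) = 1$. The constants below are arbitrary.

    # Check E[V] = 0, Var(V) = I(theta) = n / sigma^2 and Cov(T, V) = 1
    # for a normal sample with T = sample mean and score
    # V = d/dtheta log L = sum(x_i - theta) / sigma^2.
    import numpy as np

    rng = np.random.default_rng(2)
    theta, sigma, n, trials = 1.0, 2.0, 10, 200_000

    x = rng.normal(theta, sigma, size=(trials, n))
    T = x.mean(axis=1)
    V = (x - theta).sum(axis=1) / sigma**2

    print("E[V]      :", V.mean())                      # ~ 0
    print("Var(V)    :", V.var(), "vs", n / sigma**2)   # ~ I(theta)
    print("Cov(T, V) :", np.cov(T, V)[0, 1])            # ~ tau'(theta) = 1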

Corollary. For the class of unbiased estimators, $\tau(\theta) = \theta$, so that $\tau'(\theta) = 1$ and

$$\operatorname{Var}_\theta(T) \;\geq\; \frac{1}{I(\theta)}. \qquad\qquad (8.9)$$

Now inequality (8.9) is known as the Cramér-Rao lower bound, or sometimes the information inequality. It provides (in "regular estimation" cases) a lower bound on the variance of an unbiased estimator, $T$. The inequality is generally attributed to Cramér's work in 1946 and Rao's work in 1945, though it was apparently first given by M. Fréchet in 1937-38.


Definition 8.8
The (absolute) efficiency of an unbiased estimator $T$ is defined as

$$\operatorname{eff}(T) \;=\; \frac{1/I(\theta)}{\operatorname{Var}_\theta(T)}. \qquad\qquad (8.10)$$

Note that, because of (8.9), $0 < \operatorname{eff}(T) \leq 1$, so we can think of $\operatorname{eff}(T)$ as a measure of the efficiency of any given estimator, rather than the relative efficiency of one with respect to another as in Definition 8.1.

In the case where $\operatorname{eff}(T) = 1$, so that the actual lower bound of $\operatorname{Var}_\theta(T)$ is achieved, some texts refer to the estimator $T$ as efficient. This terminology is not universally accepted. Some prefer to use the phrase minimum variance bound (MVB) for $1/I(\theta)$, and an estimator which is unbiased and which attains this bound is called a minimum variance bound unbiased (MVBU) estimator.
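As a hedged illustration of this definition (not part of the original notes), the sample median of a normal sample is unbiased for the mean but has variance roughly $\pi\sigma^{2}/(2n)$ for large $n$, so its efficiency is about $2/\pi \approx 0.64$; the sketch below estimates this by simulation with arbitrary constants.

    # Empirical efficiency of the sample median as an estimator of a
    # normal mean: eff = (sigma^2 / n) / Var(median) ~ 2 / pi ~ 0.64.
    import numpy as np

    rng = np.random.default_rng(3)
    mu, sigma, n, trials = 0.0, 1.0, 101, 100_000

    x = rng.normal(mu, sigma, size=(trials, n))
    med = np.median(x, axis=1)

    mvb = sigma**2 / n
    print("efficiency of median:", mvb / med.var())   # about 0.64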


Example 8.4
In the problem of estimating the mean $\mu$ in a normal distribution with mean $\mu$ and known variance $\sigma^2$, find the MVB of an unbiased estimator.

The MVB is $1/I(\mu)$, where $I(\mu) = \operatorname{E}_\mu\!\left[\left(\frac{\partial \log L}{\partial \mu}\right)^{2}\right]$. For a sample $x_1, \ldots, x_n$ we have for the likelihood,

$$\log L(\mu; x) \;=\; -\frac{n}{2}\log(2\pi\sigma^{2}) \;-\; \frac{1}{2\sigma^{2}}\sum_{i=1}^{n}(x_i - \mu)^{2},$$

so

$$\frac{\partial \log L}{\partial \mu} \;=\; \frac{1}{\sigma^{2}}\sum_{i=1}^{n}(x_i - \mu)
\qquad\text{and}\qquad
I(\mu) \;=\; \operatorname{E}_\mu\!\left[\left(\frac{1}{\sigma^{2}}\sum_{i=1}^{n}(X_i - \mu)\right)^{2}\right] \;=\; \frac{n}{\sigma^{2}}.$$

So the MVB is $\sigma^{2}/n$.
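A quick simulation (not part of the original notes) is consistent with this: the sample mean is unbiased, and its empirical variance matches the MVB $\sigma^{2}/n$, so it is an MVBU estimator of the mean. The constants below are arbitrary.

    # The sample mean of n draws from N(mu, sigma^2) attains the MVB sigma^2 / n.
    import numpy as np

    rng = np.random.default_rng(4)
    mu, sigma, n, trials = 3.0, 1.5, 25, 200_000

    xbar = rng.normal(mu, sigma, size=(trials, n)).mean(axis=1)
    print("MVB       :", sigma**2 / n)   # 0.09
    print("var(xbar) :", xbar.var())     # ~ 0.09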

