From Jane Harvill
In previous lessons, evaluations of point estimators have been based on their mean squared error (MSE) performance. MSE is a special case of a function called a loss…
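As a rough illustration of the idea in this description (not the lesson's own example): MSE is the expected value of the squared-error loss L(θ, θ̂) = (θ̂ − θ)², so it can be approximated by averaging that loss over repeated samples. All numbers below are made up for the sketch.

```python
import random
import statistics

# Hypothetical illustration: estimate MSE of the sample mean by Monte Carlo,
# treating MSE as the average of the squared-error loss over many samples.
random.seed(1)
theta = 5.0          # true mean (known here only because we simulate)
n, reps = 20, 2000

losses = []
for _ in range(reps):
    sample = [random.gauss(theta, 2.0) for _ in range(n)]
    theta_hat = statistics.mean(sample)        # estimator: the sample mean
    losses.append((theta_hat - theta) ** 2)    # squared-error loss

mse = statistics.mean(losses)                  # Monte Carlo estimate of MSE
print(round(mse, 3))                           # close to Var(X)/n = 4/20 = 0.2
```

Other loss functions (absolute error, for instance) would be averaged the same way; MSE is just the special case with squared-error loss.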
From Jane Harvill
In the previous lesson on best unbiased estimators and the Cramér-Rao Lower Bound, the concept of sufficiency was not used. We will now consider how sufficiency is a…
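For reference, the Cramér-Rao Lower Bound mentioned above can be stated as follows (for an i.i.d. sample, under the usual regularity conditions; notation is the standard one, not necessarily the lesson's):

```latex
% Cramér-Rao lower bound for an estimator W(X) of tau(theta)
\[
  \operatorname{Var}_{\theta}\!\bigl(W(\mathbf{X})\bigr)
    \;\ge\;
    \frac{\bigl(\tfrac{d}{d\theta}\,\mathrm{E}_{\theta} W(\mathbf{X})\bigr)^{2}}
         {n\, I(\theta)},
  \qquad
  I(\theta)
    = \mathrm{E}_{\theta}\!\left[
        \left(\frac{\partial}{\partial\theta}\,\log f(X \mid \theta)\right)^{2}
      \right],
\]
```

where $I(\theta)$ is the Fisher information of a single observation. An unbiased estimator whose variance attains this bound is a best unbiased estimator.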
From Jane Harvill
As we saw in the previous lesson, a comparison of estimators based on the mean squared error (MSE) may not yield a clear favorite. It turns out there is no "one…
From Jane Harvill
The first two criteria for evaluating point estimators are the bias of the estimator and the mean squared error of the estimator. The goal is to choose the estimator…
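The two criteria named above are linked by the standard decomposition MSE(θ̂) = Var(θ̂) + Bias(θ̂)². A small sketch (with made-up numbers, not the lesson's example) can verify the identity numerically for a deliberately biased estimator:

```python
import random
import statistics

# Illustrative check of MSE = Var + Bias^2 for the biased estimator 0.9 * xbar.
random.seed(2)
theta, n, reps = 10.0, 15, 4000

estimates = []
for _ in range(reps):
    sample = [random.gauss(theta, 3.0) for _ in range(n)]
    estimates.append(0.9 * statistics.mean(sample))   # shrunken (biased) mean

mse = statistics.mean([(e - theta) ** 2 for e in estimates])
var = statistics.pvariance(estimates)                 # variance of the estimator
bias = statistics.mean(estimates) - theta             # bias of the estimator
print(round(mse, 3), round(var + bias ** 2, 3))       # the two quantities agree
```

The identity holds exactly for the simulated quantities, which is why trading a little bias for a large variance reduction can lower the MSE.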
From Jane Harvill
The Bayesian approach to statistics is fundamentally different from the classical approach that we have been discussing. However, some aspects of the Bayesian approach…
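One way to see the difference in approach is the conjugate Beta-Bernoulli update, a standard Bayesian textbook example (assumed here; the lesson may use a different one): the parameter itself gets a distribution, which the data update.

```python
# Hypothetical sketch: with a Beta(a, b) prior on a Bernoulli success
# probability p, the posterior after s successes in n trials is
# Beta(a + s, b + n - s); the posterior mean is a common Bayes estimator.
a, b = 1, 1                  # uniform prior on p (assumed for illustration)
n, s = 20, 14                # made-up data: 14 successes in 20 trials

post_mean = (a + s) / (a + b + n)   # mean of Beta(a + s, b + n - s)
print(round(post_mean, 3))          # 15/22, about 0.682
```

Note the estimate sits between the prior mean 0.5 and the sample proportion 0.7, a shrinkage behavior the classical estimators discussed so far do not exhibit.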
From Jane Harvill
The method of maximum likelihood is, by far, the most popular technique for deriving estimators. It is based on finding a function of the data at which the likelihood…
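A minimal sketch of the idea (assuming a Bernoulli model and made-up data, not necessarily the lesson's example): maximize the log-likelihood over candidate parameter values and compare with the closed-form MLE, the sample proportion.

```python
import math

# Grid-search maximization of the Bernoulli log-likelihood; the maximizer
# should match the closed-form MLE p_hat = (number of successes) / n.
data = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]   # made-up 0/1 observations

def log_lik(p):
    return sum(math.log(p if x == 1 else 1 - p) for x in data)

grid = [i / 1000 for i in range(1, 1000)]   # candidate p values in (0, 1)
p_mle = max(grid, key=log_lik)
print(p_mle, sum(data) / len(data))         # both equal 0.7
```

In practice the maximum is found by calculus (setting the score to zero) rather than a grid, but the picture is the same: the MLE is the parameter value at which the observed data are most probable.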
From Jane Harvill
The method of moments is thought to be one of the oldest methods, if not the oldest, for finding point estimators. First introduced in 1887 by Chebyshev in his proof on the…
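The mechanics can be sketched in a few lines (with an assumed Uniform(0, θ) model and simulated data, not the lesson's example): equate a sample moment to the corresponding population moment and solve for the parameter.

```python
import random
import statistics

# Method of moments for X ~ Uniform(0, theta): E[X] = theta / 2,
# so setting xbar = theta / 2 gives the estimator theta_hat = 2 * xbar.
random.seed(3)
theta = 8.0                                             # true value (simulated)
sample = [random.uniform(0, theta) for _ in range(5000)]

theta_hat = 2 * statistics.mean(sample)                 # moment estimator
print(round(theta_hat, 2))                              # near the true theta = 8
```

With k unknown parameters, the first k sample moments are matched to the first k population moments and the resulting system is solved.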
Introduction to Point Estimation
Duration: 04:58
From Jane Harvill
An overview of the next two weeks' lessons is provided, followed by a brief introduction to point estimation.
From Jane Harvill
The previous two lessons both describe data reduction principles in the following way. A function T(x) of the sample is specified, and the principle states that if x… -
From Jane Harvill
In this lesson, an important function in statistics, called the likelihood function, is introduced. This function can also be used to summarize data. There are many ways…
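As a hedged illustration of the definition (with an assumed normal model and made-up data): the likelihood is the joint density of the observed sample, viewed as a function of the parameter, so it ranks parameter values by how well they support the data.

```python
import math

# Likelihood of mu for fixed normal data with known sigma = 1:
# L(mu | x) is the product of the normal densities evaluated at each x_i.
data = [4.8, 5.2, 5.1, 4.9, 5.0]   # made-up observations

def likelihood(mu, sigma=1.0):
    return math.prod(
        math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
        for x in data
    )

print(likelihood(5.0) > likelihood(3.0))   # True: mu = 5 is better supported
```

The data stay fixed and the parameter varies, which is exactly the reverse of how a density is normally read.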
From Jane Harvill
In this lesson, we briefly review the concept of sufficiency and quickly move to minimal sufficiency. Following this is a brief discussion of ancillary statistics. We… -
From Jane Harvill
The information in a random sample is used to make inferences about an unknown parameter theta. If the sample size n is large, then the observed sample is a long list of… -
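The data-reduction point in this description can be sketched concretely (Bernoulli model assumed for illustration): a sufficient statistic condenses the long list of observations without losing information about theta.

```python
# For Bernoulli data, the number of successes sum(x) is sufficient for p:
# two samples with the same total carry identical information about p,
# since their likelihood functions coincide.
def lik(p, data):
    return p ** sum(data) * (1 - p) ** (len(data) - sum(data))

x = [1, 1, 0, 0, 1]   # made-up sample, 3 successes in 5 trials
y = [0, 1, 1, 1, 0]   # different ordering, same total

print(all(abs(lik(p, x) - lik(p, y)) < 1e-12 for p in [0.2, 0.5, 0.8]))  # True
```

So instead of storing n zeros and ones, it suffices to report the single number sum(x) when the goal is inference about p.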
From Jane Harvill
The previous lesson on convergence concepts primarily focused on results as they apply to the sample mean or to a standardized random variable having a limiting normal… -
From Jane Harvill
In this lesson, the effect of allowing the sample size n to increase to infinity is considered. Although this idea is not practical, it does provide useful… -
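One useful guideline of this kind is consistency: as n grows, the estimator concentrates around the true parameter. A simulation sketch (with assumed normal data, not the lesson's example) shows the spread of the sample mean shrinking with n.

```python
import random
import statistics

# The standard deviation of the sample mean across repeated samples
# shrinks like 1/sqrt(n), illustrating consistency of xbar for theta.
random.seed(4)
theta = 2.0

def spread(n, reps=500):
    means = [statistics.mean(random.gauss(theta, 1.0) for _ in range(n))
             for _ in range(reps)]
    return statistics.pstdev(means)

print(spread(10) > spread(100) > spread(1000))   # True: spread shrinks with n
```

No real sample has infinite size, but the limiting behavior tells us which estimators can be trusted for large n.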
From Jane Harvill
Sample values such as the smallest, largest, or middle observations from a random sample can provide additional summary information. The minimum, maximum, or median are…
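A minimal sketch of the terminology (with made-up data): the order statistics are simply the sorted sample values, and the minimum, maximum, and median are particular order statistics.

```python
# Order statistics X_(1) <= X_(2) <= ... <= X_(n) are the sorted sample values.
sample = [7, 2, 9, 4, 11, 5, 3]        # made-up data, n = 7
ordered = sorted(sample)               # [2, 3, 4, 5, 7, 9, 11]

x_min = ordered[0]                     # X_(1), the sample minimum
x_max = ordered[-1]                    # X_(n), the sample maximum
median = ordered[len(ordered) // 2]    # middle order statistic (n odd here)
print(x_min, median, x_max)            # 2 5 11
```

For even n the sample median is conventionally the average of the two middle order statistics.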