From Jane Harvill November 20th, 2020
In previous lessons, evaluations of point estimators have been based on their mean squared error (MSE) performance. MSE is a special case of a function called a loss…
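As a quick, generic illustration of that connection (not taken from the lesson): the MSE of an estimator is its expected loss under squared-error loss, which a short simulation can approximate. Here the estimator is the sample mean of a normal sample.

import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 25, 100_000

# Squared-error loss L(theta, a) = (a - theta)^2; the MSE of an estimator
# is the expected value of this loss over repeated samples.
samples = rng.normal(loc=theta, scale=1.0, size=(reps, n))
estimates = samples.mean(axis=1)           # the estimator: the sample mean
mse = np.mean((estimates - theta) ** 2)    # Monte Carlo estimate of the risk
print(mse, 1.0 / n)                        # theory: Var(Xbar) = 1/n = 0.04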
From Jane Harvill November 19th, 2020
In the previous lesson on best unbiased estimators and the Cramér-Rao Lower Bound, the concept of sufficiency was not used. We will now consider how sufficiency is a…
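One standard way sufficiency connects to best unbiased estimation is conditioning an unbiased estimator on a sufficient statistic (Rao-Blackwell-style). A generic Poisson sketch of that variance reduction, with invented numbers:

import numpy as np

rng = np.random.default_rng(1)
lam, n, reps = 1.0, 10, 50_000            # hypothetical values for illustration

x = rng.poisson(lam, size=(reps, n))
naive = (x[:, 0] == 0).astype(float)      # unbiased for P(X = 0) = exp(-lam)
t = x.sum(axis=1)                          # T = sum of X_i is sufficient
rao_black = ((n - 1) / n) ** t            # E[naive | T]: still unbiased
print(naive.mean(), rao_black.mean(), np.exp(-lam))  # all near 0.368
print(naive.var(), rao_black.var())        # conditioning shrinks the variance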
From Jane Harvill November 18th, 2020
As we saw in the previous lesson, a comparison of estimators based on the mean squared error (MSE) may not yield a clear favorite. It turns out there is no "one…
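A standard illustration of why no single estimator dominates (my own example, not necessarily the lesson's): for binomial data, the usual estimator X/n and a shrinkage-type estimator each have the smaller MSE on different parts of the parameter space.

import numpy as np

rng = np.random.default_rng(2)
n, reps = 16, 200_000

def mses(p):
    x = rng.binomial(n, p, size=reps)
    mle = x / n                                       # the usual estimator
    shrunk = (x + np.sqrt(n) / 2) / (n + np.sqrt(n))  # a shrinkage estimator
    return np.mean((mle - p) ** 2), np.mean((shrunk - p) ** 2)

for p in (0.1, 0.5):
    print(p, mses(p))   # X/n wins near p = 0.1; the other wins near p = 0.5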
From Jane Harvill November 17th, 2020
The first two criteria for evaluating point estimators are the bias of the estimator and the mean squared error of the estimator. The goal is to choose the estimator…
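A minimal simulation sketch of both criteria (a generic example): compare two estimators of a normal variance by their estimated bias and MSE; a biased estimator can still have the smaller MSE.

import numpy as np

rng = np.random.default_rng(3)
sigma2, n, reps = 1.0, 10, 200_000

x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
s2 = x.var(axis=1, ddof=1)    # unbiased sample variance (divisor n - 1)
mle = x.var(axis=1, ddof=0)   # maximum likelihood estimator (divisor n)

for est in (s2, mle):
    bias = est.mean() - sigma2
    mse = np.mean((est - sigma2) ** 2)
    print(bias, mse)          # the biased divisor-n estimator has smaller MSE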
From Jane Harvill November 15th, 2020
The Bayesian approach to statistics is fundamentally different from the classical approach that we have been discussing. However, some aspects of the Bayesian approach…
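One small, generic contrast (invented numbers, not from the lesson): in the Bayesian approach the parameter itself is given a distribution, which the data update; the resulting point estimate differs from the classical one.

# Binomial data with a conjugate Beta(a, b) prior on the success probability p;
# the posterior is then Beta(a + x, b + n - x).
a, b = 2.0, 2.0       # hypothetical prior
n, x = 20, 7          # hypothetical data: 7 successes in 20 trials

post_a, post_b = a + x, b + n - x
posterior_mean = post_a / (post_a + post_b)   # Bayes point estimate of p
mle = x / n                                    # classical point estimate
print(posterior_mean, mle)                     # 0.375 versus 0.35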
From Jane Harvill November 15th, 2020
The method of maximum likelihood is, by far, the most popular technique for deriving estimators. It is based on finding a function of the data at which the likelihood…
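A minimal sketch of the idea (a generic exponential example): find the parameter value at which the likelihood, or equivalently the log-likelihood, is largest, and check it against the closed-form answer.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
x = rng.exponential(scale=1 / 2.5, size=200)   # simulated data, true rate 2.5

def neg_log_lik(lam):
    # Exponential(rate lam) log-likelihood: n*log(lam) - lam*sum(x)
    return -(len(x) * np.log(lam) - lam * x.sum())

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 50), method="bounded")
print(res.x, 1 / x.mean())   # the numeric maximizer matches 1/xbar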
From Jane Harvill November 9th, 2020
The method of moments is thought to be one of the oldest, if not the oldest, methods for finding point estimators. It was first introduced in 1887 by Chebychev in his proof on the…
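A small, generic sketch of the method (not the lesson's example): equate the first two sample moments to their population counterparts and solve, here for the gamma shape and scale.

import numpy as np

rng = np.random.default_rng(5)
alpha, beta = 3.0, 2.0                 # true gamma shape and scale
x = rng.gamma(alpha, beta, size=5000)

# Moment equations: E X = alpha*beta and Var X = alpha*beta**2.
m1 = x.mean()
v = x.var()                            # second central moment
alpha_hat = m1 ** 2 / v
beta_hat = v / m1
print(alpha_hat, beta_hat)             # close to (3, 2)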
From Jane Harvill November 8th, 2020
An overview of the next two weeks' lessons is provided, followed by a brief discussion of point estimation.
From Jane Harvill November 8th, 2020
The previous two lessons both describe data reduction principles in the following way. A function T(x) of the sample is specified, and the principle states that if x…
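To see the flavor of such a principle in a generic Bernoulli sketch (invented data): two different samples with the same value of T(x) = sum of the observations yield the same likelihood function, so likelihood-based conclusions about theta agree.

import numpy as np

# Two different samples with the same value of T(x) = sum(x) = 3:
x = np.array([1, 1, 0, 0, 0, 1, 0, 0])
y = np.array([0, 0, 0, 1, 1, 0, 0, 1])

def lik(data, p):
    t, n = data.sum(), len(data)
    return p ** t * (1 - p) ** (n - t)

p = np.linspace(0.01, 0.99, 5)
print(lik(x, p))
print(lik(y, p))   # identical rows: the two samples carry the same evidence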
From Jane Harvill November 8th, 2020
In this lesson, an important function in statistics, called the likelihood function, is introduced. This function can also be used to summarize data. There are many ways…
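A minimal sketch of the likelihood as a data summary (a generic Poisson example): evaluate the log-likelihood on a grid of parameter values with the data held fixed, and note where it peaks.

import numpy as np

rng = np.random.default_rng(6)
x = rng.poisson(4.0, size=30)

# Poisson log-likelihood, up to an additive constant not involving lambda:
# sum(x)*log(lambda) - n*lambda.
grid = np.linspace(1.0, 8.0, 141)
log_lik = x.sum() * np.log(grid) - len(x) * grid
print(grid[np.argmax(log_lik)], x.mean())   # the peak sits at the sample mean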
From Jane Harvill November 6th, 2020
In this lesson, we briefly review the concept of sufficiency and quickly move to minimal sufficiency. Following this is a brief discussion of ancillary statistics. We…
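A small simulation sketch of ancillarity (a generic example): in a normal location family, the sample range max - min has the same distribution whatever the location parameter, so by itself it carries no information about that parameter.

import numpy as np

rng = np.random.default_rng(7)
n, reps = 10, 100_000

for mu in (0.0, 50.0):
    x = rng.normal(mu, 1.0, size=(reps, n))
    r = x.max(axis=1) - x.min(axis=1)      # the sample range
    print(mu, r.mean(), r.std())           # same summaries for both mu values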
From Jane Harvill November 1st, 2020
The information in a random sample is used to make inferences about an unknown parameter theta. If the sample size n is large, then the observed sample is a long list of…
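A tiny sketch of that reduction (a generic Bernoulli example): a long list of zeros and ones collapses to the pair (n, sum) with no loss of information about theta.

import numpy as np

rng = np.random.default_rng(8)
x = rng.binomial(1, 0.3, size=1000)   # the long list: 1000 zeros and ones

# For a Bernoulli sample, T(x) = sum(x) is sufficient for theta, so the
# pair (n, T) summarizes everything the sample says about theta.
n, t = len(x), x.sum()
print(n, t)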
From Jane Harvill November 1st, 2020
The previous lesson on convergence concepts primarily focused on results as they apply to the sample mean or to a standardized random variable having a limiting normal…
From Jane Harvill November 1st, 2020
In this lesson, the effect of allowing the sample size n to increase to infinity is considered. Although this idea is not practical, it does provide useful…
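A minimal sketch of the kind of result this enables (a generic example): as n grows, the sample mean concentrates at the true mean, i.e. the estimator is consistent.

import numpy as np

rng = np.random.default_rng(9)
mu = 1.5

for n in (10, 100, 1000):
    xbar = rng.normal(mu, 2.0, size=(10_000, n)).mean(axis=1)
    print(n, xbar.mean(), xbar.std())   # spread shrinks like 2/sqrt(n)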
From Jane Harvill October 26th, 2020
Sample values such as the smallest, largest, or middle observations from a random sample can provide additional summary information. The minimum, maximum, or median are…
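A small generic sketch: the order statistics are the sorted sample, and the minimum, maximum, and median are read off directly from it.

import numpy as np

rng = np.random.default_rng(10)
x = rng.uniform(0.0, 1.0, size=9)

xs = np.sort(x)   # order statistics X_(1) <= X_(2) <= ... <= X_(9)
print(xs[0], xs[-1], xs[4])              # minimum, maximum, median = X_(5)
print(x.min(), x.max(), np.median(x))    # the same three summaries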