In this lesson, the assumption of normality of the errors, and hence of the responses, is added to the simple linear regression model. The consequences are that maximum likelihood…

So far, the interval estimators we have discussed are frequentist methods. In this lesson, we discuss Bayesian credible sets - the correct name for Bayesian "confidence intervals." In…

A confidence interval estimator for the shift parameter is found by inverting the CDF of a sufficient statistic for the parameter.

In this first lesson on interval estimation, we address three methods in the frequentist domain for finding interval estimators. The first is the inversion of a test statistic, the second is using…

The method for finding a Bayesian test depends on the posterior of the parameter and a function of the data. Using the posterior probabilities of the null parameter space and the alternative…

This example illustrates how to find the Bayes estimator, under the squared error loss function, of the rate of a Poisson distribution when the prior distribution on the rate is a gamma distribution.
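For this Poisson-gamma setup, the posterior is again a gamma distribution, and the Bayes estimator under squared error loss is the posterior mean. A minimal sketch, using hypothetical data and hypothetical prior hyperparameters a (shape) and b (rate):

```python
# Sketch: Bayes estimator of a Poisson rate under squared error loss.
# Prior: lambda ~ Gamma(a, b) (shape a, rate b).  The values of a, b and
# the data below are hypothetical, chosen purely for illustration.
def bayes_estimate_poisson_rate(data, a, b):
    """Posterior is Gamma(a + sum(data), b + n); under squared error
    loss the Bayes estimator is the posterior mean."""
    n = len(data)
    return (a + sum(data)) / (b + n)

data = [3, 1, 4, 2, 5]  # hypothetical Poisson counts
print(bayes_estimate_poisson_rate(data, a=2, b=1))  # (2 + 15) / (1 + 5)
```

Note how the estimator blends the prior mean a/b with the sample mean: as n grows, the data term dominates.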

An example to illustrate finding the maximum likelihood estimator of the rate of a Poisson distribution.
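The result of that example, that the MLE of a Poisson rate is the sample mean, can be checked numerically. A small sketch with hypothetical data, where a grid search over candidate rates stands in for the calculus argument:

```python
import math

# Sketch: the MLE of a Poisson rate lambda is the sample mean.
# The data and the candidate grid below are hypothetical.
def log_likelihood(lam, data):
    # log of prod(e**(-lam) * lam**x / x!); the x! term is constant in lam
    return sum(-lam + x * math.log(lam) for x in data)

data = [3, 1, 4, 2, 5]
grid = [k / 100 for k in range(1, 1001)]  # candidate rates 0.01 .. 10.00
numeric_mle = max(grid, key=lambda lam: log_likelihood(lam, data))
print(numeric_mle, sum(data) / len(data))  # both equal 3.0 here
```

The log-likelihood is concave in lambda, so the grid maximizer lands on the sample mean whenever the mean is on the grid.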

This example illustrates how to find the maximum likelihood estimator (MLE) of the upper bound of a uniform(0, B) distribution. In this example, calculus cannot be used to find the MLE since the…
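The reasoning behind that example can be sketched directly: the likelihood is B**(-n) when B is at least the largest observation and zero otherwise, so it is strictly decreasing on the feasible region and is maximized at the sample maximum. A sketch with hypothetical data:

```python
# Sketch: MLE of the upper bound B of a uniform(0, B) sample.
# The likelihood is B**(-n) for B >= max(data), and 0 otherwise, so the
# MLE is the largest observation.  The data below are hypothetical.
def likelihood(B, data):
    n = len(data)
    return B ** (-n) if B >= max(data) else 0.0

data = [0.8, 2.6, 1.9, 0.4, 2.2]
mle = max(data)
print(mle)  # 2.6
print(likelihood(2.6, data) > likelihood(3.0, data))  # larger B lowers it
```

Any B below the sample maximum is impossible (likelihood zero), and any B above it is strictly worse, which is why calculus on the interior gives no solution.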

In statistical inference, the frequentist perspective considers the parameter θ to be a fixed, but unknown quantity. Statistics used for inferential purposes ideally have specific optimal…

An example to illustrate how to find the joint probability density function of a random sample from a gamma distribution using the result that the gamma family of distributions is an exponential…

An example to show that the gamma family of PDFs is an exponential family.
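Under one common parameterization (shape α, scale β), the algebra behind this example can be sketched as follows:

```latex
f(x \mid \alpha, \beta)
  = \frac{1}{\Gamma(\alpha)\,\beta^{\alpha}}\, x^{\alpha-1} e^{-x/\beta}
  = \underbrace{\mathbf{1}_{\{x>0\}}}_{h(x)}\;
    \underbrace{\frac{1}{\Gamma(\alpha)\,\beta^{\alpha}}}_{c(\alpha,\beta)}\;
    \exp\!\Big( (\alpha-1)\ln x \;-\; \tfrac{1}{\beta}\, x \Big)
```

With w_1 = α − 1, t_1(x) = ln x, w_2 = −1/β, and t_2(x) = x, this matches the exponential-family form h(x) c(θ) exp(Σ w_i(θ) t_i(x)).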

The Bayesian approach to statistics is fundamentally different from the classical approach that we have been discussing. However, some aspects of the Bayesian approach can be quite helpful to other…

The information in a random sample is used to make inferences about an unknown parameter theta. If the sample size n is large, then the observed sample is a long list of numbers that is difficult, if…

The previous lesson on convergence concepts primarily focused on results as they apply to the sample mean or to a standardized random variable having a limiting normal distribution. There are times…

In this lesson, the effect of allowing the sample size n to increase to infinity is considered. Although this idea is not practical, it does provide useful approximations for the finite-sample case.…

Sample values such as the smallest, largest, or middle observations from a random sample can provide additional summary information. The minimum, maximum, or median are all examples of order…

In previous lessons, properties of samples and of statistics computed on a random sample were discussed under a very general framework; in other words, under the idea that a random sample is selected…

When a random sample is drawn, some summary of the values is usually computed. Any well-defined summary may be expressed mathematically as a function of an n-dimensional vector. The domain of that…

In earlier lessons, we discussed the absence or presence of a relationship between variables, and how to model that relationship. We discussed joint probability functions, conditional probability…

Oftentimes when two random variables (X, Y) are observed, the values of the two variables are related. For example, it may be that knowledge about the value of X gives information about the value of…

The previous lesson concluded with joint and marginal distributions for the discrete case. We now consider the same concepts, but for continuous random vectors. While discrete random vectors and the…

All of the models previously discussed involved only one random variable and were called univariate models. We now move forward and discuss probability models that involve more than one random…

The beta distribution is the last continuous distribution we will discuss. Like the gamma distribution, the beta distribution earns its name from being associated with the beta function. The beta…

In this lesson, we learn about the most important distribution in statistics - the normal distribution. We investigate the probability density function and cumulative distribution function and the…

The gamma family of distributions is a very special family that includes many distributions as specific cases. In this lesson, we begin with the gamma function. We then introduce the gamma…

An overview of the uniform distribution is given, including its probability density function (PDF) and cumulative distribution function (CDF). The mean, variance, and moment generating function are…

In this lesson, a brief review of the general properties of continuous distributions is provided. The end of the lesson is a comparison of the properties for continuous and discrete distributions.

The Poisson distribution is different from all of the discrete distributions we have considered up until this point. Instead of counting the number of "successes" or counting the number of…

The negative binomial distribution is an extension of the geometric distribution, and so is also related to the Bernoulli distribution. Whereas the geometric distribution results from counting the…

The experiment that is used to illustrate when the hypergeometric distribution arises is that of an urn with N balls of two colors; M "white" balls and N - M "black" balls. The…

Like the binomial and hypergeometric distributions, the geometric distribution is related to the Bernoulli(p) distribution. Unlike the binomial and hypergeometric distributions, the geometric…

The binomial distribution is introduced through its unique connection to the Bernoulli distribution through what is called a Bernoulli process. The probability mass function is derived and proven to…
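For reference, the probability mass function in question, for X counting successes in n independent Bernoulli(p) trials, is standardly written as:

```latex
P(X = k) = \binom{n}{k}\, p^{k}\,(1-p)^{\,n-k}, \qquad k = 0, 1, \dots, n,
\quad\text{and}\quad
\sum_{k=0}^{n} \binom{n}{k} p^{k}(1-p)^{n-k} = \big(p + (1-p)\big)^{n} = 1
```

The second identity, an application of the binomial theorem, confirms that the probabilities sum to one.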

The Bernoulli and binomial distributions are used for modeling a dichotomous experiment or a sequence of independent dichotomous experiments, respectively. Alone, both distributions apply to a wide…

The discrete uniform distribution is one of the simplest discrete distributions. In this lesson, the distribution is defined via its probability mass function. The cumulative distribution function…

In this lesson, a brief review of the general properties of discrete distributions is provided.

In this lesson, probability density functions are introduced, defined, and discussed. Properties of probability mass functions and probability density functions are presented. Concepts are…

This lesson is the first of many that discuss probability functions. We begin the discussion of probability functions with the cumulative distribution function (CDF). The CDF is defined and…

Parchman Endowed Lectures, October 25, 2016

2016 Baylor Libraries Symposium: Thomas Paine's Rights of Man, Panel Five (September 30, 2016). Abigail Higgins: An Arbiter of Rights? The Role of the Divine in Locke, Paine, and Jefferson

2016 Baylor Libraries Symposium: Thomas Paine's Rights of Man, Panel Five (September 30, 2016). Dr. Brad Owens: W.R. "Bob" Poage: Retail Politics in Wartime