As we saw in the previous lesson, comparing estimators by their mean squared error (MSE) may not yield a clear favorite. It turns out there is no "one best MSE" estimator: the class of all estimators is simply too large for any single estimator to have the smallest MSE for every value of the parameter. One way to shrink the class of candidates is to impose a restriction on the estimators allowed to compete. The restriction we impose is unbiasedness: to be considered a "best estimator," an estimator must first be unbiased. Recall that the MSE decomposes as variance plus squared bias, so for an unbiased estimator the MSE is just the variance; minimizing variance among unbiased estimators therefore minimizes MSE within that class. If, among all unbiased estimators, we can find one with uniformly minimum variance, we will choose that estimator as the best unbiased estimator.
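To make the first claim concrete, here is a standard sketch (the specific model and the constant $\theta_0$ are chosen purely for illustration): let $X_1, \dots, X_n$ be independent $N(\theta, 1)$ observations, and compare the sample mean $\bar{X}$ with a constant estimator that ignores the data entirely.

\[
\hat{\theta}_1 = \bar{X}, \qquad \operatorname{MSE}_\theta(\hat{\theta}_1) = \operatorname{Var}_\theta(\bar{X}) = \frac{1}{n} \quad \text{for every } \theta,
\]
\[
\hat{\theta}_2 \equiv \theta_0, \qquad \operatorname{MSE}_\theta(\hat{\theta}_2) = (\theta_0 - \theta)^2.
\]

Whenever $|\theta - \theta_0| < 1/\sqrt{n}$, the constant estimator has the smaller MSE, while elsewhere the sample mean wins, so neither estimator dominates the other. Note also that the unbiasedness restriction immediately removes degenerate candidates such as $\hat{\theta}_2$, since $E_\theta(\hat{\theta}_2) = \theta_0 \neq \theta$ in general.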