Estimation in statistics involves making inferences about population parameters based on sample data. It is a crucial part of data analysis and allows researchers to draw conclusions about a population without examining every individual member. There are two types of estimation:
1. Point Estimation
- Definition: A single value estimate of a population parameter.
- Example: The sample mean (x̄) is a point estimate of the population mean (μ).
2. Interval Estimation
- Definition: Provides a range of values within which the population parameter is expected to lie, along with a confidence level.
- Example: A 95% confidence interval for the population mean might be (45.6, 49.8).
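Both ideas can be sketched in Python using only the standard library. The sample values below are illustrative (they are not taken from the text), and the interval uses a z critical value for simplicity; a t-value would be more appropriate for a sample this small:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

# Illustrative sample of 10 hypothetical measurements
sample = [46.1, 48.3, 47.0, 49.2, 45.5, 48.8, 46.4, 47.9, 48.5, 46.6]
n = len(sample)

# Point estimate: the sample mean x̄ estimates the population mean μ
x_bar = mean(sample)

# Interval estimate: x̄ ± z * s / √n at 95% confidence
s = stdev(sample)                  # sample standard deviation
z = NormalDist().inv_cdf(0.975)    # z critical value, ≈ 1.96
margin = z * s / sqrt(n)
lower, upper = x_bar - margin, x_bar + margin

print(f"Point estimate of μ: {x_bar:.2f}")
print(f"95% CI for μ: ({lower:.2f}, {upper:.2f})")
```

Note how the interval is centered on the point estimate: the point estimate gives a single best guess, while the interval quantifies its uncertainty.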
Methods for Obtaining Estimates
- Method of Moments – Estimates population parameters by equating sample moments (mean, variance, etc.) to their corresponding population moments. For example, the sample mean is equated to the population mean, and the sample variance to the population variance.
- Maximum Likelihood Estimation (MLE) – Estimates population parameters by maximizing the likelihood function, which measures how likely it is to observe the sample data given the parameter values. For example, estimating the mean and variance of a normal distribution by maximizing the likelihood of observing the given sample data.
- Bayesian Estimation – Incorporates prior knowledge or beliefs about the parameter in the form of a prior distribution, and updates this with sample data to obtain a posterior distribution. For example, using prior information about the mean of a population and updating it with sample data to obtain a posterior estimate.
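As a rough sketch, the three methods above can be compared on a single sample assumed to come from a normal distribution. Everything in this example is illustrative: the data, the prior, and the known-variance simplification used for the Bayesian update are assumptions made here, not part of the text:

```python
from statistics import mean, pvariance

# Illustrative sample assumed drawn from a normal distribution
sample = [46.1, 48.3, 47.0, 49.2, 45.5, 48.8, 46.4, 47.9, 48.5, 46.6]
n = len(sample)

# 1. Method of moments: equate sample moments to population moments.
#    For a normal model, μ̂ = sample mean and σ̂² = second central moment.
mom_mu = mean(sample)
mom_var = pvariance(sample)   # divide-by-n (second central moment)

# 2. Maximum likelihood: for a normal distribution the likelihood happens
#    to be maximized at the same values (the MLE of σ² is also the
#    divide-by-n form), so the two methods coincide here.
mle_mu = mean(sample)
mle_var = pvariance(sample)

# 3. Bayesian estimation with a conjugate normal prior on μ
#    (variance treated as known, for simplicity).
prior_mu, prior_var = 50.0, 4.0   # hypothetical prior belief about μ
known_var = mom_var               # treat the data variance as known

# The posterior of μ is normal; its mean is a precision-weighted
# average of the prior mean and the sample mean.
post_precision = 1 / prior_var + n / known_var
post_mu = (prior_mu / prior_var + n * mom_mu / known_var) / post_precision

print(f"Method of moments / MLE mean: {mom_mu:.2f}")
print(f"Bayesian posterior mean:      {post_mu:.2f}")
```

The posterior mean lands between the prior mean and the sample mean, pulled toward the data as the sample grows; this is the sense in which Bayesian estimation "updates" prior beliefs with observed data.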
We will cover these methods in more detail in the next few chapters.