Maximum Likelihood Estimation (MLE) is a method used to estimate the parameters of a statistical model by maximizing the likelihood function. Suppose we have a random sample $X_1, X_2, \ldots, X_n$ from a probability distribution with probability density function (pdf) or probability mass function (pmf) $f(x; \theta)$, where $\theta$ represents the parameter(s) to be estimated.
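As a concrete running example (a standard special case, not part of the original derivation), consider a sample of Bernoulli trials with unknown success probability $p$, so that each observation equals $1$ with probability $p$ and $0$ with probability $1-p$:
\begin{equation}
f(x; p) = p^{x} (1 - p)^{1 - x}, \qquad x \in \{0, 1\}.
\end{equation}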
The likelihood function $ L(\theta) $ is defined as the joint probability density or mass function of the observed data, considered as a function of the parameter(s) $ \theta $:
\begin{equation}
L(\theta) = \prod_{i=1}^{n} f(x_i; \theta)
\end{equation}
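For the Bernoulli example, with $s = \sum_{i=1}^{n} x_i$ denoting the number of observed successes, the likelihood becomes
\begin{equation}
L(p) = \prod_{i=1}^{n} p^{x_i} (1 - p)^{1 - x_i} = p^{s} (1 - p)^{n - s}.
\end{equation}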
The goal of MLE is to find the parameter value(s) $\hat{\theta}$ that maximize(s) the likelihood function, or equivalently, the log-likelihood function $\ell(\theta)$:
\begin{equation}
\ell(\theta) = \log L(\theta) = \sum_{i=1}^{n} \log f(x_i; \theta)
\end{equation}
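In the Bernoulli case this gives
\begin{equation}
\ell(p) = s \log p + (n - s) \log (1 - p),
\end{equation}
which is much easier to differentiate than the product form of $L(p)$.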
To find $\hat{\theta}$, we differentiate the log-likelihood function with respect to $\theta$, set the derivative(s) equal to zero, and solve for $\hat{\theta}$:
\begin{equation}
\frac{\partial \ell(\theta)}{\partial \theta} = 0
\end{equation}
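Continuing the Bernoulli example, setting the derivative of $\ell(p)$ to zero,
\begin{equation}
\frac{\partial \ell(p)}{\partial p} = \frac{s}{p} - \frac{n - s}{1 - p} = 0,
\end{equation}
and solving yields $\hat{p} = s/n$, the sample proportion of successes.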
If the log-likelihood function is concave, the critical point(s) obtained by solving the above equation correspond to the maximum likelihood estimator(s) $\hat{\theta}$. In practice it is often more convenient to work with the negative log-likelihood function $-\ell(\theta)$: maximizing $L(\theta)$ is equivalent to minimizing $-\ell(\theta)$, which matches the standard formulation of numerical optimization routines as minimizers.
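When no closed-form solution exists, the negative log-likelihood is typically minimized numerically. The following is a minimal sketch (using NumPy and SciPy, with an illustrative data vector that is not taken from the text) that recovers the Bernoulli MLE numerically and compares it with the closed-form answer $\hat{p} = s/n$:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative Bernoulli sample (7 successes out of 10 trials)
x = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])

def neg_log_likelihood(p):
    # -ell(p) = -[ s*log(p) + (n - s)*log(1 - p) ]
    s, n = x.sum(), x.size
    return -(s * np.log(p) + (n - s) * np.log(1 - p))

# Minimize -ell(p) over (0, 1); the interval is clipped slightly
# so that log() stays finite at the boundaries.
result = minimize_scalar(neg_log_likelihood,
                         bounds=(1e-9, 1 - 1e-9), method="bounded")

print(result.x)   # numerical MLE, approximately 0.7
print(x.mean())   # closed-form MLE s/n = 0.7
\end{verbatim}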
Properties of the MLE, including consistency, asymptotic normality, and asymptotic efficiency, make it one of the most widely used methods for parameter estimation in statistics.