89 minute read

Notice a typo? Please submit an issue or open a PR.

The goal of input analysis is to ensure that the random variables we use in our simulations adequately approximate the real-world phenomena we are modeling. We might use random variables to model interarrival times, service times, breakdown times, or maintenance times, among other things. These variables don't come out of thin air; we have to specify them appropriately.

If we specify our random variables improperly, we can end up with a GIGO simulation: garbage-in-garbage-out. This situation can ruin the entire model and invalidate any results we might obtain.

Let's consider a single-server queueing system with *constant* service times of ten minutes. Let's incorrectly assume that we have *constant* interarrival times of twelve minutes. We should expect to never have a line under this assumption because the service times are always shorter than the interarrival times.

However, suppose that, in reality, the interarrival times are exponential with a mean of twelve minutes. In this case, the simulation never sees the line that actually occurs. In fact, the line might get quite long, and the simulation has no way of surfacing that fact.

Here's the high-level game plan for performing input analysis. First, we'll collect some data for analysis for any random variable of interest. Next, we'll determine, or at least estimate, the underlying distribution along with associated parameters: for example, Nor(30,8). We will then conduct a formal statistical test to see if the distribution we chose is "approximately" correct. If we fail the test, then our guess is wrong, and we have to go back to the drawing board.

In this lesson, we will look at some high-level methods for examining data and guessing the underlying distribution.

We can always present data in the form of a histogram. Suppose we collect one hundred observations, and we plot the following histogram. We can't really determine the underlying distribution because we don't have enough cells.

Suppose we plot the following histogram. The resolution on this plot is too granular. We can potentially see the underlying distribution, but we risk missing the forest for the trees.

If we take the following histogram, we get a much clearer picture of the underlying distribution.

It turns out that, if we take enough observations, the histogram will eventually converge to the true pdf/pmf of the random variable we are trying to model, according to the Glivenko–Cantelli theorem.
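To see this convergence concretely, here's a minimal sketch (not from the lecture; the Exp(1) distribution, sample size, cell width, and cell count are arbitrary choices for illustration). It bins exponential draws and compares the density-scaled histogram against the true pdf $f(x) = e^{-x}$ at each cell midpoint:

```python
import math
import random

# Sketch: with enough Exp(1) observations, a density-scaled histogram
# tracks the true pdf f(x) = e^{-x}.
random.seed(42)
n = 100_000
data = [random.expovariate(1.0) for _ in range(n)]

width, num_cells = 0.25, 20
counts = [0] * num_cells
for x in data:
    cell = int(x / width)
    if cell < num_cells:          # ignore the tail beyond the last cell
        counts[cell] += 1

# Largest gap between the empirical density and the pdf at cell midpoints.
max_err = max(
    abs(counts[i] / (n * width) - math.exp(-(i + 0.5) * width))
    for i in range(num_cells)
)
```

With 100,000 draws, the worst cell-by-cell discrepancy is small; rerunning with a larger `n` shrinks it further, which is the Glivenko–Cantelli effect in miniature.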

If we turn the histogram on its side and add some numerical information, we get a **stem-and-leaf diagram**, where each stem represents the common root shared among a collection of observations, and each leaf represents the observation itself.

When looking at empirical data, what questions might we ask to arrive at the underlying distribution? For example, can we at least tell if the observations are discrete or continuous?

We might want to ask whether the distribution is univariate or multivariate. We might be interested in someone's weight, but perhaps we need to generate height and weight observations simultaneously.

Additionally, we might need to check how much data we have available. Certain distributions lend themselves more easily to smaller samples of data.

Furthermore, we might need to communicate with experts regarding the nature of the data. For example, we might want to know if the arrival rate changes at our facility as the day progresses. While we might observe the rate directly, we might want to ask the floor supervisor what to expect beforehand.

Finally, what happens when we don't have much or any data? What if the system we want to model doesn't exist yet? How might we guess a good distribution?

Let's suppose we know that we have a discrete distribution. For example, we might realize that we only see a finite number of observations during our data collection process. How do we determine which discrete distribution to use?

If we want to model successes and failures, we might use a Bernoulli random variable and estimate $p$. If we want to look at the number of successes in $n$ trials, we need to consider using a binomial random variable.

Perhaps we want to understand how many trials we need until we get our first success. In that case, we need to look at a geometric random variable. Alternatively, if we want to know how many trials we need until the $n$th success, we need a negative binomial random variable.

We can use the Poisson($\lambda$) distribution to count the number of arrivals over time, assuming that the arrival process satisfies certain elementary assumptions.

If we honestly don't have a good model for the discrete data, perhaps we can use an empirical or sample distribution.

What if the distribution is continuous?

We might consider the uniform distribution if all we know about the data is the minimum and maximum possible values. If we know the most likely value as well, we might use the triangular distribution.

If we are looking at interarrival times from a Poisson process, then we know we should be looking at the Exp($\lambda$) distribution. If the process is nonhomogeneous, we might have to do more work, but the exponential distribution is a good starting point.

We might consider the normal distribution if we are looking at heights, weights, or IQs. Furthermore, if we are looking at sample means or sums, the normal distribution is a good choice because of the central limit theorem.

We can use the Beta distribution, which generalizes the uniform distribution, to specify bounded data. We might use the gamma, Weibull, Gumbel, or lognormal distribution if we are dealing with reliability data.

When in doubt, we can use the empirical distribution, which is based solely on the sample.

As we said, we will choose a "reasonable" distribution, and then we'll perform a hypothesis test to make sure that our choice is not too ridiculous.

For example, suppose we hypothesize that some data is normal. This data should fall approximately on a straight line when we graph it on a normal probability plot, and it should look normal when we graph it on a standard plot. At the very least, it should also pass a goodness-of-fit test for normality, of which there are several.

It's not enough to decide that some sequence of observations is normal; we still have to estimate $\mu$ and $\sigma^2$. In the next few lessons, we will look at point estimation, which lets us understand how to estimate these unknown parameters. We'll cover the concept of unbiased estimation first.

A **statistic** is a function of the observations $X_1,...,X_n$ that is not explicitly dependent on any unknown parameters. For example, the sample mean, $\bar{X}$, and the sample variance, $S^2$, are two statistics:

$\begin{alignedat}{1}
\bar{X} = &\frac{1}{n} \sum_{i=1}^n X_i \\[2ex]
S^2 = &\frac{1}{n-1} \sum_{i=1}^n(X_i - \bar{X})^2
\end{alignedat}$

Statistics are random variables. In other words, if we take two different samples, we should expect to see two different values for a given statistic.

We usually use statistics to estimate some unknown **parameter** from the underlying probability distribution of the $X_i$'s. For instance, we use the sample mean, $\bar{X}$, to estimate the true mean, $\mu$, of the underlying distribution, which we won't normally know. If $\mu$ is the true mean, then we can take a bunch of samples and use $\bar{X}$ to estimate $\mu$. We know, via the law of large numbers, that $\bar{X} \to \mu$ as $n \to \infty$.

Let's suppose that we have a collection of iid random variables, $X_1,...,X_n$. Let $T(\bold{X}) \equiv T(X_1,...,X_n)$ be a function that we can compute based only on the observations. Therefore, $T(\bold{X})$ is a statistic. If we use $T(\bold{X})$ to estimate some unknown parameter $\theta$, then $T(\bold{X})$ is known as a **point estimator** for $\theta$.

For example, $\bar{X}$ is usually a point estimator for the true mean, $\mu = E[X_i]$, and $S^2$ is often a point estimator for the true variance, $\sigma^2 = \text{Var}(X)$.

$T(\bold{X})$ should have specific properties:

- Its expected value should equal the parameter it's trying to estimate. This property is known as *unbiasedness*.
- It should have a low variance. It doesn't do us any good if $T(\bold{X})$ is bouncing around depending on the sample we take.

We say that $T(\bold{X})$ is **unbiased** for $\theta$ if $E[T(\bold{X})] = \theta$. For example, suppose that random variables, $X_1,...,X_n$ are iid *anything* with mean $\mu$. Then:

$\begin{alignedat}{1}
E[\bar{X}] & = E\left[\frac{1}{n}\sum_{i=1}^nX_i\right] \\[3ex]
& = \frac{1}{n}\sum_{i=1}^nE[X_i] \\[3ex]
& = \frac{nE[X_i]}{n} \\[2ex]
& = E[X_i] = \mu
\end{alignedat}$

Since $E[\bar{X}] = \mu$, $\bar{X}$ is always unbiased for $\mu$. That's why we call it the *sample mean*.

Similarly, suppose we have random variables, $X_1,...,X_n$ which are iid $\text{Exp}(\lambda)$. Then, $\bar{X}$ is unbiased for $\mu = E[X_i] = 1 / \lambda$. Even though $\lambda$ is unknown, we know that $\bar{X}$ is a good estimator for $1/ \lambda$.

Be careful, though. Just because $\bar{X}$ is unbiased for $1 / \lambda$ does **not** mean that $1 / \bar{X}$ is unbiased for $\lambda$: $E[1/\bar{X}] \neq 1 /E[\bar{X}] = \lambda$. In fact, $1/\bar{X}$ is biased for $\lambda$ in this exponential case.
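We can check this bias numerically. The sketch below is illustrative only: the sample size $n = 5$, rate $\lambda = 2$, and replication count are arbitrary choices. For exponential data, theory gives $E[1/\bar{X}] = n\lambda/(n-1)$, which here is $2.5$ rather than $\lambda = 2$:

```python
import random

# Sketch: for small samples from Exp(lambda), 1/Xbar overestimates lambda.
# Theory: E[1/Xbar] = n*lambda/(n-1); with n = 5, lambda = 2, that's 2.5.
random.seed(1)
lam, n, reps = 2.0, 5, 200_000
total = 0.0
for _ in range(reps):
    xbar = sum(random.expovariate(lam) for _ in range(n)) / n
    total += 1.0 / xbar
estimate = total / reps   # lands near 2.5, well above lambda = 2
```

The bias factor $n/(n-1)$ vanishes as $n$ grows, which is why $1/\bar{X}$ is still a sensible estimator for large samples.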

Here's another example. Suppose that random variables, $X_1,...,X_n$ are iid *anything* with mean $\mu$ and variance $\sigma^2$. Then:

$E[S^2] = E\left[\frac{\sum_{i=1}^n(X_i - \bar{X})^2}{n - 1}\right] = \text{Var}(X_i) = \sigma^2$

Since $E[S^2] = \sigma^2$, $S^2$ is always unbiased for $\sigma^2$. That's why we called it the *sample variance*.

For example, suppose random variables $X_1,...,X_n$ are iid $\text{Exp}(\lambda)$. Then $S^2$ is unbiased for $\text{Var}(X_i) = 1 / \lambda^2$.

Let's give a proof for the unbiasedness of $S^2$ as an estimate for $\sigma^2$. First, let's convert $S^2$ into a better form:

$\begin{alignedat}{1}
S^2 & = \frac{\sum_{i=1}^n(X_i - \bar{X})^2}{n - 1} \\[3ex]
& = \frac{\sum_{i=1}^n \left(X_i^2 -2X_i\bar{X} + \bar{X}^2\right)}{n - 1} \\[3ex]
& = \frac{1}{n-1}\sum_{i=1}^n \left(X_i^2 -2X_i\bar{X} + \bar{X}^2\right) \\[3ex]
& = \frac{1}{n-1}\left(\sum_{i=1}^n X_i^2 - \sum_{i=1}^n 2X_i\bar{X} + \sum_{i=1}^n \bar{X}^2\right) \\[3ex]
\end{alignedat}$

Let's rearrange the middle sum:

$\sum_{i=1}^n 2X_i\bar{X} = 2\bar{X}\sum_{i=1}^n X_i$

Remember that $\bar{X}$ represents the average of all the $X_i$'s: $\sum X_i / n$. Thus, if we just sum the $X_i$'s and don't divide by $n$, we have a quantity equal to $n\bar{X}$:

$2\bar{X}\sum_{i=1}^n X_i = 2\bar{X}n\bar{X} = 2n\bar{X}^2$

Now, back to action:

$\begin{alignedat}{1}
S^2 & = \frac{1}{n-1}\left(\sum_{i=1}^n X_i^2 - \sum_{i=1}^n 2X_i\bar{X} + \sum_{i=1}^n \bar{X}^2\right) \\[3ex]
& = \frac{1}{n-1}\left(\sum_{i=1}^n X_i^2 - 2n\bar{X}^2 + n\bar{X}^2\right) \\[3ex]
& = \frac{1}{n-1}\left(\sum_{i=1}^n X_i^2 - n\bar{X}^2 \right) \\[3ex]
& = \frac{\sum_{i=1}^n X_i^2 - n\bar{X}^2}{n-1} \\[3ex]
\end{alignedat}$

Let's take the expected value:

$\begin{alignedat}{1}
E[S^2] & = \frac{\sum_{i=1}^n E[X_i^2] - nE[\bar{X}^2]}{n-1}
\end{alignedat}$

Note that $E[X_i^2]$ is the same for all $X_i$, so the sum is just $nE[X_1^2]$:

$\begin{alignedat}{1}
E[S^2] & = \frac{n E[X_1^2] - nE[\bar{X}^2]}{n-1} \\[2ex]
& = \frac{n}{n-1} \left(E[X_1^2] - E[\bar{X}^2]\right)
\end{alignedat}$

We know that $\text{Var}(X) = E[X^2] - (E[X])^2$, so $E[X^2] = \text{Var}(X) + (E[X])^2$. Therefore:

$\begin{alignedat}{1}
E[S^2] & = \frac{n}{n-1} \left(\text{Var}(X_1) + (E[X_1])^2 - \text{Var}(\bar{X}) - (E[\bar{X}])^2\right)
\end{alignedat}$

Remember that $E[X_1] = E[\bar{X}]$, so:

$\begin{alignedat}{1}
E[S^2] & = \frac{n}{n-1} \left(\text{Var}(X_1) - \text{Var}(\bar{X}) \right) \\[3ex]
\end{alignedat}$

Furthermore, remember that $\text{Var}(\bar{X}) = \text{Var}(X_1) /n = \sigma^2 / n$. Therefore:

$\begin{alignedat}{1}
E[S^2] & = \frac{n}{n-1} \left(\sigma^2 - \sigma^2/n \right) \\[3ex]
& = \frac{n\sigma^2}{n-1} - \frac{\sigma^2}{n-1} \\[3ex]
& = \frac{n\sigma^2- \sigma^2}{n-1} = \frac{\sigma^2(n-1)}{n-1} = \sigma^2
\end{alignedat}$

Unfortunately, while $S^2$ is unbiased for the variance $\sigma^2$, $S$ is *biased* for the standard deviation $\sigma$.
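A quick simulation makes both facts visible. This is a minimal sketch (not from the lecture); the choice of Nor(0, 4) samples of size 5 is arbitrary. Averaged over many replications, $S^2$ lands on $\sigma^2 = 4$, while $S$ falls noticeably below $\sigma = 2$:

```python
import random
import statistics

# Sketch: S^2 is unbiased for sigma^2, but S = sqrt(S^2) is biased low
# for sigma. Here sigma = 2 (so sigma^2 = 4) and each sample has n = 5.
random.seed(7)
sigma, n, reps = 2.0, 5, 100_000
s2_sum = s_sum = 0.0
for _ in range(reps):
    sample = [random.gauss(0.0, sigma) for _ in range(n)]
    s2 = statistics.variance(sample)   # sample variance: divides by n - 1
    s2_sum += s2
    s_sum += s2 ** 0.5
mean_s2 = s2_sum / reps   # near 4.0  (unbiased)
mean_s = s_sum / reps     # below 2.0 (biased)
```

The downward bias of $S$ follows from Jensen's inequality, since the square root is a concave function.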

In this lesson, we'll look at mean squared error, a performance measure that evaluates an estimator by combining its bias and variance.

We want to choose an estimator with the following properties:

- Low bias (defined as the difference between the estimator's expected value and the true parameter value)
- Low variance

Furthermore, we want the estimator to have both of these properties simultaneously. If the estimator has low bias but high variance, then its estimates are meaninglessly noisy. Its average estimate is correct, but any individual estimate may be way off the mark. On the other hand, an estimator with low variance but high bias is very confident about the wrong answer.

Suppose that we have $n$ random variables, $X_1,...,X_n \overset{\text{iid}}{\sim} \text{Unif}(0,\theta)$. We know that our observations have a lower bound of $0$, but we don't know the value of the upper bound, $\theta$. As is often the case, we sample many observations from the distribution and use that sample to estimate the unknown parameter.

Consider two estimators for $\theta$:

$\begin{alignedat}{1}
Y_1 &\equiv 2\bar{X} \\[2ex]
Y_2 &\equiv \frac{n+1}{n} \max_{1 \leq i \leq n}X_i
\end{alignedat}$

Let's look at the first estimator. We know that $E[Y_1] = 2E[\bar{X}]$, by definition. Similarly, we know that $2E[\bar{X}] = 2E[X_i]$, since $\bar{X}$ is always unbiased for the mean. Recall how we compute the expected value for a uniform random variable:

$E[A] = (a + b) / 2, \quad A \sim \text{Unif}(a,b)$

Therefore:

$2E[X_i] = 2\left(\frac{0 + \theta}{2}\right) = \theta = E[Y_1]$

As we can see, $Y_1$ is unbiased for $\theta$.

It's also the case that $Y_2$ is unbiased, but it takes more work to demonstrate. As a first step, take the cdf of the maximum of the $X_i$'s, $M \equiv \max_iX_i$. Here's what $P(M \leq y)$ looks like:

$P(M \leq y) = P(X_1 \leq y \text{ and } X_2 \leq y \text{ and } \dots \text{ and } X_n \leq y)$

If $M \leq y$, and $M$ is the maximum, then $P(M \leq y)$ is the probability that all the $X_i$'s are less than $y$. Since the $X_i$'s are independent, we can take the product of the individual probabilities:

$P(M \leq y) = \prod_{i = 1}^n P(X_i \leq y) = [P(X_1 \leq y)]^n$

Now, we know, by definition, that the cdf is the integral of the pdf. Remember that the pdf for a uniform distribution, $\text{Unif}(a,b)$, is:

$f(x) = \frac{1}{b-a}, a < x < b$

Let's rewrite $P(M \leq y)$:

$\begin{alignedat}{1}
P(M \leq y) &= \left[\int_0^yf_{X_1}(x)dx\right]^n \\[2ex]
& = \left[\int_0^y\frac{1}{\theta}dx\right]^n \\[2ex]
& = \left(\frac{y}{\theta}\right)^n
\end{alignedat}$

Again, we know that the pdf is the derivative of the cdf, so:

$\begin{alignedat}{1}
f_M(y) & = \frac{d}{dy}F_M(y) \\[2ex]
& = \frac{d}{dy} \frac{y^n}{\theta^n} \\[2ex]
& = \frac{ny^{n-1}}{\theta^n}
\end{alignedat}$

With the pdf in hand, we can get the expected value of $M$:

$\begin{alignedat}{1}
E[M] & = \int_0^\theta yf_M(y)dy \\[2ex]
& = \int_0^\theta \frac{ny^n}{\theta^n} dy \\[2ex]
& = \frac{ny^{n+1}}{(n+1)\theta^n}\Big|_0^\theta \\[2ex]
& = \frac{n\theta^{n+1}}{(n+1)\theta^n} \\[2ex]
& = \frac{n\theta}{n+1} \\[2ex]
\end{alignedat}$

Note that $E[M] \neq \theta$, so $M$ is not an unbiased estimator for $\theta$. However, remember how we defined $Y_2$:

$Y_2 \equiv \frac{n+1}{n} \max_{1 \leq i \leq n}X_i$

Thus:

$\begin{alignedat}{1}
E[Y_2] & = \frac{n+1}{n}E[M] \\[2ex]
& = \frac{n+1}{n} \left(\frac{n\theta}{n+1}\right) \\[2ex]
& = \theta
\end{alignedat}$

Therefore, $Y_2$ is unbiased for $\theta$.

Both estimators are unbiased, so which is better? Let's compare variances now. After similar algebra, we see:

$\text{Var}(Y_1) = \frac{\theta^2}{3n}, \quad \text{Var}(Y_2) = \frac{\theta^2}{n(n+2)}$

Since the variance of $Y_2$ involves dividing by a factor of order $n^2$, while the variance of $Y_1$ only divides by $n$, $Y_2$ has a much lower variance than $Y_1$ and is, therefore, the better estimator.
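We can confirm both variance formulas by Monte Carlo. The sketch below is illustrative (the choices $\theta = 10$, $n = 10$, and the replication count are arbitrary):

```python
import random
import statistics

# Sketch: check Var(Y1) = theta^2/(3n) and Var(Y2) = theta^2/(n(n+2))
# for Unif(0, theta) samples with theta = 10 and n = 10.
random.seed(3)
theta, n, reps = 10.0, 10, 100_000
y1_vals, y2_vals = [], []
for _ in range(reps):
    sample = [random.uniform(0.0, theta) for _ in range(n)]
    y1_vals.append(2 * statistics.fmean(sample))        # Y1 = 2 * Xbar
    y2_vals.append((n + 1) / n * max(sample))           # Y2 = (n+1)/n * max
var_y1 = statistics.variance(y1_vals)   # near theta^2/(3n)     = 3.33
var_y2 = statistics.variance(y2_vals)   # near theta^2/(n(n+2)) = 0.83
```

Both sample variances land close to the theoretical values, and `var_y2` is roughly a quarter of `var_y1` at $n = 10$, matching the $O(n)$ vs. $O(n^2)$ comparison above.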

The **bias** of an estimator, $T(\bold{X})$, is the difference between the estimator's expected value and the value of the parameter it's trying to estimate: $\text{Bias}(T) \equiv E[T] - \theta$. When $E[T] = \theta$, the bias is $0$ and the estimator is unbiased.

The **mean squared error** of an estimator, $T(\bold{X})$, is the expected value of the squared deviation of the estimator from the parameter: $\text{MSE}(T) \equiv E[(T-\theta)^2]$.

Remember the equation for variance:

$\text{Var}(X) = E[X^2] - (E[X])^2$

Using this equation, we can rewrite $\text{MSE}(T)$:

$\begin{alignedat}{1}
\text{MSE}(T) & = E[(T-\theta)^2] \\[2ex]
& = E[T^2] - 2\theta E[T] + \theta^2 \\[2ex]
& = \text{Var}(T) + (E[T] - \theta)^2 \\[2ex]
& = \text{Var}(T) + \text{Bias}^2(T) \\[2ex]
\end{alignedat}$

Usually, we use mean squared error to evaluate estimators. As a result, when selecting between multiple estimators, we might not choose the unbiased estimator, so long as that estimator's MSE is the lowest among the options.

The **relative efficiency** of one estimator, $T_1$, to another, $T_2$, is the ratio of the mean squared errors: $\text{MSE}(T_1) / \text{MSE}(T_2)$. If the relative efficiency is less than one, we want $T_1$; otherwise, we want $T_2$.

Let's compute the relative efficiency of the two estimators we used in the previous example:

$\begin{alignedat}{1}
Y_1 &\equiv 2\bar{X} \\[2ex]
Y_2 &\equiv \frac{n+1}{n} \max_{1 \leq i \leq n}X_i
\end{alignedat}$

Remember that both estimators are unbiased, so the bias is zero by definition. As a result, the mean squared errors of the two estimators are determined solely by their variances:

$\text{MSE}(Y_1) = \text{Var}(Y_1) = \frac{\theta^2}{3n}, \quad \text{MSE}(Y_2) = \text{Var}(Y_2) = \frac{\theta^2}{n(n+2)}$

Let's calculate the relative efficiency:

$\begin{alignedat}{1}
e(Y_1, Y_2) & = \frac{\frac{\theta^2}{3n}}{\frac{\theta^2}{n(n+2)}} \\[3ex]
& = \frac{\theta^2 * n(n+2)}{3n\theta^2} \\[2ex]
& = \frac{n(n+2)}{3n} \\[2ex]
& = \frac{n+2}{3} \\[2ex]
\end{alignedat}$

The relative efficiency is greater than one for all $n > 1$, so $Y_2$ is the better estimator just about all the time.

In this lesson, we are going to talk about maximum likelihood estimation, which is perhaps the most important point estimation method. It's a very flexible technique that many software packages use to help estimate parameters from various distributions.

Consider an iid random sample, $X_1,...,X_n$, where each $X_i$ has pdf/pmf $f(x)$. Additionally, suppose that $\theta$ is some unknown parameter from $X_i$ that we would like to estimate. We can define a **likelihood function**, $L(\theta)$ as:

$L(\theta) \equiv \prod_{i=1}^n f(x_i)$

The **maximum likelihood estimator** (MLE) of $\theta$ is the value of $\theta$ that maximizes $L(\theta)$. The MLE is a function of the $X_i$'s and is itself a random variable.

Consider a random sample, $X_1,...,X_n \overset{\text{iid}}{\sim} \text{Exp}(\lambda)$. Find the MLE for $\lambda$. Note that, in this case, $\lambda$ is taking the place of the abstract parameter, $\theta$. Now:

$L(\lambda) \equiv \prod_{i=1}^n f(x_i)$

We know that exponential random variables have the following pdf:

$f(x, \lambda) = \lambda e^{-\lambda x}$

Therefore:

$L(\lambda) \equiv \prod_{i=1}^n \lambda e^{-\lambda x_i}$

Let's expand the product:

$L(\lambda) = (\lambda e^{-\lambda x_1}) * (\lambda e^{-\lambda x_2}) * \dots * (\lambda e^{-\lambda x_n})$

We can pull out a $\lambda^n$ term:

$L(\lambda) = \lambda^n * (e^{-\lambda x_1}) * (e^{-\lambda x_2}) * \dots * (e^{-\lambda x_n})$

Remember what happens to exponents when we multiply bases:

$a^x * a^y = a^{x+y}$

Let's apply this to our product (and we can swap in $\exp$ notation to make things easier to read):

$L(\lambda) = \lambda^n \exp\left[-\lambda \sum_{i=1}^nx_i\right]$

Now, we need to maximize $L(\lambda)$ with respect to $\lambda$. We could take the derivative of $L(\lambda)$, but we can use a trick! Since the natural log function is one-to-one, the $\lambda$ that maximizes $L(\lambda)$ also maximizes $\ln(L(\lambda))$. Let's take the natural log of $L(\lambda)$:

$\ln(L(\lambda)) = \ln\left(\lambda^n \exp\left[-\lambda \sum_{i=1}^nx_i\right]\right)$

Let's remember three different rules here:

$\ln(ab) = \ln(a) + \ln(b), \quad \ln(a^b) = b\ln(a), \quad \ln(e^x) = x$

Therefore:

$\begin{alignedat}{1}
\ln(L(\lambda)) & = \ln(\lambda^n) +\ln\left(\exp\left[-\lambda \sum_{i=1}^nx_i\right]\right) \\[2ex]
& = n\ln(\lambda) +\ln\left(\exp\left[-\lambda \sum_{i=1}^nx_i\right]\right) \\[2ex]
& = n\ln(\lambda) -\lambda \sum_{i=1}^nx_i \\[2ex]
\end{alignedat}$

Now, let's take the derivative:

$\begin{alignedat}{1}
\frac{d}{d\lambda}\ln(L(\lambda)) & = \frac{d}{d\lambda}\left(n\ln(\lambda) -\lambda \sum_{i=1}^nx_i\right) \\[2ex]
& = \frac{n}{\lambda} - \sum_{i=1}^nx_i \\[2ex]
\end{alignedat}$

Finally, we set the derivative equal to $0$ and solve for $\lambda$:

$\begin{alignedat}{1}
0 \equiv \frac{n}{\lambda} & - \sum_{i=1}^nx_i \\[2ex]
\frac{n}{\lambda} & = \sum_{i=1}^nx_i \\[3ex]
\lambda & = \frac{n}{\sum_{i=1}^nx_i} \\[2ex]
\lambda & = \frac{1}{\bar{X}} \\[2ex]
\end{alignedat}$

Thus, the maximum likelihood estimator for $\lambda$ is $1 / \bar{X}$, which makes a lot of sense. The mean of the exponential distribution is $1 / \lambda$, and we usually estimate that mean by $\bar{X}$. Since $\bar{X}$ is a good estimator for $1 / \lambda$, it stands to reason that a good estimator for $\lambda$ is $1 / \bar{X}$.

Conventionally, we put a "hat" over the $\lambda$ that maximizes the likelihood function to indicate that it is the MLE. Such notation looks like this: $\widehat{\lambda}$.

Note that we went from "little x's", $x_i$, to "big x", $\bar{X}$, in the equation. We do this to indicate that $\widehat{\lambda}$ is a random variable.

Just to be careful, we probably should have performed a second-derivative test on the function, $\ln(L(\lambda))$, to ensure that we found a maximum likelihood estimator and not a minimum likelihood estimator.
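We can sanity-check the closed form by maximizing the log-likelihood numerically. This is a minimal sketch (not from the lecture); the true rate $\lambda = 0.5$, the sample size, and the search grid over $(0, 2)$ are arbitrary choices:

```python
import math
import random

# Sketch: maximize ln L(lambda) = n*ln(lambda) - lambda*sum(x) over a grid
# and confirm the maximizer matches the closed form 1/xbar.
random.seed(11)
data = [random.expovariate(0.5) for _ in range(1_000)]
n, s = len(data), sum(data)

def log_lik(lam):
    return n * math.log(lam) - lam * s

# Grid search over lambda in (0, 2) with step 1e-4.
best = max((log_lik(lam), lam)
           for lam in (i / 10_000 for i in range(1, 20_000)))[1]
mle_closed_form = n / s   # i.e., 1/xbar
```

The grid maximizer and $1/\bar{X}$ agree to within the grid resolution, because the log-likelihood is concave with a unique interior maximum.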

Let's look at a discrete example. Suppose we have $X_1,...,X_n \overset{\text{iid}}{\sim} \text{Bern}(p)$. Let's find the MLE for $p$. We might remember that the expected value of Bern($p$) random variable is $p$, so we shouldn't be surprised if $\bar X$ is our MLE.

Let's remember the values that $X_i$ can take:

$X_i = \begin{cases}
1 & \text{w.p. } p \\
0 & \text{w.p. } 1-p
\end{cases}$

Therefore, we can write the pmf for a Bern($p$) random variable as follows:

$f(x) = p^x(1-p)^{1-x}, \quad x = 0, 1$

Now, let's calculate $L(p)$. First:

$L(p) = \prod_{i=1}^n f(x_i) = \prod_{i=1}^n p^{x_i}(1-p)^{1-x_i}$

Remember that $a^x * a^y = a^{x+y}$. So:

$\prod_{i=1}^n p^{x_i}(1-p)^{1-x_i} = p^{\sum_{i=1}^n x_i}(1-p)^{n-\sum_{i=1}^n x_i}$

Let's take the natural log of both sides, remembering that $\ln(ab) = \ln(a) + \ln(b)$:

$\ln(L(p)) = \sum_{i=1}^n x_i\ln(p) + (n-\sum_{i=1}^n x_i)\ln(1-p)$

Let's take the derivative, remembering that the derivative of $\ln(1-p)$ equals $-1/(1-p)$:

$\frac{d}{dp}\ln(L(p)) = \frac{\sum_{i=1}^n x_i}{p} - \frac{n-\sum_{i=1}^n x_i}{1-p}$

Now we can set the derivative equal to zero, and solve for $\widehat p$:

$\begin{alignedat}{1}
\frac{\sum_{i=1}^n x_i}{\widehat p} &- \frac{n-\sum_{i=1}^n x_i}{1-\widehat p} = 0 \\
\frac{\sum_{i=1}^n x_i}{\widehat p} &= \frac{n-\sum_{i=1}^n x_i}{1-\widehat p} \\
(1-\widehat p)\sum_{i=1}^n x_i &= \widehat p\left(n-\sum_{i=1}^n x_i\right) \\
\sum_{i=1}^n x_i &= n\widehat p \\
\widehat p &= \bar X
\end{alignedat}$

In this lesson, we'll look at some additional MLE examples. MLEs will become very important when we eventually carry out goodness-of-fit tests.

Suppose we have $X_1,...,X_n \overset{\text{iid}}{\sim} \text{Nor}(\mu, \sigma^2)$. Let's find the simultaneous MLE's for $\mu$ and $\sigma^2$:

$\begin{alignedat}{1}
L(\mu, \sigma^2) &= \prod_{i=1}^n f(x_i) \\
&= \prod_{i=1}^n \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left\{-\frac{1}{2}\frac{(x_i - \mu)^2}{\sigma^2}\right\} \\
&= \frac{1}{(2\pi\sigma^2)^{n/2}}\exp\left\{-\frac{1}{2}\sum_{i=1}^n\frac{(x_i - \mu)^2}{\sigma^2}\right\}
\end{alignedat}$

Let's take the natural log:

$\ln(L(\mu, \sigma^2)) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln(\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^n (x_i - \mu)^2$

Let's take the first derivative with respect to $\mu$, to find the MLE, $\widehat\mu$, for $\mu$. Remember that the derivatives of terms that don't contain $\mu$ are zero:

$\frac{\partial}{\partial\mu} \ln(L(\mu, \sigma^2)) = \frac{\partial}{\partial\mu} \frac{-1}{2\sigma^2}\sum_{i=1}^n(x_i - \mu)^2$

What's the derivative of $(x_i - \mu)^2$ with respect to $\mu$? Naturally, it's $-2(x_i - \mu)$. Therefore:

$\frac{\partial}{\partial\mu} \ln(L(\mu, \sigma^2)) = \frac{1}{\sigma^2}\sum_{i=1}^n(x_i - \mu)$

We can set the expression on the right equal to zero and solve for $\widehat\mu$:

$0 = \frac{1}{\sigma^2}\sum_{i=1}^n(x_i - \widehat\mu)$

If we solve for $\widehat\mu$, we see that $\widehat\mu = \bar X$, which we expect. In other words, the MLE for the true mean, $\mu$, is the sample mean, $\bar X$.

Now, let's take the derivative of $\ln(L(\mu,\sigma^2))$ with respect to $\sigma^2$. Consider:

$\frac{\partial}{\partial\sigma^2} \ln(L(\mu, \sigma^2)) = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^n (x_i - \widehat\mu)^2$

We can set the expression on the right equal to zero and solve for $\widehat{\sigma^2}$:

$\begin{alignedat}{1}
-\frac{n}{2\widehat{\sigma^2}} + \frac{1}{2\widehat{\sigma^4}}\sum_{i=1}^n (x_i - \widehat\mu)^2 &= 0 \\
\frac{1}{2\widehat{\sigma^4}}\sum_{i=1}^n (x_i - \widehat\mu)^2 &= \frac{n}{2\widehat{\sigma^2}} \\
\frac{1}{n}\sum_{i=1}^n (x_i - \widehat\mu)^2 &= \frac{2\widehat{\sigma^4}}{2\widehat{\sigma^2}} \\
\widehat{\sigma^2} &= \frac{\sum_{i=1}^n(X_i - \bar X)^2}{n}
\end{alignedat}$

Notice how close $\widehat{\sigma^2}$ is to the unbiased sample variance:

$S^2 = \frac{\sum_{i=1}^n(X_i - \bar X)^2}{n - 1} = \frac{n\widehat{\sigma^2}}{n-1}$

Because $S^2$ is unbiased, we have to expect that $\widehat{\sigma^2}$ is slightly biased, though it has slightly less variance than $S^2$. Regardless, the two quantities converge as $n$ grows.
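The bias relationship $\widehat{\sigma^2} = \frac{n-1}{n}S^2$ is easy to see by simulation. This sketch is illustrative only (the choices $\mu = 5$, $\sigma^2 = 4$, $n = 10$, and the replication count are arbitrary):

```python
import random
import statistics

# Sketch: the MLE sigma^2-hat = (n-1)/n * S^2 averages slightly below the
# true variance, while S^2 averages right on target.
random.seed(9)
mu, sigma2, n, reps = 5.0, 4.0, 10, 100_000
s2_sum = mle_sum = 0.0
for _ in range(reps):
    x = [random.gauss(mu, sigma2 ** 0.5) for _ in range(n)]
    s2 = statistics.variance(x)        # divides by n - 1
    s2_sum += s2
    mle_sum += (n - 1) / n * s2        # the MLE divides by n
mean_s2 = s2_sum / reps    # near 4.0
mean_mle = mle_sum / reps  # near (n-1)/n * 4 = 3.6
```

At $n = 10$ the MLE sits about 10% low on average; at $n = 1000$ the shortfall would be a negligible 0.1%.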

Let's look at the Gamma distribution, parameterized by $r$ and $\lambda$. The pdf for this distribution is shown below. Recall that $\Gamma(r)$ is the gamma function.

$f(x) = \frac{\lambda^r}{\Gamma(r)}x^{r-1}e^{-\lambda x}, \quad x > 0$

Suppose we have $X_1,...,X_n \overset{\text{iid}}{\sim} \text{Gam}(r, \lambda)$. Let's find the MLE's for $r$ and $\lambda$:

$\begin{alignedat}{1}
L(r, \lambda) &= \prod_{i=1}^n f(x_i) \\[3ex]
&= \frac{\lambda^{nr}}{[\Gamma(r)]^n} \left(\prod_{i=1}^n x_i\right)^{r-1}e^{-\lambda\sum_i x_i}
\end{alignedat}$

Let's take the natural logarithm of both sides, remembering that $\ln(a/b) = \ln(a) - \ln(b)$:

$\ln(L) = rn\ln(\lambda) - n\ln(\Gamma(r)) + (r-1)\ln\left(\prod_ix_i\right) - \lambda\sum_ix_i$

Let's get the MLE of $\lambda$ first by taking the derivative with respect to $\lambda$. Notice that the middle two terms disappear:

$\frac{\partial}{\partial\lambda}\ln(L) = \frac{rn}{\lambda} - \sum_{i=1}^n x_i$

Let's set the expression on the right equal to zero and solve for $\widehat\lambda$:

$\begin{alignedat}{1}
0 &= \frac{\widehat rn}{\widehat\lambda} - \sum_{i=1}^n x_i \\
\sum_{i=1}^n x_i &= \frac{\widehat rn}{\widehat\lambda} \\
\widehat\lambda &= \frac{\widehat rn}{\sum_{i=1}^n x_i} = \widehat r / \bar X\\
\end{alignedat}$

It turns out that the mean of the Gamma distribution is $r / \lambda$. Rearranging $\mu = r / \lambda$ gives $\lambda = r / \mu$, so substituting $\bar X$ for $\mu$ and $\widehat r$ for $r$ yields $\widehat\lambda = \widehat r / \bar X$.

Now let's find the MLE of $r$. First, we take the derivative with respect to $r$:

$\frac{\partial}{\partial r}\ln(L) = n\ln(\lambda) - \frac{n}{\Gamma(r)}\frac{d}{dr}\Gamma(r) + \ln\left(\prod_ix_i\right)$

We can define the digamma function, $\Psi(r)$, to help us with the term involving the gamma function and its derivative:

$\Psi(r) \equiv \Gamma'(r) / \Gamma(r)$

At this point, we can substitute in $\widehat\lambda = \widehat r/\bar X$, and then use a computer to solve the following equation, either by bisection, Newton's method, or some other method:

$n\ln(\widehat r/\bar X) - n\Psi(\widehat r) + \ln\left(\prod_ix_i\right) = 0$

The challenging part of evaluating the digamma function is computing the derivative of the gamma function. We can use the definition of the derivative here to help us, choosing our favorite small $h$ and then evaluating:

$\Gamma'(r) \approx \frac{\Gamma(r+h) - \Gamma(r)}{h}$
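Putting these pieces together, here is a minimal sketch of the bisection approach the text describes, using a central difference of `math.lgamma` (the log of the gamma function) to approximate the digamma function. The true parameters $r = 3$, $\lambda = 2$, the sample size, and the bracket $[0.1, 20]$ are arbitrary choices for illustration:

```python
import math
import random

# Sketch: solve n*ln(r/xbar) - n*Psi(r) + sum(ln x_i) = 0 for r by
# bisection, with Psi(r) = d/dr ln Gamma(r) approximated numerically.
random.seed(13)
true_r, true_lam = 3.0, 2.0
data = [random.gammavariate(true_r, 1.0 / true_lam) for _ in range(2_000)]
n = len(data)
xbar = sum(data) / n
sum_log = sum(math.log(x) for x in data)

def psi(r, h=1e-5):
    # central difference of ln Gamma(r), i.e., the digamma function
    return (math.lgamma(r + h) - math.lgamma(r - h)) / (2 * h)

def g(r):
    return n * math.log(r / xbar) - n * psi(r) + sum_log

lo, hi = 0.1, 20.0           # g(lo) > 0 > g(hi) for this data
for _ in range(100):         # g is strictly decreasing, so bisection works
    mid = (lo + hi) / 2
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
r_hat = (lo + hi) / 2
lam_hat = r_hat / xbar       # then the MLE of the rate is r_hat / xbar
```

With 2,000 observations, `r_hat` and `lam_hat` land close to the true values of 3 and 2.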

Suppose we have $X_1,...,X_n \overset{\text{iid}}{\sim} \text{Unif}(0, \theta)$. Let's find the MLE for $\theta$.

Remember that the pdf $f(x) = 1/\theta, 0 < x < \theta$. We can take the likelihood function as the product of the $f(x_i)$'s:

$L(\theta) = \prod_{i=1}^n f(x_i) = \begin{cases}
1/\theta^n,& \text{if } 0 \leq x_i \leq \theta, \forall i \\
0 & \text{otherwise}
\end{cases}$

In order to have $L(\theta) > 0$, we must have $0 \leq x_i \leq \theta, \forall i$. In other words, $\theta$ must be at least as large as the largest observation we've seen yet: $\theta \geq \max_i x_i$.

Subject to this constraint, $L(\theta) = 1 / \theta^n$ is a decreasing function of $\theta$. Therefore, $L(\theta)$ is maximized at the smallest feasible value of $\theta$, namely $\widehat\theta = \max_i X_i$.

This result makes sense in light of the similar (unbiased) estimator $Y_2 = (n+1)\max_i X_i/n$ that we saw previously.

In this lesson, we will expand the vocabulary of maximum likelihood estimators by looking at the invariance property of MLEs. In a nutshell, if we have the MLE for some parameter, then we can use the invariance property to determine the MLE for any reasonable function of that parameter.

If $\widehat{\theta}$ is the MLE of some parameter, $\theta$, and $h(\cdot)$ is a 1:1 function, then $h(\widehat{\theta})$ is the MLE of $h(\theta)$.

Remember that this invariance property does *not* hold for unbiasedness. For instance, we said previously that the sample variance is an unbiased estimator for the true variance because $E[S^2] = \sigma^2$. However, $E[\sqrt{S^2}] \neq \sigma$, so we cannot use the sample standard deviation as an unbiased estimator for the true standard deviation.

Suppose we have a random sample, $X_1,...,X_n \overset{\text{iid}}{\sim} \text{Bern}(p)$. We might remember that the MLE of $p$ is $\widehat p = \bar X$. If we consider the 1:1 function $h(\theta) = \theta^2, \theta > 0$, then the invariance property says that the MLE of $p^2$ is $\bar X^2$.

Suppose we have a random sample, $X_1,...,X_n \overset{\text{iid}}{\sim} \text{Nor}(\mu, \sigma^2)$. We saw previously that the MLE for $\sigma^2$ is:

$\widehat{\sigma^2} = \frac{1}{n} \sum_{i=1}^n (X_i - \bar X)^2$

We just said that we couldn't take the square root of $S^2$ to estimate $\sigma$ in an unbiased way. However, we can use the square root of $\widehat{\sigma^2}$ to get the MLE for $\sigma$.

If we consider the 1:1 function $h(\theta) = +\sqrt\theta$, then the invariance property says that the MLE of $\sigma$ is:

$\widehat\sigma = \sqrt{\widehat{\sigma^2}} = \sqrt{\frac{\sum_{i=1}^n (X_i - \bar X)^2}{n}}$

Suppose we have a random sample, $X_1,...,X_n \overset{\text{iid}}{\sim} \text{Exp}(\lambda)$. The **survival function**, $\bar{F}(x)$, is:

$\bar{F}(x) = P(X > x) = 1 - F(x) = 1 - (1 - e^{-\lambda x}) = e^{-\lambda x}$

We saw previously that the MLE for $\lambda$ is $\widehat\lambda = 1/\bar X$. Therefore, using the invariance property, the MLE for $\bar{F}(x)$ is $\bar{F}(x)$ with $\widehat{\lambda}$ substituted for $\lambda$:

$\widehat{\bar{F}(x)} = e^{-\widehat{\lambda}x} = e^{-x / \bar{X}}$

The MLE for the survival function is used all the time in actuarial sciences to determine - somewhat gruesomely, perhaps - the probability that people will live past a certain age.
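As a quick check, here is a minimal sketch (illustrative only; the rate $\lambda = 0.5$, the sample size, and the evaluation point $x = 3$ are arbitrary) comparing the plug-in MLE $e^{-x/\bar{X}}$ against the raw empirical proportion of observations exceeding $x$:

```python
import math
import random

# Sketch: for Exp(lambda) data, the MLE of the survival function
# P(X > x) is exp(-x/xbar), by the invariance property.
random.seed(17)
lam = 0.5
data = [random.expovariate(lam) for _ in range(50_000)]
xbar = sum(data) / len(data)

x = 3.0
survival_mle = math.exp(-x / xbar)                        # plug-in MLE
empirical = sum(1 for d in data if d > x) / len(data)     # raw proportion
# both land near the true value exp(-lam * x) = exp(-1.5)
```

Both estimates agree closely; the parametric plug-in version has the advantage of being smooth in $x$ and usable in the far tail, where few observations fall.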

In this lesson, we'll finish off our discussion on estimators by talking about the Method of Moments.

The $k$th **moment** of a random variable $X$ is:

$E[X^k] = \begin{cases}
\sum_x x^k f(x) & \text{if X is discrete} \\
\int_\mathbb{R} x^k f(x)dx & \text{if X is continuous}
\end{cases}$

Suppose we have a sequence of random variables, $X_1,...,X_n$, which are iid from pmf/pdf $f(x)$. The **method of moments** (MOM) estimator for $E[X^k]$, $m_k$, is:

$m_k = \frac{1}{n} \sum_{i=1}^n X_i^k$

Note that $m_k$ is the sample average of the $X_i^k$'s. Indeed, the MOM estimator for $\mu = E[X_i]$ is the sample mean, $\bar X$:

$m_1 = \frac{1}{n} \sum_{i=1}^n X_i = \bar X$

Similarly, the MOM estimator for the second moment, $E[X_i^2]$, is:

$m_2 = \frac{1}{n}\sum_{i=1}^n X_i^2$

We can combine the MOM estimators for $k=1,2$ to produce a MOM estimator for the variance of $X_i$:

$\begin{alignedat}{1}
\widehat{\text{Var}}(X_i) &= m_2 - m_1^2 \\
&= \frac{1}{n}\sum_{i=1}^n X_i^2 - \bar X^2 \\
&= \frac{1}{n}\sum_{i=1}^n\left(X_i^2 - \bar X^2 \right) \\
&= \frac{1}{n}\sum_{i=1}^n\left(X_i - \bar X \right)^2 \\
&= \frac{n-1}{n}S^2
\end{alignedat}$

Of course, it's perfectly okay to use $S^2$ to estimate the variance, and the two quantities converge as $n$ grows.

Suppose that $X_1,...,X_n \overset{\text{iid}}{\sim} \text{Pois}(\lambda)$. We know that, for the Poisson distribution, $E[X_i] = \lambda$, so a MOM estimator for $\lambda$ is $\bar X$.

We might remember that the variance of the Poisson distribution is also $\lambda$, so another MOM estimator for $\lambda$ is:

$\frac{n-1}{n}S^2$
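Both MOM estimators are easy to compare by simulation. This sketch is illustrative only: $\lambda = 4$ and the sample size are arbitrary, and the inversion-based Poisson generator is just a simple stdlib-only way to produce draws (fine for moderate $\lambda$):

```python
import math
import random
import statistics

random.seed(21)

def poisson(lam):
    # inversion: walk the Poisson cdf until it exceeds a uniform draw
    u = random.random()
    k, p = 0, math.exp(-lam)
    cdf = p
    while u > cdf:
        k += 1
        p *= lam / k
        cdf += p
    return k

lam, n = 4.0, 50_000
data = [poisson(lam) for _ in range(n)]
mom_mean = statistics.fmean(data)                  # MOM via the mean
mom_var = (n - 1) / n * statistics.variance(data)  # MOM via the variance
```

Both estimators land near $\lambda = 4$; having two different MOM estimators agree is itself a rough consistency check that the Poisson model fits the data.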