We've seen that each sample from the population yields a slightly different sample statistic (sample mean, sample proportion, etc.).
Previously, we quantified this sample-to-sample variability via simulation.
Today we'll talk about some of the theory underlying sampling distributions, particularly as it relates to sample means.
Statistical inference is the act of generalizing from a sample in order to make conclusions regarding a population.
We are interested in population parameters, which we do not observe. Instead, we must calculate statistics from our sample in order to learn about them.
As part of this process, we must quantify the degree of uncertainty in our sample statistic.
Suppose we’re interested in the mean resting heart rate of students at Duke, and are able to do the following:
Take a random sample of size n from this population, and calculate the mean resting heart rate in this sample, X̄1
Put the sample back, take a second random sample of size n, and calculate the mean resting heart rate from this new sample, X̄2
Put the sample back, take a third random sample of size n, and calculate the mean resting heart rate from this sample, too...
...and so on.
After repeating this many times, we have a data set that has the sample means from the population: X̄1, X̄2, ⋯, X̄K (assuming we took K total samples).
Can we say anything about the distribution of these sample means (that is, the sampling distribution of the mean)?
(Keep in mind, we don't know what the underlying distribution of mean resting heart rate of Duke students looks like!)
A quick caveat...
For now, let's assume we know the underlying standard deviation, σ, of our distribution.
For a population with a well-defined mean μ and standard deviation σ, these three properties hold for the distribution of sample mean ¯X, assuming certain conditions hold:
The mean of the sampling distribution of the mean is identical to the population mean μ.
The standard deviation of the distribution of the sample means is σ/√n; this quantity is called the standard error of the mean.
For n large enough, the sampling distribution of the mean is approximately normal.
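For example, with σ = 14 and samples of size n = 50 (the values used in the simulation below), the standard error is 14/√50 ≈ 1.98.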
The normal distribution is unimodal and symmetric and is described by its density function:
If a random variable X follows the normal distribution, then

f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{ -\frac{1}{2} \frac{(x - \mu)^2}{\sigma^2} \right\}

where μ is the mean and σ² is the variance (σ is the standard deviation)
We often write N(μ,σ) to describe this distribution.
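As a quick sanity check, we can compare the formula above against R's built-in density function, dnorm() (a minimal sketch; the values μ = 0 and σ = 2 are arbitrary choices):

mu <- 0
sigma <- 2
x <- 1.5

# Density computed directly from the formula above
1 / sqrt(2 * pi * sigma^2) * exp(-0.5 * (x - mu)^2 / sigma^2)

# R's built-in normal density; both lines return the same value (about 0.1506)
dnorm(x, mean = mu, sd = sigma)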
The central limit theorem tells us that sample means are approximately normally distributed if we have enough data and certain assumptions hold.
This is true even if our original variables are not normally distributed.
We need to check two conditions for the CLT to hold: independence and sample size/distribution.
✅ Independence: The sampled observations must be independent. This is difficult to check, but the following are useful guidelines: the sample should be random, and if sampling without replacement, the sample size n should be less than 10% of the population.
If samples are independent, then by definition one sample's value does not "influence" another sample's value.
✅ Sample size / distribution: either the population distribution is normal, or (if the population distribution is skewed) the sample size is large, with n > 30 as a common rule of thumb.
library(tidyverse)

# Simulate a right-skewed population of 100,000 values
rs_pop <- tibble(x = rbeta(100000, 1, 5) * 100)
The true population parameters
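These can be computed directly from the full population, for example:

rs_pop %>%
  summarise(mu = mean(x), sigma = sd(x))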
## # A tibble: 1 × 2
##      mu sigma
##   <dbl> <dbl>
## 1  16.6  14.0
set.seed(1)
samp_1 <- rs_pop %>%
  sample_n(size = 50) %>%
  summarise(x_bar = mean(x))
samp_1
## # A tibble: 1 × 1
##   x_bar
##   <dbl>
## 1  16.3
set.seed(2)
samp_2 <- rs_pop %>%
  sample_n(size = 50) %>%
  summarise(x_bar = mean(x))
samp_2
## # A tibble: 1 × 1
##   x_bar
##   <dbl>
## 1  13.9
set.seed(3)
samp_3 <- rs_pop %>%
  sample_n(size = 50) %>%
  summarise(x_bar = mean(x))
samp_3
## # A tibble: 1 × 1
##   x_bar
##   <dbl>
## 1  19.1
keep repeating...
library(infer)  # provides rep_sample_n()

set.seed(092620)
sampling <- rs_pop %>%
  rep_sample_n(size = 50, replace = TRUE, reps = 5000) %>%
  group_by(replicate) %>%
  summarise(xbar = mean(x))
The sample statistics
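The mean and standard error of the 5,000 sample means can be computed, for example, as:

sampling %>%
  summarise(mean = mean(xbar), se = sd(xbar))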
## # A tibble: 1 × 2
##    mean    se
##   <dbl> <dbl>
## 1  16.6  1.98
How do the shapes, centers, and spreads of these distributions compare?
The true population parameters

## # A tibble: 1 × 2
##      mu sigma
##   <dbl> <dbl>
## 1  16.6  14.0

The sample statistics

## # A tibble: 1 × 2
##    mean    se
##   <dbl> <dbl>
## 1  16.6  1.98
If certain assumptions are satisfied, regardless of the shape of the population distribution, the sampling distribution of the mean follows an approximately normal distribution.
The center of the sampling distribution is at the center of the population distribution.
The sampling distribution is less variable than the population distribution (and we can quantify by how much).
What is the standard error, and how are the standard error and sample size related? What does that say about how the spread of the sampling distribution changes as n increases?
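To make this concrete: since the standard error is σ/√n, quadrupling the sample size halves the spread of the sampling distribution. A quick check using σ = 14 from the simulated population above (the choice of sample sizes is arbitrary):

14 / sqrt(50)   # SE when n = 50: about 1.98
14 / sqrt(200)  # SE when n = 200: about 0.99, half as large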
If Z∼N(0,1), what is P(−1<Z<2)?
P(Z < 2)
pnorm(2)
## [1] 0.9772499
P(Z < -1)
pnorm(-1)
## [1] 0.1586553
P(Z < 2) - P(Z < -1)
pnorm(2) - pnorm(-1)
## [1] 0.8185946
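Equivalently, P(−1 < Z < 2) is the area under the standard normal density between −1 and 2, which we can verify numerically (a sketch using base R's integrate()):

integrate(dnorm, lower = -1, upper = 2)  # approximately 0.8185946, matching pnorm(2) - pnorm(-1)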
We will use the Central Limit Theorem and the normal distribution to conduct statistical inference.
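For example (a sketch using the simulated population above; the cutoff of 20 is an arbitrary choice), the CLT lets us approximate the probability that the mean of a sample of 50 observations exceeds 20:

# CLT approximation: the sample mean is roughly N(16.6, 14 / sqrt(50))
pnorm(20, mean = 16.6, sd = 14 / sqrt(50), lower.tail = FALSE)  # about 0.043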