paypal / buymeacoffee
Table of Contents
Week  Lecture  Lecture Topic                                               Reading M&M
1     1        Confidence Intervals                                        6.1
      2        Significance Testing                                        6.2
2     3        Details and Limitations of Significance Testing and CI;    6.2 + 6.3 + Document
               Standardized Effect Sizes; Effect Size
      -        Exam Review (Statistics 1a)                                 No Lecture
3     4        Power                                                       6.4
      5        One-Sample t Procedures; Standardized Effect Sizes;        7.1 + Document
               Paired t Procedure
4     6        Two-Sample t Procedure; Standardized Effect Sizes          7.2 + Document
      7        Sign Test for Matched Pairs; Inference for a Proportion    7.3 + 8.1
5     8        Inference for a Proportion; Inference for Two Proportions  8.1 + 8.2
      9        Inference for Two-Way Tables                               9
6     10       Introduction to Bayesian Statistics                        Click Link
      -        Friday Before Christmas                                    No Lecture
I appreciate and thank you for any donation; all this money will
(probably) go toward getting more 2nd year books :)
General Concepts
Lecture 1
Learn the concepts behind confidence intervals, what they can and can’t tell you, how to
calculate confidence intervals and appropriate sample sizes, and how to look up z*-values
for confidence intervals in table A.
Lecture 2
Know the concepts behind significance testing, what it can and can’t tell you, how to calculate
the test value (z), and how to convert test values to right-sided, left-sided, or two-sided p-values.
Lecture 3
Understand the concepts behind critical values and effect sizes, what they can and can’t tell
you, know how to look up z*-values for calculating the critical values and effect size (Cohen's
d), and understand the limitations of inferential statistics.
Lecture 4
Learn what power, type I, and type II errors are, how to determine them, and how they are
related to each other.
Lecture 5
Learn about t-distributions and degrees of freedom. Learn to recognize matched pairs data.
Understand table D and how to use it for significance testing and confidence intervals.
Lecture 6
Learn to recognize two-sample data, and when two-sample z, two-sample t, and pooled
sample t are appropriate. Remember the corresponding formulas for each test.
Lecture 7
Study the assumptions going into each procedure learned so far, and what the alternatives are
when these assumptions are violated. Understand what the sign test is about.
Lecture 8
Learn how to analyze proportions, why these tests always use z-values, why the standard error
now differs between confidence intervals and significance testing, and the assumptions that
must hold for these procedures to be valid.
Lecture 9
Learn about the chi²-distribution in general, the corresponding degrees of freedom, and both
tests making use of it (including their assumptions). Note that they're conceptually significance
tests with a few extra steps. Understand table F and how to use it.
Lecture 10
Learn about the differences between Frequentist and Bayesian approaches. Learn to calculate
with conditional probabilities. Learn that the Posterior is (roughly) a weighted average of the
Prior and the Likelihood (observed data).
Lecture 1 - Confidence Intervals
statistical inference → drawing conclusions about a population based on sample data
↳ tells us how much confidence we can have in our conclusions
most common types of statistical inference:
→ confidence intervals
→ tests of significance
Statistical Confidence
central limit theorem → for a population with mean μ and standard deviation σ, in repeated
simple random samples of size n, the sample mean x̄ will be approximately:

x̄ ~ N(μ, σ/√n)

→ in repeated sampling, x̄ has an approximately normal distribution, centered at the unknown
population mean μ and with a standard deviation of:

σ_x̄ = σ/√n
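To make this concrete, here is a minimal simulation sketch (Python with NumPy; the population
values μ = 100, σ = 15 and the sample size n = 25 are made up for illustration) showing that the
sample means really do spread out with standard deviation σ/√n:

import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 100, 15, 25                 # hypothetical population mean, SD, and sample size

# draw 10,000 simple random samples of size n and compute each sample mean
sample_means = rng.normal(mu, sigma, size=(10_000, n)).mean(axis=1)

print(sample_means.mean())                 # close to mu (100)
print(sample_means.std())                  # close to sigma / sqrt(n) = 15 / 5 = 3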
Confidence Intervals
- their purpose is to give us a sense of the actual population mean when we only have access
to sample means x̄
- we will assume for now that we have access to the population standard deviation σ
- a confidence interval has a confidence level, which gives the probability of producing an
interval that contains the unknown parameter

CI = sample mean ± margin of error
   = x̄ ± z* × σ/√n
margin of error
the margin of error for a level C confidence interval for the mean μ of a Normal population with
known standard deviation σ, based on a simple random sample of size n, is:

m = z* × σ/√n
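For a concrete feel of these two formulas, a worked example with made-up numbers
(σ = 15, n = 25, x̄ = 102, 95% confidence so z* = 1.960):

m  = z* × σ/√n = 1.960 × 15/√25 = 1.960 × 3 = 5.88
CI = x̄ ± m = 102 ± 5.88 → (96.12, 107.88)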
95% confidence interval
95% CI → an interval around the sample mean constructed such that 95% of all hypothetical
intervals constructed similarly include the population mean
- computing this confidence interval for many samples ensures that approximately 95% of
the confidence intervals contain the true population mean
- the confidence interval is symmetrical, so we have to look up the Z score for the left or the
right bound:
values of z* for common confidence levels
confidence level C    90%      95%      99%
z*                    1.645    1.960    2.576
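These z*-values can be looked up in table A, but they also follow directly from the standard
normal distribution: z* for level C is the point with area (1 + C)/2 to its left, leaving area
(1 − C)/2 in each tail. A minimal sketch (Python with SciPy) reproducing the table above:

from scipy.stats import norm

for C in (0.90, 0.95, 0.99):
    z_star = norm.ppf((1 + C) / 2)   # point with area (1 + C)/2 to its left
    print(C, round(z_star, 3))       # ≈ 1.645, 1.96, 2.576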