Probability/Statistical Significance Flashcards
1

What are the two ways studies can screw up?

1. Caused by chance = random error

2. Not caused by chance = bias or systematic error

2

What deals with random error in studies?

Statistical inference

3

If a study has a random error, is it likely to happen again if/when the study is repeated?

NO

4

An error that is inherent to the study method being used and results in a predictable and repeatable error for each observation is labeled a _____ error. What is it due to?

Systematic error due to bias

5

T/F: If you repeat a study that had a systematic error, it is likely to happen again

TRUE

These errors are not caused by chance, and there is no formal method to deal with them.

6

What tests will estimate the likelihood that a study result was caused by chance?

Tests of statistical inference

**A study result is called "statistically significant" if it is unlikely to have been caused by chance

7

If a study result is statistically significant, is it also clinically significant?

Not necessarily

Those terms have two different meanings

*Even very small measures of association that are not large enough to matter clinically can still be statistically significant

8

What is a chance occurrence?

Something that happens unpredictably, without discernible human intention and with no observable cause; in other words, caused by chance or random variation

9

What is random variation?

There is error in every measurement. If we measure something over and over again, we will get slightly different measurements each time AND a few measurements may be extreme

10

What is statistical inference?

Tells us: if we measure something only once, how likely it is that our result was caused by chance

11

What two methods are used for estimating how much random variation there is in our study and whether our result was likely to have been caused by chance?

1. Confidence intervals

2. P-values

12

_______ estimates how much random variation there is in our measurement

Confidence intervals

-the range of values where the true value of our measurement could be found

13

_____ are used to estimate whether the measure was likely to have been caused by chance or not

P-values

14

Will small sample sizes have a large 95% Confidence interval or small CI?

What about large sample sizes?

The larger the sample size, the smaller the confidence interval will be = more precise

*small samples have large CIs

*Large samples have small CIs

15

How do you interpret this statement?

"prevalence of disease was 8% (95% CI: 4%-12%)"

The estimate of the prevalence from the study was 8%, but we are 95% confident that the true prevalence lies somewhere between 4% and 12%
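As a rough illustration, here is a minimal Python sketch using the normal (Wald) approximation with made-up counts chosen to land near the 8% (4%-12%) example above; it also shows how a larger sample gives a narrower (more precise) CI:

```python
import math

def prevalence_ci(cases, n, z=1.96):
    """95% CI for a prevalence, normal (Wald) approximation.
    Sketch only; Wilson/exact intervals are preferable for small samples."""
    p = cases / n
    se = math.sqrt(p * (1 - p) / n)   # standard error of the proportion
    return p, p - z * se, p + z * se

# Hypothetical counts: ~8% prevalence observed in a small and a large sample
for n in (180, 1800):
    p, lo, hi = prevalence_ci(round(0.08 * n), n)
    print(f"n={n}: prevalence {p:.0%} (95% CI: {lo:.0%}-{hi:.0%})")
```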

16

T/F: If the 95% CI for the odds ratio (OR) does NOT include one, the OR is statistically significant

TRUE

Ex: The odds ratio was 3 (95% CI: 0.5 - 6)

**Since this CI includes ONE as a possible value of the OR, it is NOT statistically significant

17

How do you interpret 95% confidence intervals (95% CI) for odds ratios (OR)?

1. OR greater than one, 95% CI does NOT include one: Positive association; statistically significant

2. OR greater than one, 95% CI includes one: No association; NOT statistically significant

3. OR less than one, 95% CI does NOT include one: Negative association; statistically significant

4. OR less than one, 95% CI includes one: No association; NOT statistically significant
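As an illustration, a small hypothetical helper that applies these rules to an OR (or RR) estimate and its 95% CI:

```python
def interpret_ratio(estimate, ci_low, ci_high):
    """Apply the interpretation rules above to an OR (or RR) and its 95% CI.
    Sketch only; assumes ci_low/ci_high are the 95% CI for the ratio itself."""
    if ci_low <= 1 <= ci_high:
        return "No association (not statistically significant)"
    if estimate > 1:
        return "Positive association (statistically significant)"
    return "Negative association (statistically significant)"

print(interpret_ratio(3.0, 0.5, 6.0))   # CI includes one -> not significant
print(interpret_ratio(3.0, 1.4, 6.4))   # CI excludes one -> positive association
print(interpret_ratio(0.4, 0.2, 0.9))   # CI excludes one -> negative association
```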

18

If the 95% CI for the relative risk (RR) does NOT include one, the RR (is / is not) statistically significant

IS

*remember, when the RR = one, there is no association between the two test groups

19

How do you interpret a RR greater than one, combined with a 95% CI that does NOT include one?

Positive association
Statistically significant

20

How do you interpret a RR less than one, combined with a 95% CI that includes one?

No association
Not statistically significant

21

How do you interpret a RR less than one, combined with a 95% CI that does NOT include one?

Negative association
Statistically significant

22

T/F: P-value gives you information about the size of the test sample

FALSE

**It also does NOT give you any info about the range in which you can expect to find the true value

23

To be statistically significant, the p-value must be less than _____

0.05


*If the p-value is greater than 0.05, the association is NOT statistically significant and could have been caused by chance

24

How do you interpret p-values that are less than 0.05?

We are 95% confident that an association as large as the one in our study was NOT caused by chance

or

We have 95% confidence that an association this large could not have been caused by chance

25

How do you interpret the following value?

OR or RR or PR = 3.0 (p = 0.02)

Statistically significant. There is an association. We are 95% certain that an OR of 3.0 could NOT have been caused by chance.

26

T/F: No matter how large the RR or OR; if the p-value is greater than 0.05, we must say there is no association

TRUE

27

How are p-values calculated?

Using statistical tests - tests for statistical inference:

1. Chi-squared test
2. Student's t-test
3. Correlation

(need to know when/where to use these three tests - do not worry about calculations)
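If it helps to see them side by side, here is a minimal hypothetical sketch (Python with scipy, made-up data) of the kind of question each of the three tests answers:

```python
# Toy sketch of where each test is typically used, via scipy.stats
from scipy import stats

# 1. Chi-squared test: association between two CATEGORICAL variables,
#    e.g. exposure vs disease in a 2x2 table
table = [[30, 70],   # exposed:   30 diseased, 70 not diseased
         [10, 90]]   # unexposed: 10 diseased, 90 not diseased
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

# 2. Student's t-test: compare the MEAN of a continuous variable between two groups
group_a = [4.1, 3.8, 5.0, 4.6, 4.2]
group_b = [5.2, 5.9, 5.5, 6.1, 5.4]
t_stat, p_t = stats.ttest_ind(group_a, group_b)

# 3. Correlation: linear relationship between two CONTINUOUS variables
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.1, 9.8]
r, p_r = stats.pearsonr(x, y)

print(f"chi-squared p={p_chi:.3f}; t-test p={p_t:.3f}; correlation r={r:.2f} (p={p_r:.3f})")
```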

28

When testing a hypothesis, can you prove something is true, untrue, or both?

Untrue

You cannot prove that something is true

You can't prove that an association exists

But you CAN show that a claim is NOT true --> hence the use of a Null hypothesis

29

What is a "Null" hypothesis?

A hypothesis that states there is NO association

It is set up to be proven untrue and rejected; rejecting the null hypothesis is how we support an association

30

What is the alternative hypothesis?

The actual research question that we want the answer to (that there is an association)