Flashcards in Lectures Deck (50)
1
Q

parametric designs

A

real numbers on a continuous scale

2
Q

non-parametric designs

A

categorical responses

3
Q

comparing 2 groups with the same people?

A

within groups design

related t-test

4
Q

comparing 2 groups with different people?

A

between-subjects design

independent-samples t-test
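
The mapping from design to test can be sketched in Python with scipy.stats (the scores and variable names below are invented for illustration, not from the lecture):

from scipy import stats

# Same people measured in both conditions (within-groups): related/paired t-test
before = [5.1, 6.2, 5.8, 7.0, 6.5]
after = [5.9, 6.8, 6.1, 7.4, 6.9]
t_rel, p_rel = stats.ttest_rel(before, after)

# Different people in each group (between-subjects): independent-samples t-test
group_a = [5.1, 6.2, 5.8, 7.0, 6.5]
group_b = [4.2, 5.0, 4.8, 5.5, 5.1]
t_ind, p_ind = stats.ttest_ind(group_a, group_b)

print(t_rel, p_rel, t_ind, p_ind)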

5
Q

bell curve

A

the normal distribution; an assumption made when doing parametric statistics

6
Q

bell curve of standard deviation

A

About 95% of people give an answer that falls within +/- 2 standard deviations of the mean.
The other 5% falls in the two tails, beyond 2 standard deviations.
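
As a quick check of that rule of thumb (a sketch using scipy, not part of the lecture), the proportion of a normal distribution lying within +/- 2 standard deviations can be computed directly:

from scipy.stats import norm

# Area of the normal curve between -2 and +2 standard deviations
within_2sd = norm.cdf(2) - norm.cdf(-2)
print(within_2sd)  # ~0.954, so roughly 95%; the remaining ~5% sits in the two tails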

7
Q

3 group analysis

A

ANOVA (analysis of variance)

8
Q

what is statistical power

A

the probability of correctly detecting an effect that is really there; equal to 1 minus the probability of making a type 2 error

9
Q

high statistical power?

A

the chance of a type 2 error decreases

10
Q

environmental error

A

the difference between the true value of something and its measurement

11
Q

reporting stats

comparisons between 2 groups-

A

t test

t(df)= test outcome, p value (p<0.05)

12
Q

reporting stats

comparisons between 3 groups

A

ANOVA

F(df1, df2) = test outcome, p value (p < .01)
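
A minimal sketch of producing those two report strings from test output, assuming scipy and made-up scores (degrees of freedom are computed by hand: n1 + n2 - 2 for the t-test, k - 1 and N - k for the ANOVA):

from scipy import stats

group_a = [5.1, 6.2, 5.8, 7.0, 6.5]
group_b = [4.2, 5.0, 4.8, 5.5, 5.1]
group_c = [6.0, 6.8, 7.1, 6.4, 6.9]

# Two groups: t-test
t, p_t = stats.ttest_ind(group_a, group_b)
df_t = len(group_a) + len(group_b) - 2
print(f"t({df_t}) = {t:.2f}, " + ("p < .05" if p_t < .05 else "p > .05"))

# Three groups: ANOVA
f, p_f = stats.f_oneway(group_a, group_b, group_c)
n_total = len(group_a) + len(group_b) + len(group_c)
df1, df2 = 3 - 1, n_total - 3
print(f"F({df1}, {df2}) = {f:.2f}, " + ("p < .05" if p_f < .05 else "p > .05"))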

13
Q

when would be the only time you would report the p value as the actual value?

A

when a t-test or ANOVA is being conducted as a post hoc analysis

14
Q

sig or not?

p>.05

A

no

15
Q

sig or not?

p < .05

A

sig

16
Q

p value : .325
report : p> .05
Significant?

A

no

17
Q

if the outcome of the test is less than 0?

A

never significant

18
Q

factorial designs

A

more than one factor in the study, so you are manipulating more than one thing

19
Q

factors and levels

A

a factor is the overarching label, e.g. sex, whereas levels are the categories within it, e.g. female and male

20
Q

When p> 0.05

A

isn't significant, because there is a more than 5% chance that the result is a coincidence

21
Q

when p<0.05

A

is significant

because there is a less than 5% chance that the result is a coincidence

22
Q

report writing

design

A

A 2 (sex: female and male) x 3 (age group: 20-29, 30-39, 40-49) between-subjects/within-subjects/mixed design.

23
Q

as a function of?

A

in relation to; depending on

24
Q

Analysis of variance reported

A

F(df1, df2) = outcome, MSE = value, p value

25
Q

what is MSE

A

mean square error

information about the error that is in your data collection

26
Q

Reporting analysis of variance

A

There was a significant main effect of location, F(df1, df2) = outcome, MSE = value, p < .05

27
Q

how do you write up correlations

A

r(df) = outcome, p < .05
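
A minimal sketch of getting r, its degrees of freedom (N - 2 for a correlation), and the p value with scipy (data invented):

from scipy.stats import pearsonr

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 2.9, 4.2, 4.8, 6.1, 6.8]

r, p = pearsonr(x, y)
df = len(x) - 2
print(f"r({df}) = {r:.2f}, " + ("p < .05" if p < .05 else "p > .05"))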

28
Q

what is regression

A

a method used to predict the relationship between variables

29
Q

two types of regression

A

simple linear regression

multiple linear regression

30
Q

what is the difference between simple linear regression and multiple linear regression

A

simple linear regression tests how well a single predictor variable predicts scores on a criterion variable, whereas multiple linear regression tests how well several predictor variables together predict scores on a criterion variable

31
Q

when do you use regression

A

after a set of correlations has been performed

32
Q

why would you use regression

A

to determine which predictor variables are best at predicting your dependent variable

33
Q

what are 5 assumptions of multiple linear regression

A

linear relationship
10-50 sample size
no multicollinearity
homogeneity of variance
predictor variable is ordinal and criterion variable is continuous

34
Q

what is a criterion variable

A

the variable being predicted; the dependent variable

35
Q

one aim of MLR

A

to develop a regression model of predictors that accounts for as much of the variance in the criterion variable as possible; more variance explained means better predictive power

36
Q

what is the goodness of fit

A

relates to how well the model is able to account for the variance in the criterion variable.

37
Q

MLR in SPSS

A

look under model summary for the adjusted R square; that tells you how much of the variance in the criterion variable the model accounts for.
You then look at the standardized coefficients beta; this tells you whether each variable predicts scores on the criterion variable or not

38
Q

how do you write the standardized coefficients beta

A

variable 1 (known as model 1): β = standardized coefficient beta, p value
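
The same pieces of output (adjusted R square, standardized betas, p values) can be reproduced outside SPSS; this is a sketch using Python's statsmodels with invented data and variable names, where everything is z-scored first so the slopes come out as standardized beta coefficients:

import pandas as pd
import statsmodels.api as sm

# Illustrative data: two predictor variables and one criterion (dependent) variable
data = pd.DataFrame({
    "pred1": [2.0, 3.1, 4.2, 5.0, 6.1, 7.2, 8.0, 9.1],
    "pred2": [1.0, 1.5, 2.2, 2.0, 3.1, 3.5, 4.0, 4.2],
    "crit":  [3.0, 4.2, 5.1, 5.8, 7.0, 7.9, 8.8, 9.5],
})

# z-score all variables so the coefficients are standardized (beta) coefficients
z = (data - data.mean()) / data.std()

X = sm.add_constant(z[["pred1", "pred2"]])
model = sm.OLS(z["crit"], X).fit()

print(model.rsquared_adj)  # adjusted R square: variance in the criterion accounted for
print(model.params)        # standardized beta for each predictor (the 'const' row is the intercept)
print(model.pvalues)       # does each predictor significantly predict the criterion?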

39
Q

5 assumptions of using chi square

A

non-parametric data
categories are individual
data should be frequencies
2 categorical variables with at least 2 levels each
relationship between categorical/nominal variables

40
Q

Chi square

contingency table

A

create totals for the factors and levels

41
Q

how do you write chi square

A

χ²(df) = outcome, p value
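
A minimal sketch of a chi-square test of association on a contingency table of frequencies, using scipy (the counts are invented); the print statement mirrors the χ²(df) = outcome, p format above:

from scipy.stats import chi2_contingency

# Rows: sex (female, male); columns: response (yes, no) -- frequencies, not scores
table = [[30, 10],
         [20, 25]]

chi2, p, df, expected = chi2_contingency(table)
print(f"x2({df}) = {chi2:.2f}, " + ("p < .05" if p < .05 else "p > .05"))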

42
Q

wilcoxon signed rank test

A

compares 2 related samples to assess whether their population mean ranks differ
Z = (z value), p = (sig value)

43
Q

Friedman’s test

A

non-parametric alternative to the one-way ANOVA with repeated measures, used to test for differences between groups when the dependent variable is ordinal
χ²(df) = chi-square value, p = (sig value)

44
Q

Mann Whitney U

A

non-parametric alternative to the independent t-test; compares 2 sample means
U = (Mann-Whitney U value), p = (sig value)
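
A sketch of running these three non-parametric tests with scipy (all data invented; note that scipy reports the Wilcoxon W statistic rather than the Z value SPSS prints):

from scipy import stats

# Wilcoxon signed-rank: 2 related samples
cond1 = [5, 6, 7, 4, 6, 8, 5, 7]
cond2 = [6, 7, 8, 5, 8, 9, 6, 8]
w_stat, p_w = stats.wilcoxon(cond1, cond2)

# Friedman: 3 or more related samples
cond3 = [7, 8, 9, 6, 9, 9, 7, 9]
chi2_f, p_f = stats.friedmanchisquare(cond1, cond2, cond3)

# Mann-Whitney U: 2 independent samples
group_a = [3, 5, 4, 6, 5, 7]
group_b = [6, 8, 7, 9, 8, 10]
u_stat, p_u = stats.mannwhitneyu(group_a, group_b)

print(p_w, p_f, p_u)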

45
Q

multiple linear regression

A

a method used to predict the relationship between variables, i.e. which predictor variables are best at predicting your dependent variable
F(df1, df2) = F value, p value
adjusted R square: how much of the variance the predictors account for
beta coefficient: how predictive one variable is of the other, and whether it is significant

46
Q

kolmogorov-smirnov test

A

compares your data with a known distribution to see whether they are the same
D(df) = outcome, p value

47
Q

Kruskal wallis H test

A

a ranks-based non-parametric test used to see whether there are statistically significant differences between 2 or more groups
χ²(df) = chi-square value, p = sig value
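
A sketch of the last two tests (Kolmogorov-Smirnov and Kruskal-Wallis H) in scipy (data invented; the Kolmogorov-Smirnov call below compares the sample against a standard normal distribution):

import numpy as np
from scipy import stats

# Kolmogorov-Smirnov: compare a sample with a known (here, standard normal) distribution
rng = np.random.default_rng(0)
sample = rng.normal(size=50)
d_stat, p_ks = stats.kstest(sample, "norm")

# Kruskal-Wallis H: ranks-based comparison of 2 or more independent groups
g1 = [3, 5, 4, 6, 5, 7]
g2 = [6, 8, 7, 9, 8, 10]
g3 = [2, 4, 3, 5, 4, 6]
h_stat, p_kw = stats.kruskal(g1, g2, g3)

print(d_stat, p_ks, h_stat, p_kw)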

48
Q

mauchly’s test of sphericity

A

tests for sphericity: the condition where the variances of the differences between all combinations of related groups are equal
χ²(df) = chi-square value, p = (sig value)

49
Q

T test

A

compares 2 averages and tells you whether they are significantly different from each other
t(df) = outcome, p = (sig value)

50
Q

ANOVA

A

used to analyse the differences between group means

F(df1, df2) = outcome, MSE = value, p value
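
A sketch of pulling every piece of that write-up (F, both degrees of freedom, MSE, and p) out of an ANOVA table, assuming Python's statsmodels and made-up scores and group labels:

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Illustrative scores for three groups
data = pd.DataFrame({
    "score": [5.1, 6.2, 5.8, 4.2, 5.0, 4.8, 6.0, 6.8, 7.1],
    "group": ["a", "a", "a", "b", "b", "b", "c", "c", "c"],
})

model = ols("score ~ C(group)", data=data).fit()
table = sm.stats.anova_lm(model, typ=1)

f_value = table.loc["C(group)", "F"]
df1 = int(table.loc["C(group)", "df"])
df2 = int(table.loc["Residual", "df"])
mse = table.loc["Residual", "sum_sq"] / table.loc["Residual", "df"]  # mean square error
p = table.loc["C(group)", "PR(>F)"]
print(f"F({df1}, {df2}) = {f_value:.2f}, MSE = {mse:.2f}, " + ("p < .05" if p < .05 else "p > .05"))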