parametric designs
real numbers on a continuous scale
non-parametric designs
categorical responses
comparing 2 groups with the same people?
within groups design
related t-test
comparing 2 groups with different people?
between subjects design
independent samples t-test
bell curve
an assumption made when doing statistics
the normal (bell curve) distribution of scores and standard deviation
95% of people give an answer which falls within +/- 2 standard deviations of the mean.
The other 5% falls in the two tails, beyond 2 standard deviations.
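The +/- 2 standard deviation figure above can be checked against the standard normal CDF. A minimal sketch using only Python's math module (the exact proportion is closer to 95.45%; "95%" is the rounded rule of thumb):

```python
import math

def normal_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

# Proportion of a normal distribution within +/- 2 standard deviations.
within_2sd = normal_cdf(2) - normal_cdf(-2)
print(f"{within_2sd:.4f}")  # about 0.9545
```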
3 group analysis
anova
what is statistical power
the probability of detecting a true effect, i.e. of NOT making a type 2 error (power = 1 - beta)
high statistical power?
chance of type 2 decreases
environmental error
the difference between the true value of something and its measured value
reporting stats
comparisons between 2 groups-
t test
t(df)= test outcome, p value (p<0.05)
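As a sketch of where those reported numbers come from, an independent-samples t-test run in Python with scipy on made-up data (scipy assumed available; the scores are hypothetical):

```python
from scipy import stats

# Hypothetical scores for two independent groups of participants.
group_a = [21, 25, 19, 23, 24, 22, 26, 20]
group_b = [28, 30, 27, 31, 29, 26, 32, 28]

t, p = stats.ttest_ind(group_a, group_b)
df = len(group_a) + len(group_b) - 2  # df for a standard independent t-test

# Assemble the t(df) = outcome, p report described above.
print(f"t({df}) = {t:.2f}, p {'<' if p < .05 else '>'} .05")
```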
reporting stats
comparisons between 3 groups
anova
F(df1, df2) = test outcome, p value (e.g. p<0.01)
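A sketch of a one-way ANOVA in scipy with hypothetical data, showing where the two df values in the F(df1, df2) report come from:

```python
from scipy import stats

# Hypothetical scores for three groups.
g1 = [12, 14, 11, 15, 13]
g2 = [18, 17, 19, 16, 20]
g3 = [25, 23, 24, 26, 22]

f, p = stats.f_oneway(g1, g2, g3)
df_between = 3 - 1    # number of groups - 1
df_within = 15 - 3    # total N - number of groups

# F is reported with both df values: F(df1, df2).
print(f"F({df_between}, {df_within}) = {f:.2f}, p = {p:.4f}")
```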
when would be the only time you would report the p value as the actual value?
when a t test or anova is being conducted as a post hoc analysis
sig or not?
p>.05
no
sig or not?
p<.05
sig
p value : .325
report : p> .05
Significant?
no
if the outcome of the test is less than 0?
F and chi-square values cannot be negative; a negative t value only reflects the direction of the difference, so judge significance from the p value, not the sign
factorial designs
more than one factor in the study so you are manipulating more than one thing
factors and levels
a factor is the overarching label, e.g. sex, whereas levels are the categories within it, e.g. female and male
when p > 0.05
not significant, because there is a more than 5% chance the result is a coincidence
when p < 0.05
is significant
because there is a less than 5% chance the result is a coincidence
report writing
design
A 2(sex; female and male)x3(age group; 20-29, 30-39, 40-49) between subjects/within/mixed design.
as a function of?
depending on / in relation to, e.g. "performance as a function of age" means how performance changes with age
Analysis of variance reported
F(df1, df2) = outcome, MSE = value, p
what is MSE
mean square error
information about the error that is in your data collection
Reporting analysis of variance
There was a significant main effect of location, F(df1, df2) = outcome, MSE = value, p < .05
how do you write up correlations
r(df) = outcome, p<0.05
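A sketch of a Pearson correlation in scipy with hypothetical hours-studied vs exam-score data, showing the r(df) = outcome, p report (df for a correlation is N - 2):

```python
from scipy import stats

# Hypothetical hours-studied vs exam-score data.
hours = [1, 2, 3, 4, 5, 6, 7, 8]
score = [52, 55, 61, 60, 68, 70, 75, 80]

r, p = stats.pearsonr(hours, score)
df = len(hours) - 2  # df for a correlation is N - 2

print(f"r({df}) = {r:.2f}, p {'<' if p < .05 else '>'} .05")
```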
what is regression
a technique used to model and predict the relationship between variables
two types of regression
simple linear regression
multiple linear regression
what is the difference between simple linear regression and multiple linear regression
simple is how well a single predictor variable predicts scores on the criterion variable, whereas multiple is how well two or more predictor variables together predict scores on the criterion variable
when do you use regression
after a set of correlations has been performed
why would you use regression
to determine which predictor variables are best at predicting your dependent variable
what are 5 assumptions of multiple linear regression
linear relationship
10-50 sample size
no multicollinearity
homogeneity of variance
predictor variable is ordinal and criterion variable is continuous
what is a criterion variable
the variable being predicted - the dependent variable
one aim of MLR
to develop a regression model of predictors that accounts for as much of the variance in the criterion variable as possible - more variance explained means better predictive power
what is the goodness of fit
relates to how well the model is able to account for the variance in the criterion variable.
MLR in SPSS
look under model summary for the adjusted R square; this tells you how much of the variance in the criterion variable the model accounts for
You then look at the standardized coefficients beta, and this tells you whether each variable predicts scores on the criterion variable or not
how do you write the standardized coefficients beta
variable 1 (known as model 1): β = standardized coefficient, p
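Outside SPSS, the same quantities can be sketched by hand. This minimal numpy example (hypothetical data; two made-up predictors) standardizes everything first so the least-squares slopes are standardized betas, then computes R squared and adjusted R squared:

```python
import numpy as np

# Hypothetical data: two predictors and one criterion variable.
x1 = np.array([2., 4., 6., 8., 10., 12., 14., 16.])
x2 = np.array([1., 3., 2., 5., 4., 6., 7., 8.])
y = np.array([10., 14., 15., 22., 21., 27., 30., 34.])

# Standardize everything so the slopes are standardized betas.
z = lambda v: (v - v.mean()) / v.std(ddof=1)
X = np.column_stack([z(x1), z(x2)])
beta, *_ = np.linalg.lstsq(X, z(y), rcond=None)

# R^2 and adjusted R^2 (variance accounted for by the model).
pred = X @ beta
r2 = 1 - ((z(y) - pred) ** 2).sum() / ((z(y) - z(y).mean()) ** 2).sum()
n, k = len(y), 2
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
print("betas:", beta.round(2), "adjusted R^2:", round(adj_r2, 3))
```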
assumptions of using chi square
non parametric data
categories are mutually exclusive (each observation falls in only one category)
data should be frequencies
2 categorical variables with at least 2 levels each
relationship between categorical/nominal variables
Chi square
contingency table
create totals for the factors and levels
how do you write chi square
x2(df) = outcome, p
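A sketch of a chi-square test on a hypothetical 2x2 contingency table using scipy; chi2_contingency computes the totals and expected frequencies for you:

```python
from scipy import stats

# Hypothetical 2x2 contingency table of observed frequencies:
# rows = sex (female, male), columns = preference (yes, no).
observed = [[30, 10],
            [15, 25]]

chi2, p, df, expected = stats.chi2_contingency(observed)
print(f"x2({df}) = {chi2:.2f}, p = {p:.4f}")
```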
wilcoxon signed rank test
compares 2 related samples to assess whether their population mean ranks differ
Z= (z value) p=(sig value)
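A sketch of a Wilcoxon signed-rank test in scipy on hypothetical before/after scores. Note that scipy reports the W statistic, whereas SPSS output reports the Z value used in the write-up above:

```python
from scipy import stats

# Hypothetical before/after scores for the same 10 participants.
before = [10, 12, 9, 14, 11, 13, 15, 10, 12, 11]
after = [13, 15, 11, 17, 14, 16, 18, 12, 15, 13]

result = stats.wilcoxon(before, after)
# scipy reports W; SPSS reports a Z value instead.
print(f"W = {result.statistic:.1f}, p = {result.pvalue:.4f}")
```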
Friedman’s test
non parametric alternative to one-way ANOVA with repeated measures, used to test for differences between groups when the dependent variable is ordinal
x2(df)= chi square, p=(sig value)
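A sketch of Friedman's test in scipy with hypothetical ratings of the same participants under three conditions; the statistic is reported as a chi square with k - 1 df:

```python
from scipy import stats

# Hypothetical ratings of the same 8 participants under three conditions.
cond1 = [4, 5, 3, 4, 5, 4, 3, 5]
cond2 = [2, 3, 2, 3, 2, 3, 2, 3]
cond3 = [5, 5, 4, 5, 4, 5, 5, 4]

chi2, p = stats.friedmanchisquare(cond1, cond2, cond3)
df = 3 - 1  # number of conditions - 1
print(f"x2({df}) = {chi2:.2f}, p = {p:.4f}")
```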
Mann Whitney U
non parametric alternative to the independent t-test; compares 2 independent samples
U=(mann on table) p=(sig value)
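A sketch of a Mann-Whitney U test in scipy on two hypothetical independent groups (the two-sided alternative is specified explicitly, since older scipy versions defaulted to one-sided):

```python
from scipy import stats

# Hypothetical scores from two independent groups.
group_a = [3, 4, 2, 5, 4, 3, 4, 2]
group_b = [7, 8, 6, 9, 7, 8, 6, 7]

u, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.4f}")
```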
multiple linear regression
a notion used to predict the relationship between variables-which predictor variables are best at predicting your dependent
F(df)=f value, p value
adjusted R square: how much variance the predictors account for
Beta coefficient: how predictive one variable is of the other; is it significant?
kolmogorov-smirnov test
compares data with a known distribution to see if it’s the same
D(df)=outcome, P value
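A sketch of a one-sample Kolmogorov-Smirnov test in scipy, comparing a small hypothetical sample against the standard normal distribution:

```python
from scipy import stats

# Hypothetical sample tested against a standard normal distribution.
sample = [-0.2, 0.1, 0.5, -0.7, 1.2, -1.1, 0.3, 0.8, -0.4, 0.0]

d, p = stats.kstest(sample, "norm")
# A non-significant p means no evidence the sample differs from normal.
print(f"D = {d:.3f}, p = {p:.3f}")
```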
Kruskal wallis H test
rank-based non parametric test used to determine whether there are statistically significant differences between 2 or more independent groups
X2(df) =chi square, p=sig
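A sketch of a Kruskal-Wallis H test in scipy on three hypothetical independent groups; the H statistic is reported as a chi square with k - 1 df:

```python
from scipy import stats

# Hypothetical scores from three independent groups.
g1 = [1, 2, 3, 2, 1]
g2 = [5, 6, 5, 7, 6]
g3 = [9, 8, 10, 9, 8]

h, p = stats.kruskal(g1, g2, g3)
df = 3 - 1  # number of groups - 1
print(f"x2({df}) = {h:.2f}, p = {p:.4f}")
```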
mauchly’s test of sphericity
tests the assumption (sphericity) that the variances of the differences between all combinations of related groups are equal
X2(df)=chi square, p=
T test
compares 2 means and tells you whether they differ significantly from each other
t(df) =outcome, p=
ANOVA
used to analyse the differences between group means
F(df1, df2) = outcome, MSE = value, p value