research: statistics Flashcards

1
Q

frequency distribution

A

Tabulation of the number of observations (or number of participants) per distinct response for a particular variable. It is presented in table format, with rows indicating each distinct response and columns presenting the frequency with which that response occurred.
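As a quick illustration, such a tabulation can be produced with the standard library's Counter; a minimal Python sketch with made-up survey responses:

```python
from collections import Counter

# Hypothetical responses to a single survey item (made-up data)
responses = ["agree", "agree", "neutral", "disagree", "agree", "neutral"]

# Tabulate the frequency of each distinct response
freq = Counter(responses)

print(f"{'Response':<10}{'Frequency':>10}")
for response, count in freq.most_common():
    print(f"{response:<10}{count:>10}")
```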

2
Q

frequency polygon

A

A line graph of the frequency distribution that is used to visually display data that are ordinal, interval, or ratio. The X-axis typically indicates the possible values, and the Y-axis typically represents the frequency count for each of those values.

3
Q

histogram

A

A graph of adjoining (connected) bars that shows the frequency of scores for a variable. Taller bars indicate greater frequency or number of responses. Histograms are used with quantitative, continuous variables (ordinal, interval, or ratio).
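As an illustration, matplotlib's hist function draws this kind of plot; a minimal sketch with made-up scores and an arbitrary bin count:

```python
import matplotlib.pyplot as plt

# Made-up interval-level scores
scores = [72, 75, 78, 80, 81, 83, 85, 85, 88, 90, 91, 94]

plt.hist(scores, bins=5, edgecolor="black")  # adjacent bars, one per bin
plt.xlabel("Score")
plt.ylabel("Frequency")
plt.title("Histogram of scores")
plt.show()
```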

4
Q

bar graph

A

A graph that displays nominal data. Each bar represents a distinct (noncontinuous) response, and the height of the bar indicates the frequency of that response.

5
Q

central tendency

A

Measures of the typical or middle value of the data set. Measures of central tendency include the mean, median, and mode.
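All three measures are available in Python's built-in statistics module; a minimal sketch with made-up data:

```python
import statistics

data = [2, 3, 3, 4, 5, 5, 5, 7]  # made-up scores

print("mean:  ", statistics.mean(data))    # arithmetic average
print("median:", statistics.median(data))  # middle value
print("mode:  ", statistics.mode(data))    # most frequent value
```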

6
Q

Variability

A

A measure of the spread in a distribution of scores or data points. The more dispersed the data points, the more variability a distribution has. The three main indicators of variability are range, standard deviation, and variance.

7
Q

interquartile range

A

The distance between the 75th percentile and the 25th percentile (i.e., the range of the middle 50% of the data). The interquartile range may be a more accurate estimate of variability when dealing with outliers or extreme values, as it eliminates the top and bottom quartiles.
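A minimal sketch of the computation using NumPy's percentile function, with made-up data that includes one extreme value:

```python
import numpy as np

data = [4, 5, 5, 6, 7, 7, 8, 9, 10, 42]  # 42 is an extreme value (made-up data)

q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1

print(f"25th percentile: {q1}, 75th percentile: {q3}, IQR: {iqr}")
print(f"range (max - min): {max(data) - min(data)}")  # inflated by the outlier
```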

8
Q

standard deviation

A

The most frequently reported indicator of variability for interval or ratio data. It represents the typical distance of scores from the mean and is equal to the square root of the variance.

9
Q

sum of squares (SS)

A

The sum of the squared deviation scores, computed by subtracting the mean from each data point (deviation scores), squaring each deviation score, and adding them together.
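The definition translates directly into code, and the sample variance and standard deviation follow from the SS; a minimal sketch with made-up scores:

```python
import math

data = [4, 6, 7, 9, 10]  # made-up scores
n = len(data)
mean = sum(data) / n

deviations = [x - mean for x in data]  # deviation scores
ss = sum(d ** 2 for d in deviations)   # sum of squares (SS)
variance = ss / (n - 1)                # sample variance
std_dev = math.sqrt(variance)          # sample standard deviation

print(f"SS = {ss:.2f}, variance = {variance:.2f}, SD = {std_dev:.2f}")
```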

10
Q

variance

A

A measure of variability equal to the standard deviation squared.

11
Q

Skewness

A

An asymmetrical distribution in which the data points do not cluster systematically around a mean. Distributions can be positively skewed, with a greater number of data points clustering around the lower end, or negatively skewed, with a greater number of data points clustering around the higher end of the distribution.

12
Q

Kurtosis

A

The degree of peakedness of a distribution. Distributions can be mesokurtic (normal curve), leptokurtic (tall and thin), and platykurtic (flat and wide).
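Both skewness (previous card) and kurtosis can be computed with SciPy; a minimal sketch with made-up, positively skewed data (scipy.stats.kurtosis reports excess kurtosis by default, so a normal curve is close to 0):

```python
from scipy import stats

data = [1, 2, 2, 3, 3, 3, 4, 4, 5, 12]  # made-up, positively skewed scores

print("skewness:", stats.skew(data))      # > 0 suggests a positive skew
print("kurtosis:", stats.kurtosis(data))  # excess kurtosis; > 0 is leptokurtic
```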

13
Q

Inferential Statistics

A

Statistical procedures that are used to draw inferences about a population from a sample.

14
Q

degrees of freedom

A

An important concept used in inferential statistics that refers to the number of values in a calculation that are "free to vary." Computing df depends on the statistical test used.
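A minimal sketch of a few common df formulas (the sample sizes and table dimensions below are made-up, illustrative values):

```python
# df for some common tests (illustrative formulas)
n = 30                      # sample size (made-up)
n1, n2 = 15, 15             # group sizes (made-up)
rows, cols = 2, 3           # contingency-table dimensions (made-up)

df_one_sample_t = n - 1                  # one-sample t-test
df_independent_t = n1 + n2 - 2           # independent-samples t-test
df_chi_square = (rows - 1) * (cols - 1)  # chi-square test of independence

print(df_one_sample_t, df_independent_t, df_chi_square)
```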

15
Q

Correlation coefficient

A

A numerical index that represents the relationship between two variables. Values range from -1.00 to +1.00, with +1.00 indicating a perfect positive relationship, -1.00 indicating a perfect negative relationship, and .00 indicating no relationship. Four commonly used types of correlation coefficients include (a) the Pearson product moment correlation coefficient (commonly referred to as Pearson r), (b) Spearman r (for comparing rank-order variables), (c) the biserial correlation coefficient (relating one continuous variable and one artificially dichotomized variable), and (d) the point biserial correlation coefficient (relating one continuous variable and one true dichotomous variable). A correlation provides information about the relationship between two variables, including whether there is a relationship at all, the direction of that relationship, and the strength of the relationship.
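Several of these coefficients are available in SciPy; a minimal sketch with made-up paired data:

```python
from scipy import stats

x = [2, 4, 5, 7, 8, 10]     # continuous variable (made-up)
y = [1, 3, 4, 6, 8, 9]      # continuous variable (made-up)
group = [0, 0, 0, 1, 1, 1]  # true dichotomous variable (made-up)

r, p = stats.pearsonr(x, y)                  # Pearson product moment r
rho, p_rho = stats.spearmanr(x, y)           # Spearman r (rank order)
rpb, p_rpb = stats.pointbiserialr(group, y)  # point biserial r

print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}, point biserial = {rpb:.2f}")
```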

16
Q

spurious correlation

A

Occurs when a correlation coefficient overrepresents or underrepresents the actual relationship between two variables.

17
Q

attenuation

A

A misleading correlation that occurs when unreliable measures indicate a lower relationship between two variables than actually exists.
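Spearman's classical correction for attenuation divides the observed correlation by the square root of the product of the two measures' reliabilities; a minimal sketch with made-up values:

```python
import math

r_observed = 0.40  # observed correlation between two measures (made-up)
rel_x = 0.70       # reliability of measure X (made-up)
rel_y = 0.80       # reliability of measure Y (made-up)

# Correction for attenuation: estimated correlation between the true scores
r_corrected = r_observed / math.sqrt(rel_x * rel_y)
print(f"corrected r = {r_corrected:.2f}")  # about 0.53
```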

18
Q

Regression studies

A

Used to predict an outcome (dependent or criterion variable) from one or more predictor variables (independent variables). The three types of regression are (a) bivariate regression (how well scores on a single predictor variable predict scores on the criterion variable), (b) multiple regression (involves more than one predictor variable, with each predictor weighted (beta weight) in a regression equation to determine its contribution to the criterion variable), and (c) logistic regression (used when the dependent variable is dichotomous; otherwise similar to a bivariate or multiple regression).
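For bivariate regression, scipy.stats.linregress fits the line in a single call; a minimal sketch with made-up predictor and criterion scores (multiple and logistic regression are typically run with packages such as statsmodels or scikit-learn):

```python
from scipy import stats

x = [1, 2, 3, 4, 5, 6]                # predictor (independent) variable, made-up
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2]   # criterion (dependent) variable, made-up

result = stats.linregress(x, y)
print(f"slope = {result.slope:.2f}, intercept = {result.intercept:.2f}")
print(f"r-squared = {result.rvalue ** 2:.2f}")  # proportion of variance explained
```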

19
Q

Nonparametric Statistics

A

Statistical tests that are used when researchers are only able to make a few assumptions about the distribution of scores in the underlying population. Specifically, their use is suggested when nominal or ordinal data are involved or when interval or ratio data are not distributed normally (i.e., are skewed).

20
Q

chi-square test

A

A nonparametric statistical test used to determine whether two or more categorical or nominal variables are statistically independent.
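A minimal sketch using scipy.stats.chi2_contingency on a made-up contingency table of observed counts:

```python
from scipy.stats import chi2_contingency

# Made-up 2x2 contingency table of observed counts
observed = [[30, 10],
            [20, 20]]

chi2, p, df, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {df}, p = {p:.3f}")
```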

21
Q

Mann-Whitney U test

A

A nonparametric statistical test that compares two groups on a variable that is ordinally scaled. This test is analogous to a parametric independent t-test.
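A minimal sketch using scipy.stats.mannwhitneyu with two made-up groups of ordinal ratings:

```python
from scipy.stats import mannwhitneyu

group_a = [3, 4, 2, 5, 4, 3, 5]  # made-up ratings
group_b = [1, 2, 2, 3, 1, 2, 3]  # made-up ratings

u, p = mannwhitneyu(group_a, group_b)
print(f"U = {u}, p = {p:.3f}")
```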

22
Q

Kolmogorov-Smirnov Z procedure

A

A nonparametric statistical test similar to the Mann-Whitney U test but more appropriate to use when samples are smaller than 25 participants.
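SciPy provides a two-sample Kolmogorov-Smirnov test (ks_2samp) that compares the two groups' distributions; a minimal sketch with made-up small samples:

```python
from scipy.stats import ks_2samp

group_a = [3, 4, 2, 5, 4, 3]  # made-up scores, small sample
group_b = [1, 2, 2, 3, 1, 2]  # made-up scores, small sample

statistic, p = ks_2samp(group_a, group_b)
print(f"KS statistic = {statistic:.2f}, p = {p:.3f}")
```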

23
Q

Kruskal-Wallis test

A

A nonparametric statistical test analogous to an ANOVA and used when there are three or more groups per independent variable as well as an ordinal-scaled dependent variable.
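A minimal sketch using scipy.stats.kruskal with three made-up groups of ordinal ratings:

```python
from scipy.stats import kruskal

group_a = [3, 4, 5, 4, 5]  # made-up ordinal ratings
group_b = [2, 3, 3, 2, 4]
group_c = [1, 2, 1, 2, 2]

h, p = kruskal(group_a, group_b, group_c)
print(f"H = {h:.2f}, p = {p:.3f}")
```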

24
Q

Wilcoxon’s signed-ranks test

A

A nonparametric statistical test equivalent to a dependent t-test; involves ranking the amount and direction of change for each pair of scores.
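A minimal sketch using scipy.stats.wilcoxon on made-up paired (pre/post) scores:

```python
from scipy.stats import wilcoxon

pre  = [10, 12, 9, 14, 11, 13]   # made-up scores before treatment
post = [11, 14, 12, 18, 16, 19]  # made-up scores after treatment

stat, p = wilcoxon(pre, post)
print(f"W = {stat}, p = {p:.3f}")
```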

25
Q

Friedman’s rank test

A

A nonparametric statistical test similar to Wilcoxon’s signed-ranks test in that it is designed for repeated measures. It may be used with more than two comparison groups.
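A minimal sketch using scipy.stats.friedmanchisquare on made-up repeated measures from three time points:

```python
from scipy.stats import friedmanchisquare

# Made-up ratings from the same participants at three time points
time_1 = [4, 5, 3, 4, 5, 4]
time_2 = [3, 4, 3, 3, 4, 3]
time_3 = [2, 3, 2, 3, 3, 2]

stat, p = friedmanchisquare(time_1, time_2, time_3)
print(f"chi-square = {stat:.2f}, p = {p:.3f}")
```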

26
Q

Factor Analysis

A

A statistical procedure used to reduce a larger number of variables (often items on an assessment) to a smaller number of factors (groups of related variables). The two forms of factor analysis are (a) exploratory factor analysis (EFA), which involves an initial examination of potential models (or factor structures) that best categorize the variables, and (b) confirmatory factor analysis (CFA), which involves confirming the factor structure identified through the EFA.
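An exploratory pass can be sketched with scikit-learn's FactorAnalysis on made-up item scores (dedicated packages such as factor_analyzer add rotation and fit statistics); a minimal sketch:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
items = rng.normal(size=(100, 6))  # made-up scores: 100 respondents, 6 items

fa = FactorAnalysis(n_components=2)  # try a two-factor model
fa.fit(items)

print(fa.components_.shape)  # (2, 6): loadings of each item on each factor
```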

27
Q

Meta-Analysis

A

Involves statistically combining and comparing the results across several similar studies for particular outcome or dependent variables.
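One common approach is a fixed-effect, inverse-variance weighted average of the study effect sizes; a minimal sketch with made-up effect sizes and variances:

```python
# Made-up effect sizes (e.g., Cohen's d) and their variances from three studies
effects = [0.30, 0.50, 0.20]
variances = [0.04, 0.02, 0.05]

weights = [1 / v for v in variances]  # inverse-variance weights
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)

print(f"pooled effect size = {pooled:.2f}")
```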

28
Q

effect size

A

A measure of the strength of the relationship between two variables in a population.
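Cohen's d is one widely used effect size for the difference between two group means; a minimal sketch with made-up, equal-sized groups (pooled standard deviation in the denominator):

```python
import statistics

group_a = [10, 12, 11, 14, 13, 12]  # made-up scores
group_b = [8, 9, 10, 9, 11, 10]     # made-up scores

mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
sd_a, sd_b = statistics.stdev(group_a), statistics.stdev(group_b)
pooled_sd = ((sd_a ** 2 + sd_b ** 2) / 2) ** 0.5  # pooled SD for equal group sizes

d = (mean_a - mean_b) / pooled_sd
print(f"Cohen's d = {d:.2f}")
```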