Research Methods and Study Design Flashcards (MCAT Psychology/Sociology)

Flashcards in Research Methods and Study Design Deck (74)
1

Experimental Design

the technical term for the structured process of testing a hypothesis by manipulating an independent variable and measuring its effect on a dependent variable under controlled conditions

2

Steps to good experimental design

1) select the population
2) operationalize the independent and dependent variables
3) carefully select the control and experimental groups
4) randomly sample from the population
5) randomly assign individuals to groups
6) measure the results
7) test the hypothesis

3

1) Selecting the population

- Objective: determine the population of interest and consider what group will be pragmatic to sample
- Common Flaws: the population is too restrictive, sampling all individuals of interest is not practical

4

2) Operationalize variables

- Objective: determine the independent and dependent variables, specify exactly what is meant by each, make sure the dependent variable can be measured quantitatively within the parameters of the study
- Common Flaws: insufficient rigor in the description, manipulation of the independent variable presents practical problems

5

Dependent Variable

variable that is measured

6

Independent Variable

variable manipulated by the research team

7

Operational definition

a precise specification of what researchers mean by each variable in a study

8

Reproducibility

a quality of good experimental design: the experiment can be reproduced by other researchers

9

Quantitative

numerical

10

Qualitative

descriptive

11

3) Divide into groups

- Objective: carefully select experimental and control groups, homogenize the two groups, isolate the treatment by controlling for potential extraneous variables
- Common Flaws: control group does not resemble treatment along important variables, the experiment is not double-blind, participants can guess the experiment allowing a placebo effect to occur

12

Experimental Group

group of participants that receives treatment

13

Control group

group of participants that acts as a point of reference and comparison

14

Homogeneous

a control group that is the same throughout and as similar as possible to the experimental group except for the treatment

15

Extraneous (or confounding) variables

variables other than the treatment that could potentially explain the results of an experiment

16

Placebo effect

measurable results produced merely by participants' belief that the treatment is being administered

17

Double blind

neither the person administering the treatment nor the participants know whether they are assigned to the treatment or control group

18

4) Random sampling

- Objective: make sure all members of the population are represented, ideally each member has an equal chance of being selected, meeting these criteria is often not possible for practical reasons
- Common Flaws: sampling is not truly random, sample does not represent the population of interest
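Simple random sampling can be sketched in a few lines of Python; the population of participant IDs and the sample size below are made up for illustration.

```python
import random

# Hypothetical population of 1000 participant IDs
population = [f"P{i:04d}" for i in range(1000)]

random.seed(42)  # fixed seed so the sketch is reproducible

# random.sample draws without replacement: every member has an
# equal chance of being selected, and no one is picked twice
sample = random.sample(population, k=100)

assert len(set(sample)) == 100
assert all(p in population for p in sample)
```

In practice the hard part is the sampling frame, not the draw: if the list you sample from excludes part of the population of interest, the result is sampling bias no matter how random the draw itself is.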

19

Sampling bias

occurs when all members of a population are not equally likely to be sampled

20

Selection bias

a more general category of systematic flaws in a design that can compromise results; another example is purposefully selecting which studies to evaluate in a meta-analysis

21

Meta-analysis

big-picture analysis of many studies to look for trends in the data

22

Attrition

another type of selection bias; occurs when participants drop out of the study. If dropout is non-random, it can introduce an extraneous variable

23

5) Random assignment

- Objective: individuals who have been sampled are equally likely to be assigned to treatment or control, consider matching along pre-selected potential extraneous variables
- Common Flaws: groups are not properly matched, assignment is not perfectly random
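A minimal sketch of random assignment, assuming a hypothetical sample of 20 participants: shuffle the sampled individuals, then split them, so each person is equally likely to end up in treatment or control.

```python
import random

random.seed(0)
sample = [f"P{i:02d}" for i in range(20)]  # hypothetical sampled participants

# Shuffle a copy, then split in half: assignment to treatment vs.
# control is now random and independent of any participant trait
shuffled = sample[:]
random.shuffle(shuffled)
treatment = shuffled[: len(shuffled) // 2]
control = shuffled[len(shuffled) // 2 :]

assert len(treatment) == len(control) == 10
assert set(treatment) | set(control) == set(sample)
```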

24

Randomized block technique

researchers evaluate where participants fall along the variables they wish to equalize across experimental and control groups, then randomly assign individuals from these blocks so that the treatment and control groups are similar along the variables of interest
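The randomized block technique can be sketched as "group, then randomly split within each group." The blocking variable (age group) and the participant list below are hypothetical.

```python
import random
from collections import defaultdict

random.seed(1)

# Hypothetical participants tagged with a blocking variable (age group)
participants = [
    ("P01", "young"), ("P02", "young"), ("P03", "young"), ("P04", "young"),
    ("P05", "old"), ("P06", "old"), ("P07", "old"), ("P08", "old"),
]

# 1) Block: group participants by the variable to equalize
blocks = defaultdict(list)
for pid, age in participants:
    blocks[age].append(pid)

# 2) Within each block, randomly split between treatment and control,
#    so both groups end up with the same mix of young and old
treatment, control = [], []
for members in blocks.values():
    random.shuffle(members)
    half = len(members) // 2
    treatment += members[:half]
    control += members[half:]

assert len(treatment) == len(control) == 4
```

Compared with plain random assignment, this guarantees that the treatment and control groups are balanced along the pre-selected variable rather than merely balanced on average.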

25

6) Measurement

- Objective: make sure measurements are standardized, make sure instruments are reliable
- Common Flaws: tools are not precise enough to pick up a result, instruments used for measurements are not reliable

26

Reliability

the quality of instruments that produce stable and consistent results: they measure what they are supposed to measure (construct validity), and repeated measurements lead to similar results (replicability)
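One common way to check the replicability half of reliability is test-retest correlation: administer the same measure twice and correlate the scores. The scores below are invented for illustration; a Pearson correlation near 1 suggests a stable instrument.

```python
# Hypothetical scores from two administrations of the same measure
test1 = [10, 12, 14, 15, 18, 20]
test2 = [11, 12, 13, 16, 17, 21]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(test1, test2)
assert 0.9 < r < 1.0  # high test-retest correlation: consistent results
```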

27

Psychometrics

study of how to measure psychological variables through testing

28

Response bias

another concern with surveys; the tendency for respondents to lack perfect insight into their own state and so provide inaccurate responses

29

Between-subjects design

comparisons are made between subjects in different groups

30

Within-subjects design

compares the same group of subjects at different time points