Psychology - Research Methods - Year 2 Flashcards


1
Q

What is science?

A

A systematic approach to creating knowledge

2
Q

What is the scientific method?

A

The method used to gain scientific knowledge. In the inductive model, observations lead to the development of a hypothesis, which is tested and then used to propose a theory; in the deductive model, the theory is constructed near the beginning (following initial observations), and hypotheses are then derived from it and tested

3
Q

What five key features is scientific knowledge based on?

A

Empirical methods, objectivity, replicability, theory construction and hypothesis testing

4
Q

What are empirical methods?

A

Information gained through direct observation or experiment rather than from unfounded beliefs or reasoned argument. People can make claims about anything, so the only way to know whether such claims are true is through direct testing, ie gathering empirical evidence

5
Q

What is objectivity?

A

Empirical data should be objective, ie not affected by the expectations of the researcher, so carefully controlled conditions are needed

6
Q

What is replicability?

A

Whether a study can be repeated and produce the same outcome, so every aspect of the procedure has to be precise and recorded (different people are used in replications, so this demonstrates validity rather than reliability)

7
Q

What is theory construction?

A

Explanations/theories must be constructed to make sense of the facts that have been found. Theories are collections of general principles that explain observations and facts. Both inductive and deductive methods are used

8
Q

What is hypothesis testing?

A

How theories are tested for validity and modified. Testable expectations are stated in a hypothesis; if the hypothesis fails, the theory is modified

9
Q

What is falsifiability?

A

(Popper) A truly scientific hypothesis or theory must be falsifiable: there must be the possibility of it being proved wrong (it can never be proved right), and the search to falsify it is what leads to new research, so science is constantly evolving

10
Q

What are paradigms?

A

(Kuhn) Scientific theories are not constantly changed or updated. A paradigm is a unified set of assumptions and methods accepted by everyone in that scientific community. One theory is treated as correct until disconfirming evidence builds up and overthrows the original theory; this is a paradigm shift

11
Q

What are the three types of experimental design?

A

Independent groups (different groups do different conditions); repeated measures (the same participants do all conditions); matched pairs (participants are matched in pairs on important characteristics, then one of each pair is placed in condition one and the other in condition two)

12
Q

What is a type one error?

A

Results that have in fact occurred due to chance: the null hypothesis is rejected and the experimental hypothesis accepted when the null hypothesis should have been accepted (a false positive)

13
Q

What is a type two error?

A

Results that have not in fact occurred by chance: the null hypothesis is accepted and the experimental hypothesis rejected when the experimental hypothesis should have been accepted (a false negative)

14
Q

What level of significance is most commonly used in psychology and why?

A

0.05 (5%) because it gives an acceptable likelihood of making a type 1 error while balancing the risk of a type 2 error: if the level were pushed towards 0%, a type 2 error would become far more likely

15
Q

What does P ≤ 0.05 mean?

A

The probability that the results are due to chance is less than or equal to 5%, which means there is at most a 5% chance of making a type 1 error, ie rejecting the null hypothesis when it should have been accepted
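
As a minimal illustration of this decision rule (the p-value below is invented, and using Python is an added assumption, not part of the course):

    # Hypothetical p-value from a statistical test, compared with the 0.05 level
    alpha = 0.05
    p_value = 0.03   # invented for illustration

    if p_value <= alpha:
        print("Significant: reject the null hypothesis (accepting a 5% risk of a type 1 error)")
    else:
        print("Not significant: retain the null hypothesis")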

16
Q

What are the four types of data used when deciding which statistical test to use?

A

Nominal, ordinal, interval and ratio

17
Q

What is nominal data?

A

Data that fits into categories/no numerical value/can be tallied, eg girl or boy, yes or no, obeyed or did not obey

18
Q

What is ordinal data?

A

Numerical data that is ranked or ordered eg rating scales, position in a race etc

19
Q

What is interval data?

A

Numerical data with equal gaps between each score eg temperature and IQ score

20
Q

What is ratio data?

A

Same as interval data but with a true zero point, eg weight, height, commuting distance

21
Q

What are the statistical tests?

A

Pearson's r, related t-test, unrelated t-test, chi-squared, sign test, Spearman's rho, Wilcoxon, and Mann-Whitney

22
Q

What is parametric data?

A

Interval data (equal gaps between values), drawn from a population with a normal distribution (which covers most psychological and physical characteristics), and with no big difference in variance between the sets of scores

23
Q

When do you use Pearson’s r test?

A

When the data is parametric (interval), related, and the test is a correlation

24
Q

When do you use a related t-test?

A

When data is parametric (interval), test of difference, and repeated measures

25
Q

When do you use an unrelated t-test?

A

When data is parametric, test of difference and independent groups

26
Q

When do you use a chi-squared test?

A

When data is nominal and the design is independent (unrelated data)

27
Q

When do you use a sign test?

A

Nominal data, repeated measures
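
SciPy has no dedicated sign-test function, but as a hedged sketch (invented scores; scipy 1.7+ assumed), the same result can be obtained by counting the positive and negative differences and running a binomial test on them:

    from scipy.stats import binomtest

    # Invented before/after scores for the same participants (repeated measures)
    before = [3, 5, 2, 6, 4, 5, 3, 4]
    after  = [5, 6, 2, 7, 6, 4, 5, 6]

    # Keep only non-zero differences (ties are dropped in a sign test)
    diffs = [post - pre for pre, post in zip(before, after) if post != pre]
    n_plus = sum(1 for d in diffs if d > 0)

    # Two-tailed sign test: under the null, + and - signs are split 50/50
    result = binomtest(n_plus, n=len(diffs), p=0.5)
    print("positive signs:", n_plus, "of", len(diffs), "p =", result.pvalue)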

28
Q

When do you use Spearman's rho test?

A

Ordinal data, correlation and related

29
Q

When do you use a Wilcoxon test?

A

Ordinal data, test of difference and repeated measures

30
Q

When do you use Mann-Whitney test?

A

Ordinal data, test of difference and independent groups
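
The cards above give the full decision grid. Purely as an optional sketch (the exam expects this choice to be made by hand; the scipy.stats function names are an added assumption, not part of the course), the grid can be written as a small lookup table:

    # (level of measurement, design) -> appropriate statistical test
    # The scipy.stats names in brackets are an assumption for anyone who wants
    # to run the tests in Python; they are not part of the A-level specification.
    CHOOSE_TEST = {
        ("nominal",  "independent groups"): "chi-squared (scipy.stats.chi2_contingency)",
        ("nominal",  "repeated measures"):  "sign test (scipy.stats.binomtest on the signs)",
        ("ordinal",  "independent groups"): "Mann-Whitney (scipy.stats.mannwhitneyu)",
        ("ordinal",  "repeated measures"):  "Wilcoxon (scipy.stats.wilcoxon)",
        ("interval", "independent groups"): "unrelated t-test (scipy.stats.ttest_ind)",
        ("interval", "repeated measures"):  "related t-test (scipy.stats.ttest_rel)",
        ("ordinal",  "correlation"):        "Spearman's rho (scipy.stats.spearmanr)",
        ("interval", "correlation"):        "Pearson's r (scipy.stats.pearsonr)",
    }

    print(CHOOSE_TEST[("ordinal", "repeated measures")])   # -> Wilcoxon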

31
Q

What are the three stages of a quantitative content analysis?

A

Categorise (create a list of specific behavioural categories), count (count the number of times each category appears and record it) and compare (compare the two examples)
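
As a small sketch of the counting stage only (the behavioural categories and coded units below are invented for illustration), a tally can be produced in Python:

    from collections import Counter

    # Invented coding of one transcript against pre-defined behavioural categories
    coded_units = ["aggression", "helping", "aggression", "ignoring",
                   "helping", "aggression"]

    tally = Counter(coded_units)   # how often each category appears
    print(tally)                   # Counter({'aggression': 3, 'helping': 2, 'ignoring': 1})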

32
Q

What are the three stages of qualitative content analysis?

A

Compress (compress each item/response into a briefer statement), categorise (group similar items together), and categorise again (combine these groups into larger units)

33
Q

What are the advantages of content analysis?

A

High ecological validity, as it is based on observations of what really happens. Reliability is also easy to check, because the sources can be retained/accessed by others, so the analysis can be replicated and re-tested

34
Q

What are the disadvantages of content analysis?

A

Cultural bias as interpretations are affected by the language/culture of the observer and behavioural categories. Also observer bias reduces objectivity/validity because different observers may interpret the meaning of the behavioural categories differently

35
Q

What is the process of thematic analysis?

A

1. Read and reread the data, trying to understand the meaning communicated and the perspective of the person involved 2. Break the data into meaningful units (small pieces of text that can independently convey meaning) 3. Assign a label or code to each unit 4. Combine simple codes into larger categories/themes 5. Check the themes are appropriate by applying them to a new piece of data and confirming that they still work

36
Q

What are case studies?

A

A research method that involves a detailed study of a single individual, institution or event; case studies are often longitudinal

37
Q

What are the strengths of case studies?

A

Often detailed insight is gained and can investigate rare situations/circumstances.

38
Q

What are the weaknesses of case studies?

A

It can sometimes be difficult to generalise from a single case, and there can be ethical and confidentiality issues involved

39
Q

What does reliability refer to?

A

Consistency of a measurement/result

40
Q

What is inter rater reliability?

A

Two or more people observing the same person should record the same behaviour

41
Q

What is test retest reliability?

A

The same test or interview given to the same participants on two occasions should produce the same results

42
Q

What is internal reliability?

A

All measures within a test should be measuring the same thing

43
Q

What type of correlation would be ideal between the two sets of scores from two different observers?

A

A strong positive correlation (+0.8 or above), as this would show the two observers recorded very similar results
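
As an illustrative sketch of this check (the observers' tallies below are invented, and using Python's scipy.stats is an assumption rather than part of the course), the correlation between two observers' scores can be calculated directly:

    from scipy.stats import spearmanr

    # Invented tallies from two observers rating the same 6 participants
    observer_1 = [4, 7, 2, 5, 6, 3]
    observer_2 = [5, 7, 1, 5, 6, 4]

    rho, p = spearmanr(observer_1, observer_2)
    print(f"inter-rater correlation rho = {rho:.2f}, p = {p:.3f}")
    print("acceptable agreement" if rho >= 0.8 else "categories/training need improving")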

44
Q

How can inter rater reliability be improved?

A

Standardised behavioural categories and training

45
Q

What is meant by standardised behavioural categories?

A

Clear and operationalised behavioural categories

46
Q

How can training and categories improve reliability?

A

Training, eg with videos, shows observers what to look for, so they can practise and confirm their understanding of the categories, meaning different observers get the same results

47
Q

How is test retest reliability calculated?

A

The questionnaire or interview is given to a group; a few weeks later the same test is given to the same group. There should be a strong positive correlation between the two sets of scores, ideally r = 1
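
A hedged sketch of the same check in Python (the questionnaire scores are invented; scipy is assumed, not part of the course):

    from scipy.stats import pearsonr

    # Invented questionnaire scores for the same 6 people, a few weeks apart
    first_test  = [22, 31, 18, 27, 25, 30]
    second_test = [21, 33, 17, 28, 24, 29]

    r, p = pearsonr(first_test, second_test)
    print(f"test-retest correlation r = {r:.2f} (ideally close to 1), p = {p:.3f}")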

48
Q

How was reliability increased in Asch’s study?

A

Study had clear standardised procedure making it easy to reproduce eg standardised instructions and timings

49
Q

How was reliability decreased in Asch’s study?

A

Other researchers have replicated Asch’s method with different participants at different times and couldn’t replicate findings eg Perrin and Spencer

50
Q

What does validity refer to?

A

Whether you are measuring what you intend to measure

51
Q

What is internal validity?

A

Concerns whether a measuring instrument measures what it sets out to measure

52
Q

What is external validity?

A

The extent to which the conclusions from your research study can be generalised to the people outside of your study

53
Q

What factors can reduce internal validity?

A

Investigator effects, poor operationalisation, social desirability, confounding variables, demand characteristics, and individual differences

54
Q

What is face validity (assessing validity)?

A

Not the deepest measure of assessing validity. Does the test look like it tests what it is supposed to? If not, it can be improved by changing this

55
Q

What is concurrent validity (assessing validity)?

A

A slightly deeper measure: does it concur? Does this measure agree with a previously established measure? If not, validity can be improved by changing this

56
Q

What is construct validity (assessing validity)?

A

Refers to how well a test or tool measures the construct that it was designed to measure. In other words, to what extent does the BDI (Beck Depression Inventory) actually measure depression? If not, it can be improved by changing this

57
Q

What is predictive validity (assessing validity)?

A

The extent to which a score on a scale or test predicts scores on some criterion measure. For example, the validity of a cognitive test for job performance is the correlation between test scores and, for example, supervisor performance ratings. If not, it can be improved by changing this

58
Q

What can external validity be broken down into?

A

Ecological validity, temporal validity and population validity

59
Q

What is temporal validity?

A

Refers to the notion that the results from a study are relevant at all points in time and will therefore be replicable at any point in time

60
Q

What is population validity?

A

Refers to the notion that the results from the sample involved are representative of the whole population studied and can therefore be generalised

61
Q

Why did Asch’s study have low temporal and population validity?

A

Completed at the time of McCarthyism, which caused high conformity levels, so results may differ today (Perrin and Spencer). Also, all participants were US male undergraduate students, so the sample was not representative of the whole population

62
Q

What is ecological validity?

A

Refers to being able to generalise the findings from a study to other situations, specifically ‘everyday life’

63
Q

How can it be determined whether a study has high levels of ecological validity?

A

You have to look at how the dependent variable is measured and whether this is realistic to real life, rather than just the setting. Eg Godden and Baddeley's deep-sea diver experiment had low ecological validity as it lacked mundane realism (the DV was measured with word lists) and the participants were aware they were being studied, so may not have behaved 'naturally'

64
Q

What are the non-parametric tests of difference?

A

Wilcoxon test for related designs and Mann-Whitney test for unrelated designs

65
Q

What is the process for a Wilcoxon test for related designs?

A
1. State the hypothesis 2. Place raw data in a table 3. Find the differences and rank them 4. Find the calculated value of T 5. Is the result in the right direction? 6. Find the critical value of T 7. Report the conclusion
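
At A-level the calculated value of T is found by hand and compared with a critical-value table. Purely as an optional sketch (invented scores; scipy assumed), the same test can be run as:

    from scipy.stats import wilcoxon

    # Invented repeated-measures scores (same participants in both conditions)
    condition_1 = [12, 15, 9, 14, 11, 13, 10, 16]
    condition_2 = [14, 16, 11, 17, 12, 15, 13, 18]

    t_stat, p = wilcoxon(condition_1, condition_2)
    print(f"T = {t_stat}, p = {p:.3f}")   # significant if p <= 0.05
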
66
Q

What is the process for a Mann-Whitney test for unrelated designs?

A
1. State the hypothesis 2. Place raw data in a table 3. Rank each data set 4. Add up each set of ranks 5. Find the calculated value of U 6. Is the result in the right direction? 7. Find the critical value of U 8. Report the conclusion
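
Again as an optional sketch only (invented scores; scipy assumed), rather than the hand calculation the exam expects:

    from scipy.stats import mannwhitneyu

    # Invented independent-groups scores (different participants per condition)
    group_a = [3, 5, 4, 6, 2, 5, 4]
    group_b = [6, 7, 5, 8, 6, 7, 9]

    u_stat, p = mannwhitneyu(group_a, group_b, alternative="two-sided")
    print(f"U = {u_stat}, p = {p:.3f}")   # significant if p <= 0.05
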
67
Q

What are the parametric tests of difference?

A

Related T-test and unrelated t-test

68
Q

What is the process for a related t-test?

A
1. State the hypotheses 2. Place raw data in a table 3. Find the calculated value of t 4. Is the result in the right direction? 5. Find the critical value of t 6. Report the conclusion
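
A hedged sketch of the same test in Python (invented interval-level scores; scipy assumed):

    from scipy.stats import ttest_rel

    # Invented scores from the same participants in two conditions (repeated measures)
    condition_1 = [23.1, 20.4, 25.0, 22.8, 24.3, 21.7]
    condition_2 = [25.2, 21.0, 26.9, 24.1, 25.8, 23.0]

    t_stat, p = ttest_rel(condition_1, condition_2)
    print(f"t = {t_stat:.2f}, p = {p:.3f}")   # significant if p <= 0.05
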
69
Q

What is the process for an unrelated t-test?

A
1. State the hypotheses 2. Place raw data in a table 3. Find the calculated value of t 4. Is the result in the right direction? 5. Find the critical value of t 6. Report the conclusion
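
And the unrelated version, again only as an illustrative sketch (invented scores; scipy assumed):

    from scipy.stats import ttest_ind

    # Invented interval-level scores from two separate groups of participants
    group_a = [54.0, 61.5, 48.2, 57.3, 59.9, 52.4]
    group_b = [63.1, 66.0, 58.7, 64.5, 61.2, 67.8]

    t_stat, p = ttest_ind(group_a, group_b)   # default assumes roughly equal variances
    print(f"t = {t_stat:.2f}, p = {p:.3f}")
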
70
Q

What are the tests of correlation?

A

Spearman’s rho (non-parametric) and Pearson’s R (parametric)

71
Q

What is the process for a Spearman’s rho test?

A
1. State the hypotheses 2. Place raw data in a table 3. Find the calculated value of rho 4. Is the result in the right direction? 5. Find the critical value of rho 6. Report the conclusion
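
As an optional sketch (invented ratings; scipy assumed) of the same correlation test:

    from scipy.stats import spearmanr

    # Invented ordinal data: two ratings per participant (correlational design)
    rating_x = [2, 5, 3, 7, 6, 4, 8]
    rating_y = [1, 6, 3, 8, 5, 4, 7]

    rho, p = spearmanr(rating_x, rating_y)
    print(f"rho = {rho:.2f}, p = {p:.3f}")
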
72
Q

What is the process for a Pearson's r test?

A
1. State the hypotheses 2. Place raw data in a table 3. Find the calculated value of r 4. Is the result in the right direction? 5. Find the critical value of r 6. Report the conclusion
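
A hedged sketch with invented interval data (scipy assumed; the variables are hypothetical):

    from scipy.stats import pearsonr

    # Invented interval data: two measurements per participant (correlational design)
    hours_slept = [6.0, 7.5, 5.0, 8.0, 6.5, 7.0]
    test_score  = [58, 72, 49, 80, 63, 70]

    r, p = pearsonr(hours_slept, test_score)
    print(f"r = {r:.2f}, p = {p:.3f}")
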
73
Q

What is the process for a chi-squared test?

A
1. State the hypotheses 2. Place raw data in a contingency table 3. Find the observed (calculated) value of X² 4. Find the critical value of X² 5. State the conclusion
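
As a final optional sketch (the contingency-table tallies are invented; scipy assumed):

    from scipy.stats import chi2_contingency

    # Invented 2x2 contingency table of tallies (nominal data, independent groups)
    #               obeyed  did not obey
    observed = [[30,      10],    # condition 1
                [18,      22]]    # condition 2

    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi-squared = {chi2:.2f}, df = {dof}, p = {p:.3f}")
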
74
Q

How are investigations reported?

A

Studies are written up and published in peer-reviewed academic journals for everyone to read. Original reports can be accessed online and are almost always organised in this way: abstract, introduction, method, results, discussion, references

75
Q

What is the abstract section of journal articles?

A

A summary of the study covering the aims, hypothesis, the method (procedures), results and conclusions (including implications of the current study). This is usually about 150-200 words in length and allows the reader to get a quick picture of the study and its results

76
Q

What is the introduction section of journal articles?

A

The introduction begins with a review of previous research (theories and studies), so the reader knows what other research has been done and understands the reasons for the current study. The focus of this research review should lead logically to the study being conducted, so the reader is convinced of the reasons for this particular research. The introduction should be like a funnel, starting broadly and narrowing down to the particular research hypothesis. The researcher states their aims, research prediction and/or hypothesis

77
Q

What is the method section of journal articles?

A

It contains a detailed description of what the researcher did, providing enough information for the study to be replicated. It includes: design (eg repeated measures or covert observation; the design decisions should be justified); participants (sampling method, how many participants were involved, and details such as age and job); apparatus/materials (a description of any materials used); procedure (including standardised instructions, the testing environment and the order of events); and ethics (significant ethical issues may be mentioned, as well as how they were dealt with)

78
Q

What is the results section of journal articles?

A

Details given about what the researcher found, including descriptive statistics (tables and graphs showing frequencies and measures of central tendency and dispersion), inferential statistics (statistical tests are reported, including calculated values and significance level), and in the case of qualitative research, categories and themes are described along with examples within these categories

79
Q

What is the discussion section of journal articles?

A

The researcher aims to interpret the results of the study, consider their implications for future research and suggest real-world applications. It includes: a summary of the results (reported in brief with some explanation of what they show); the relationship to previous research (results are discussed in relation to the research reported in the introduction, and possibly other research as well); consideration of methodology (criticisms may be made of the methods used in the study, and improvements suggested); implications for psychological theory and possible real-world applications; and suggestions for future research

80
Q

What is the references section of journal articles?

A

Full details of any journal articles or books mentioned in the research report are given. Format for journal articles is generally: Last name, First initial. (Year published). Article title. Journal, Volume (Issue), Page(s). If it is a book: Last name, First Initial. (Year published). Title. City: Publisher, Page(s)

81
Q

How do you assess the reliability of observational techniques?

A

Repeat observations. The second set of recorded results should be more or less the same as the first set of recorded results. However if the observer is biased, then this would not accurately assess the reliability, and so inter-observer reliability can be used

82
Q

How do you improve the reliability of observational techniques?

A

Behavioural categories (operationalise them or allow some observers more practice using the categories if they need it, so they can be quicker to respond)

83
Q

How do you assess the reliability of self-report techniques?

A

Test-retest reliability for psychological tests or other self-report measures; inter-interviewer reliability for interviews

84
Q

How do you improve the reliability of self-report techniques?

A

Reduce ambiguity so test items are not interpreted in many different ways

85
Q

How do you assess the reliability of experiments?

A

Reliability in an experiment may be concerned with whether the method used to measure the dependent variable is consistent, ie the observation or self-report method used. (The same is true of how the co-variables in correlations are measured)

86
Q

How do you improve the reliability of experiments?

A

Standardisation so results/performance of participants can be compared. Standardise procedure/instructions etc
