Validity and Reliability Flashcards

1
Q

what is validity?

what is it dependant on?

A

the ability of a test to measure accurately; the degree to which a test measures what it purports to measure
→ dependent on reliability, relevance and appropriateness of scores

2
Q

what is reliability?

A

the consistency or repeatability of an observation, the degree to which repeated measurements of a trait are reproducible under the same conditions

3
Q

4 common types of validity evidence

A
  1. construct validity
  2. logical validity
  3. criterion validity
  4. convergent validity
4
Q

all types of validity can be estimated either _______ or _______

A

logically, statistically

5
Q

the test effectively measures the desired construct

A

construct validity

6
Q

the measure obviously involves the performance being measured
- no statistical evidence is required

A

logical/face validity

7
Q

degree to which scores on a test are related to a recognized standard or criterion (gold standard)

A

criterion validity

8
Q

AKA statistical or correlation validity

A

criterion validity

9
Q

how is criterion validity obtained?

A

by determining the correlation/ validity coefficient (r) between scores for a test and the criterion measure
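The validity coefficient described above is an ordinary Pearson correlation between the test scores and the criterion scores. A minimal sketch in Python (the paired scores below are invented for illustration):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical data: field-test scores vs. gold-standard criterion scores
test = [12.0, 15.5, 11.2, 18.3, 14.1]
criterion = [13.1, 16.0, 11.8, 19.0, 14.5]
print(round(pearson_r(test, criterion), 3))
```

A coefficient near 1.0 indicates the test tracks the criterion closely; near 0 indicates little validity evidence.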

10
Q

The criterion is measured at approximately the same time as the alternate measure and the scores are compared
- ex?

A

concurrent validity

ex, skin folds and hydrostatic weighing

11
Q

the criterion is measured in the future (weeks, months, or years later)
- ex?

A

predictive validity

ex, a pre-selection test battery score and later selection success

12
Q

2 or more measurements are conducted to collect data and establish that a test (battery) is measuring what it purports to measure

A

convergent validity

13
Q

the consistency or repeatability of an observation

- the degree to which repeated measurements of a trait are reproducible under the same conditions

A

reliability

14
Q

how do you calculate reliability?

A

test–retest scores are used to calculate the reliability coefficient

e.g., r = 0.99 for sit and reach (very high reliability)

15
Q

3 types of reliability

A
  1. stability reliability
  2. internal - consistency reliability
  3. objectivity
16
Q

what is stability reliability ?

A

when scores do not change across days

→ look at the relationship between multiple trials across multiple days

17
Q

3 factors that contribute to low stability

A
  1. the people tested may perform differently
  2. the measuring instruments may operate or be applied differently
  3. the person administering the measurement may change
18
Q

what is internal consistency reliability ?

A

evaluator gives at least 2 trials of the test within a single day

19
Q

the internal-consistency reliability coefficient is not comparable to the stability reliability coefficient; the I-C coefficient is almost always ___?

A

higher

20
Q

what is objectivity reliability?

A

rater/judge reliability

- inter-tester reliability

21
Q

2 factors affecting objectivity

A
  1. the clarity of the scoring system
  2. the degree to which the ‘judge’ can assign a score accurately

22
Q

9 considerations for reducing measurement error

A
  1. valid and reliable test
  2. instructions
  3. test complexity
  4. warm up and test trials (learning effect - may need 5 trials)
  5. equipment quality and preparation (e.g., calibration)
  6. testing environment
  7. scoring accuracy
  8. experience and state of mind of person conducting the test
  9. state of mind of person being tested
23
Q

Why calibrate?

How calibrate?

A

important to confirm the accuracy of what the equipment is telling you

  • requires comparison between two measurements (one of known magnitude and one of unknown magnitude that needs to be confirmed)
  • check that equipment is up to date and functioning properly
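The comparison against a known magnitude can be expressed as a simple percent-error check; a minimal sketch (the 20 kg reference mass and the scale reading are hypothetical values):

```python
def calibration_error(known_value, measured_value):
    """Percent error of an instrument reading against a known reference."""
    return 100.0 * (measured_value - known_value) / known_value

# Hypothetical check: a certified 20.0 kg mass read by a scale as 20.3 kg
err = calibration_error(20.0, 20.3)
print(f"{err:+.1f}%")  # positive = instrument reads high
```

If the error exceeds the tolerance stated by the manufacturer, the instrument should be adjusted before testing.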
24
Q

how often do you calibrate?

A

at least every 6 months, or according to manufacturer’s guidelines

25
Q

reliability can be expected when? (3)

A
  1. the testing environment is favorable to good performance
  2. people are motivated, ready to be tested, and familiar with the test
  3. the person administering the test is trained and competent