Exam Review Flashcards

1
Q

Theory

The credibility of a theory depends on the extent to which:

A
  1. Observations empirically support it
    - and -
  2. Its components are systematically organized in a logical fashion that helps us better understand the world
2
Q

Contribution of Research to Theory

A
  1. Tests existing theory
  2. Clarifies concepts
  3. Initiates, reformulates, or refocuses theory
  4. Deflects theory
  5. Serendipity
  6. Reduces intuitive decisions
3
Q

Hypothesis

A

Proposition deduced from theory or experience; permits deductions that may be empirically verified

A prediction of how specific processes and classes of phenomena covered in a theory will interact

4
Q

Hypothesis May State:

A
  1. A relationship exists (the form we mostly use here)
  2. What the relationship is: greater or less (high, low)
  3. An explanation of the relationship: under such-and-such circumstances, this will happen
5
Q

A Hypothesis Not Validated May Be Due To:

A
  1. Inadequate theory
  2. Hypothesis improperly stated
  3. Wrong phenomenon looked at
6
Q

Social work _______ follows essentially the same _________ process as does social work _______

A
  1. Research
  2. Problem-Solving
  3. Practice
7
Q

How is Social Work Research similar to Social Work Practice?

A
  1. Each follows similar phases
  2. Moving to the next stage depends on successfully completing earlier phases
  3. At any point in the process, unanticipated obstacles may necessitate looping back to earlier phases
8
Q

Phases of the Research Process

A
  1. Problem Formulation
  2. Study Design
  3. Data Collection
  4. Processing
  5. Data Analysis
  6. Interpretation of findings
9
Q

Research Process: Problem Formulation

A
  • Problem/phenomenon is recognized for which more knowledge is needed
  • Review of literature is conducted
  • Research question is posed
  • Purpose of research is finalized
10
Q

Initially stating a hypothesis, compiling units of analysis, identifying variables and operationally defining them usually means…

A

The purpose of a research topic has been finalized

11
Q

Research Process: Study Design

A
  • Involves all of the decisions to be made in planning the study
  • Design
  • Sampling
  • Sources and procedures for collecting data
  • Measurement
  • Data Analysis
12
Q

Research Process: Data Collection/Processing

A
  • Design is implemented
  • Observation/collection of relevant data
  • Classification/coding of observations
13
Q

Research Process: Data Analysis

A
  • Data is manipulated for the purpose of answering the research question
14
Q

Research Process: Interpretation of Findings

A
  • Alternative ways to interpret the results are elaborated in the discussion
  • Generalizations that can/cannot be made
  • Strengths/weaknesses
  • Avenues for future research
15
Q

A good research topic should…

A
  • Pass the “so what?” test
  • Be specific
  • Be capable of being answered by observable evidence
  • Be feasible to study
  • Be open to doubt
  • Be answerable in more than one possible way
16
Q

What is a paradigm?

A
  • Paradigms are general frameworks for understanding aspects of life
17
Q

What is a theory?

A
  • Theory is a systematic set of interrelated statements to explain aspects of life or how people conduct and find meaning in their lives
18
Q

Higher rates of instant messaging and/or emailing during the first several sessions of class are associated with lower grades on midterm exams and fewer class participation points in a given subject.

This is an example of what?

A

A Hypothesis

see slide 15

19
Q

What is the role of the IRB?

A

To protect human subjects

20
Q

How does research protect against ethical issues?

A
  • Provide confidentiality or anonymity to subjects
  • Provide consent forms
  • Assure voluntary participation
  • Assure informed consent
21
Q

What would be the most essential elements necessary if you were to conduct community based research in a minority/oppressed population?

A
  • Obtain endorsement from community leaders
  • Use culturally sensitive approaches (especially regarding confidentiality)
  • Employ local community members as research staff
  • Provide adequate compensation
  • Alleviate basic barriers
  • Ensure cultural competence
  • Use bilingual staff
22
Q

The use of interviewers whose personal characteristics or interviewing styles offend or intimidate minority respondents, or in other ways make them reluctant to divulge relevant and valid information, would negatively affect what?

A

Culturally Competent Measurement

23
Q

The other two main threats to culturally competent measurement are:

A
  • The use of language minority respondents do not understand
  • Cultural bias

24
Q

The practice of Linguistic Relevance includes:

A
  • Using bilingual interviewers
  • Translating measures
  • Pretesting these measures to assure they are understood as intended
25
Q

An instrument and instructions are translated into a target language by one person then translated back to the original language by another person.

This is called what and what does it do in research?

A
  • Back Translation
  • The original instrument is compared to the back-translated version to assess whether there are any discrepancies in the items, which can then be modified further
26
Q

Measurement Equivalence

A
  • A measurement procedure developed in one culture will have the same value and meaning when administered to people in another culture
27
Q

When an instrument has been translated and back translated successfully

A

Linguistic Equivalence

28
Q

When instruments and observed behaviors have the same meaning across cultures, ______ has been reached

A

Conceptual Equivalence

29
Q

Metric Equivalence

A
  • The scores on a measure are comparable across cultures
  • Rare in practice

30
Q

To contend with difficulties and act to overcome them is a…

A

Conceptual Definition of Coping

31
Q

Coping will be measured with the CISS instrument. Examinee responds using a 5-point scale. Scoring is accomplished using a scoring grid.

This is an example of what?

A

Operational Definition

32
Q

The simplest level of measurement is

A

A Nominal Measurement

33
Q

Nominal Measures

A
  • A variable whose attributes are classified into distinct groups (groups are not ordered)

Examples:
- Gender: Male or female
- Service type: mental health, substance abuse, vocational, parent training, etc.
- SW track: Adv. clinical practice, policy, social enterprise administration

34
Q

Ordinal Measures

A
  • Attributes of variables are ordered (ranked) from lower to higher levels or vice versa
  • However, differences between attributes or categories do not have equal distances

Examples:

  • Satisfaction of current field placement
    • scale from 1-5: very satisfied to very dissatisfied
  • Rating of client symptoms
    • scale from 1-5: very severe to very mild
35
Q

Interval Measures

A
  • Include variables whose attributes are classified and rank-ordered
  • Have an equal distance between rankings or classifications
  • However, no fixed and meaningful zero point (arbitrary)

Examples:

  • IQ scores
  • Degrees Fahrenheit
36
Q

Ratio Measures

A
  • Include variables whose attributes are classified and rank-ordered
  • Have an equal distance between rankings or classifications
  • Have meaningful zero point allowing us to form ratios of one value relative to another value

Examples:

  • # of days hospitalized
  • # of children
  • # of service delivery contacts
37
Q

When data does not accurately portray the concept we attempt to measure, this is a…

A

Measurement Error

38
Q

Systematic Error

A
  • When the information we collect consistently reflects a false picture
39
Q

The most common way our measures systematically measure something other than what we think they do is…

A
  • Through Biases

Examples:

  • Acquiescent response set
  • Social desirability bias
  • Cultural bias
40
Q

Random Error

A
  • No consistent pattern of effects
  • Does not bias the measures

Example:
- Two research assistants do not consistently rate/code/count a certain phenomenon during an observation

41
Q

Reliability

A
  • A particular measurement technique, when applied repeatedly to the same object, would yield the same result each time
  • The more reliable the measure, the less random error
42
Q

Types of Reliability: Interobserver & Interrater

A
  • The degree of agreement or consistency between/among observers
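
A simple way to quantify this agreement is the proportion of observations on which two raters gave the same code; the Python sketch below is a hypothetical illustration, not part of the course material:

```python
def percent_agreement(rater_a, rater_b):
    """Proportion of observations on which two raters gave the same code."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Two observers coded the same 4 client behaviors; they agree on 3 of 4.
print(percent_agreement([1, 1, 0, 1], [1, 0, 0, 1]))  # 0.75
```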
43
Q

Types of Reliability:

Test-retest

A
  • Assessing a measure’s stability over time
44
Q

Types of Reliability:

Internal Consistency

A
  • Assesses whether the items of a measure are internally consistent
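
Internal consistency is often quantified with Cronbach's alpha; the minimal sketch below (assuming complete data and population variances) is illustrative only:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one list of respondent scores per item.

    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    """
    k = len(item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]  # per-respondent total
    item_var = sum(pvariance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Three items that move together perfectly yield alpha close to 1.0.
print(round(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]), 6))  # 1.0
```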
45
Q

Face Validity

A
  • A measure appears to measure what it is supposed to measure
  • Determined by subjective assessment made by the researcher or other experts
46
Q

Content Validity

A
  • The degree to which a measure covers the range of meanings included within the concept
  • Established based on judgements as well
47
Q

Validity based on some external criterion is…

A

Criterion-related Validity

48
Q

Subtype of Criterion-Validity: Concurrent Validity

A
  • Measure corresponds to a criterion that is known concurrently
49
Q

Subtype of Criterion-Validity: Predictive Validity

A
  • Measure can predict a criterion that will occur in the future
50
Q

Assess whether a measure fits theoretical expectations

A

Construct Validity

Example:
Does liking coffee fit the criteria of depression the way fatigue, hopelessness, and sadness do?

51
Q

The process of selecting observations

A

Sampling

52
Q

A sample

A
  • A subset of a population that is observed for purposes of making inferences about the nature of the total population
53
Q

Nonprobability Sampling

A
  • Used when probability or random sampling is not possible or appropriate (ex. homeless individuals)
  • Generally less reliable
  • Often easier and cheaper
54
Q

The 4 types of Nonprobability Sampling

A
  1. Reliance on Available Subjects
  2. Purposive or Judgmental Sampling
  3. Quota Sampling
  4. Snowball Sampling
55
Q

Reliance on Available Subjects

A

A type of Nonprobability Sampling:

  • Sampling from subjects who are available

Example:
How much an agency’s services help a particular client or group of clients

56
Q

Purposive or Judgmental Sampling

A

A type of Nonprobability Sampling

  • When a researcher uses his or her own judgment in selecting sample members

Example:
Handpick community leaders or experts known for their expertise on target population

57
Q

Quota Sampling

A

Type of Nonprobability Sampling

  • The population is divided into strata or cells based on the target population’s characteristics (ex. gender, ethnic group)
  • A relative proportion of the total population is assigned to each stratum or cell
  • The required number of subjects from each stratum or cell (given set of characteristics) is then selected
58
Q

Snowball Sampling

A

Type of Nonprobability Sampling

Process of accumulation as each located subject suggests other subjects

59
Q

The primary method for selecting large, representative samples in which elements have equal chances of being chosen is

A

Probability Sampling

60
Q

Probability Sampling is used to…

A
  • Select a set of elements from a population in such a way that descriptions of those elements accurately portray the total population from which elements are selected
61
Q

What is the key to the process of Probability Sampling?

A
  • Random Selection
62
Q

Random Selection

A
  • Each element has an equal chance of selection independent of any other event in the selection process
  • This is the key to the process of Probability Sampling
63
Q

Probability Sampling Designs: Simple Random Sampling

A
  • Each element in sampling frame is assigned a number
  • A table of random numbers is then used to select elements for the sample
  • Most fundamental technique in probability sampling, but laborious
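
These steps can be sketched in Python, where `random.sample` plays the role of the random-number table (the numbered frame here is made up for illustration):

```python
import random

# Sampling frame: every element is assigned a number (here, 100 case IDs).
frame = [f"case_{i:03d}" for i in range(1, 101)]

# random.sample stands in for the table of random numbers: it draws
# n distinct elements, each with an equal chance of selection.
rng = random.Random(42)  # fixed seed so the draw is reproducible
sample = rng.sample(frame, 10)
print(len(sample))  # 10
```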
64
Q

Probability Sampling Designs: Systematic Sampling

A
  • Involves the selection of every kth element or member of the sampling frame
  • Important to carefully examine the nature of the list and whether a particular order of the elements will bias the sample selected
65
Q

What is the difference in Systematic Sampling from Simple Random Sampling?

A
  • Elements are chosen based on a sampling interval
  • The first element is selected at random to avoid bias
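
A minimal sketch of systematic selection (interval plus random start; the function name is illustrative):

```python
import random

def systematic_sample(frame, n, seed=None):
    """Take every k-th element of the frame after a random start."""
    k = len(frame) // n                        # sampling interval
    start = random.Random(seed).randrange(k)   # random first element avoids bias
    return [frame[start + i * k] for i in range(n)]

chosen = systematic_sample(list(range(100)), 10, seed=7)
print(len(chosen))  # 10 elements, each 10 apart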

66
Q

Probability Sampling Designs: Stratified Sampling

A
  • Involves the process of grouping members of a population into homogeneous strata before sampling (ex. by ethnic group or gender)
  • Stratified sampling improves the representativeness of a sample by reducing the degree of sampling error
67
Q

Probability Sampling Designs: Probability Proportionate to Size (PPS)

A
  • Efficient yet complex
  • Multistage sampling method
  • Used when a list of all members of a population does not exist
68
Q

What is the process for the Probability Sampling Design of Probability Proportionate to Size (PPS)?

A
  1. Select elements and create clusters based on those elements (ex. clustering neighborhoods across the US based on degree of urbanicity)
  2. Randomly select equal or weighted amounts of clusters
  3. Randomly select subjects from each cluster
69
Q

Internal Validity

A
  • Degree to which the results of a study (usually an experiment) can be attributed to the treatments rather than to flaws in the research design
70
Q

External Validity

A
  • Extent to which the findings of a study are relevant to subjects and settings beyond those in the study
  • Another term for generalization
  • Random sampling is usually key to external validity
71
Q

Basic Criteria for the determination of causation in scientific research

A
  • The independent (cause) and dependent (effect) variables must be empirically related to each other
  • The independent variable must occur earlier in time than the dependent variable
  • The observed relationship between these two variables cannot be explained away as being due to the influence of some third variable that causes both of them
72
Q

A probabilistic explanation takes the form:

A

X tends to be Y

73
Q

Threats to Internal Validity:

A
  • History (event)
  • Maturation (effects due to the passage of time)
  • Testing (Applies to use of multiple assessments)
  • Instrumentation
  • Statistical Regression (effects due to fact that subjects started out in extreme positions)
  • Selection Bias (due to groups not comparable at pretest)
  • Experimental Mortality (effects due to subjects dropping out)
  • Ambiguity about the direction of the Causal Inference
  • Diffusion or imitation of treatments
74
Q

History as a threat to Internal Validity

A
  • An event that occurs between pretest and post test measures that may cause change, as opposed to the independent variable
75
Q

Maturation as a threat to Internal Validity

A
  • Effects due to the passage of time

- Subjects in extreme crisis may improve due to the passage of time irrespective of the intervention employed

76
Q

Testing as a threat to Internal Validity

A
  • Applies to use of multiple assessments
  • Changes from pretest to post test scores may be due to students’ remembering questions and looking up responses or due to the recollection of questions asked at pretest
77
Q

Instrumentation as a threat to Internal Validity

A
  • If different measures used pre and post test, effects may be due to instrument and not actual change
78
Q

Statistical Regression as a threat to Internal Validity

A
  • Effects due to fact that subjects started out in extreme positions
  • Scores will automatically statistically regress: move toward their true score
79
Q

Selection Bias as a threat to Internal Validity

A
  • Due to groups not comparable at pretest
  • Effects may be due to the unique characteristics of subjects electing to participate
  • Example: motivation
80
Q

Experimental Mortality as a threat to Internal Validity

A
  • Effects due to subjects dropping out
  • Those who drop out may do so because they feel no improvement
  • Those that remain may show improvement because they are the only ones who experienced this
81
Q

Ambiguity as a threat to Internal Validity

A
  • Ambiguity about the direction of causal inference
  • Those dropping out of program abuse substances less
  • Did program help them stop or did abstinence help them complete?
82
Q

Diffusion or imitation of treatments as a threat to Internal Validity

A
  • Psychoeducational program for battered women as compared to a traditional package of services: is there overlap?
83
Q

Classic Experimental Design: Controls for history and maturation

A

Both groups would be impacted by these events

84
Q

Classic Experimental Design: Random assignment ensures…

A

Neither group is more likely to statistically regress

85
Q

Classic Experimental Design: Similarities/differences between groups…

A

Assessed at pretest

86
Q

“Quasi” in Quasi-Experimental Designs is due to

A

Lack of random assignment of subjects to Experimental (E) and Control (C) groups

87
Q

When it is not possible to create Experimental or Control groups by random assignment, researchers may find…

A

an existing group that appears to be similar to Experimental (E) group
– Called comparison group in Quasi-Experimental Designs

88
Q

Campbell and Stanley: Pre-experimental design

O

A

The one-shot case study

89
Q

Campbell and Stanley: Pre-experimental design

O X O

A

The one-group pretest-post test design

90
Q

Campbell and Stanley: Pre-experimental design

X O
___
O

A

Post test-only design with nonequivalent groups

91
Q

Ecological Fallacy

A

Danger of making assertions about individuals based solely upon observations of groups

92
Q

Topics appropriate to survey research can be used for…

A
  • Descriptive
  • Explanatory
  • Exploratory purposes
93
Q

In survey research, individuals are typically the _________. _____ or ______ can also be studied

A
  • Unit of analysis
  • Groups
  • Interactions
94
Q

Topics appropriate to survey research are useful for the collection of…

A

Data on a population too large to observe directly

95
Q

Topics appropriate to survey research are useful when measuring….

A

Attitudes and orientations in a large population

96
Q

Questionnaire in comparison to Interview

A
  • Cheaper and quicker
  • Same cost for national and local survey
  • Mail surveys require less staff
  • Appropriate for sensitive issues
  • Can offer anonymity
97
Q

Interview in comparison to Questionnaire

A
  • Fewer incomplete questions
  • Higher return rate
  • Interviewer can observe
  • Can conduct over phone
  • Appropriate for complicated issues
98
Q

Strengths of survey research

A
  • Reliability
  • Describes characteristics of a large population
  • Flexibility in analysis
  • Same questions asked of all respondents
  • Large samples
99
Q

Weaknesses of survey research

A
  • Validity
  • Standardization may yield superficiality
  • Doesn’t deal with context
  • Cannot be modified in the field
  • Artificial - cannot measure action