CAST Full Deck Flashcards

1
Q

Acceptance Criteria

A

A key prerequisite for test planning is a clear understanding of what must be accomplished for the test project to be deemed successful.

2
Q

Acceptance Testing

A

The objective of acceptance testing is to determine throughout the development cycle that all aspects of the development process meet the user’s needs.

3
Q

Act

A

If your checkup reveals that the work is not being performed according to plan or that results are not as anticipated, devise measures for appropriate action. (Plan-Do-Check-Act)

4
Q

Access Modeling

A

Used to verify that data requirements (represented in the form of an entity-relationship diagram) support the data demands of process requirements (represented in data flow diagrams and process specifications).

5
Q

Active Risk

A

Risk that is deliberately taken on. For example, the choice to develop a new product that may not be successful in the marketplace.

6
Q

Actors

A

Interfaces in a system boundary diagram. (Use Cases)

7
Q

Alternate Path

A

Additional testable conditions are derived from the exceptions and alternative courses of the Use Case.

8
Q

Affinity Diagram

A

A group process that takes large amounts of language data, such as a list developed by brainstorming, and divides it into categories.

9
Q

Analogous

A

The analogy model is a nonalgorithmic costing model that estimates the size, effort, or cost of a project by relating it to another similar completed project. Analogous estimating takes the actual time and/or cost of a historical project as a basis for the current project.

10
Q

Analogous Percentage Method

A

A common method for estimating test effort is to calculate the test estimate as a percentage of previous test efforts using a predicted size factor (SF) (e.g., SLOC or FPA).
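
For illustration, a worked example in Python using hypothetical figures (a completed project of 10,000 SLOC that required 500 hours of test effort, and a current project predicted at 12,000 SLOC):

    # Hypothetical figures for illustration only.
    previous_sloc = 10_000      # size factor (SF) of the completed project
    previous_test_hours = 500   # actual test effort of that project
    new_sloc = 12_000           # predicted size of the current project

    # Scale the historical effort by the ratio of the size factors.
    estimate = previous_test_hours * (new_sloc / previous_sloc)
    print(estimate)  # 600.0 hours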

11
Q

Application

A

A single software product that may or may not fully support a business function.

12
Q

Appraisal Costs

A

Resources spent to ensure a high level of quality in all development life cycle stages, which includes conformance to quality standards and delivery of products that meet the user’s requirements/needs. Appraisal costs include the cost of in-process reviews, dynamic testing, and final inspections.

13
Q

Appreciative or Enjoyment Listening

A

One automatically switches to this type of listening when a situation is perceived as funny or when an explanatory example of a situation is about to be given. This listening type helps in understanding real-world situations.

14
Q

Assumptions

A

A thing that is accepted as true.

15
Q

Audit

A

This is an inspection/assessment activity that verifies compliance with plans, policies, and procedures, and ensures that resources are conserved. Audit is a staff function; it serves as the “eyes and ears” of management.

16
Q

Backlog

A

Work waiting to be done; for IT this includes new systems to be developed and enhancements to existing systems. To be included in the development backlog, the work must have been cost-justified and approved for development. A product backlog in Scrum is a prioritized feature list containing short descriptions of all functionality desired in the product.

17
Q

Baseline

A

A quantitative measure of the current level of performance.

18
Q

Benchmarking

A

Comparing your company’s products, services, or processes against best practices, or competitive practices, to help define superior performance of a product, service, or support process.

19
Q

Benefits Realization Test

A

A test or analysis conducted after an application is moved into production to determine whether it is likely to meet the originating business case.

20
Q

Black-Box Testing

A

A test technique that focuses on testing the functionality of the program, component, or application against its specifications without knowledge of how the system is constructed; usually data or business process driven.

21
Q

Bottom-Up

A
Begin testing from the bottom of the hierarchy and work up to the top. Modules are added in ascending hierarchical order. Bottom-up testing requires the development of driver modules, which provide the test input, call the module or program being tested, and display the test output.
22
Q

Bottom-Up Estimation

A

In this technique, the cost of each single activity is determined with the greatest level of detail at the bottom level and then rolls up to calculate the total project cost.
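
For illustration, a minimal Python sketch with hypothetical bottom-level activities and estimates that roll up into the total:

    # Hypothetical bottom-level activity estimates, in hours.
    activities = {
        "write test plan": 16,
        "design test cases": 40,
        "build test data": 24,
        "execute tests": 60,
        "report results": 12,
    }

    # Roll the detailed estimates up into the total project estimate.
    total = sum(activities.values())
    print(total)  # 152 hours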

23
Q

Boundary Value Analysis

A

A data selection technique in which test data is chosen from the “boundaries” of the input or output domain classes, data structures, and procedure parameters. Choices often include the actual minimum and maximum boundary values, the maximum value plus or minus one, and the minimum value plus or minus one.
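
For illustration, a minimal Python sketch assuming a hypothetical input field that accepts values from 1 to 100:

    # Hypothetical input domain: valid values are 1..100.
    minimum, maximum = 1, 100

    # Each boundary value, plus the boundary plus or minus one.
    test_values = [minimum - 1, minimum, minimum + 1,
                   maximum - 1, maximum, maximum + 1]
    print(test_values)  # [0, 1, 2, 99, 100, 101]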

24
Q

Brainstorming

A

A group process for generating creative and diverse ideas.

25
Q

Branch Combination Coverage

A

Branch Condition Combination Coverage is a very thorough structural testing technique, requiring 2^n test cases to achieve 100% coverage of a condition containing n Boolean operands.

26
Q

Branch/Decision Testing

A

A test method that requires that each possible branch on each decision point be executed at least once.
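
For illustration, a minimal Python sketch: the hypothetical function below has one decision point, so two test cases (one per branch direction) satisfy the criterion:

    def classify(amount):
        # One decision point with two possible branches.
        if amount > 1000:
            return "large"
        return "small"

    # One test case per branch: each direction executed at least once.
    assert classify(1500) == "large"  # true branch
    assert classify(500) == "small"   # false branch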

27
Q

Bug

A

A general term for all software defects or errors.

28
Q

Calibration

A

This indicates the movement of a measure so it becomes more valid, for example, changing a customer survey so it better reflects the true opinions of the customer.

29
Q

Candidate

A

An individual who has met eligibility requirements for a credential awarded through a certification program, but who has not yet earned that certification through participation in the required skill and knowledge assessment instruments.

30
Q

Causal Analysis

A

The purpose of causal analysis is to prevent problems by determining the problem’s root cause. This shows the relation between an effect and its possible causes to eventually find the root cause of the issue.

31
Q

Cause and Effect Diagrams

A

A cause and effect diagram visualizes the results of brainstorming and affinity grouping by organizing the major causes of a significant process problem.

32
Q

Cause-Effect Graphing

A

Cause-effect graphing is a technique which focuses on modeling the dependency relationships between a program’s input conditions (causes) and output conditions (effects). CEG is considered a Requirements-Based test technique and is often referred to as Dependency modeling.

33
Q

Certificant

A

An individual who has earned a credential awarded through a certification program.

34
Q

Certification

A

A voluntary process instituted by a nongovernmental agency by which individual applicants are recognized for having achieved a measurable level of skill or knowledge. Measurement of the skill or knowledge makes certification more restrictive than simple registration, but much less restrictive than formal licensure.

35
Q

Change Management

A

Managing software change is a process. The process is the primary responsibility of the software development staff. They must assure that change requests are documented, that they are tracked through approval or rejection, and that they are then incorporated into the development process.

36
Q

Check

A

Check to determine whether work is progressing according to the plan and whether the expected results are obtained. Check for performance of the set procedures, changes in conditions, or abnormalities that may appear. As often as possible, compare the results of the work with the objectives.

37
Q

Check Sheets

A

A check sheet is a technique or tool to record the number of occurrences over a specified interval of time; a data sample to determine the frequency of an event.

38
Q

Checklists

A

A series of probing questions about the completeness and attributes of an application system. Well-constructed checklists cause evaluation of areas which are prone to problems. A checklist both limits the scope of the test and directs the tester to the areas in which there is a high probability of a problem.

39
Q

Checkpoint Review

A

Held at predefined points in the development process to evaluate whether certain quality factors (critical success factors) are being adequately addressed in the system being built. Independent experts conduct the reviews as early as possible for the purpose of identifying problems.

40
Q

Client

A

The customer that pays for the product received and receives the benefit from the use of the product.

41
Q

CMMI-Dev

A

A process improvement model for software development. Specifically, CMMI for Development is designed to compare an organization’s existing development processes to proven best practices developed by members of industry, government, and academia.

42
Q

Coaching

A

Providing advice and encouragement to an individual or individuals to promote a desired behavior.

43
Q

COCOMO II

A

The best recognized software development cost model is the Constructive Cost Model II. COCOMO® II is an enhancement over the original COCOMO® model. COCOMO® II extends the capability of the model to include a wider collection of techniques and technologies. It provides support for object-oriented software, business software, software created via spiral or evolutionary development models, and software using COTS application utilities.

44
Q

Code Comparison

A

One version of source or object code is compared to a second version. The objective is to identify those portions of computer programs that have been changed. The technique is used to identify those segments of an application program that have been altered as a result of a program change.

45
Q

Common Causes of Variation

A

Common causes of variation are typically due to a large number of small random sources of variation. The sum of these sources of variation determines the magnitude of the process’s inherent variation due to common causes; the process’s control limits and current process capability can then be determined.

46
Q

Compiler-Based Analysis

A

Most compilers for programming languages include diagnostics that identify potential program structure flaws. Many of these diagnostics are warning messages requiring the programmer to conduct additional investigation to determine whether or not the problem is real. Problems may include syntax problems, command violations, or variable/data reference problems. These diagnostic messages are a useful means of detecting program problems, and should be used by the programmer.

47
Q

Complete Test Set

A

A test set containing data that causes each element of a prespecified set of Boolean conditions to be true. In addition, each element of the test set causes at least one condition to be true.

48
Q

Completeness

A

The property that all necessary parts of an entity are included. Often, a product is said to be complete if it has met all requirements.

49
Q

Complexity-Based Analysis

A

Based upon applying mathematical graph theory to programs and preliminary design language specifications (PDLs) to determine a unit’s complexity. This analysis can be used to measure and control complexity when maintainability is a desired attribute. It can also be used to estimate the test effort required and identify paths that must be tested.

50
Q

Compliance Checkers

A

A program that parses source code looking for violations of company standards. Statements that contain violations are flagged. Company standards are rules that can be added, changed, and deleted as needed.

51
Q

Comprehensive Listening

A

Designed to get a complete message with minimal distortion. This type of listening requires a lot of feedback and summarization to fully understand what the speaker is communicating.

52
Q

Compromise

A

An intermediate approach – Partial satisfaction is sought for both parties through a “middle ground” position that reflects mutual sacrifice. Compromise evokes thoughts of giving up something, therefore earning the name “lose-lose.”

53
Q

Condition Coverage

A

A white-box testing technique that measures the number of, or percentage of, condition outcomes covered by the test cases designed. 100% condition coverage would indicate that every possible outcome of each condition had been executed at least once during testing.

54
Q

Condition Testing

A

A structural test technique where each clause in every condition is forced to take on each of its possible values in combination with those of other clauses.

55
Q

Configuration Management

A

Software Configuration Management (CM) is a process for tracking and controlling changes in the software. The ability to maintain control over changes made to all project artifacts is critical to the success of a project. The more complex an application is, the more important it is to control change to both the application and its supporting artifacts.

56
Q

Configuration Management Tools

A

Tools that are used to keep track of changes made to systems and all related artifacts. These are also known as version control tools.

57
Q

Configuration Testing

A

Testing of an application on all supported hardware and software platforms. This may include various combinations of hardware types, configuration settings, and software versions.

58
Q

Consistency

A

The property of logical coherence among constituent parts. Consistency can also be expressed as adherence to a given set of rules.

59
Q

Consistent Condition Set

A

A set of Boolean conditions such that complete test sets for the conditions uncover the same errors.

60
Q

Constraints

A

A limitation or restriction. Constraints are those items that will likely force a dose of “reality” on a test project. The obvious constraints are test staff size, test schedule, and budget.

61
Q

Constructive Criticism

A

A process of offering valid and well-reasoned opinions about the work of others, usually involving both positive and negative comments, in a friendly manner rather than an oppositional one.

62
Q

Control

A

Control is anything that tends to cause the reduction of risk. Control can accomplish this by reducing harmful effects or by reducing the frequency of occurrence.

63
Q

Control Charts

A

A statistical technique to assess, monitor and maintain the stability of a process. The objective is to monitor a continuous repeatable process and the process variation from specifications. The intent of a control chart is to monitor the variation of a statistically stable process where activities are repetitive.

64
Q

Control Flow Analysis

A

Based upon graphical representation of the program process. In control flow analysis, the program graph has nodes, which represent a statement or segment possibly ending in an unresolved branch. The graph illustrates the flow of program control from one segment to another as illustrated through branches. The objective of control flow analysis is to determine potential problems in logic branches that might result in a loop condition or improper processing.

65
Q

Conversion Testing

A

Validates the effectiveness of data conversion processes, including field-to-field mapping and data translation.

66
Q

Corrective Controls

A

Corrective controls assist individuals in the investigation and correction of causes of risk exposures that have been detected.

67
Q

Correctness

A

The extent to which software is free from design and coding defects (i.e., fault-free). It is also the extent to which software meets its specified requirements and user objectives.

68
Q

Cost of Quality (COQ)

A

Money spent beyond expected production costs (labor, materials, equipment) to ensure that the product the customer receives is a quality (defect free) product. The Cost of Quality includes prevention, appraisal, and failure costs.

69
Q

COTS

A

Commercial Off-the-Shelf (COTS): software products that are ready-made and available for sale in the marketplace.

70
Q

Coverage

A

A measure used to describe the degree to which the application under test (AUT) is tested by a particular test suite.

71
Q

Coverage-Based Analysis

A

A metric used to show the logic covered during a test session, providing insight to the extent of testing. The simplest metric for coverage would be the number of computer statements executed during the test compared to the total number of statements in the program. To completely test the program structure, the test data chosen should cause the execution of all paths. Since this is not generally possible outside of unit test, general metrics have been developed which give a measure of the quality of test data based on the proximity to this ideal coverage. The metrics should take into consideration the existence of infeasible paths, which are those paths in the program that have been designed so that no data will cause the execution of those paths.

72
Q

Critical Listening

A

The listener is performing an analysis of what the speaker said. This is most important when it is felt that the speaker is not in complete control of the situation, or does not know the complete facts of a situation.

73
Q

Critical Success Factors

A

Critical Success Factors (CSFs) are those criteria or factors that must be present in a software application for it to be successful.

74
Q

Customer

A

The individual or organization, internal or external to the producing organization that receives the product.

75
Q

Customer’s/User’s View of Software Quality

A

Fit for use.

76
Q

Cyclomatic Complexity

A

The number of decision statements, plus one.
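
For illustration, a worked example on a hypothetical Python function:

    def ship(express, budget):
        cost = 20 if express else 5  # decision statement 1 (if)
        while cost > budget:         # decision statement 2 (while)
            cost -= 1
        return cost

    # 2 decision statements + 1 = cyclomatic complexity of 3.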

77
Q

Damaging Event

A

Damaging Event is the materialization of a risk to an organization’s assets.

78
Q

Data Dictionary

A

Provides the capability to create test data to test validation for the defined data elements. The test data generated is based upon the attributes defined for each data element. The test data will check both the normal variables for each data element as well as abnormal or error conditions for each data element.

79
Q

Data Flow Analysis

A

In data flow analysis, we are interested in tracing the behavior of program variables as they are initialized and modified while the program executes.

80
Q

DD (Decision-to-Decision) Path

A

A path of logical code sequence that begins at a decision statement or an entry and ends at a decision statement or an exit.

81
Q

Debugging

A

The process of analyzing and correcting syntactic, logic, and other errors identified during testing.

82
Q

Decision Analysis

A

This technique is used to structure decisions and to represent real-world problems by models that can be analyzed to gain insight and understanding. The elements of a decision model are the decisions, uncertain events, and values of outcomes.

83
Q

Decision Coverage

A

A white-box testing technique that measures the number of, or percentage of, decision directions executed by the test case designed. 100% decision coverage would indicate that all decision directions had been executed at least once during testing. Alternatively, each logical path through the program can be tested. Often, paths through the program are grouped into a finite set of classes, and one path from each class is tested.

84
Q

Decision Table

A

A tool for documenting the unique combinations of conditions and associated results in order to derive unique test cases for validation testing.
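
For illustration, a minimal Python sketch of a decision table for a hypothetical login rule, with one validation test case derivable per unique combination of conditions:

    # Each row: (valid_user, valid_password) -> expected result.
    decision_table = {
        (True, True): "grant access",
        (True, False): "reject password",
        (False, True): "reject user",
        (False, False): "reject user",
    }

    # Derive one test case per unique combination of conditions.
    for conditions, expected in decision_table.items():
        print(conditions, "->", expected)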

85
Q

Decision Trees

A

This provides a graphical representation of the elements of a decision model.

86
Q

Defect

A

Operationally, it is useful to work with two definitions of a defect:
• From the producer’s viewpoint, a defect is a product requirement that has not been met, or a product attribute possessed by a product or a function performed by a product that is not in the statement of requirements that defines the product;
• From the customer’s viewpoint, a defect is anything that causes customer dissatisfaction, whether in the statement of requirements or not.
A defect is an undesirable state. There are two types of defects: process and product.

87
Q

Defect Management

A

Process to identify and record defect information whose primary goal is to prevent future defects.

88
Q

Defect Tracking Tools

A

Tools for documenting defects as they are found during testing and for tracking their status through to resolution.

89
Q

Deliverables

A

Any product or service produced by a process. Deliverables can be interim or external. Interim deliverables are produced within the process but never passed on to another process. External deliverables may be used by one or more processes. Deliverables serve as both inputs to and outputs from a process.

90
Q

Design Level

A

The design decomposition of the software item (e.g., system, subsystem, program, or module).

91
Q

Desk Checking

A

The most traditional means for analyzing a system or a program. Desk checking is conducted by the developer of a system or program. The process involves reviewing the complete product to ensure that it is structurally sound and that the standards and requirements have been met. This tool can also be used on artifacts created during analysis and design.

92
Q

Detective Controls

A

Detective controls alert individuals involved in a process so that they are aware of a problem.

93
Q

Discriminative Listening

A

Directed at selecting specific pieces of information and not the entire communication.

94
Q

Do

A

Create the conditions and perform the necessary teaching and training to ensure everyone understands the objectives and the plan. (Plan-Do-Check-Act)

The procedures to be executed in a process. (Process Engineering)

95
Q

Driver

A

Code that sets up an environment and calls a module for test. A driver causes the component under test to exercise the interfaces. As testing moves up the hierarchy, drivers are replaced with the actual components.
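
For illustration, a minimal Python sketch with a hypothetical module under test (discount):

    # Hypothetical module under test.
    def discount(price, percent):
        return price - price * percent / 100

    # Driver: provides the test input, calls the module under test,
    # and displays the test output.
    def driver():
        for price, percent in [(100, 10), (250, 0), (80, 50)]:
            print(price, percent, "->", discount(price, percent))

    driver()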

96
Q

Dynamic Analysis

A

Analysis performed by executing the program code. Dynamic analysis executes or simulates a development phase product, and it detects errors by analyzing the response of a product to sets of input data.

97
Q

Dynamic Assertion

A

A dynamic analysis technique that inserts into the program code assertions about the relationship between program variables. The truth of the assertions is determined as the program executes.
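
For illustration, a minimal Python sketch with a hypothetical function: the inserted assertion states a relationship between program variables and is checked as the program executes:

    def average(values):
        result = sum(values) / len(values)
        # Dynamic assertion: this relationship between program
        # variables is evaluated every time the statement executes.
        assert min(values) <= result <= max(values)
        return result

    print(average([2, 4, 9]))  # 5.0; the assertion held at run time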

98
Q

Ease of Use and Simplicity

A

These are functions of how easy it is to capture and use the measurement data.

99
Q

Effectiveness

A

Effectiveness means that the testers completed their assigned responsibilities.

100
Q

Efficiency

A

Efficiency is the amount of resources and time required to complete test responsibilities.

101
Q

Empowerment

A

Giving people the knowledge, skills, and authority to act within their area of expertise to do the work and improve the process.

102
Q

Entrance Criteria

A

Required conditions and standards for work product quality that must be present or met for entry into the next stage of the software development process.

103
Q

Environmental Controls

A

Environmental controls are the means which management uses to manage the organization.

104
Q

Equivalence Partitioning

A

The input domain of a system is partitioned into classes of representative values so that the number of test cases can be limited to one-per-class, which represents the minimum number of test cases that must be executed.
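
For illustration, a minimal Python sketch assuming a hypothetical age field handled differently for minors, adults, and seniors, which yields three classes and therefore three test cases:

    # Hypothetical input domain partitioned into three classes.
    partitions = {
        "minor": range(0, 18),
        "adult": range(18, 65),
        "senior": range(65, 120),
    }

    # One representative test value per equivalence class.
    test_cases = {name: next(iter(vals)) for name, vals in partitions.items()}
    print(test_cases)  # {'minor': 0, 'adult': 18, 'senior': 65}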

105
Q

Error or Defect

A

A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition.

106
Q

Error Guessing

A

Test data selection technique for picking values that seem likely to cause defects. This technique is based upon the theory that test cases and test data can be developed based on the intuition and experience of the tester.

107
Q

Exhaustive Testing

A

Executing the program through all possible combinations of values for program variables.

108
Q

Exit Criteria

A

Standards for work product quality, which block the promotion of incomplete or defective work products to subsequent stages of the software development process.

109
Q

Exploratory Testing

A

The term “Exploratory Testing” was coined in 1983 by Dr. Cem Kaner. Dr. Kaner defines exploratory testing as “a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project.”

110
Q

Failure Costs

A

All costs associated with defective products that have been delivered to the user and/or moved into production. Failure costs can be classified as either “internal” failure costs or “external” failure costs.

111
Q

File Comparison

A

Useful in identifying regression errors. A snapshot of the correct expected results must be saved so it can be used for later comparison.

112
Q

Fitness for Use

A

Meets the needs of the customer/user.

113
Q

Flowchart

A

A pictorial representation of data flow and computer logic. It is frequently easier to understand and assess the structure and logic of an application system by developing a flowchart than by attempting to understand narrative descriptions or verbal explanations. The flowcharts for systems are normally developed manually, while flowcharts of programs can be produced automatically.

114
Q

Force Field Analysis

A

A group technique used to identify both driving and restraining forces that influence a current situation.

115
Q

Formal Analysis

A

Technique that uses rigorous mathematical techniques to analyze the algorithms of a solution for numerical properties, efficiency, and correctness.

116
Q

FPA

A

Function Point Analysis: a sizing method in which the program’s functionality is measured by the number of ways it must interact with the users.

117
Q

Functional System Testing

A

Functional system testing ensures that the system requirements and specifications are achieved. The process involves creating test conditions for use in evaluating the correctness of the application.

118
Q

Functional Testing

A

Application of test data derived from the specified functional requirements without regard to the final program structure.

119
Q

Gap Analysis

A

This technique determines the difference between two variables. A gap analysis may show the difference between perceptions of importance and performance of risk management practices. The gap analysis may show discrepancies between what is and what needs to be done. Gap analysis shows how large the gap is and how far the leap is to cross it. It identifies the resources available to deal with the gap.

120
Q

Happy Path

A

Generally used within the discussion of Use Cases, the happy path follows a single flow uninterrupted by errors or exceptions from beginning to end.

121
Q

Heuristics

A

Experience-based techniques for problem solving, learning, and discovery.

122
Q

Histogram

A

A graphical description of individually measured values in a data set that is organized according to the frequency or relative frequency of occurrence. A histogram illustrates the shape of the distribution of individual values in a data set along with information regarding the average and variation.

123
Q

Incremental Model

A

The incremental model approach subdivides the requirements specifications into smaller buildable projects (or modules). Within each of those smaller requirements subsets, a development life cycle exists which includes the phases described in the Waterfall approach.

124
Q

Incremental Testing

A

Incremental testing is a disciplined method of testing the interfaces between unit-tested programs as well as between system components. It involves adding unit-tested programs to a given module or component one by one, and testing each resultant combination.

125
Q

Infeasible Path

A

A sequence of program statements that can never be executed.

126
Q

Influence Diagrams

A

Provides a graphical representation of the elements of a decision model.

127
Q

Inherent Risk

A

Inherent Risk is the risk to an organization in the absence of any actions management might take to alter either the risk’s likelihood or impact.

128
Q

Inputs

A

Materials, services, or information needed from suppliers to make a process work, or build a product.

129
Q

Inspection

A

A formal assessment of a work product conducted by one or more qualified independent reviewers to detect defects, violations of development standards, and other problems. Inspections involve authors only when specific questions concerning deliverables exist. An inspection identifies defects, but does not attempt to correct them. Authors take corrective actions and arrange follow-up reviews as needed.

130
Q

Instrumentation

A

The insertion of additional code into a program to collect information about program behavior during program execution.
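
For illustration, a minimal Python sketch in which counters (the inserted code) are added to a hypothetical function to record which branches execute:

    # Additional code (the counters) inserted to observe behavior.
    branch_counts = {"true": 0, "false": 0}

    def is_even(n):
        if n % 2 == 0:
            branch_counts["true"] += 1   # instrumentation
            return True
        branch_counts["false"] += 1      # instrumentation
        return False

    for n in [1, 2, 3, 4]:
        is_even(n)
    print(branch_counts)  # {'true': 2, 'false': 2}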

131
Q

Integration Testing

A

This test begins after two or more programs or application components have been successfully unit tested. It is conducted by the development team to validate the technical quality or design of the application. It is the first level of testing which formally integrates a set of programs that communicate among themselves via messages or files (a client and its server(s), a string of batch programs, or a set of online modules within a dialogue or conversation).

132
Q

Invalid Input

A

Test data that lies outside the domain of the function the program represents.

133
Q

ISO 29119

A

ISO 29119 is a set of standards for software testing that can be used within any software development life cycle or organization.

134
Q

Iterative Model

A

The project is divided into small parts allowing the development team to demonstrate results earlier on in the process and obtain valuable feedback from system users.

135
Q

Judgment

A

A decision made by individuals that is based on three criteria which are: fact, standards, and experience.

136
Q

Keyword-Driven Testing

A

Keyword-driven testing, also known as table-driven testing or action word based testing, is a testing methodology whereby tests are driven wholly by data. Keyword-driven testing uses a table format, usually a spreadsheet, to define keywords or action words for each function that will be executed.
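
For illustration, a minimal Python sketch with hypothetical keywords: the test itself is pure data (as it might appear in a spreadsheet), and a small engine dispatches each row to the function that implements the keyword:

    # Hypothetical keyword implementations.
    def open_page(url): print("opening", url)
    def enter_text(field, s): print("typing", repr(s), "into", field)
    def click(button): print("clicking", button)

    keywords = {"open_page": open_page, "enter_text": enter_text, "click": click}

    # The test is a data table of action words and arguments.
    test_table = [
        ("open_page", ["https://example.com/login"]),
        ("enter_text", ["username", "tester"]),
        ("click", ["submit"]),
    ]

    for action, args in test_table:
        keywords[action](*args)  # the engine drives the test from the table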

137
Q

Leadership

A

The ability to lead, including inspiring others in a shared vision of what can be, taking risks, serving as a role model, reinforcing and rewarding the accomplishments of others, and helping others to act.

138
Q

Life Cycle Testing

A

The process of verifying the consistency, completeness, and correctness of software at each stage of the development life cycle.

139
Q

Management

A

A team or individuals who manage(s) resources at any level of the organization.

140
Q

Mapping

A

Provides a picture of the use of instructions during the execution of a program. Specifically, it provides a frequency listing of source code statements showing both the number of times an instruction was executed and which instructions were not executed. Mapping can be used to optimize source code by identifying the frequently used instructions. It can also be used to identify unused code, which may indicate code that has not been tested, code that is infrequently used, or code that is non-entrant.

141
Q

Mean

A

A value derived by adding several quantities and dividing the sum by the number of these quantities.

142
Q

Measures

A

A unit to determine the dimensions, quantity, or capacity (e.g., lines of code are a measure of software size).

143
Q

Mentoring

A

Mentoring is helping or supporting an individual in a non-supervisory capacity. Mentors can be peers, subordinates, or superiors. What is important is that the mentor does not have a managerial relationship to the mentored individual when performing the task of mentoring.

144
Q

Metric

A

A software metric is a mathematical number that shows a relationship between two measures.

145
Q

Metric-Based Test Data Generation

A

The process of generating test sets for structural testing based on use of complexity or coverage metrics.

146
Q

Mission

A

A customer-oriented statement of purpose for a unit or a team.

147
Q

Model Animation

A

Model animation verifies that early models can handle the various types of events found in production data. This is verified by “running” actual production transactions through the models as if they were operational systems.

148
Q

Model Balancing

A

Model balancing relies on the complementary relationships between the various models used in structured analysis (event, process, data) to ensure that modeling rules/standards have been followed; this ensures that these complementary views are consistent and complete.

149
Q

Model-Based Testing

A

Test cases are based on a simple model of the application. Generally, models are used to represent the desired behavior of the application being tested. The behavioral model of the application is derived from the application requirements and specification.

150
Q

Moderator

A

Manages the inspection process, is accountable for the effectiveness of the inspection, and must be impartial.

151
Q

Modified Condition Decision Coverage

A

A compromise which requires fewer test cases than Branch Condition Combination Coverage.

152
Q

Motivation

A

Getting individuals to do work tasks they do not want to do or to perform those work tasks in a more efficient or effective manner.

153
Q

Mutation Analysis

A

A method to determine test set thoroughness by measuring the extent to which a test set can discriminate the program from slight variants (i.e., mutants) of it.
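
For illustration, a minimal Python sketch with a hypothetical function and one mutant of it; the test set is thorough only if some test distinguishes the two:

    def is_adult(age):
        return age >= 18

    def is_adult_mutant(age):
        return age > 18  # mutation: >= changed to >

    tests = [17, 21, 18]

    # The mutant is "killed" if some test makes it disagree.
    killed = any(is_adult(a) != is_adult_mutant(a) for a in tests)
    print(killed)  # True, but only because age 18 is in the test set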

154
Q

Network Analyzers

A

A tool used to assist in detecting and diagnosing network problems.

155
Q

Non-functional Testing

A

Non-functional testing validates that the system quality attributes and characteristics have been considered during the development process. Non-functional testing is the testing of a software application for its non-functional requirements.

156
Q

Objective Measures

A

An objective measure is a measure that can be obtained by counting.

157
Q

Open Source

A

Open Source: “pertaining to or denoting software whose source code is available free of charge to the public to use, copy, modify, sublicense, or distribute.”

158
Q

Optimum Point of Testing

A

The point where the value received from testing no longer exceeds the cost of testing.

159
Q

Oracle

A

A (typically automated) mechanism or principle by which a problem in the software can be recognized. For example, automated test oracles have value in load testing software (by signing on to an application with hundreds or thousands of instances simultaneously), or in checking for intermittent errors in software.

160
Q

Outputs

A

Products, services, or information supplied to meet customer needs.

161
Q

Pair-Wise

A

Pair-wise testing (also known as all-pairs testing) is a combinatorial method used to generate the least number of test cases necessary to test each pair of input parameters to a system.
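
For illustration, a worked Python example with three hypothetical two-valued parameters: exhaustive testing needs 2 x 2 x 2 = 8 cases, while the four cases below cover every pair of parameter values:

    from itertools import combinations

    # Four hand-picked cases for parameters (A, B, C), each 0 or 1.
    cases = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

    # Verify all-pairs coverage: every value pair of every parameter
    # pair appears in at least one test case.
    for i, j in combinations(range(3), 2):
        covered = {(case[i], case[j]) for case in cases}
        assert covered == {(0, 0), (0, 1), (1, 0), (1, 1)}
    print("all pairs covered by", len(cases), "of 8 possible cases")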

162
Q

Parametric Modeling

A

A mathematical model based on known parameters to predict the cost/schedule of a test project. The parameters in the model can vary based on the type of project.

163
Q

Pareto Analysis

A

The Pareto Principle states that only a “vital few” factors are responsible for producing most of the problems. This principle can be applied to risk analysis to the extent that a great majority of problems (80%) are produced by a few causes (20%). If we correct these few key causes, we will have a greater probability of success.

164
Q

Pareto Charts

A

A special type of bar chart to view the causes of a problem in order of severity: largest to smallest based on the 80/20 premise.

165
Q

Pass/Fail Criteria

A

Decision rules used to determine whether a software item or feature passes or fails a test.

166
Q

Passive Risk

A

Passive Risk is that which is inherent in inaction. For example, the choice not to update an existing product to compete with others in the marketplace.

167
Q

Path Expressions

A

A sequence of edges from the program graph that represents a path through the program.

168
Q

Path Testing

A

A test method satisfying the coverage criteria that each logical path through the program be tested. Often, paths through the program are grouped into a finite set of classes and one path from each class is tested.

169
Q

Performance Test

A

Validates that both the online response time and batch run times meet the defined performance requirements.

170
Q

Performance/Timing Analyzer

A

A tool to measure system performance.

171
Q

Phase (or Stage) Containment

A

A method of control put in place within each stage of the development process to promote error identification and resolution so that defects are not propagated downstream to subsequent stages of the development process.

172
Q

Plan

A

Define your objective and determine the conditions and methods required to achieve your objective. Clearly describe the goals and policies needed to achieve the objective at this stage. (Plan-Do-Check-Act)

173
Q

Plan-Do-Check-Act Model

A

One of the best known process improvement models is the Plan-Do-Check-Act model for continuous process improvement.

174
Q

Planning Poker

A

In Agile Development, Planning Poker is a consensus-based technique designed to remove the cognitive bias of anchoring.

175
Q

Policy

A

Managerial desires and intents concerning either process (intended objectives) or products (desired attributes).

176
Q

Population Analysis

A

Analyzes production data to identify, independent from the specifications, the types and frequency of data that the system will have to process/produce. This verifies that the specs can handle types and frequency of actual data and can be used to create validation tests.

177
Q

Post Conditions

A

A list of conditions, if any, which will be true after the Use Case has finished successfully.

178
Q

Pre-Conditions

A

A list of conditions, if any, which must be met before the Use Case can be properly executed.

179
Q

Prevention Costs

A

Resources required to prevent defects and to do the job right the first time. These normally require up-front expenditures for benefits that will be derived later. This category includes money spent on establishing methods and procedures, training workers, acquiring tools, and planning for quality. Prevention resources are spent before the product is actually built.

180
Q

Preventive Controls

A

Preventive controls will stop incorrect processing from occurring.

181
Q

Problem-Solving

A

Cooperative mode – Attempts to satisfy the interests of both parties. In terms of process, this is generally accomplished through identification of “interests” and freeing the process from initial positions. Once interests are identified, the process moves into a phase of generating creative alternatives designed to satisfy identified interests (criteria).

182
Q

Procedure

A

Describe how work must be done and how methods, tools, techniques, and people are applied to perform a process. There are Do procedures and Check procedures. Procedures indicate the “best way” to meet standards.

183
Q

Process

A

The process or set of processes used by an organization or project to plan, manage, execute, monitor, control, and improve its software related activities. A set of activities and tasks. A statement of purpose and an essential set of practices (activities) that address that purpose.

184
Q

Process Improvement

A

To change a process to make the process produce a given product faster, more economically, or of higher quality.

185
Q

Process Risk

A

Process risk is the risk associated with activities such as planning, resourcing, tracking, quality assurance, and configuration management.

186
Q

Producer/Author

A

Gathers and distributes materials, provides product overview, is available for clarification, should contribute as an inspector, and must not be defensive.

187
Q

Producer’s View of Quality

A

Meeting requirements.

188
Q

Product

A

The output of a process: the work product. There are three useful classes of products: Manufactured Products (standard and custom), Administrative/Information Products (invoices, letters, etc.), and Service Products (physical, intellectual, physiological, and psychological). A statement of requirements defines products; one or more people working in a process produce them.

189
Q

Production Costs

A

The cost of producing a product. Production costs, as currently reported, consist of (at least) two parts: actual production or right-the-first-time (RFT) costs plus the Cost of Quality (COQ). RFT costs include labor, materials, and equipment needed to provide the product correctly the first time.

190
Q

Productivity

A

The ratio of the output of a process to the input, usually measured in the same units. It is frequently useful to compare the value added to a product by a process, to the value of the input resources required (using fair market values for both input and output).

191
Q

Proof of Correctness

A

The use of mathematical logic techniques to show that a relationship between program variables assumed true at program entry implies that another relationship between program variables holds at program exit.

192
Q

Quality

A

A product is a quality product if it is defect free. To the producer, a product is a quality product if it meets or conforms to the statement of requirements that defines the product. This statement is usually shortened to: quality means meets requirements. From a customer’s perspective, quality means “fit for use.”

193
Q

Quality Assurance (QA)

A

The set of support activities (including facilitation, training, measurement, and analysis) needed to provide adequate confidence that processes are established and continuously improved to produce products that meet specifications and are fit for use.

194
Q

Quality Control (QC)

A

The process by which product quality is compared with applicable standards, and the action taken when nonconformance is detected. Its focus is defect detection and removal. This is a line function; that is, the performance of these tasks is the responsibility of the people working within the process.

195
Q

Quality Improvement

A

To change a production process so that the rate at which defective products (defects) are produced is reduced.

196
Q

RAD Model

A

A variant of prototyping and another form of iterative development. The RAD model is designed to build and deliver application prototypes to the client while in the iterative process.

197
Q

Reader (Inspections)

A

Must understand the material, paraphrases the material during the inspection, and sets the inspection pace.

198
Q

Recorder (Inspections)

A

Must understand error classification, is not the meeting stenographer (captures enough detail for the project team to go forward to resolve errors), classifies errors as detected, and reviews the error list at the end of the meeting.

199
Q

Recovery Test

A

Evaluates the contingency features built into the application for handling interruptions and for returning to specific points in the application processing cycle, including checkpoints, backups, restores, and restarts. This test also assures that disaster recovery is possible.

200
Q

Regression Analysis

A

A means of showing the relationship between two variables. Regression analysis will provide two pieces of information. The first is a graphic showing the relationship between two variables. Second, it will show the correlation, or how closely related the two variables are.

201
Q

Regression Testing

A

Testing of a previously verified program or application following program modification for extension or correction to ensure no new defects have been introduced.

202
Q

Reliability

A

This refers to the consistency of measurement: if two different individuals take the same measurement and get the same result, the measure is reliable.

203
Q

Requirement

A

A formal statement of:

  1. An attribute to be possessed by the product or a function to be performed by the product
  2. The performance standard for the attribute or function; and/or
  3. The measuring process to be used in verifying that the standard has been met.
204
Q

Requirements- Based Testing

A

RBT focuses on the quality of the Requirements Specification and requires testing throughout the development life cycle. Specifically, RBT performs static tests with the purpose of verifying that the requirements meet acceptable standards, defined as: complete, correct, precise, unambiguous and clear, consistent, relevant, testable, and traceable.

205
Q

Residual Risk

A

Residual Risk is the risk that remains after management responds to the identified risks.

206
Q

Reuse Model

A

The premise behind the Reuse Model is that systems should be built using existing components, as opposed to custom-building new components. The Reuse Model is clearly suited to Object-Oriented computing environments, which have become one of the premiere technologies in today’s system development industry.

207
Q

Risk

A

Risk can be measured by performing risk analysis.

208
Q

Risk Acceptance

A

Risk Acceptance is the amount of risk exposure that is acceptable to the project and the company and can be either active or passive.

209
Q

Risk Analysis

A

Risk Analysis is an analysis of an organization’s information resources, its existing controls, and its organization and computer system or application system vulnerabilities. It combines the loss potential for each resource or combination of resources with an estimated rate of occurrence to establish a potential level of damage in dollars or other assets.

210
Q

Risk Appetite

A

Risk Appetite defines the amount of loss management is willing to accept for a given risk.

211
Q

Risk Assessment

A

Risk Assessment is an examination of a project to identify areas of potential risk. The assessment can be broken down into analysis, identification, and prioritization.

212
Q

Risk Avoidance

A

Risk avoidance is a strategy for risk resolution to eliminate the risk altogether. Avoidance is a strategy to use when a lose-lose situation is likely.

213
Q

Risk Event

A

Risk Event is a future occurrence that may affect the project for better or worse. The positive aspect is that these events will help you identify opportunities for improvement while the negative aspect will be the realization of threats and losses.

214
Q

Risk Exposure

A

Risk Exposure is the measure that determines the probability, or likelihood, of the event times the loss that could occur.
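
For illustration, a worked example with hypothetical figures:

    likelihood = 0.20  # hypothetical probability the event occurs
    loss = 50_000      # hypothetical loss if the event occurs

    risk_exposure = likelihood * loss
    print(risk_exposure)  # 10000.0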

215
Q

Risk Identification

A

Risk Identification is a method used to find risks before they become problems. The risk identification process transforms issues and concerns about a project into tangible risks, which can be described and measured.

216
Q

Risk Leverage

A

Risk leverage is a measure of the relative cost-benefit of performing various candidate risk resolution activities.

217
Q

Risk Management

A

Risk Management is the process required to identify, quantify, respond to, and control project, process, and product risk.

218
Q

Risk Mitigation

A

Risk Mitigation is the action taken to reduce threats and/or vulnerabilities.

219
Q

Risk Protection

A

Risk protection is a strategy to employ redundancy to mitigate (reduce the probability and/or consequence of) a risk.

220
Q

Risk Reduction

A

Risk reduction is a strategy to decrease risk through mitigation, prevention, or anticipation. Decreasing either the probability of the risk occurrence or the consequence when the risk is realized reduces risk. Reduction is a strategy to use when risk leverage exists.

221
Q

Risk Reserves

A

A risk reserve is a strategy to use contingency funds and built-in schedule slack when uncertainty exists in cost or time.

222
Q

Risk Transfer

A

Risk transfer is a strategy to shift the risk to another person, group, or organization and is used when another group has control.

223
Q

Risk-Based Testing

A

Risk-based testing prioritizes the features and functions to be tested based on the likelihood of failure and the impact of a failure should it occur.

224
Q

Run Chart

A

A run chart is a graph of data (observation) in chronological order displaying shifts or trends in the central tendency (average). The data represents measures, counts or percentages of outputs from a process (products or services).

225
Q

Sad Path

A

A path through the application which does not arrive at the desired result.

226
Q

Scatter Plot Diagram

A

A graph designed to show whether there is a relationship between two changing variables.

227
Q

Scenario Testing

A

Testing based on a real-world scenario of how the system is supposed to act.

228
Q

Scope of Testing

A

The scope of testing is the extensiveness of the test process. A narrow scope may be limited to determining whether or not the software specifications were correctly implemented. The scope broadens as more responsibilities are assigned to software testers.

229
Q

Selective Regression Testing

A

The process of testing only those sections of a program where the tester’s analysis indicates programming changes have taken place, along with the related components.

230
Q

Self-Validating Code

A

Code that makes an explicit attempt to determine its own correctness and to proceed accordingly.

231
Q

SLOC

A

Source Lines of Code

232
Q

Smoothing

A

An unassertive approach in which both parties neglect the concerns involved by sidestepping the issue, postponing the conflict, or choosing not to deal with it.

233
Q

Soft Skills

A

Soft skills are defined as the personal attributes which enable an individual to interact effectively and harmoniously with other people.

234
Q

Software Feature

A

A distinguishing characteristic of a software item (e.g., performance, portability, or functionality).

235
Q

Software Item

A

Source code, object code, job control code, control data, or a collection of these.

236
Q

Software Quality Criteria

A

An attribute of a quality factor that is related to software development.

237
Q

Software Quality Factors

A

Software quality factors are attributes of the software that, if they are wanted and not present, pose a risk to the success of the software and thus constitute a business risk.

238
Q

Software Quality Gaps

A

The first gap is the producer gap. It is the gap between what was specified to be delivered, meaning the documented requirements and internal IT standards, and what was actually delivered. The second gap is between what the producer actually delivered and what the customer expected.

239
Q

Special Causes of Variation

A

Variation not typically present in the process; it occurs because of special or unique circumstances.

240
Q

Special Test Data

A

Test data based on input values that are likely to require special handling by the program.

241
Q

Spiral Model

A

A model designed to include the best features of the Waterfall and Prototyping models, and to introduce a new component: risk assessment.

242
Q

Standardize

A

Procedures that are implemented to ensure that the output of a process is maintained at a desired level.

243
Q

Standardizer

A

Must know IT standards and procedures; ensures standards are met and procedures are followed; meets with the project leader/manager; and ensures entrance criteria are met (the product is ready for review).

244
Q

Standards

A

The measure used to evaluate products and identify nonconformance. The basis upon which adherence to policies is measured.

245
Q

Statement of Requirements

A

The exhaustive list of requirements that define a product. Note that the statement of requirements should document requirements proposed and rejected (including the reason for the rejection) during the requirement determination process.

246
Q

Statement Testing

A

A test method that executes each statement in a program at least once during program testing.
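
A minimal sketch of the method; the unit under test and its inputs are hypothetical:

    # Statement testing: choose inputs so that every statement in the
    # unit executes at least once.
    def classify(n):
        if n < 0:
            label = "negative"       # reached only by a negative input
        else:
            label = "non-negative"   # reached only by a non-negative input
        return label

    # These two cases together execute every statement:
    assert classify(-1) == "negative"
    assert classify(3) == "non-negative"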

247
Q

Static Analysis

A

Analysis of a program that is performed without executing the program. It may be applied to the requirements, design, or code.

248
Q

Statistical Process Control

A

The use of statistical techniques and tools to measure an ongoing process for change or stability.

249
Q

Story Points

A

A measurement of a feature’s size relative to other features. Story Points are an analogous method: the objective is to compare the size of a feature to that of other stories, including reference stories.

250
Q

Stress Testing

A

This test subjects a system, or components of a system, to varying environmental conditions that defy normal expectations; for example, high transaction volume, large database size, or restart/recovery circumstances. The intention of stress testing is to identify constraints and to ensure that there are no performance problems.

251
Q

Structural Analysis

A

Structural analysis is a technique used by developers to define unit test cases. Structural analysis usually involves path and condition coverage.

252
Q

Structural System Testing

A

Structural System Testing is designed to verify that the developed system and programs work. The objective is to ensure that the product designed is structurally sound and will function correctly.

253
Q

Structural Testing

A

A testing method in which the test data is derived solely from the program structure.

254
Q

Stub

A

Special code segments that when invoked by a code segment under testing, simulate the behavior of designed and specified modules not yet constructed.
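
A minimal sketch of a stub; the module names and canned behavior are hypothetical:

    # The real tax module is not built yet, so a stub returns a fixed,
    # predictable response, letting the calling code be tested now.
    def tax_service_stub(order_total):
        return 8.0  # canned stand-in for the unbuilt module

    def invoice_total(order_total, tax_service):
        return order_total + tax_service(order_total)

    # The code segment under test runs against the stub:
    assert invoice_total(100.0, tax_service_stub) == 108.0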

255
Q

Subjective Measures

A

A person’s perception of a product or activity.

256
Q

Supplier

A

An individual or organization that supplies inputs needed to generate a product, service, or information to a customer.

257
Q

System Boundary Diagram

A

A system boundary diagram depicts the interfaces between the software under test and the individuals, systems, and other external agents that interact with it. These interfaces or external agents are referred to as “actors.” The purpose of the system boundary diagram is to establish the scope of the system and to identify the actors (i.e., the interfaces) that need to be developed. (Use Cases)

258
Q

System Test

A

The entire system is tested to verify that all functional, information, structural and quality requirements have been met. A predetermined combination of tests is designed that, when executed successfully, satisfy management that the system meets specifications. System testing verifies the functional quality of the system in addition to all external interfaces, manual procedures, restart and recovery, and human-computer interfaces. It also verifies that interfaces between the application and the open environment work correctly, that JCL functions correctly, and that the application functions appropriately with the Database Management System, Operations environment, and any communications systems.

259
Q

Test

A
1. A set of one or more test cases.

2. A set of one or more test cases and procedures.

260
Q

Test Case Generator

A

A software tool that creates test cases from requirements specifications. Cases generated this way ensure that 100% of the functionality specified is tested.

261
Q

Test Case Specification

A

An individual test condition, executed as part of a larger test that contributes to the test’s objectives. Test cases document the input, expected results, and execution conditions of a given test item. Test cases are broken down into one or more detailed test scripts and test data conditions for execution.

262
Q

Test Cycle

A

Test cases are grouped into manageable (and schedulable) units called test cycles. Grouping is based on the relation of the objectives to one another, on timing requirements, and on the best way to expedite defect detection during the testing event. Often test cycles are linked with the execution of a batch process.

263
Q

Test Data

A

Most applications require three sets of test data: one set to confirm the expected results (data along the happy path), a second set to verify that the software behaves correctly for invalid input data (alternate or sad paths), and a third set intended to force incorrect processing (e.g., to crash the application).
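
A minimal sketch of the three kinds of data, for a hypothetical age-validation routine:

    def valid_age(value):
        return isinstance(value, int) and 0 <= value <= 130

    happy_path_data = [25, 40, 65]        # valid input, expected results
    sad_path_data = [-5, 200, "abc"]      # invalid input, handled gracefully
    forcing_data = [None, float("inf")]   # attempts to break processing

    for age in happy_path_data:
        assert valid_age(age)
    for age in sad_path_data + forcing_data:
        assert not valid_age(age)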

264
Q

Test Data Management

A

A defined strategy for the development, use, maintenance, and ultimately destruction of test data.

265
Q

Test Data Set

A

Set of input elements used in the testing process.

266
Q

Test Design Specification

A

A document that specifies the details of the test approach for a software feature or a combination of features and identifies the associated tests.

267
Q

Test Driver

A

A program that directs the execution of another program against a collection of test data sets. Usually, the test driver also records and organizes the output generated as the tests are run.
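
A minimal sketch of a driver; the unit under test and its data sets are hypothetical:

    # The driver executes the unit against each data set, then records
    # and reports the outcomes.
    def unit_under_test(x):
        return x * 2

    test_data_sets = [(1, 2), (0, 0), (-3, -6)]  # (input, expected output)

    results = [(given, unit_under_test(given), expected)
               for given, expected in test_data_sets]
    for given, actual, expected in results:
        status = "PASS" if actual == expected else "FAIL"
        print(f"input={given} output={actual} {status}")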

268
Q

Test Environment

A

The Test Environment can be defined as a collection of hardware and software components configured in such a way as to closely mirror the production environment. The Test Environment must replicate or simulate the actual production environment as closely as possible.

269
Q

Test Harness

A

A collection of test drivers and test stubs.

270
Q

Test Incident Report

A

A document describing any event during the testing process that requires investigation.

271
Q

Test Item

A

A software item that is an object of testing.

272
Q

Test Item Transmittal Report

A

A document that identifies test items and includes status and location information.

273
Q

Test Labs

A

Test labs are another manifestation of the test environment, one more typically viewed as a brick-and-mortar environment (a designated, separate, physical location).

274
Q

Test Log

A

A chronological record of relevant details about the execution of tests.

275
Q

Test Plan

A

A document describing the intended scope, approach, resources, and schedule of testing activities. It identifies test items, the features to be tested, the testing tasks, the personnel performing each task, and any risks requiring contingency planning.

276
Q

Test Point Analysis (TPA)

A

Calculates test effort based on size (derived from FPA), strategy (as defined by system components and quality characteristics to be tested and the coverage of testing), and productivity (the amount of time needed to perform a given volume of testing work).
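
A heavily simplified sketch of the idea; real TPA defines each factor in far more detail, and every figure here is hypothetical:

    # Effort grows with size, strategy (depth and coverage of testing),
    # and productivity (hours needed per test point).
    size_in_test_points = 400     # derived from function point analysis
    strategy_factor = 1.25        # broader coverage -> more effort
    hours_per_test_point = 1.4    # team productivity

    estimated_hours = size_in_test_points * strategy_factor * hours_per_test_point
    print(f"{estimated_hours:.0f} hours")  # 700 hours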

277
Q

Test Procedure Specification

A

A document specifying a sequence of actions for the execution of a test.

278
Q

Test Scripts

A

A specific order of actions that should be performed during a test session. The script also contains expected results. Test scripts may be manually prepared using paper forms, or may be automated using capture/playback tools or other kinds of automated scripting tools.

279
Q

Test Stubs

A

Simulates a called routine so that the calling routine’s functions can be tested. A test harness (or driver) simulates a calling component or external environment, providing input to the called routine, initiating the routine, and evaluating or displaying output returned.

280
Q

Test Suite Manager

A

A tool that allows testers to organize test scripts by function or other grouping.

281
Q

Test Summary Report

A

A document that describes testing activities and results and evaluates the corresponding test items.

282
Q

Testing

A
1. The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

2. The process of analyzing a software item to detect the differences between existing and required conditions (i.e., bugs) and to evaluate the features of the software items. See: dynamic analysis, static analysis, software engineering.

283
Q

Testing Process Assessment

A

Thoughtful analysis of testing process results, and then taking corrective action on the identified weaknesses.

284
Q

Testing Schools of Thought

A

A school of thought is simply defined as “a belief (or system of beliefs) shared by a group.” Generally accepted testing schools of thought are: the Analytical School, the Factory School, the Quality (Control) School, the Context-Driven School, and the Agile School.

285
Q

Therapeutic Listening

A

The listener is sympathetic to the speaker’s point of view. During this type of listening, the listener will show a lot of empathy for the speaker’s situation.

286
Q

Thread Testing

A

This test technique, which is often used during early integration testing, demonstrates key functional capabilities by testing a string of units that accomplish a specific function in the application.

287
Q

Threat

A

Threat is something capable of exploiting a vulnerability in the security of a computer system or application. Threats include both hazards (any source of potential damage or harm) and events that can trigger vulnerabilities.

288
Q

Threshold Values

A

Threshold values define the inception of risk occurrence. Predefined thresholds act as a warning level to indicate the need to execute the risk action plan.

289
Q

Timeliness

A

This refers to whether the data was reported in sufficient time to impact the decisions needed to manage effectively.

290
Q

TMMi

A

A process improvement model for software testing. The Test Maturity Model integration (TMMi) is a detailed model for test process improvement and is positioned as being complementary to the CMMI.

291
Q

Tools

A

Any resources that are not consumed in converting the input into the deliverable.

292
Q

Top-Down

A

Begin testing from the top of the module hierarchy and work down to the bottom using interim stubs to simulate lower interfacing modules or programs. Modules are added in descending hierarchical order.

293
Q

Top-Down Estimation

A

Generates an overall estimate based on initial knowledge. It is used at the initial stages of the project and is based on similar projects. Past data plays an important role in this form of estimation.

294
Q

Tracing

A

A process that follows the flow of computer logic at execution time. Tracing demonstrates the sequence of instructions or the path followed in accomplishing a given task. The two main types of trace are tracing instructions in a computer program as they are executed and tracing the path through a database to locate predetermined pieces of information.
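
A minimal sketch of instruction tracing using Python’s built-in sys.settrace hook (the traced function itself is hypothetical):

    import sys

    def traced(n):
        if n > 0:
            n = n - 1
        return n

    def show_line(frame, event, arg):
        # Report each line as it executes, demonstrating the path taken.
        if event == "line":
            print(f"executing line {frame.f_lineno} of {frame.f_code.co_name}")
        return show_line

    sys.settrace(show_line)
    traced(2)
    sys.settrace(None)  # turn tracing back off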

295
Q

Triangulation

A

Story Triangulation is a form of estimation by analogy. After the first few estimates have been made, they are verified by relating them to each other. (Agile methods)

296
Q

Triggers

A

A device used to activate, deactivate, or suspend a risk action plan. Triggers can be set by the project tracking system.

297
Q

Use Case Points (UCP)

A

A derivative of the Use Case method is the estimation technique known as Use Case Points. Use Case Points are similar to Function Points and are used to estimate the size of a project.
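
A minimal sketch of the calculation in its commonly published form (after Karner); all counts and factors below are hypothetical:

    # Unadjusted points = actor weights + use case weights, then
    # adjusted by technical and environmental complexity factors.
    uaw = 6      # unadjusted actor weight
    uucw = 60    # unadjusted use case weight
    tcf = 1.05   # technical complexity factor
    ecf = 0.95   # environmental complexity factor

    ucp = (uaw + uucw) * tcf * ecf
    estimated_hours = ucp * 20   # e.g., 20 staff hours per UCP
    print(f"UCP = {ucp:.1f}, effort ~ {estimated_hours:.0f} hours")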

298
Q

Unit Test

A

Testing individual programs, modules, or components to demonstrate that the work package executes per specification, and validate the design and technical quality of the application. The focus is on ensuring that the detailed logic within the component is accurate and reliable according to pre-determined specifications. Testing stubs or drivers may be used to simulate behavior of interfacing modules.

299
Q

Usability Test

A

The purpose of this event is to review the application user interface and other human factors of the application with the people who will be using the application. This is to ensure that the design (layout, sequence, etc.) enables the business functions to be executed as easily and intuitively as possible. This review includes assuring that the user interface adheres to documented User Interface standards, and it should be conducted early in the design stage of development. Ideally, an application prototype is used to walk the client group through various business scenarios, although paper copies of screens, windows, menus, and reports can be used.

300
Q

Use Case

A

A Use Case is a technique for capturing the functional requirements of systems through the interaction between an Actor and the System.

301
Q

User

A

The customer that actually uses the product received.

302
Q

User Acceptance Testing

A

User Acceptance Testing (UAT) is conducted to ensure that the system meets the needs of the organization and the end user/customer. It validates that the system will work as intended by the user in the real world, and is based on real-world business scenarios, not system requirements. Essentially, this test validates that the right system was built.

303
Q

User Story

A

A short description of something that a customer will do when they use an application (software system). The User Story is focused on the value or result a customer would receive from doing whatever it is the application does.

304
Q

Validation

A

Validation physically ensures that the system operates according to the desired specifications by executing the system functions through a series of tests that can be observed and evaluated.

305
Q

Validity

A

This indicates the degree to which a measure actually measures what it was intended to measure.

306
Q

Values (Sociology)

A

The ideals, customs, institutions, etc., of a society toward which the people have an affective regard. These values may be positive, such as cleanliness, freedom, or education, or negative, such as cruelty, crime, or blasphemy. Any object or quality desired as a means or as an end in itself.

307
Q

Verification

A

1. The process of determining whether the products of a given phase of the software development cycle fulfill the requirements established during the previous phase.

2. The act of reviewing, inspecting, testing, checking, auditing, or otherwise establishing and documenting whether items, processes, services, or documents conform to specified requirements.

308
Q

Virtualization

A

The concept of virtualization (within the IT space) usually refers to running multiple operating systems on a single machine.

309
Q

Vision

A

A vision is a statement that describes the desired future state of a unit.

310
Q

V-Model

A

The V-Model is considered an extension of the Waterfall Model. The purpose of the “V” shape is to demonstrate the relationships between each phase of specification development and its associated dynamic testing phase.

311
Q

Vulnerability

A

Vulnerability is a design, implementation, or operations flaw that may be exploited by a threat. The flaw causes the computer system or application to operate in a fashion different from its published specifications and results in destruction or misuse of equipment or data.

312
Q

Walkthroughs

A

An informal (static) review process in which the author “walks through” the deliverable with the review team, looking for defects.

313
Q

Waterfall

A

A development model in which progress is seen as flowing steadily downwards through the phases of conception, initiation, requirements, design, construction, dynamic testing, production/ implementation, and maintenance.

314
Q

WBS

A

A Work Breakdown Structure (WBS) groups project components into deliverable and accountable pieces.

315
Q

White-Box Testing

A

A testing technique that assumes that the path of the logic in a program unit or component is known. White-box testing usually consists of testing paths, branch by branch, to produce predictable results. This technique is usually used during tests executed by the development team, such as Unit or Component testing.

316
Q

Wideband Delphi

A

A method for the controlled exchange of information within a group. It provides a formal, structured procedure for the exchange of opinion, which means that it can be used for estimating.

317
Q

Withdrawal

A

Conflict is resolved when one party attempts to satisfy the concerns of others by neglecting its own interests or goals. This is a lose-win approach.

318
Q

Workbench

A

The objective of the workbench is to produce the defined output products (deliverables) in a defect-free manner. The procedures and standards established for each workbench are designed to assist in this objective.