Definitions from Jon Hagar’s
“Software Test Attacks to Break Mobile and Embedded Devices” book,
ISO 29119 (part 1) and beyond

(linked from Jon D. Hagar)

The following are definitions that I believe are important when reading anything about testing, since the industry does not yet have universally accepted definitions.
But there are efforts to establish common terminology that readers should be aware of, such as:

1)      ISO 29119 defines many testing terms that will slowly be accepted in many areas of testing (select terms repeated and referenced here)

2)      SEVocab is an online system that has the backing of groups such as ISO, IEC, IEEE, OMG, and other standards bodies. Terms in SEVocab are, or will become, universal.

From “Software Test Attacks to Break Mobile and Embedded Devices” (Hagar, Jon Duncan; CRC Press, Taylor & Francis Group, 2013)


Term: Definition and/or Reference


A/D

Analog to digital
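As an illustration (a sketch only, not from the book), an idealized analog-to-digital conversion maps a voltage onto one of 2^n discrete codes; the function name and reference voltage below are invented for the example:

```python
def adc_sample(voltage, v_ref=5.0, bits=8):
    """Quantize an analog voltage into an n-bit digital code (idealized ADC)."""
    levels = 2 ** bits                       # 8 bits -> 256 discrete codes
    clamped = max(0.0, min(voltage, v_ref))  # stay inside the converter's range
    return int(clamped / v_ref * (levels - 1) + 0.5)  # round to nearest code

print(adc_sample(2.5))   # mid-scale reading on an 8-bit, 5 V converter -> 128
```

Testing near the clamping limits and at mid-scale is a common attack point, since real converters also exhibit offset and nonlinearity that this ideal model ignores.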


App

A software module or software application that makes a device function or complements another app (application). This is a common term for applications for mobile and embedded devices.


The configuration of a product as it will be delivered and used.  This is as opposed to special product configurations for testing, prototypes, etc.


Biometrics

Confirmation or check of the identity of humans by their characteristics or traits.


BIT

Built-in test: building functionality into hardware and/or software to facilitate testing, e.g., test circuits, test logic code, data input ports, and data output values, all of which have the primary goal of making testing easier.
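To make the idea concrete, here is a minimal software sketch (the class, register address, and device ID are invented for illustration) of a built-in self-check that either the system at power-on or a tester can invoke:

```python
class SensorDriver:
    """Toy driver with a built-in test (BIT) hook; a sketch only."""
    EXPECTED_ID = 0x42  # hypothetical device ID the self-check verifies

    def __init__(self, read_register):
        self.read_register = read_register  # injected hardware access, which aids testability

    def built_in_test(self):
        """Power-on self-check: verify the device answers with its known ID."""
        faults = []
        if self.read_register(0x00) != self.EXPECTED_ID:
            faults.append("bad device ID")
        return faults  # an empty list means the check passed

# A test double stands in for the hardware, which is exactly what BIT enables.
healthy = SensorDriver(lambda reg: 0x42)
broken = SensorDriver(lambda reg: 0x00)
print(healthy.built_in_test())  # []
print(broken.built_in_test())   # ['bad device ID']
```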


Brainstorming

A group effort where "out of the box" thinking is encouraged.


Bug

See error.


The basic activities of confirming that software meets its requirements (see verification).


Coder

Also known as a programmer or developer. Someone who writes code in any software or computing language.

Concept of operations

How a system is to be used, usually over a series of activities.


COTS

Commercial off the shelf (can be hardware or software); throughout this book I have used “off-the-shelf.”

Critical thinking

A type of reasonable, deep, and reflective human mental activity that is aimed at deciding what to do (here, during testing).


D/A

Digital to analog


Debugging

The process of diagnosing the precise cause of a known error and then correcting it. A developer activity that is performed before and after testing.

Developer test

Testing done at a structural or “white box” level, i.e., at the statement or code level; also known as unit testing.


Error

A human action that produces an incorrect result, which could be in software, process, documentation, system, and so on.


EMI

Electromagnetic interference

Exploratory testing

Software testing that simultaneously involves learning, test design, and test execution.


Failure

Termination of the ability of a product to perform a required function, or its inability to perform within previously specified limits.


Fault

When an error in software manifests itself.

Field testing

Full-system test done at an operational site or in the real world.


FMEA/FMECA

Failure Modes and Effects Analysis and/or Failure Modes, Effects, and Criticality Analysis.


FPGA

Field Programmable Gate Array

Functional test

Testing done to show that the features (requirements, customer needs, etc.) of the software are present.

Hard deadline

A deadline which must be met exactly for software functions to be provided to a customer or user.


Heuristics

Concepts that can solve a problem but cannot guarantee a solution in every case.


HMI

Human machine interface


ICS

Industrial control system


IEEE

Institute of Electrical and Electronics Engineers


Implementation

How software is coded, using models, languages, constructs, and other means.

Implementation testing

Also known as developer testing.


Interrupt

A hardware-based signal sent to the software for action.


Invalid data

Data that is not expected as input to the system but may be received anyway.


IEC

International Electrotechnical Commission


ISO

International Organization for Standardization


IT

Information technology


IV&V

Independent Verification and Validation


Jailbreaking

The process of removing restrictions imposed by vendor(s) on devices running various operating systems through the use of hardware/software exploits, in order to gain root access and circumvent vendor “safeguard” features, e.g., limits on what you can load or do.

Load (test)

Testing that puts the software under conditions where you can determine how much processing the computer performs, for example, usage of CPU, memory, time, network bandwidth, or other resources.
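A minimal sketch of the measurement side of this in Python (the workload function and its size are invented for illustration): run a representative workload and record how much time it consumes, which a load test would then compare against a budget.

```python
import time

def workload(n):
    """Stand-in for the processing the system performs under load."""
    return sum(i * i for i in range(n))

start = time.perf_counter()
result = workload(100_000)
elapsed = time.perf_counter() - start
# A load test would compare measurements like this against a stated budget,
# e.g., "must complete within 1 second on the target hardware."
print(f"workload finished in {elapsed:.4f} s")
```

Real load testing also varies the number of concurrent users or inputs and watches memory and bandwidth, not just elapsed time.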


Malware

Short for malicious software: code constructed to do “harm,” e.g., a virus.

Mind Map

A method, usually a diagram, that captures a human's understanding.


Model

A representation of a real-world process, device, software, or concept, which can be logical, physical, and/or mental.

Mutation testing analysis

A test technique in which variations of data or code are created and then used in the test activities.
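A minimal sketch of the code-mutation form of the idea (a hand-made mutant rather than a mutation tool; the function and test data are invented for illustration): seed one small change into the code and check whether the existing tests detect, or "kill," it.

```python
def clamp(x, hi):
    """Code under test: limit x to at most hi."""
    return hi if x > hi else x

def clamp_mutant(x, hi):
    """The same code with one seeded defect: '>' flipped to '<'."""
    return hi if x < hi else x

# (input, limit, expected) -- the existing test data for clamp.
tests = [(3, 5, 3), (7, 5, 5)]
assert all(clamp(x, hi) == expected for x, hi, expected in tests)

# The mutant is "killed" if at least one test notices the seeded change.
killed = any(clamp_mutant(x, hi) != expected for x, hi, expected in tests)
print("mutant killed:", killed)
```

A mutant that survives every test suggests a gap in the test data, which is the information mutation analysis is after.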


Noise

In the physics world and in analog electronics, noise is mostly an unwanted random addition to a signal, picked up by sensors or electronics, which can impact software processing.


Nominal

Typical usage.


OCR

Optical character recognition, using a system and special software.


Off nominal

Non-typical usage.


Oracle

Any approach to defining or judging the results generated by a test. Oracles can include tester judgment, mental models, secondary software programs, formal models, and others.
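One of the listed oracle types, a secondary software program, can be sketched like this (an assumed example, not from the book): a slow but obviously correct implementation judges the output of the implementation under test.

```python
def fast_reverse(s):
    """Implementation under test."""
    return s[::-1]

def reference_reverse(s):
    """Slow but obviously correct secondary program used as the oracle."""
    out = ""
    for ch in s:
        out = ch + out
    return out

# The oracle judges the implementation across a set of inputs.
for case in ["", "a", "attack", "embedded"]:
    assert fast_reverse(case) == reference_reverse(case)
print("all oracle checks passed")
```

The same pattern applies when the "reference" is a prior version of the system, a simulator, or a formal model.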

Performance test

Testing focused on requirements and issues related to system execution in areas of speed, load, response, etc. There are numerous techniques and tools that support performance testing.

Pesticide paradox

A concept in software testing where if the exact same test is used over and over, the likelihood of it finding errors decreases with each use.


PLC

Programmable logic controller: a digital computer used for automation of industrial processes, such as machinery control in factories.


Power off

Turn off a system.


Power on

Turn on a system.

Priority inversion

Priority inversion is a scheduling problem that occurs when a low-priority task grabs a resource that a higher-priority task needs, so the high-priority task is forced to wait for it. A medium-priority task then runs, preempting the low-priority task and keeping it from finishing with the resource and releasing it, which prevents the high-priority task from ever running. This can create deadlock and system failures. The problem is often associated with interrupt-driven software systems.
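The scenario can be shown with a small deterministic scheduler simulation (a sketch, not real RTOS code; task names and work units are invented): HIGH is blocked on a lock held by LOW, and MED keeps preempting LOW.

```python
def run_schedule():
    """Tiny fixed-priority scheduler: one work unit per tick, highest runnable task wins."""
    lock_holder = "LOW"                          # LOW grabbed the shared resource first
    remaining = {"HIGH": 1, "MED": 3, "LOW": 2}  # work units left per task
    order = []
    while any(remaining.values()):
        for task in ("HIGH", "MED", "LOW"):      # priority order, highest first
            blocked = task == "HIGH" and lock_holder == "LOW"  # HIGH waits on the lock
            if remaining[task] and not blocked:
                order.append(task)
                remaining[task] -= 1
                if task == "LOW" and remaining["LOW"] == 0:
                    lock_holder = None           # LOW finally releases the resource
                break
    return order

print(run_schedule())   # ['MED', 'MED', 'MED', 'LOW', 'LOW', 'HIGH']
```

The highest-priority task finishes last, behind all of MED's work: the inversion. Priority inheritance, where LOW temporarily runs at HIGH's priority until it releases the lock, is the usual fix.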


Programmer

Also known as a developer or coder. Someone who writes code in any software or computing language.

Race (conditions)

See priority inversion.
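The classic lost-update race can be shown deterministically by writing out one bad interleaving of two read-modify-write sequences by hand (a sketch; real races depend on scheduler timing and are far harder to reproduce):

```python
counter = 0

a = counter        # task A reads 0
b = counter        # task B also reads 0, before A writes back
counter = a + 1    # task A writes 1
counter = b + 1    # task B overwrites with 1: A's increment is lost

print(counter)     # 1, not the expected 2
```

Because the bad interleaving only occurs on some timings, races are a common source of unverified failures in fielded systems.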

Regression Testing

Regression testing involves retesting portions of software items after modification of associated software products. Modifications that may influence previous testing can include changes to: code, patches, data, requirements, interfaces, operational uses, hardware, etc.


Reliability (software)

The probability that software will not cause the failure of a system for a specified time under specified conditions. This probability is a function of the inputs to and use of the system, as well as of the existence of faults in the software. The inputs to the system determine whether existing faults are encountered.

Risk analysis



Safing

Logic or hardware constructs that place a device into a “safe state” after a negative event such as a fault, a failure, hardware breakage, or network communication problems.


SCADA

Supervisory Control and Data Acquisition: a type of industrial control system (ICS).


Scripted test

Testing in which written or automated information is generated before the test to determine the "course" (or execution sequence) of the test.

Side effect

A situation where code is changed or a bug occurs in one location in the software logic, but another area of code is impacted. This is associated with the concepts of coupling and cohesion in software.

Smart device

Any device that exhibits some processing capability (either computer or Integrated Circuit, FPGA, or other). These range from smart light switches to handheld systems (phones and tablets).

Social Engineering

Soft deadline

A deadline that must be met for functionality to be provided, but which has some degree of time flexibility.

Stress (test)

Tests with emphasis on robustness, availability, and error handling of the software under some load. These cases can be valid or invalid test cases.

Structural testing

Also known as white–box testing.

Success Criteria

The information (data) that defines when and how a particular test case is satisfied. This is specified before the test is run.


Test

An activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component, providing this information to interested parties.

Test Case

A single set of data inputs that result in one set of test outputs for any given test environment. (A test attack may have one or more test cases.)

Test Like You Fly

The test environment is as close to a production, field, or operational environment as possible. Environment includes hardware, connections, data, communications, and operations. There may be practical limitations to testing, so while this is a good idea, it is often not possible to achieve, which leads to field testing.

Test strategy

The set of ideas (i.e., methods and objectives) that guide test design and execution.

Test technique

Test method; a heuristic or algorithm for designing and/or executing a test.

Test Tools

Hardware and/or software aids that help to automate some aspect of testing. There are varying levels of test tool automation.


Testability

The ability of an item to be tested in a reasonable manner.


Testing

Questioning a product in order to evaluate it (Bach’s version); technical investigation of a product, on behalf of stakeholders, with the objective of exposing quality-related information of the kind they seek (Kaner’s version).

Time box

A time-management and scheduling approach in which limits of time (start and stop) are placed on an activity.


A clock that measures time, which can be absolute or relative.

Time lines

An ordered sequence of time (linear).


Tour

A logically ordered sequence of test activities (e.g., stories, techniques, or attacks) centered around a theme or concept, for example, a world tour, an error tour, or a hacking tour.


Unscripted test

Testing in which there is no (or minimal) written or automated information generated before the test to determine the "course" (execution sequence) of the test.

Unverified failure

A bug or error that cannot be repeated or confirmed (as a bug), and therefore cannot be fixed (you cannot fix what you cannot repeat or find). This is a problem for testing: if we see a potential problem (say, the system crashes) but cannot make it happen again, we know there is some kind of bug, but not how to repeat it, find it, or fix it.


Valid data

Data or test cases that are within the "expected" usage of the system software.


Virtualization

Created environments that are "not the real thing" (not actual), such as a virtual hardware platform, operating system (OS), storage device, real-world environment, or network resources.

Walled garden

The area where a service provider limits applications, content, and/or media to set platforms and/or places restrictions on content, for example, on a wireless network, in an app store, or through other vendor controls. This concept can make testing with some devices difficult (see rooting and jailbreaking).

White box testing

Also known as structural testing.


ZIF

Zero insertion force


Selected items from ISO 29119, part 1.

Section: ISO definitions (partial) 2013 version (Reference)

actual result

set of behaviors or conditions of a test item, or set of conditions of associated data or the test environment, observed as a result of test execution

dynamic testing

testing that requires the execution of the test item

equivalence partitioning

test design technique in which test cases are designed to exercise equivalence partitions by using one or more representative members of each partition
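For example (the input rule is assumed for illustration, not taken from the standard): an age field that accepts 18 through 65 has three equivalence partitions, and one representative value per partition exercises each.

```python
def accepts_age(age):
    """Input rule under test: ages 18 through 65 inclusive are accepted."""
    return 18 <= age <= 65

# Partitions: below range, in range, above range -- one representative each.
representatives = {
    "below range": (10, False),
    "in range": (30, True),
    "above range": (70, False),
}
for partition, (value, expected) in representatives.items():
    assert accepts_age(value) == expected, partition
print("one test per partition, all partitions covered")
```

Any member of a partition is assumed to behave like every other member, which is what lets three cases stand in for the whole input space.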

error guessing

test design technique in which test cases are derived on the basis of the tester’s knowledge of past failures, or general knowledge of failure modes


exploratory testing

type of unscripted experience-based testing in which the tester spontaneously designs and executes tests based on the tester's existing relevant knowledge, prior exploration of the test item (including the results of previous tests), and heuristic "rules of thumb" regarding common software behaviours and types of failure

Incident Report

documentation of the occurrence, nature, and status of an incident

pass/fail criteria

decision rules used to determine whether a test item, or feature of a test item, has passed or failed after testing

product risk

risk that a product could be defective in some specific aspect of its function, quality, or structure

project risk

risk related to the management of a project

regression testing

testing following modifications to a test item or to its operational environment, to identify whether regression failures occur

risk-based testing

testing in which the management, selection, prioritization, and use of testing activities and resources is consciously based on corresponding types and levels of analyzed risk

scenario testing

class of test design technique in which tests are designed to execute individual scenarios; where a scenario can be a user story, use-case, operational concept, or sequence of events the software may encounter etc.

scripted testing

dynamic testing in which the tester's actions are prescribed by written instructions in a test case

specification-based testing

testing in which the principal test basis is the external inputs and outputs of the test item, commonly based on a specification, rather than its implementation in source code or executable software

static testing

testing in which a test item is examined against a set of quality or other criteria without code being executed

test basis

body of knowledge used as the basis for the design of tests and test cases

test case

set of test case preconditions, inputs (including actions, where applicable), and expected results, developed to drive the execution of a test item to meet test objectives, including correct implementation, error identification, checking quality, and other valued information

test condition

testable aspect of a component or system, such as a function, transaction, feature, quality attribute, or structural element identified as a basis for testing

test data

data created or selected to satisfy the input requirements for executing one or more test cases, which may be defined in the Test Plan, test case or test procedure

test design technique

activities, concepts, processes, and patterns used to construct a test model that is used to identify test conditions for a test item, derive corresponding test coverage items, and subsequently derive or select test cases

test environment

facilities, hardware, software, firmware, procedures, and documentation intended for or used to perform testing of software 

test execution

process of running a test on the test item, producing actual result(s)

test item

work product that is an object of testing

Test Plan

detailed description of test objectives to be achieved and the means and schedule for achieving them, organised to coordinate testing activities for some test item or set of test items

test procedure

sequence of test cases in execution order, and any associated actions that may be required to set up the initial preconditions and any wrap up activities post execution

test process

provides information on the quality of a software product, often comprised of a number of activities, grouped into one or more test sub-processes

test result

indication of whether or not a specific test case has passed or failed, i.e. if the actual result observed as test item output corresponds to the expected result or if deviations were observed   

test specification

complete documentation of the test design, test cases and test procedures for a specific test item

test status report

report that provides information about the status of the testing that is being performed in a specified reporting period

test strategy

part of the Test Plan that describes the approach to testing for a specific test project or test sub-process or sub-processes


testing

set of activities conducted to facilitate discovery and/or evaluation of properties of one or more test items

unscripted testing

dynamic testing in which the tester's actions are not prescribed by written instructions in a test case