Definitions from Jon Hagar’s
“Software Test Attacks to Break Mobile and Embedded Devices” book,
ISO 29119 (part 1) and beyond

(linked from http://breakingembeddedsoftware.com, Jon D. Hagar)

The following are definitions that I believe are important when reading anything about testing, since the industry does not currently have universally accepted definitions…yet.
But there are efforts to find common terminology that readers should be aware of, such as:

1)      ISO 29119 defines many testing terms that will gradually be accepted across many areas of testing (select terms are repeated and referenced here)

2)      SEVocab is an online system that has the backing of groups such as ISO, IEC, IEEE, OMG, and other standards bodies.  Terms in SEVocab are, or will become, universal.

From “Software Test Attacks to Break Mobile and Embedded Devices” (Hagar, Jon Duncan, CRC Press, Taylor & Francis Group, 2013)

Term

Definition and/or Reference

A2D

Analog to digital

app

A software module or software application that makes a device function or complements another app (application). This is a common term for applications on mobile and embedded devices.
See http://en.wikipedia.org/wiki/Mobile_app

As–built

The configuration of a product as it will be delivered and used.  This is as opposed to special product configurations for testing, prototypes, etc.

Biometrics

Confirmation or checking of human identity by characteristics or traits; see

http://en.wikipedia.org/wiki/Biometrics

BIT

Built-in test: building functionality into hardware and/or software to facilitate testing, e.g., test circuits, test logic code, data input ports, and data output values, all of which have the primary goal of making testing easier.
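
To make the idea concrete, here is a minimal, hedged sketch in C of what a software BIT routine might look like; the "ROM" image, "RAM" region, and checksum value are invented stand-ins so the example compiles and runs on a desktop host rather than a real device.

```c
/* Minimal sketch of software built-in test (BIT): a ROM checksum check and a
 * RAM walking-bit check. Both memory regions are plain host buffers here. */
#include <stdint.h>
#include <stdio.h>

static const uint8_t rom_image[] = { 0x12, 0x34, 0x56, 0x78 }; /* stand-in for code/constant memory */
static uint8_t scratch_ram[64];                                /* stand-in for a testable RAM region */

/* ROM BIT: recompute a simple 8-bit checksum and compare to the expected value. */
static int bit_rom_checksum(uint8_t expected)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < sizeof rom_image; i++)
        sum += rom_image[i];
    return sum == expected;
}

/* RAM BIT: walking-bit pattern test; restores the original contents afterwards. */
static int bit_ram_walking_bit(uint8_t *ram, size_t len)
{
    int ok = 1;
    for (size_t i = 0; i < len && ok; i++) {
        uint8_t saved = ram[i];
        for (int b = 0; b < 8; b++) {
            ram[i] = (uint8_t)(1u << b);
            if (ram[i] != (uint8_t)(1u << b)) { ok = 0; break; }
        }
        ram[i] = saved;
    }
    return ok;
}

int main(void)
{
    int rom_ok = bit_rom_checksum(0x14); /* 0x14 is the 8-bit checksum of rom_image */
    int ram_ok = bit_ram_walking_bit(scratch_ram, sizeof scratch_ram);
    printf("BIT results: ROM %s, RAM %s\n", rom_ok ? "pass" : "fail", ram_ok ? "pass" : "fail");
    return (rom_ok && ram_ok) ? 0 : 1;
}
```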

Brainstorming

A group effort where "out of the box" thinking is encouraged; see http://en.wikipedia.org/wiki/Brainstorming

Bug

See error

Checking

The basic activities of confirming that software meets its requirements (see verification).

Coder

Also known as a programmer or developer. Someone who writes code in any software or computing language.

Concept of operations

How a system is to be used, usually over a series of activities, see http://en.wikipedia.org/wiki/Concept_of_operations

COTS

Commercial off the shelf (can be hardware or software); throughout this book I have used “off-the-shelf.”

Critical thinking

A type of reasonable, deep, and reflective human mental activity aimed at deciding what to do (here, during testing); see http://en.wikipedia.org/wiki/Critical_thinking

D2A

Digital to analog

Debug

The process of diagnosing the precise cause of a known error and then correcting the error. A developer activity that is performed before and after testing.

Developer test

Testing done at a structural or “white box” level, i.e., at the statement or code level; also known as unit testing.

Error

A human action that produces an incorrect result, which could be in software, process, documentation, system, and so on.

EMI

Electromagnetic interference

Exploratory testing

Software testing in which the tester simultaneously learns about the item, designs tests, and executes them; see http://en.wikipedia.org/wiki/Exploratory_test

Failure

Termination of the ability of a product to perform a required function or its inability to perform within previously specified limits.

Fault

The manifestation of an error in the software.

Field testing

Full–system test done at an operational site or in the real world.

FMEA/FMECA

Failure Modes Effects Analysis and/or Failure Modes Effects and Criticality Analysis

http://en.wikipedia.org/wiki/Failure_mode,_effects,_and_criticality_analysis.

FPGA

Field Programmable Gate Array

Functional test

Testing done to show that the features (requirements, customer needs, etc.) of the software are present.

Hard deadline

A deadline which must be met exactly for software functions to be provided to a customer or user.

Heuristic

A concept or approach that can solve a problem but cannot guarantee a solution in every case.

HMI

Human–machine interface

ICS

Industrial control system

IEEE

Institute of Electrical and Electronics Engineers

Implementation

How software is coded using models, languages, constructs, and other means.

Implementation testing

Also known as developer testing

Interrupts

A hardware–based signal sent to the software to trigger action; see http://en.wikipedia.org/wiki/Interrupt
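
A hedged host-side sketch of the common interrupt pattern follows: the handler does minimal work and sets a volatile flag that the main loop acts on later. A desktop host has no hardware interrupts, so a POSIX SIGALRM signal stands in for one; on a real target the handler would be registered with the vector table or the RTOS.

```c
/* Classic interrupt pattern sketch: handler sets a flag, main loop processes it. */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t data_ready = 0;   /* flag shared with the "ISR" */

static void fake_isr(int signum)
{
    (void)signum;
    data_ready = 1;            /* keep the handler short: just record the event */
}

int main(void)
{
    signal(SIGALRM, fake_isr); /* "register" the handler */
    alarm(1);                  /* schedule the "interrupt" in ~1 second */

    while (!data_ready) {
        /* foreground work would go here; a real system might idle or sleep */
        pause();               /* wait until a signal arrives */
    }
    printf("interrupt handled: data_ready = %d\n", (int)data_ready);
    return 0;
}
```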

Invalid

Data that is not expected as input into the system but may be received anyway.

IEC

International Electrotechnical Commission

ISO

International Organization for Standardization

IT

Information technology

IV&V

Independent Verification and Validation

Jailbreaking

The process of removing restrictions imposed by vendor(s) on devices running various operating systems through the use of hardware/software exploits to gain root access and circumvent vendor “safeguard” features, e.g., limits on what you can load or do.  For example, see http://en.wikipedia.org/wiki/Jailbreak_(iPhone_OS)

Load (test)

Testing that puts the software under conditions where you can determine how much processing a computer performs, for example usage of CPU, memory, time, network bandwidth, or other resources.

Malware

Short for malicious software, which is code constructed to do “harm,” e.g., a virus; see http://en.wikipedia.org/wiki/Malware

Mind Map

A method, usually a diagram, which captures a human's understanding; see http://en.wikipedia.org/wiki/Mind_map

Model

A representation of a real world process, device, software or concept, which can be logical, physical, and/or mental.

Mutation testing analysis

A test technique in which variations of data or code are created and then used in the test activities, see http://en.wikipedia.org/wiki/Mutation_testing
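
As an illustrative sketch (not from the book), the following C fragment shows a single seeded code mutation (the operator > changed to >=) and a boundary-value test that "kills" the mutant by producing a result different from the original.

```c
/* Mutation testing sketch: original code, one mutant, and a test that detects it. */
#include <stdio.h>

static int is_over_limit(int value, int limit)        /* original */
{
    return value > limit;
}

static int is_over_limit_mutant(int value, int limit) /* mutant: > replaced by >= */
{
    return value >= limit;
}

int main(void)
{
    /* Boundary test case: value equal to the limit. */
    int original = is_over_limit(10, 10);   /* expected 0 */
    int mutant   = is_over_limit_mutant(10, 10);

    if (original != mutant)
        printf("mutant killed: this test detects the seeded change\n");
    else
        printf("mutant survived: the test set needs strengthening\n");
    return 0;
}
```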

Noise

In the physics world and analog electronics, noise is mostly an unwanted random addition to a signal picked up by sensors or electronics, which can impact software processing.

Normal

Typical usage

OCR

Optical character recognition using a system and special software

Off–Normal

Non-typical usage

Oracle

Any approach to defining or judging results generated by a test. Oracles can include tester judgment, mental models, secondary software programs, formal models, and others.
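
For example, one of the oracle types listed above, a secondary software program, can be sketched in C as follows; the summing function and its closed-form oracle are invented for illustration.

```c
/* "Secondary program" oracle sketch: an independent computation judges the
 * output of the code under test. */
#include <stdio.h>

/* Code under test: sums 1..n with a loop. */
static long sum_under_test(int n)
{
    long total = 0;
    for (int i = 1; i <= n; i++)
        total += i;
    return total;
}

/* Oracle: the closed-form formula n*(n+1)/2, computed independently. */
static long sum_oracle(int n)
{
    return (long)n * (n + 1) / 2;
}

int main(void)
{
    int failures = 0;
    for (int n = 0; n <= 1000; n++) {
        if (sum_under_test(n) != sum_oracle(n)) {
            printf("oracle disagreement at n=%d\n", n);
            failures++;
        }
    }
    printf("%d disagreements across 1001 inputs\n", failures);
    return failures ? 1 : 0;
}
```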

Performance test

Testing focused on requirements and issues related to system execution in areas of speed, load, response, etc. There are numerous techniques and tools that support performance testing.

Pesticide paradox

A concept in software testing where if the exact same test is used over and over, the likelihood of it finding errors decreases with each use.

PLC

Programmable logic controller, a digital computer used for automation of industrial processes, such as machinery control in factories

Power–down

Turn off a system

Power–up

Turn on a system

Priority inversion

Priority inversion is a scheduling problem that happens when a low-priority task grabs a resource that a higher-priority task needs, so the high-priority task is forced to wait for it; a medium-priority task then runs, preventing the low-priority task from finishing with the resource and releasing it, which prevents the high-priority task from ever running. This can effectively deadlock the system and cause failures.  The problem is often associated with interrupt-driven software systems.
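
A hedged sketch of the usual mitigation follows: a POSIX mutex configured with the priority-inheritance protocol, so the low-priority holder temporarily inherits a blocked high-priority task's priority and cannot be starved by medium-priority work. The threads here run at default priority (real-time scheduling normally needs elevated privileges), so the code only shows the structure; build with `cc file.c -pthread`.

```c
/* Priority-inheritance mutex sketch (mitigation for priority inversion). */
#define _GNU_SOURCE            /* for PTHREAD_PRIO_INHERIT on glibc */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t shared_resource;

static void *low_priority_task(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&shared_resource);   /* low-priority task grabs the resource */
    usleep(100 * 1000);                     /* ...and holds it while working */
    pthread_mutex_unlock(&shared_resource); /* with PRIO_INHERIT it cannot be starved
                                               indefinitely by medium-priority tasks */
    return NULL;
}

static void *high_priority_task(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&shared_resource);   /* blocks until the low task releases */
    puts("high-priority task got the resource");
    pthread_mutex_unlock(&shared_resource);
    return NULL;
}

int main(void)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    /* The key line: ask for priority inheritance on this mutex. */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(&shared_resource, &attr);

    pthread_t low, high;
    pthread_create(&low, NULL, low_priority_task, NULL);
    usleep(10 * 1000);                      /* let the low task take the lock first */
    pthread_create(&high, NULL, high_priority_task, NULL);

    pthread_join(low, NULL);
    pthread_join(high, NULL);
    pthread_mutex_destroy(&shared_resource);
    return 0;
}
```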

Programmer

Also known as a developer or coder.  Someone who writes code in any software or computing language.

Race (conditions)

See priority inversion and http://en.wikipedia.org/wiki/Race_condition

Regression Testing

Regression testing involves retesting portions of software items after modification of associated software products. Modifications that may influence previous testing can include changes to: code, patches, data, requirements, interfaces, operational uses, hardware, etc.

Reliability

The probability that software will not cause the failure of a system for a specified time under specified conditions. This probability is a function of the inputs and use of the system, as well as a function of the existence of faults in the software. The inputs to the system determine whether existing faults are encountered.

Risk analysis

See http://en.wikipedia.org/wiki/Risk_analysis_(engineering)

Safing

Logic or hardware constructs which place a device into a “safe state” after a negative event such as a fault, a failure, hardware breakage, network communication problems, and others.
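
A hedged sketch of a software safing path follows; the actuator names, safe values, and fault inputs are hypothetical and not from any real device.

```c
/* Safing sketch: on a detected fault, drive outputs to safe values, latch a
 * safe-mode flag, and skip normal control until an explicit reset/recovery. */
#include <stdbool.h>
#include <stdio.h>

static bool safe_mode = false;      /* latched once a negative event is seen */
static int  heater_output = 100;    /* hypothetical actuator command, 0..100% */
static int  motor_output  = 50;     /* hypothetical actuator command, 0..100% */

static void enter_safe_state(const char *reason)
{
    heater_output = 0;              /* drive every actuator to its safe value */
    motor_output  = 0;
    safe_mode     = true;           /* latch: stay safe until reset/recovery */
    printf("SAFING: %s -> outputs zeroed, safe mode latched\n", reason);
}

static void control_cycle(bool sensor_fault, bool comm_timeout)
{
    if (safe_mode)
        return;                     /* once safed, skip normal control */
    if (sensor_fault) { enter_safe_state("sensor fault");       return; }
    if (comm_timeout) { enter_safe_state("communication loss"); return; }
    /* ... normal control logic would run here ... */
}

int main(void)
{
    control_cycle(false, false);    /* normal cycle */
    control_cycle(false, true);     /* negative event: communication loss */
    control_cycle(false, false);    /* remains in safe mode */
    printf("safe_mode=%d heater=%d motor=%d\n", safe_mode, heater_output, motor_output);
    return 0;
}
```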

SCADA

(Supervisory Control and Data Acquisition) A type of industrial control system (ICS).

Scripted

Testing in which written or automated information is generated before the test to determine the "course" (or execution sequence) of the test.

Side effect

A situation where code is changed or a bug occurs in one location in the software logic, but another area of code is impacted. This is associated with the concepts of coupling and cohesion in software.

Smart device

Any device that exhibits some processing capability (either computer or Integrated Circuit, FPGA, or other). These range from smart light switches to handheld systems (phones and tablets).

Social Engineering

Manipulating people into performing actions or divulging confidential information; see http://en.wikipedia.org/wiki/Social_engineering_(security)

Soft deadline

A deadline that must be met for functionality to be provided, but has some degree of time flexibility

Stress (test)

Tests with emphasis on robustness, availability, and error handling of the software under some load. These cases can be valid or invalid test cases.

Structural testing

Also known as white–box testing.

Success Criteria

The information (data) that defines when and how a particular test case is satisfied. This is specified before the test is run.

Test

An activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the software system or component, providing this information to interested parties.

Test Case

A single set of data inputs that result in one set of test outputs for any given test environment. (A test attack may have one or more test cases.)

Test Like You Fly

The test environment is as close to a production, field, or operational environment as possible. Environment includes hardware, connections, data, communications, and operations. There may be practical limitations to testing, so while this is a good idea, it is often not possible to achieve, which leads to field testing.

Test strategy

The set of ideas (i.e., methods and objectives) that guide test design and execution.

Test technique

Test method; a heuristic or algorithm for designing and/or executing a test.

Test Tools

Hardware and/or software aids that help to automate some aspect of testing. There are varying levels of test tool automation.

Testability

The ability of an item to be tested in a reasonable manner.

Testing

Questioning a product in order to evaluate it (Bach’s version); technical investigation of a product, on behalf of stakeholders, with the objective of exposing quality–related information of the kind they seek (Kaner’s version).

Time box

A time management and scheduling approach where limits of time (start and stop) are placed on an activity, see http://en.wikipedia.org/wiki/Timeboxing

Timers

A clock that measures time; the time can be absolute or relative.
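
As a hedged host-side illustration, the following C sketch contrasts an absolute clock (POSIX CLOCK_REALTIME, wall-clock time) with a relative one (CLOCK_MONOTONIC, which is only meaningful as the difference between two readings).

```c
/* Absolute vs. relative time sketch using POSIX clocks. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec wall, start, end;

    clock_gettime(CLOCK_REALTIME, &wall);    /* absolute: seconds since the epoch */
    printf("absolute time: %ld s since 1970-01-01\n", (long)wall.tv_sec);

    clock_gettime(CLOCK_MONOTONIC, &start);  /* relative: useful only as a reference point */
    for (volatile long i = 0; i < 1000000; i++)
        ;                                    /* something to measure */
    clock_gettime(CLOCK_MONOTONIC, &end);

    long elapsed_ns = (end.tv_sec - start.tv_sec) * 1000000000L
                    + (end.tv_nsec - start.tv_nsec);
    printf("relative (elapsed) time: %ld ns\n", elapsed_ns);
    return 0;
}
```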

Time lines

An ordered sequence of time (linear).

Tours

A logically ordered sequence of test activities (for example, stories, techniques, or attacks) centered around a theme or concept, for example a world tour, an error tour, or a hacking tour.

Unscripted

Testing in which there is no (or minimal) written or automated information generated before the test to determine the "course" (execution sequence) of the test.

Unverified failure

A bug or error that cannot be repeated or confirmed (as a bug) and fixed (you cannot fix what you cannot repeat or find). This is a problem for testing: if you see a potential problem (say, the system crashes) but cannot make it happen again, you know there is some kind of bug, but not how to repeat it, find it, or fix it.

Valid

Data or test cases that are within the "expected" usage of the system software.

Virtualization

Created environments that are "not the real thing" (not actual), such as a hardware platform, operating system (OS), storage device, real world, or network resources, see http://en.wikipedia.org/wiki/Virtualization

Walled garden

An area where a service provider limits applications, content, and/or media to specific platforms and/or places restrictions on content, for example on a wireless network, in an app store, or through other vendor controls; see http://en.wikipedia.org/wiki/Walled_garden_(technology). This concept can make testing of and with some devices difficult (see rooting and jailbreaking in Wikipedia).

White box testing

Also known as structural testing.

ZIF

Zero insertion force

 

Selected items from ISO 29119, part 1.

Section: ISO definitions (partial) 2013 version (Reference)

actual result

set of behaviors or conditions of a test item, or set of conditions of associated data or the test environment, observed as a result of test execution

dynamic testing

testing that requires the execution of the test item

equivalence partitioning

test design technique in which test cases are designed to exercise equivalence partitions by using one or more representative members of each partition
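
As a hedged illustration (not from the standard), the following C sketch splits the inputs of an invented voltage-range check into three equivalence partitions and tests one representative member of each.

```c
/* Equivalence partitioning sketch: one representative value per partition. */
#include <stdio.h>

/* Hypothetical item under test: is a measured voltage inside 3.0..3.6 V? */
static int voltage_in_range(double volts)
{
    return volts >= 3.0 && volts <= 3.6;
}

struct partition_case {
    const char *partition;  /* which equivalence partition this represents */
    double representative;  /* one member chosen to stand for the partition */
    int expected;           /* expected result for the whole partition */
};

int main(void)
{
    const struct partition_case cases[] = {
        { "below range (< 3.0 V)", 2.5, 0 },
        { "in range (3.0..3.6 V)", 3.3, 1 },
        { "above range (> 3.6 V)", 4.0, 0 },
    };
    int failures = 0;
    for (size_t i = 0; i < sizeof cases / sizeof cases[0]; i++) {
        int actual = voltage_in_range(cases[i].representative);
        int pass = (actual == cases[i].expected);
        printf("%-24s rep=%.1f expected=%d actual=%d %s\n",
               cases[i].partition, cases[i].representative,
               cases[i].expected, actual, pass ? "PASS" : "FAIL");
        if (!pass) failures++;
    }
    return failures ? 1 : 0;
}
```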

error guessing

test design technique in which test cases are derived on the basis of the tester’s knowledge of past failures, or general knowledge of failure modes

 

exploratory testing

type of unscripted experience-based testing in which the tester spontaneously designs and executes tests based on the tester's existing relevant knowledge, prior exploration of the test item (including the results of previous tests), and heuristic "rules of thumb" regarding common software behaviours and types of failure

Incident Report

documentation of the occurrence, nature, and status of an incident

pass/fail criteria

decision rules used to determine whether a test item, or feature of a test item, has passed or failed after testing

product risk

risk that a product could be defective in some specific aspect of its function, quality, or structure

project risk

risk related to the management of a project

regression testing

testing following modifications to a test item or to its operational environment, to identify whether regression failures occur

risk-based testing

testing in which the management, selection, prioritization, and use of testing activities and resources is consciously based on corresponding types and levels of analyzed risk

scenario testing

class of test design technique in which tests are designed to execute individual scenarios; where a scenario can be a user story, use-case, operational concept, or sequence of events the software may encounter etc.

scripted testing

dynamic testing in which the tester's actions are prescribed by written instructions in a test case

specification-based testing

testing in which the principal test basis is the external inputs and outputs of the test item, commonly based on a specification, rather than its implementation in source code or executable software

static testing

testing in which a test item is examined against a set of quality or other criteria without code being executed

test basis

body of knowledge used as the basis for the design of tests and test cases

test case

set of test case preconditions, inputs (including actions, where applicable), and expected results, developed to drive the execution of a test item to meet test objectives, including correct implementation, error identification, checking quality, and other valued information
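
As a hedged illustration (not from the standard), the following C sketch records a test case as preconditions, inputs, and expected results, then runs it against a trivial, invented test item.

```c
/* Test case sketch: preconditions, inputs, and expected results made explicit. */
#include <stdio.h>

/* Trivial test item: a counter that saturates at 3. */
static int counter;
static int increment_counter(void) { if (counter < 3) counter++; return counter; }

struct test_case {
    const char *name;
    int precondition_counter;  /* required starting state (precondition) */
    int increments;            /* input: how many times to call the item */
    int expected_counter;      /* expected result */
};

static int run_test_case(const struct test_case *tc)
{
    counter = tc->precondition_counter;           /* establish the precondition */
    for (int i = 0; i < tc->increments; i++)      /* apply the inputs */
        increment_counter();
    int pass = (counter == tc->expected_counter); /* compare to the expected result */
    printf("%-20s expected=%d actual=%d %s\n",
           tc->name, tc->expected_counter, counter, pass ? "PASS" : "FAIL");
    return pass;
}

int main(void)
{
    const struct test_case cases[] = {
        { "normal increment", 0, 1, 1 },
        { "saturates at 3",   2, 5, 3 },
    };
    int failures = 0;
    for (size_t i = 0; i < sizeof cases / sizeof cases[0]; i++)
        if (!run_test_case(&cases[i])) failures++;
    return failures ? 1 : 0;
}
```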

test condition

testable aspect of a component or system, such as a function, transaction, feature, quality attribute, or structural element identified as a basis for testing

test data

data created or selected to satisfy the input requirements for executing one or more test cases, which may be defined in the Test Plan, test case or test procedure

test design technique

activities, concepts, processes, and patterns used to construct a test model that is used to identify test conditions for a test item, derive corresponding test coverage items, and subsequently derive or select test cases

test environment

facilities, hardware, software, firmware, procedures, and documentation intended for or used to perform testing of software 

test execution

process of running a test on the test item, producing actual result(s)

test item

work product  that is an object of testing

Test Plan

detailed description of test objectives to be achieved and the means and schedule for achieving them, organised to coordinate testing activities for some test item or set of test items

test procedure

sequence of test cases in execution order, and any associated actions that may be required to set up the initial preconditions and any wrap up activities post execution

test process

provides information on the quality of a software product, often comprised of a number of activities, grouped into one or more test sub-processes

test result

indication of whether or not a specific test case has passed or failed, i.e. if the actual result observed as test item output corresponds to the expected result or if deviations were observed   

test specification

complete documentation of the test design, test cases and test procedures for a specific test item

test status report

report that provides information about the status of the testing that is being performed in a specified reporting period

test strategy

part of the Test Plan that describes the approach to testing for a specific test project or test sub-process or sub-processes

testing

set of activities conducted to facilitate discovery and/or evaluation of properties of one or more test items

unscripted testing

dynamic testing in which the tester's actions are not prescribed by written instructions in a test case