IEEE Std 829-1983 describes the components of a test plan. These components are listed below:
- Test Plan Identifier
- Test item to be tested
- Features to be tested
- Features not to be tested
- Approach
- Item Pass/Fail Criteria
- Suspension criteria and resumption requirements
- Test deliverables
- Testing tasks
- Environmental needs
- Responsibilities
- Staffing and training needs
- Schedule
- Risks and contingencies
- Testing costs
- Approvals
Test Plan Identifier
Each test plan is tagged with a unique identifier so that it is associated with a project.
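As an illustration, a unique identifier can encode the project name, the document type, and a sequence number. The naming scheme below is an assumption for the sake of the example; the standard only requires that the identifier be unique.

```python
# Sketch of a test plan identifier scheme (hypothetical convention:
# <project>-TP-<sequence>); IEEE 829 only requires uniqueness.
def make_test_plan_id(project: str, sequence: int) -> str:
    """Build a unique test plan identifier for a project."""
    return f"{project.upper()}-TP-{sequence:03d}"

print(make_test_plan_id("payroll", 3))  # PAYROLL-TP-003
```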
The test planner gives an overall description of the project with:
- Summary of the items and features to be tested
- Requirement and history of each item (optional)
- High-level description of testing goals
- References to related documents, such as project authorization, project plan, QA plan, configuration management plan, relevant policies, and relevant standards
Test Items to be Tested
- Name, identifier, and version of test items
- Characteristics of the transmittal media on which the items are stored, for example, disk, CD, etc.
- References to related documents, such as requirements specification, design specification, users guide, operations guide, installation guide
- References to bug reports related to test items
- Items which are specifically not going to be tested (optional)
Features to be Tested
This is a list of what needs to be tested from the user's viewpoint. The features may be interpreted in terms of functional and quality requirements.
- All software features and significant combinations of features to be tested.
- References to test-design specifications associated with each feature and combination of features.
Features Not to be Tested
This is a list of what should not be tested, from both the user's viewpoint and the configuration management/version control viewpoint:
- All the features and the significant combinations of features, which will not be tested.
- Identify why each feature is not to be tested. There can be many reasons:
(i) It is not to be included in this release of the software.
(ii) It is low risk; it has been used before and is considered stable.
(iii) It will be released but not tested or documented as a functional part of this release of the software.
Approach
Let us discuss the overall approach to testing.
- For each major group of features or combinations of features, specify the approach.
- Specify the major activities, techniques, and tools that are to be used to test the groups.
- Specify the metrics to be collected.
- Specify the number of configurations to be tested.
- Specify the minimum degree of comprehensiveness required.
- Identify the techniques that will be used to judge comprehensiveness.
- Specify any additional completion criteria.
- Specify the techniques that are to be used to trace requirements.
- Identify significant constraints on testing, such as test-item availability, testing-resource availability, and deadlines.
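One of the points above calls for tracing requirements to tests. A minimal sketch of a traceability check (the mapping structure and names are assumptions, not prescribed by the standard) computes the fraction of requirements covered by at least one test case:

```python
# Requirement-to-test traceability check (illustrative).
def traceability_coverage(trace: dict[str, list[str]]) -> float:
    """Fraction of requirements that have at least one test case."""
    if not trace:
        return 0.0
    covered = sum(1 for tests in trace.values() if tests)
    return covered / len(trace)

trace = {
    "REQ-1": ["TC-01", "TC-02"],  # covered by two test cases
    "REQ-2": ["TC-03"],
    "REQ-3": [],                  # no test case yet
}
print(traceability_coverage(trace))  # 2 of 3 requirements covered
```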
Item Pass/Fail Criteria
This component defines a set of criteria based on which a test case is passed or failed. The failure criteria are based on the severity levels of the defects. Thus, an acceptable severity level for the failures revealed by each test case is specified and used by the tester. If the severity level is beyond the acceptable limit, the software fails.
Suspension Criteria and Resumption Requirements
Suspension criteria specify the criteria to be used to suspend all or a portion of the testing activities, whereas resumption criteria specify when the testing can resume after it has been suspended.
For example, system integration testing in the integration environment can be suspended in the following circumstances:
- Unavailability of external dependent systems during execution.
- When a tester submits a 'critical' or 'major' defect, the testing team will call for a break in testing while an impact assessment is done.
System integration testing in the integration environment may be resumed under the following circumstances:
- When the 'critical' or 'major' defect is resolved.
- When a fix is successfully implemented and the testing team is notified to continue testing.
Test Deliverables
- Identify the deliverable documents: test plan, test design specifications, test case specifications, test item transmittal reports, test logs, test incident reports, test summary reports, and test harness (stubs and drivers).
- Identify test input and output data.
Testing Tasks
- Identify the tasks necessary to prepare for and perform testing.
- Identify all the task interdependencies.
- Identify any special skills required.
All testing-related tasks and their interdependencies can be shown through a work breakdown structure (WBS). A WBS is a hierarchical, tree-like representation of all the testing tasks that need to be completed in a project.
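A WBS of this kind maps naturally onto a tree data structure. The sketch below is illustrative; the task names are made up for the example.

```python
# A work breakdown structure as a simple tree of testing tasks.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    subtasks: list["Task"] = field(default_factory=list)

    def count(self) -> int:
        """Total number of tasks in this subtree, including itself."""
        return 1 + sum(t.count() for t in self.subtasks)

wbs = Task("System testing", [
    Task("Test planning", [Task("Write test plan"), Task("Review test plan")]),
    Task("Test execution", [Task("Run test cases"), Task("Log incidents")]),
])
print(wbs.count())  # 7 tasks in the hierarchy
```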
Environmental Needs
- Specify the necessary and desired properties of the test environment: the physical characteristics of the facilities, including hardware, communications and system software, the mode of usage (i.e., stand-alone), and any other software or supplies needed.
- Specify the level of security required.
- Identify any special test tools needed.
- Identify any other testing needs.
- Identify the source for all needs that are not currently available.
Responsibilities
- Identify the groups responsible for managing, designing, preparing, executing, checking, and resolving.
- Identify the groups responsible for providing the test items identified in the test items section.
- Identify the groups responsible for providing the environmental needs identified in the environmental needs section.
Staffing and Training Needs
- Specify staffing needs by skill level.
- Identify training options for providing the necessary skills.
Schedule
- Specify test milestones.
- Specify all item transmittal events.
- Estimate the time required to perform each testing task.
- Schedule all testing tasks and test milestones.
- For each testing resource, specify a period of use.
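The estimation and scheduling bullets above can be sketched as a simple back-to-back layout of tasks (the task names, durations, and start date are assumptions for illustration):

```python
# Deriving a rough sequential schedule from per-task day estimates.
from datetime import date, timedelta

def schedule(tasks, start):
    """Lay out tasks back-to-back from a start date; returns
    (name, start, end) tuples."""
    out = []
    for name, days in tasks:
        end = start + timedelta(days=days)
        out.append((name, start, end))
        start = end
    return out

plan = schedule([("Design test cases", 5), ("Execute tests", 10),
                 ("Report results", 2)], date(2024, 1, 1))
for name, s, e in plan:
    print(f"{name}: {s} -> {e}")
```

Real plans rarely run tasks strictly in sequence; the point is only that each task needs an estimate and a scheduled period of use for its resources.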
Risks and Contingencies
Specify the following overall risks to the project with an emphasis on the testing process:
- Lack of personnel when testing is to begin
- Lack of availability of required hardware, software, data, or tools
- Late delivery of the software, hardware, or tools
- Delays in training on the application and/or tools
- Changes to the original requirements or designs
- Complexities involved in testing the applications
Specify the actions to be taken for various events. An example is given below.
Requirements definition will be complete by January 1, 20XX and, if the requirements change after that date, the following actions will be taken:
- The test schedule and the development schedule will move out an appropriate number of days. This rarely occurs, as most projects tend to have fixed delivery dates.
- The number of tests performed will be reduced.
- The number of acceptable defects will increase.
- These two items may lower the overall quality of the delivered product.
- Resources will be added to the team.
- The test team will work overtime.
- The scope of the plan may be changed.
- There may be some optimization of resources. This should be avoided, if possible, for obvious reasons.
Testing Costs
The IEEE standard does not include this component in its specification. However, it is a usual component of any test plan, as test costs are allocated in the total project plan. To estimate the costs, testers need tools and techniques. The following is a list of costs to be included:
- Cost of planning and designing the tests
- Cost of acquiring the hardware and software required for the tests
- Cost to support the environment
- Cost of executing the tests
- Cost of recording and analysing the test results
- Cost of training the testers, if any
- Cost of maintaining the test database
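Summing the categories above gives the testing budget line in the project plan. The amounts below are placeholders, not figures from the text:

```python
# Totaling the testing cost categories (placeholder amounts).
costs = {
    "planning and design": 12000,
    "hardware and software": 8000,
    "environment support": 3000,
    "test execution": 10000,
    "recording and analysis": 4000,
    "tester training": 2500,
    "test database maintenance": 1500,
}
total = sum(costs.values())
print(f"Estimated testing cost: {total}")  # Estimated testing cost: 41000
```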
Approvals
- Specify the names and titles of all the people who must approve the plan.
- Provide space for signatures and dates.