1. Reliability Testing
Since software today is an important part of almost every field, the importance of software reliability testing has also increased. Whether for a commercial business system or a military system, reliable software is desired, that is, software that operates failure-free in a specified environment for a specified period of time. Software reliability testing is a method of uncovering early those failures that are most likely to appear in actual operation. Consequently, the important bugs are fixed first, thereby increasing the reliability of the software. The primary purpose of reliability testing is to test the performance of the software under given conditions. There may also be some secondary objectives of reliability testing, as given here:
- It may be the case that a failure is repeated several times. One objective is to find the pattern of such repeating failures and the reason behind them.
- To find the mean life of the software.
- To find the number of failures in a specified period of time.
To measure software reliability, two metrics are used: Mean Time to Failure (MTTF), the average time between two consecutive failures, and Mean Time to Repair (MTTR), the average time required to fix a failure. Reliability is expressed as a value between 0 and 1. As bugs/errors are removed from the software, its reliability increases. The following are the three types of reliability testing:
Feature test: The purpose is to ensure that each functionality of all the modules in the software and their interfaces are tested.
Regression test: The purpose is to re-check the software functionality whenever there is a change in it.
Load test: The purpose is to check the software under maximum load on every aspect, and even beyond it (i.e., to check the reliability of the software under maximum load and at the breaking point).
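The MTTF and MTTR metrics above can be computed directly from a failure log. The following is a minimal sketch under assumed data: the failure timestamps and repair durations are made-up illustrative numbers, and the derived availability figure (MTTF / (MTTF + MTTR)) is one common way to express reliability as a value between 0 and 1.

```python
# Hypothetical failure log (hours); all numbers are illustrative only.
failure_times = [120.0, 310.0, 450.0, 700.0]   # elapsed hours at each failure
repair_times = [2.0, 4.0, 3.0, 5.0]            # hours spent fixing each failure

# MTTF: mean operating time between two consecutive failures.
gaps = [b - a for a, b in zip(failure_times, failure_times[1:])]
mttf = sum(gaps) / len(gaps)

# MTTR: mean time taken to repair a failure.
mttr = sum(repair_times) / len(repair_times)

# Steady-state availability, a reliability figure in [0, 1].
availability = mttf / (mttf + mttr)

print(f"MTTF = {mttf:.1f} h, MTTR = {mttr:.1f} h, availability = {availability:.3f}")
```

As failures become rarer (MTTF grows) or repairs faster (MTTR shrinks), the availability figure moves closer to 1, matching the statement that reliability increases as bugs are removed.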
2. Recovery Testing
Recovery is just like the exception handling feature of a programming language. It is the ability of a system to restart operations after the integrity of the application has been lost. It reverts to a point where the system was functioning correctly and then reprocesses the transactions to the point of failure.
Some software systems (e.g., operating systems and database management systems) must recover from programming errors, hardware failures, data errors, or any disaster in the system. So the purpose of this type of system testing is to show that these recovery functions work correctly.
The main purpose of this test is to determine how well the developed software behaves when it faces a disaster. A disaster can be anything from unplugging the power of the system running the software, to disconnecting the network, to stopping the database, to crashing the software itself. Thus, recovery testing is the activity of testing how well the software recovers from crashes, hardware failures, and other similar problems. It is the forced failure of the software in various ways to verify that recovery is performed properly. Some examples of recovery testing are listed here:
- While the application is running, suddenly restart the computer and then check the application's data integrity.
- While the application is receiving data from the network, unplug the cable, plug it in again after a while, and analyse the application's ability to continue receiving data from the point at which the network connection disappeared.
- Restart the system while the browser has a definite number of sessions open, and after rebooting, check that it is able to recover all of them.
Recovery tests determine whether the system can return to a well-known state with no transactions compromised. Systems with automated recovery are designed for this purpose: there may be provision for multiple CPUs and/or multiple instances of devices, along with mechanisms to detect the failure of a device. A 'checkpoint' system that meticulously records transactions and system states periodically can also be put in place to preserve them in case of failure. This information allows the system to return to a known state after a failure.
Testers should work on the following areas during recovery testing:
Restart: If there is a failure and we want to recover and start again, the current system state and transaction states are first discarded. Following the checkpoint scheme discussed earlier, the most recent checkpoint record is retrieved and the system is initialized to the states in that record. Thus, by using checkpoints, a system can be recovered and restarted from a known state. Testers must ensure that all transactions have been reconstructed correctly and that all devices are in their proper states. The system is then in a position to begin processing new transactions.
Switchover: Recovery can also be achieved if there are standby components; in case of failure of one component, the standby takes over control. The ability of the system to switch to a standby component must be tested.
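The checkpoint-and-restart scheme described above can be sketched in a few lines. This is a deliberately minimal illustration, not a real recovery framework: the class, the JSON file format, and the "processed items" state are all assumptions made for the example.

```python
import json
import os
import tempfile

class CheckpointedCounter:
    """Toy system whose only state is a count of processed items."""

    def __init__(self, path):
        self.path = path
        self.state = {"processed": 0}

    def checkpoint(self):
        # Durably record the current state, as a checkpoint system would.
        with open(self.path, "w") as f:
            json.dump(self.state, f)

    def restart(self):
        # Discard in-memory state and reload the most recent checkpoint.
        with open(self.path) as f:
            self.state = json.load(f)

    def process(self, n):
        self.state["processed"] += n

path = os.path.join(tempfile.mkdtemp(), "ckpt.json")
c = CheckpointedCounter(path)
c.process(10)
c.checkpoint()               # state saved: 10 items processed
c.process(5)                 # 5 more items, then a simulated crash...
c.restart()                  # ...recovery returns to the checkpointed state
print(c.state["processed"])  # prints 10
```

A recovery test against such a system would verify exactly what the restart step requires: that the reloaded state matches the last checkpoint, and that the transactions performed after it (the 5 uncheckpointed items here) are reprocessed rather than silently lost.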
A good way to perform recovery testing is under maximum load. Maximum load increases the likelihood of transaction inaccuracies and system crashes, thereby exposing defects and design flaws.
3. Security Testing
Safety and security issues are gaining importance due to the proliferation of commercial applications on the Internet and increasing concerns about privacy. Security is the protection needed to assure customers that their data will be protected. For example, if Internet users feel that their personal data are not secure, the system loses its credibility. Security may include controlling access to data, encrypting data in communication, ensuring the secrecy of stored data, and auditing security events. The effects of security breaches can be extensive and can cause loss of information, corruption of information, misinformation, privacy violations, denial of service, etc.
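Two of the measures mentioned above, controlling access to data and protecting stored secrets, are the kinds of behaviour a security test would exercise. The sketch below is illustrative only: the roles, permissions, and salted-SHA-256 scheme are assumptions for the example (a production system should use a dedicated slow password-hashing function such as bcrypt or scrypt rather than a single SHA-256 pass).

```python
import hashlib
import secrets

def hash_password(password, salt=None):
    # Store a random salt plus a digest, never the plain-text password.
    salt = salt or secrets.token_hex(8)
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    return salt, digest

def verify_password(password, salt, digest):
    return hashlib.sha256((salt + password).encode()).hexdigest() == digest

# Hypothetical role-based access rules; deny by default for unknown roles.
PERMISSIONS = {"admin": {"read", "write"}, "viewer": {"read"}}

def can_access(role, action):
    return action in PERMISSIONS.get(role, set())

salt, digest = hash_password("s3cret")
print(verify_password("s3cret", salt, digest))  # prints True
print(verify_password("wrong", salt, digest))   # prints False
print(can_access("viewer", "write"))            # prints False
```

A security test suite would assert both the positive cases (authorized users get in) and, just as importantly, the negative ones: wrong passwords are rejected and roles cannot perform actions outside their permissions.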
4. Usability Testing
This type of system testing is related to a system's presentation rather than its functionality. System testing can be performed to find human-factors or usability problems in the system. The idea is to adapt the software to users' actual work styles rather than forcing the users to adapt to the software. Thus, the goal of usability testing is to verify that the intended users of the system are able to interact properly with it while having a positive and convenient experience. Usability testing identifies discrepancies between the user interfaces of a product and the human engineering requirements of its potential users. Usability requirements can be gathered in the following ways:
Area experts: Usability problems and expectations can best be understood by subject or area experts who have worked for years in the same area. They analyse the system's specific requirements from the user's perspective and provide valuable suggestions.
Group meetings: Group meetings are an interesting way to elicit usability requirements from users. These meetings yield potential customers' comments on what they would like to see in an interface.
Surveys: Surveys are another medium of interaction with users. They can also yield valuable information about how potential customers would use a software product to accomplish their tasks.
Analyse similar products: We can also analyse the experience gained in similar projects done previously and use it in the current project. This study also gives clues regarding usability requirements.
During usability testing, the following areas should be checked:
Ease of use: Users should be able to enter, navigate, and exit with relative ease. Each user interface must be tailored to the intelligence, educational background, and environmental pressures of its end-users.
Interface steps: User interface steps should not be misleading, nor should they be too complex to understand.
Response time: The time taken to respond to the user should not be so high that the user is frustrated or moves to some other option in the interface.
Help system: A good user interface provides help to the user at every step. The help documentation should not be redundant; it should be very precise and easily understood by every user of the system.
Error messages: For every exception in the system, there must be an error message in text form so that users can understand what has happened in the system. Error messages should be clear and meaningful.
An effective tool in the development of a usable application is the user-interface prototype. Such a prototype allows interaction between potential users, requirements personnel, and developers to determine the best approach to the system's interface. Prototypes are far superior because they are interactive and provide a more realistic preview of what the system will look like.
5. Compatibility/Conversion/Configuration Testing
Compatibility testing checks the compatibility of the system being developed with different operating systems, hardware, and software configurations. Many software systems interact with multiple CPUs. Some software controls real-time processes, and embedded software interacts with devices. In many cases, users require that the devices be interchangeable, removable, or reconfigurable. Very often the software provides a set of commands or menus that allows users to make these configuration changes. Configuration testing allows developers to evaluate system performance and availability when hardware exchanges and reconfigurations occur. The number of possible combinations of available hardware and software to check can be too high, making this type of testing a complex job. Testers should concentrate on the following areas during compatibility testing:
Operating systems: The specifications must state all the targeted end-user operating systems on which the system being developed will be run.
Software/Hardware: The product may need to operate with certain versions of Web browsers, with hardware devices like printers, or with other software such as virus scanners or processors.
Conversion testing: Compatibility may also extend to upgrades from previous versions of the software. In this case, the system must be upgraded properly, and all data and information from the previous version should be carried over. It should be specified whether the new system will be backward compatible with the previous version, and whether users' preferences or settings are to be preserved.
Ranking of possible configurations: Since there is a large set of possible configurations and compatibility concerns, testers must rank the possible configurations in order, from the most to the least common, for the target system.
Identification of test cases: Testers must identify appropriate test cases and data for compatibility testing. It is usually not feasible to run the entire set of possible test cases on every possible configuration, because this would take too much time. Therefore, it is necessary to select the most representative set of test cases that confirms the application's proper functioning on a particular platform.
Updating the compatibility test cases: The compatibility test cases must also be continually updated, using the following sources:
(i) Tech-support calls provide a good source of information for updating the compatibility test suite.
(ii) A beta-test program can also provide a large set of data regarding real end-user configurations and compatibility issues prior to the final release of a system.
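The ranking and selection steps above can be sketched with a simple enumeration. This is an assumed illustration: the operating systems, browsers, and market-share figures are made-up numbers, and multiplying shares is only a rough estimate of how common each joint configuration is.

```python
from itertools import product

# Hypothetical market shares; all names and numbers are illustrative.
os_share = {"Windows": 0.6, "macOS": 0.25, "Linux": 0.15}
browser_share = {"Chrome": 0.65, "Firefox": 0.2, "Safari": 0.15}

# Enumerate every OS/browser combination with a rough joint frequency.
configs = []
for (os_name, os_p), (browser, br_p) in product(os_share.items(),
                                                browser_share.items()):
    configs.append(((os_name, browser), os_p * br_p))

# Rank from most to least common, then keep the top few to test first.
configs.sort(key=lambda c: c[1], reverse=True)
top = [name for name, _ in configs[:3]]
print(top)  # prints [('Windows', 'Chrome'), ('macOS', 'Chrome'), ('Windows', 'Firefox')]
```

In practice the frequency data would come from the sources just listed, tech-support calls and beta-test programs, so the ranking stays aligned with the configurations real end-users actually run.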