
Methods of software testing and their comparison. Black-box testing and white-box testing

Software testing reveals defects, flaws, and errors in code that need to be eliminated. It can also be defined as the process of evaluating the functionality and correctness of software through analysis. The main methods of software integration and testing ensure application quality and consist of checking the specification, design, and code; evaluating reliability; and performing validation and verification.

Methods

The main goal of software testing is to confirm the quality of a software package by systematically exercising applications under carefully controlled conditions, determining their completeness and correctness, and uncovering hidden errors.

Methods for testing programs can be divided into static and dynamic.

The former include informal, management, and technical reviews, inspections, walkthroughs (step-by-step analysis), audits, as well as static analysis of data flow and control flow.

Dynamic techniques are as follows:

  1. White-box testing: a detailed study of the program's internal logic and structure, which requires knowledge of the source code.
  2. Black-box testing: a technique that requires no knowledge of the application's internal workings. Only the external aspects of the system are considered, with little or no reference to its internal logical structure.
  3. Gray-box testing: a combination of the previous two approaches. Testing with limited knowledge of the application's internal workings is combined with knowledge of the system's external aspects.

White-box testing

The white-box method uses test scenarios built from the control structure of a procedural design. By analyzing the internal workings of a piece of software, this technique can reveal implementation errors such as poorly structured code. These test methods are applicable at the unit, integration, and system levels. The tester must have access to the source code and use it to determine which block is behaving incorrectly.

Testing programs using the white box method has the following advantages:

  • It can detect errors in hidden code and expose unnecessary lines that should be removed;
  • Side effects can be discovered;
  • Maximum coverage can be achieved when writing test scenarios.

Disadvantages:

  • It is a costly process that requires a skilled tester;
  • Many paths remain unexplored, since thoroughly checking every possible hidden error is very complicated;
  • Missing functionality will go unnoticed, since only the existing code is examined.

White-box testing is sometimes also called transparent or open-box testing, structural testing, logical testing, or source-based testing.

The main varieties are:

1) control-flow testing: a structural strategy that uses the program's control flow as a model and favors a larger number of simple paths over a smaller number of more complex ones;

2) branch testing: designed to examine every outcome (true or false) of each control statement, including combined conditions;

3) basis-path testing: lets the tester derive a measure of the logical complexity of a procedural design and use it to define a basic set of execution paths;

4) data-flow testing: a control-flow strategy that annotates the graph with information about the declaration and use of program variables;

5) loop testing: focused entirely on the correct execution of loop constructs.
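As a minimal sketch of branch testing (variety 2 above), consider a function with a single condition. The function and its inputs are hypothetical; the point is that each outcome of the condition receives at least one test:

```python
def classify(n):
    """Label an integer; the `if` creates two branches to cover."""
    if n < 0:
        return "negative"
    return "non-negative"

# Branch testing requires at least one test per outcome of each condition:
cases = [(-5, "negative"), (0, "non-negative"), (7, "non-negative")]
for value, expected in cases:
    assert classify(value) == expected
```

With a coverage tool such as coverage.py, branch coverage of such tests can also be measured automatically rather than tracked by hand.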

Behavioral testing

Black-box testing treats the software as a "black box": information about the program's internal workings is not used, and only the external aspects of the system are checked. The tester needs to know the system's architecture but has no access to the source code.

Advantages of this approach:

  • It is effective for large code bases;
  • Tests are easy for the tester to understand;
  • The user's perspective is clearly separated from the developer's (the programmer and the tester are independent of each other);
  • Tests can be created more quickly.

Testing programs using the black box method has the following drawbacks:

  • Only a limited number of test scenarios are actually executed, resulting in limited coverage;
  • Without a clear specification, test scenarios are difficult to develop;
  • Testing can be inefficient.

Other names for this technique are behavioral, opaque, functional, and closed-box testing.

This category includes the following methods of software testing:

1) equivalence partitioning, which can reduce the set of test data by dividing the input data of a program module into separate partitions;

2) boundary value analysis, which focuses on checking boundaries or extreme values: minima, maxima, erroneous, and typical values;

3) fuzzing, used to find implementation errors by feeding distorted or semi-valid data to the program in automatic or semi-automatic mode;

4) cause-effect graphs, a technique based on building graphs that connect an effect to its causes: identity, negation, logical OR, and logical AND are the four basic symbols expressing the interdependence between cause and effect;

5) orthogonal array testing, applied to problems whose input domain is relatively small but still exceeds the possibilities of exhaustive study;

6) all-pairs testing, a technique whose set of test values includes all possible discrete combinations of each pair of input parameters;

7) state transition testing, a technique useful for checking state machines and for navigating a graphical user interface.
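To illustrate state transition testing (method 7), here is a sketch with a hypothetical two-state machine (a door that opens and closes); the tests walk every valid transition plus one invalid event:

```python
# Hypothetical state machine: (current_state, event) -> next_state.
TRANSITIONS = {
    ("closed", "open"): "opened",
    ("opened", "close"): "closed",
}

def step(state, event):
    """Apply an event; an invalid transition leaves the state unchanged."""
    return TRANSITIONS.get((state, event), state)

# State transition testing covers every valid transition and at least one invalid one:
assert step("closed", "open") == "opened"
assert step("opened", "close") == "closed"
assert step("closed", "close") == "closed"   # invalid event: no state change
```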

Black-box testing: examples

The black-box technique is based on specifications, documentation, and descriptions of the software's interfaces or of the system. In addition, models (formal or informal) representing the expected behavior of the software can be used.

Typically, this method is used for testing user interfaces: it requires interacting with the application by entering data and collecting the results from the screen, from reports, or from printouts.

The tester thus interacts with the software through its inputs, acting on switches, buttons, or other interfaces. The choice of input data and the order or sequence of actions can lead to a gigantic total number of combinations, as the following example shows.

How many tests are needed to check all possible values of 4 checkboxes and one two-digit field specifying a time in seconds? At first glance the calculation is simple: 4 fields with two possible states give 2^4 = 16 combinations, which must be multiplied by the number of possible values from 00 to 99, yielding 1600 possible tests.

Nevertheless, this calculation is wrong: the two-digit field can also contain blanks, i.e., it consists of two alphanumeric positions that may include letters, special characters, spaces, and so on. If the system is a 16-bit computer, each position then has 2^16 = 65,536 variants, giving 65,536^2 = 4,294,967,296 test cases for the field, which must be multiplied by the 16 flag combinations for a total of 68,719,476,736 tests. Executing them at a rate of 1 test per second, the total test duration would be 2,177.5 years. For 32- or 64-bit systems, the duration is even greater.

Therefore, this time must be reduced to an acceptable value, using techniques that shrink the number of test cases without reducing the coverage of testing.
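The arithmetic of the naive example above can be checked directly, along with one way of shrinking it; the three representative values chosen for the seconds field are an assumption for illustration only:

```python
from itertools import product

checkboxes = [(False, True)] * 4        # 4 two-state flags -> 2**4 = 16 combinations
seconds = range(100)                    # naive view of the two-digit field, 00-99

exhaustive = list(product(*checkboxes, seconds))
print(len(exhaustive))                  # 1600 combinations, as computed in the text

# Equivalence partitioning keeps one representative per region of interest:
representatives = [0, 50, 99]           # assumed: low boundary, typical value, high boundary
reduced = list(product(*checkboxes, representatives))
print(len(reduced))                     # 48 tests instead of 1600
```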

Equivalence partitioning

Equivalence partitioning is a simple method applicable to any variables present in the software, whether input or output values, symbolic, numeric, and so on. It rests on the principle that all data from one equivalence partition will be processed in the same way by the same instructions.

During testing, one representative is selected from each defined equivalent partition. This allows you to systematically reduce the number of possible test cases without losing the scope of commands and functions.

Another consequence of this partitioning is the reduction of the combinatorial explosion between the various variables and the associated reduction in test cases.

For example, for the function (1/x)^(1/2), three data sets corresponding to three equivalence partitions can be used:

1. All positive numbers will be processed in the same way and should give correct results.

2. All negative numbers will be processed in the same way, all producing an erroneous result, since the square root of a negative number is imaginary.

3. Zero will be processed separately and will produce a "division by zero" error. This is a partition with a single value.

We thus see three different partitions, one of which reduces to a single value. There is one "correct" partition giving reliable results and two "incorrect" ones giving erroneous results.
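The three partitions above can be demonstrated with a short sketch, assuming Python's `math.sqrt`, which raises `ValueError` for negative arguments:

```python
import math

def f(x):
    """Compute (1/x)**(1/2), the function from the example."""
    return math.sqrt(1 / x)

# One representative value per equivalence partition:
assert f(4) == 0.5                       # partition 1: positives -> correct result
try:
    f(-4)                                # partition 2: negatives -> imaginary root
    raise AssertionError("negative input should have failed")
except ValueError:
    pass
try:
    f(0)                                 # partition 3: zero -> division by zero
    raise AssertionError("zero input should have failed")
except ZeroDivisionError:
    pass
```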

Boundary analysis

Data at the boundaries of an equivalence partition may be handled differently than expected. Studying boundary values is a well-known way of analyzing software behavior in such areas. This technique can reveal errors such as:

  • Misuse of relational operators (<, >, =, ≠, ≥, ≤);
  • Off-by-one errors;
  • Problems in loops and iterations;
  • Incorrect types or sizes of the variables used to store data;
  • Artificial constraints associated with the data and variable types.
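A minimal sketch of boundary value analysis, using a hypothetical validator for the two-digit seconds field from the earlier example: each boundary and its immediate neighbors get a test, which is exactly where an off-by-one mistake (e.g., writing < 99 instead of <= 99) would surface:

```python
def valid_seconds(n):
    """Accept a value in the inclusive range 0..99."""
    return 0 <= n <= 99

# Test each boundary and its neighbors on both sides:
cases = [(-1, False), (0, True), (1, True),
         (98, True), (99, True), (100, False)]
for n, expected in cases:
    assert valid_seconds(n) is expected
```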

Semi-transparent testing

The gray-box method broadens the scope of verification, making it possible to focus on all levels of a complex system by combining the white-box and black-box techniques.

With this technique, the tester must know the internal data structures and algorithms in order to design test values. Examples of bases for gray-box test design are:

  • The architectural model;
  • Unified Modeling Language (UML) models;
  • The state model (finite state machine).

In the gray-box method, test cases are designed from module code using white-box knowledge, while the actual tests are executed against the program's interfaces, as in the black-box technique.

Such testing methods have the following advantages:

  • It combines the advantages of the white-box and black-box techniques;
  • The tester relies on the interface and the functional specification rather than on the source code;
  • The tester can create excellent test scenarios;
  • The check is made from the user's point of view rather than the designer's;
  • Custom test designs can be created;
  • Objectivity.

Disadvantages:

  • Test coverage is limited, since there is no access to the source code;
  • Defects in distributed applications are hard to detect;
  • Many paths remain unexplored;
  • If the software developer has already run a given test, repeating it may be redundant.

Another name for the gray-box technique is semi-transparent (translucent) testing.

This category includes such testing methods:

1) orthogonal array testing: using a subset of all possible combinations;

2) matrix testing: using program state data;

3) regression testing: performed whenever new changes are introduced into the software;

4) pattern testing: analyzing the design and architecture of a well-built application.
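Regression testing (method 3) can be sketched as re-running a recorded baseline after every change; the function under test and its baseline values here are hypothetical:

```python
def discount(price):
    """Function under test: apply a hypothetical 10% discount."""
    return round(price * 0.9, 2)

# Known-good outputs captured before the latest change:
baseline = {100: 90.0, 19.99: 17.99, 0: 0.0}

# The regression check replays every recorded input and compares results:
failures = [p for p, expected in baseline.items() if discount(p) != expected]
assert not failures   # any entry here signals a regression
```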

Comparison of software testing methods

Using any of the dynamic methods leads to a combinatorial explosion in the number of tests that must be developed, implemented, and run. Each technique should therefore be used pragmatically, with its limitations in mind.

There is no single correct method; there are only methods better suited to a specific context. Structural techniques can find useless or malicious code, but they are complex and hard to apply to large programs. Specification-based methods are the only ones able to identify missing code, but they cannot identify extraneous code. Some techniques are better suited than others to a particular testing level, type of error, or context.

The table below summarizes the main differences between the three dynamic testing techniques.

| Aspect | Black-box method | Gray-box method | White-box method |
| --- | --- | --- | --- |
| Knowledge of the program's internals | Only external aspects are analyzed | Partial knowledge of the internal workings | Full access to the source code |
| Granularity of testing | Low | Medium | High |
| Who performs the testing? | End users, testers, and developers | End users, testers, and developers | Developers and testers |
| Basis for tests | External behavior, requirements, and abnormal situations | DB diagrams, data-flow diagrams, internal states, knowledge of the algorithm and architecture | The internal workings are fully known |
| Degree of coverage | Least exhaustive; requires the least time | Medium | Potentially the most exhaustive; time-consuming |
| Data and internal boundaries | Checked only by trial and error | Data domains and internal boundaries can be checked if known | Best checking of data domains and internal boundaries |
| Suitable for testing algorithms | No | No | Yes |

Automation

Automated methods of testing software products greatly simplify the verification process, regardless of the technical environment or software context. They are used in two cases:

1) to automate the execution of tedious, repetitive, or meticulous tasks, such as comparing files several thousand lines long, freeing the tester's time to concentrate on more important points;

2) to perform or track tasks that cannot easily be carried out by people, such as performance checks or response-time analysis, which may need to be measured in hundredths of a second.
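Case 1 above (automating a tedious file comparison) can be sketched with Python's standard library; the file contents are invented for the example:

```python
import filecmp
import os
import tempfile

def write_lines(lines):
    """Write the given lines to a throwaway file and return its path."""
    f = tempfile.NamedTemporaryFile("w", delete=False, suffix=".txt")
    f.write("\n".join(lines))
    f.close()
    return f.name

# Two multi-thousand-line files a human would dread comparing by eye:
expected = write_lines(f"record {i}" for i in range(5000))
actual = write_lines(f"record {i}" for i in range(5000))

identical = filecmp.cmp(expected, actual, shallow=False)  # byte-for-byte comparison
os.unlink(expected)
os.unlink(actual)
assert identical
```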

Test tools can be classified in different ways. The following division is based on the tasks they support:

  • Test management, which includes support for project, version, and configuration management, risk analysis, tracking of tests, errors, and defects, and reporting tools;
  • Requirements management, which includes storing requirements and specifications, checking them for completeness and ambiguity, prioritizing them, and tracing each test back to them;
  • Critical review and static analysis, including monitoring of flow and tasks, recording and storing comments, recording defects and planned corrections, managing references to checklists and rules, tracking links between source documents and code, static analysis with defect detection, enforcement of coding standards, analysis of structures and their dependencies, and computation of code and architecture metrics. Compilers, link analyzers, and cross-reference generators are also used here;
  • Modeling, which includes tools for modeling business behavior and testing the resulting models;
  • Test development, which provides generation of expected data from user conditions and interfaces, models, and code; management of data to create or modify files, databases, and messages; data validation against business rules; and analysis of conditions and risk statistics;
  • Critical viewing, i.e., entering data through the graphical user interface, an API, or the command line, with comparators that help determine which tests passed and which failed;
  • Support for test environments, which makes it possible to replace missing hardware or software: hardware simulators based on a deterministic subset of outputs; emulators of terminals, mobile phones, or network equipment; environments for checking languages, operating systems, and hardware by replacing missing components with drivers, stub modules, and so on; and tools for intercepting and modifying OS requests and for simulating limits on CPU, RAM, ROM, or the network;
  • Comparison of data files and databases, and checking of expected results during and after testing, including dynamic and batch comparison and automatic "oracles";
  • Coverage measurement, for localizing memory leaks and improper memory management, assessing system behavior under simulated load, and generating application, database, network, or server load under realistic growth scenarios in order to measure, analyze, verify, and report on system resources;
  • Security testing;
  • Performance testing, load testing, and dynamic analysis;
  • Other tools, including those for checking spelling and syntax, network security, the presence of all pages of a website, and so on.

Perspective

As trends in the software industry change, the debugging process changes as well. New technologies such as service-oriented architecture (SOA), wireless technologies, and mobile services have opened new ways of testing software. Some of the changes expected in this industry over the next few years are listed below:

  • Testers will provide lightweight models that developers can use to test their code;
  • The development of testing methods that include early viewing and modeling of programs will eliminate many contradictions;
  • Multiple test hooks will shorten error-detection time;
  • Static analyzers and detection tools will be applied more widely;
  • The use of useful metrics, such as specification coverage, model coverage, and code coverage, will guide the development of projects;
  • Combinatorial tools will allow testers to prioritize the directions of testing;
  • Testers will provide more visible and valuable services throughout the software development process;
  • Testers will be able to create tools and methods for testing software written in, and interacting with, different programming languages;
  • Testing specialists will become more professionally trained.

New business-oriented testing methods will emerge, and the ways we interact with systems and the information they provide will change, reducing risks and increasing the benefits of business change.
