Testing is an important step in developing software that is useful, usable, and, ultimately, successful. Quality assurance (QA) engineers are a critical part of the testing process. They review the specifications in technical design documents, create and prioritize detailed test plans and cases, and conduct quality testing, all while keeping detailed documentation of bugs and other issues found in the software under development.

Testing all possible formats of input data is extremely time-consuming. Many QA engineers use randomly generated input data to conduct tests, but this isn’t always the most suitable or efficient approach. By making effective use of a combination of established test design and testing techniques, QA engineers can write fewer test cases with a higher chance of detecting defects, saving them a lot of time when conducting testing.

What is a test design?

Test design (TD) consists of creating and writing a collection of test cases, also known as a test suite, for testing a piece of software. At this point in the testing process, it’s important to analyse what must be tested and define the test conditions. These test conditions are then converted into test cases using test data.

Test design techniques can be broken up into three main categories: specification-based (also known as black-box) techniques, structure-based (also known as white-box) techniques, and experience-based techniques. These are generally considered either static or dynamic. Using various individual or combined TD techniques from these categories to write test suites is the best way for a QA engineer to save time.

At SteelKiwi, we prefer using four main techniques to increase the speed of testing activities without compromising the quality of the testing we conduct on developed software. Our preferred methods include equivalence partitioning, boundary values, pairwise, and state transition diagrams.

Test design technique #1: Combining equivalence partitioning (EP) with boundary values (BV)

The equivalence partitioning (EP) technique, used in combination with boundary values (BV), can save testing time while still providing the necessary coverage. Both of these techniques are black-box techniques, simply testing functionality without looking at internal structure, and they are closely related.

The equivalence partitioning technique divides input data and groups it into equivalence classes. The system handles these classes in the same way, applying selected test conditions to each equivalence group.

When using the EP technique, only one condition from each group should be applied in testing. This means that testing other instances of a class is no longer necessary. If one condition in the group doesn’t work, we can assume that all conditions of this group won’t work either.

Test example

If you buy a product that is cheaper than $100, you receive 2% cash back. If your purchase ranges between $100 and $500, you get 5% cash back. When you buy a product that costs over $500, you get 9% cash back.

In this example, we divide the data set into equivalence classes, assuming that prices are precise to the cent ($0.01).

Cashback: 2% -> Buy: $0.01 – $99.99
Cashback: 5% -> Buy: $100 – $499.99
Cashback: 9% -> Buy: $500+

To reduce the quantity of test cases required, we need to take only one value from each equivalence class to conduct a test.
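A minimal sketch of the idea, assuming a hypothetical `cashback_rate` function that implements the rule above; one representative value per equivalence class is enough under EP:

```python
def cashback_rate(amount):
    """Return the cashback percentage for a purchase amount in dollars.

    Hypothetical implementation of the cashback rule described above.
    """
    if amount < 100:
        return 2
    elif amount < 500:
        return 5
    else:
        return 9

# One representative value from each equivalence class.
representatives = {50.00: 2, 250.00: 5, 1000.00: 9}
for amount, expected in representatives.items():
    assert cashback_rate(amount) == expected
print("EP checks passed")  # prints "EP checks passed"
```

If one representative fails, EP assumes the whole class would fail, so these three checks stand in for the entire input range.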

The EP technique works especially well when used together with the boundary value (BV) technique. In most applications, the majority of errors occur at the boundary of values being used. That is why the BV technique is employed to test the extreme ends of each equivalence class, identifying errors to make sure that the system is working properly.

Let’s apply the BV technique to our equivalence classes example above.

Test example

Take an extreme boundary value for each equivalence class

The extreme values are:
Buy: $0.01 – $99.99 -> 0.01; 99.99; 100
Buy: $100 – $499.99 -> 100; 499.99; 500
Buy: $500+ -> 500

We see that some test values overlap. We should use 0.01; 99.99; 100; 499.99; 500, and one value above 500, chosen randomly (e.g., 1,000), for our test cases. By using these two test design techniques together, we can decrease the number of test cases we need to run, while still providing the necessary test coverage for this data set.
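The combined EP + BV value set can be exercised directly. The sketch below assumes a hypothetical `cashback_rate` function implementing the rule above, and runs every boundary value plus one random value above $500:

```python
def cashback_rate(amount):
    """Hypothetical implementation of the cashback rule above."""
    if amount < 100:
        return 2
    elif amount < 500:
        return 5
    return 9

# Boundary values for each equivalence class, plus one interior
# value (1000) chosen from the open-ended upper class.
boundary_cases = [
    (0.01, 2), (99.99, 2),     # edges of the 2% class
    (100.00, 5), (499.99, 5),  # edges of the 5% class
    (500.00, 9), (1000.00, 9), # lower edge of the 9% class + one value above
]
for amount, expected in boundary_cases:
    assert cashback_rate(amount) == expected
print("BV checks passed")  # prints "BV checks passed"
```

Six values cover both the class interiors and every boundary where off-by-one defects typically hide.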

Test design technique #2: Pairwise (PW)

Pairwise (PW) is another commonly used test design technique at SteelKiwi. This TD technique fits nicely into the testing phase for both software and hardware development. According to IBM research, up to 97% of issues in software are caused by interaction between just two parameters.

PW saves time for a QA engineer because the test suite only needs to cover every possible pair of parameter values, rather than every combination of all parameters. This takes far less time than exhaustive testing when searching for defects in software.

There are many tools available for applying the PW technique in testing. The most widely used tools are PICT, Pairwiser, Hexawise, and Allpairs, among others. All of these tools work on the same underlying principle: we input all parameters with their relevant values, and the tool outputs a table of combinations that can be used as test cases.

Test example

We need to test a web application on the Linux, Windows, and MacOS operating systems, with 1920x1080 and 1024x768 screen resolutions, using the Google Chrome, Opera, Safari, and Internet Explorer browsers.
In total, that means there are 3 operating systems, 2 screen resolutions, and 4 browsers to test, for a total of 24 exhaustive test cases (3 × 2 × 4).
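The exhaustive count is easy to confirm in a few lines of Python (a sketch; the value lists simply mirror the parameters above):

```python
from itertools import product

oses = ["Linux", "Windows", "MacOS"]
resolutions = ["1920x1080", "1024x768"]
browsers = ["Google Chrome", "Opera", "Safari", "Internet Explorer"]

# Every combination of every parameter value: 3 x 2 x 4.
all_cases = list(product(oses, resolutions, browsers))
print(len(all_cases))  # prints 24
```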

| OS      | Screen resolution | Browser           |
|---------|-------------------|-------------------|
| Linux   | 1920x1080         | Google Chrome     |
| Windows | 1024x768          | Opera             |
| MacOS   |                   | Safari            |
|         |                   | Internet Explorer |

A table should be built in such a way that each parameter creates a unique pair with another parameter from each group.

| Browser           | OS      | Screen resolution |
|-------------------|---------|-------------------|
| Google Chrome     | Linux   | 1920x1080         |
| Google Chrome     | Windows | 1920x1080         |
| Google Chrome     | MacOS   | 1920x1080         |
| Opera             | Linux   | 1920x1080         |
| Opera             | Windows | 1920x1080         |
| Opera             | MacOS   | 1920x1080         |
| Safari            | Linux   | 1920x1080         |
| Safari            | Windows | 1920x1080         |
| Safari            | MacOS   | 1920x1080         |
| Internet Explorer | Linux   | 1920x1080         |
| Internet Explorer | Windows | 1920x1080         |
| Internet Explorer | MacOS   | 1920x1080         |
| Google Chrome     | Linux   | 1024x768          |
| Google Chrome     | Windows | 1024x768          |
| Google Chrome     | MacOS   | 1024x768          |
| Opera             | Linux   | 1024x768          |
| Opera             | Windows | 1024x768          |
| Opera             | MacOS   | 1024x768          |
| Safari            | Linux   | 1024x768          |
| Safari            | Windows | 1024x768          |
| Safari            | MacOS   | 1024x768          |
| Internet Explorer | Linux   | 1024x768          |
| Internet Explorer | Windows | 1024x768          |
| Internet Explorer | MacOS   | 1024x768          |

We need to exclude the non-existent pairs from the table above: Linux – Safari and Linux – Internet Explorer. We then check unique parameter pairs in the first and second groups, then the first and third groups, and, finally, the second and third groups, keeping only the unique rows of parameters.

| Browser           | OS      | Screen resolution |
|-------------------|---------|-------------------|
| Google Chrome     | Linux   | 1920x1080         |
| Google Chrome     | Windows | 1024x768          |
| Google Chrome     | MacOS   |                   |
| Opera             | Linux   | 1024x768          |
| Opera             | Windows | 1920x1080         |
| Opera             | MacOS   |                   |
| Safari            | Windows | 1024x768          |
| Safari            | MacOS   | 1920x1080         |
| Internet Explorer | Windows | 1024x768          |
| Internet Explorer | MacOS   | 1920x1080         |

Each remaining row covers at least one unique pair of parameter values. Blank cells can be filled with any value from that parameter group. For this example, the number of test cases decreases from 24 to 10.
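The reduction can be sanity-checked in code. The sketch below (an illustration, not output from any of the tools mentioned) lists the ten cases with the blank cells filled in, and verifies that every valid pair of parameter values appears in at least one case:

```python
# The ten pairwise cases from the table, with blank cells filled in
# (any value from the group works for those cells).
cases = [
    ("Google Chrome",     "Linux",   "1920x1080"),
    ("Google Chrome",     "Windows", "1024x768"),
    ("Google Chrome",     "MacOS",   "1024x768"),   # blank cell filled
    ("Opera",             "Linux",   "1024x768"),
    ("Opera",             "Windows", "1920x1080"),
    ("Opera",             "MacOS",   "1920x1080"),  # blank cell filled
    ("Safari",            "Windows", "1024x768"),
    ("Safari",            "MacOS",   "1920x1080"),
    ("Internet Explorer", "Windows", "1024x768"),
    ("Internet Explorer", "MacOS",   "1920x1080"),
]

browsers = {"Google Chrome", "Opera", "Safari", "Internet Explorer"}
oses = {"Linux", "Windows", "MacOS"}
resolutions = {"1920x1080", "1024x768"}
invalid = {("Safari", "Linux"), ("Internet Explorer", "Linux")}

# Every valid pair drawn from any two parameter groups must be covered.
required = set()
for b in browsers:
    for o in oses:
        if (b, o) not in invalid:
            required.add((b, o))
for b in browsers:
    for r in resolutions:
        required.add((b, r))
for o in oses:
    for r in resolutions:
        required.add((o, r))

covered = set()
for b, o, r in cases:
    covered.update([(b, o), (b, r), (o, r)])

assert required <= covered
print(f"all {len(required)} valid pairs covered by {len(cases)} cases")
# prints "all 24 valid pairs covered by 10 cases"
```

Ten cases cover all 24 valid pairs, whereas exhaustive testing would need 24 full combinations.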

The difficulty of testing increases as the number of parameters grows. However, by using PW tools to handle the pairing, QA engineers can save a lot of time.

Test design technique #3: State transition diagram (STD)

Another technique we frequently use at SteelKiwi to save time for our QA engineers is the state transition diagram technique, also known as the Harel state chart or a state machine diagram. This technique is especially useful when needing to test different system transitions.

An STD describes the behavior of an object, identifying how it works within the system. Every system has a number of states, and events can move the system from one state to another, creating transitions between them. From these states, events, and transitions, we build an STD graph. This graph should display all possible states of the system, the transitions between states, and the events that trigger those states and/or transitions.

Test conditions can be derived from the state graph in different ways. Each state can be noted as a test condition, as can each transition. The diagram will show which test cases might be most useful.

The advantages of this test design technique can be easily seen in small systems. Unfortunately, this technique is not suitable for large systems because of the need to define all possible states within a system. Larger systems make this task difficult, if not impossible, in many cases.

Test example

Take a small system, like online hotel booking, and define all of the states of the system, as well as trigger events and transitions between the various states.

The states of a reservation are: Requested, Confirmed, Canceled, and Booked.

All associated events and transitions are displayed in the picture below.

ID: F1
Name: Checking the successful booking
1. Request available room
2. Wait for room confirmation
3. Pay for room
Expected result: The room was successfully booked and the available room counter decremented.

ID: F2
Name: Checking that a request is canceled when no room is available
1. Request an unavailable room
Expected result: The request is successfully canceled, as no room is available.

ID: F3
Name: Checking a successful cancellation of a booked room
1. Request available room
2. Wait for room confirmation
3. Cancel room
Expected result: The room was successfully canceled and room counter was incremented.
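The reservation states and the three test paths above can be sketched as a small transition table. The event names here are assumptions chosen for illustration:

```python
# Allowed transitions of the hypothetical booking state machine:
# (current state, event) -> next state.
TRANSITIONS = {
    ("Requested", "room available"):   "Confirmed",
    ("Requested", "room unavailable"): "Canceled",
    ("Confirmed", "payment received"): "Booked",
    ("Confirmed", "cancel request"):   "Canceled",
}

def run(events, state="Requested"):
    """Apply a sequence of events, failing on any undefined transition."""
    for event in events:
        state = TRANSITIONS[(state, event)]
    return state

# The three test cases as event paths through the diagram.
assert run(["room available", "payment received"]) == "Booked"    # F1
assert run(["room unavailable"]) == "Canceled"                    # F2
assert run(["room available", "cancel request"]) == "Canceled"    # F3
print("all state-transition paths passed")
```

Encoding the diagram as a lookup table makes any undefined transition fail immediately, which is exactly what an STD-derived test case is meant to catch.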

By running through all possible states, trigger events, and transitions within a small system using STD, QA engineers are able to write better test cases that cover all possible results.

Using test design techniques to save QA time

Poorly designed tests provide poor testing coverage which, ultimately, leads to a failure to identify defects within a system or piece of software. For each specific case, we need to select the right TD technique, or use a combination of several TD techniques, in order to get the best, most efficient testing results.

By employing the TD techniques we’ve covered above, QA engineers can reduce the number of written test cases, helping them achieve the required level of testing coverage while also increasing the detection of system bugs. These techniques assist QA engineers in saving both time and effort when conducting their testing on developed software.
