
Test Traceability: What This Means and Why It Matters

By Testim

Why test software? This question is so fundamental to software development that all of us would answer, “It’s simple. Testing is the only way to verify that the application works as expected.” If we put that in software development terms, we say, “We test in order to ensure that the application meets its requirements or specifications.” Based on this simple answer, we build useful test cases that cover our application’s functionality. So, testing proves that our application works properly. But how do we develop a testing strategy? Well, being able to trace requirements to test cases forms the basis of a testing strategy. In this post, we’ll explore what test traceability is and why it is important.

The Basics of Test Traceability

For demonstration purposes, I’ll use a very simple example. Let’s assume that our application has a login screen that takes user input and verifies that the user is allowed to access the application. Currently, a user must enter a valid username and password combination in order to access the application.

Now let’s ask another simple question: Does the login functionality work? The answer is, again, simple. All you need to do is look at the login test case and go through its test steps to find out. Great! There’s nothing difficult here, but this is where test traceability starts. To simplify it, test traceability points to which test case verifies which piece of application functionality.
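
To make this concrete, here is a minimal Python sketch of what a login check and the test cases that trace to it might look like. The `login` function, the user store, and the test names are all illustrative, not part of any real application:

```python
# Hypothetical login validator and the test cases that trace to the
# requirement "a valid username/password combination grants access."
VALID_USERS = {"alice": "s3cret"}  # stand-in for a real user store

def login(username, password):
    """Return True when the username/password pair is valid."""
    return VALID_USERS.get(username) == password

def test_login_valid_credentials():
    # Traces to the requirement: a valid combination grants access.
    assert login("alice", "s3cret") is True

def test_login_invalid_credentials():
    # Traces to the requirement: anything else is rejected.
    assert login("alice", "wrong") is False

test_login_valid_credentials()
test_login_invalid_credentials()
```

If both tests pass, the login requirement is verified; if one fails, you know exactly which piece of functionality to inspect.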

I will keep using our simple login example in mind as I illustrate three basic ways that test traceability helps software development.

1. Forward Traceability

First, running the test case that verifies the login functionality can tell you if the application meets the login requirements. Usually, product owners and business people work on creating requirements that are translated by engineers to software specifications. Quality assurance people normally work with product owners and engineers to develop meaningful test cases that capture the intent and functionality of specific requirements.

In our example, when the login test passes, it means that the login functionality works as expected. This simple ability to go from a requirement to a particular test case is very important for product owners as they plan new features and products. This is called forward traceability.

2. Backward Traceability

Second, suppose I run multiple tests (preferably automatically) and the login test case fails. That failure quickly points to a specific area of the application that’s broken. This simple ability to go from a test case back to application functionality is very important for developers and quality assurance. This is how quality assurance folks discover problems and create descriptive bug reports. Additionally, developers get specific context for a bug. This is called backward traceability.
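
Backward traceability can be sketched as the inverse lookup: given a failing test, find the requirement (and therefore the application area) it covers. The IDs here are again purely illustrative:

```python
# Backward traceability: invert the requirement -> tests map so a
# failing test name leads straight to the affected requirement.
TRACEABILITY = {
    "REQ-LOGIN-1": ["test_login_valid_credentials"],
    "REQ-LOGIN-2": ["test_login_invalid_credentials"],
}

REVERSE = {test: req for req, tests in TRACEABILITY.items() for test in tests}

def requirement_for(failing_test):
    """Given a failing test, return the requirement it verifies."""
    return REVERSE.get(failing_test, "unmapped")

print(requirement_for("test_login_valid_credentials"))  # → REQ-LOGIN-1
```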

3. Extending Traceability

Third, think about enhancing your application. Let’s say your marketing department says you need to add the ability to log into the application using the user’s email address in addition to their username. So, software developers start coding and merge the new code into the baseline. Now it’s up to quality assurance to test this new functionality. It’s only logical to add the new email-address authentication tests to the existing login test cases. Once that’s done, you have extended your existing test cases to cover your extended login functionality. I don’t have an official term for this, but I like to call it extending traceability.
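
Sticking with an illustrative login sketch, extending traceability might look like this: the original username test stays untouched, and new assertions cover the email-address requirement (all identifiers are hypothetical):

```python
# Extended login: accepts either a username or an email address.
VALID_USERS = {"alice": "s3cret"}
VALID_EMAILS = {"alice@example.com": "s3cret"}

def login(identifier, password):
    """Accept a username or an email address plus a password."""
    if identifier in VALID_USERS:
        return VALID_USERS[identifier] == password
    return VALID_EMAILS.get(identifier) == password

# Original test case, unchanged
assert login("alice", "s3cret")
# New test cases covering the extended requirement
assert login("alice@example.com", "s3cret")
assert not login("alice@example.com", "wrong")
```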

Now that we have several test cases (or a test suite) that ensure our login functionality works as expected, we can perform test runs. A test run means running all the tests in our test suite against a particular version of our application. The key here is that the application code doesn’t change during a particular test run.

Now that we have the benefits from this simple example in mind, we can talk about defining test traceability.

What Is Test Traceability?

We have already established that in test tracing we link requirements to a specific test case or group of test cases. Fundamentally, test traceability proves that the application functions as expected. We can now provide a definition of test traceability.

Test traceability is the ability to link a test to a set of requirements and confidently verify that the application works as expected. Additionally, as the application evolves, the tests evolve with it to continually verify application functionality.

Why Test Traceability Matters

Simply put, test traceability ensures that all functionality is verified. It enables you to quickly test specific areas of the application whenever you apply software fixes or you add new functionality.

When your application is small, you know exactly what’s going on when things break. In addition, if your development team consists of one or two people, things are simple. When bugs appear, you simply fix them. However, the more complex your application becomes, and the more people work on the code, the more precisely you must pinpoint the areas that need fixing.

Complexity Complicates Testing

Most small companies start tracing requirements to test cases using a spreadsheet that maps requirement numbers to test case numbers (a traceability matrix). Even though this method sounds antiquated today, many software shops that build custom software rely on spreadsheets. Also, when your company is small, you might also rely on a lot of manual testing—because we all know that writing efficient automated tests takes time. However, as application complexity increases, things get complicated quickly.
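
Such a spreadsheet reduces to a two-column table, which is easy to load and query programmatically. Here is a sketch with made-up requirement and test-case IDs:

```python
# A minimal traceability matrix of the kind small teams keep in a
# spreadsheet: each row links a requirement to a covering test case.
import csv
import io

MATRIX_CSV = """requirement_id,test_case_id
REQ-LOGIN-1,TC-001
REQ-LOGIN-2,TC-002
REQ-LOGIN-2,TC-003
"""

matrix = {}
for row in csv.DictReader(io.StringIO(MATRIX_CSV)):
    matrix.setdefault(row["requirement_id"], []).append(row["test_case_id"])

print(matrix)
# → {'REQ-LOGIN-1': ['TC-001'], 'REQ-LOGIN-2': ['TC-002', 'TC-003']}
```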

When application complexity increases and multiple application versions are being developed in parallel, things get really interesting.

Parallel Development Complicates Testing

Let’s say you’re trying to release version X of your software. However, some members of your development team also are finishing functionality for the next version (version Y) of your software.

In this scenario of parallel development, the quality assurance team must know which login tests to run against which version of the application. When testing version X of the application, they should run the login tests appropriate to version X. When testing version Y of the application, QA must ensure they’re running login tests appropriate to version Y.

At this point, using a simple spreadsheet and manual test cases makes test tracing a nightmare. This is the point when companies start taking automated testing seriously and invest time and resources in developing it. Robust testing that covers most of your application code brings confidence in application functionality and stability.

Automated Testing Brings Confidence

Current CI/CD practices integrate and deploy new code very often. As a result, the system changes frequently. That means maintaining system stability while ensuring that the application meets business requirements becomes a delicate balancing act. Having automated testing in place ensures system stability and integrity as the system changes with development efforts.

Unit Tests

Usually, the first step toward automated testing is for developers to create unit tests and integrate them into the build process. Ideally, developing unit tests should be part of any application development effort. However, under schedule pressure, developers often skip testing in the early stages of development. Once efficient unit tests cover most application code, confidence in system stability rises. Using modern tools, developers can see how much of their code is covered by tests. This is how they trace unit tests to the code they verify.
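
As a toy illustration of coverage-style tracing (a real project would use a dedicated tool such as coverage.py), one can record which functions the unit tests actually exercised:

```python
# Record which functions the tests touch; anything missing from the
# "exercised" set is uncovered code. All names here are illustrative.
import functools

exercised = set()

def traced(fn):
    """Decorator that notes when a function is exercised by a test."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        exercised.add(fn.__name__)
        return fn(*args, **kwargs)
    return wrapper

@traced
def login(username, password):
    return (username, password) == ("alice", "s3cret")

@traced
def logout(username):
    return True

# The unit test only exercises login, so logout shows up as uncovered.
assert login("alice", "s3cret")
print(sorted(exercised))              # → ['login']
print(logout.__name__ in exercised)   # → False
```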

Multiple-Layer Tests

The next layer of automated testing creates tests that span several layers of the application. Unlike unit tests, multiple-layer tests reach from the API layer through the domain objects to the storage layer. They test a slice of the application and thus cover the integration of its various layers. Developers can trace groups of requirements to these multiple-layer tests because they span a lot of the codebase. At this stage, however, developers focus on testing the back-end layers, while user interface testing remains largely manual.
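
A multiple-layer test can be sketched as one test that drives a hypothetical API-layer function and then asserts on what reached an in-memory stand-in for the storage layer:

```python
class InMemoryStore:
    """Stand-in for the storage layer."""
    def __init__(self):
        self.rows = {}

    def save(self, key, value):
        self.rows[key] = value

def register_user_api(store, username, password):
    """Hypothetical API-layer entry point that persists via the store."""
    if not username or not password:
        raise ValueError("username and password required")
    store.save(username, {"password": password})
    return {"status": "created"}

def test_register_spans_api_and_storage():
    store = InMemoryStore()
    response = register_user_api(store, "alice", "s3cret")
    assert response["status"] == "created"  # API layer responded correctly
    assert "alice" in store.rows            # storage layer was reached

test_register_spans_api_and_storage()
```

One test, two layers: a failure here points at the integration between them rather than at a single unit.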

User Interface Driven Tests

The third layer of testing is more complex as it requires more effort. The goal here is to develop automated tests that exercise the code from the user interface through the REST layer and down to storage. Whether it’s through using dummy data or mocking, being able to run tests from the top of the application through to the bottom of it (depending on how you look at it) will bring high confidence in the code and application. Developers can trace many high-level requirements to these comprehensive tests as they cover many code paths.
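
A UI-driven test can be sketched with a stand-in page object in place of a real browser driver: the test fills the form at the top and the assertion follows the action down to a fake storage layer. Everything here is illustrative:

```python
class FakeStorage:
    """Stand-in for the bottom (storage) layer."""
    users = {"alice": "s3cret"}

class FakeLoginPage:
    """Stand-in for a UI driver filling and submitting the login form."""
    def __init__(self, storage):
        self.storage = storage
        self.fields = {}

    def fill(self, field, value):
        self.fields[field] = value

    def submit(self):
        # UI -> (a REST layer would sit here) -> storage lookup.
        user = self.fields.get("username")
        password = self.fields.get("password")
        return self.storage.users.get(user) == password

page = FakeLoginPage(FakeStorage())
page.fill("username", "alice")
page.fill("password", "s3cret")
assert page.submit() is True
```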

Having all these layers of testing that cover your application code will require some tools that can help you maximize your confidence and keep track of your testing and requirements. This is when companies invest in conducting trade studies and tool evaluation to select a good testing suite that will help them move forward.

Test Tracing Proves Things Work

We have discussed what we are tracing in our test cases and how tracing helps us focus. In addition, we have explored the benefits of automated testing. I know that having 100% test coverage for an application is a lofty ideal that often comes into conflict with the daily pressures of delivering software. However, having a clear testing strategy with an efficient way to trace application functionality to actual test cases will bring strong confidence in your product. In the end, test tracing proves your application works … or that it doesn’t.

This post was written by Vlad Georgescu. Vlad is a senior system architect and full-stack enterprise software developer with almost two decades of experience in the development lifecycle. He also has experience as a SCRUM master, agile coach, and team leader. On the communications front, Vlad is also effective: he’s created online communities and worked on social media marketing strategies.