There are many types of automated testing out there: front-end testing, smoke testing, load testing, and end-to-end (E2E) testing, to name only a few. If you want to design a sound testing strategy with the best possible ROI (and who doesn’t want that?), it’s crucial to understand all of these types of testing, learning the pros and cons of each one and the scenarios each fits best. Today, we help you take another step in that direction by bringing you a list of unit testing best practices.
Unit testing is one of the most valuable types of automated testing. Getting started with it isn’t the easiest thing, though. Many teams start off wrong and then give up because they aren’t reaping the benefits they were looking for. In today’s post, we share nine best practices to help you avoid the same trap. Where examples apply, they’ll be in JavaScript, but the tips themselves are understandable and applicable to most programming languages.
The First Unit Test Best Practice? Learning About Unit Testing
Before getting to the list of best practices, it’s important we’re on the same page when it comes to unit testing itself. What is a unit test? Who writes them, and why? Those may seem like basic questions. However, you’d be surprised by the amount of controversy surrounding a topic that’s such a staple of modern software engineering.
After covering those fundamentals, we explain the difference between unit tests and another common and valuable type of testing, integration tests. We then explore the fact that not all code is created equal: some pieces of code might be easy to unit test, while others might be downright untestable. Understanding that difference is crucial: you can’t achieve good code coverage without writing testable code.
Let’s dig in.
What Is a Unit Test?
Unit testing is one of the many different types of automated testing. Unit tests exercise very small parts of the application in complete isolation, comparing their actual behavior with the expected behavior. The “complete isolation” part means that, when unit testing, you don’t typically connect your application with external dependencies such as databases, the filesystem, or HTTP services. That allows unit tests to be fast and more stable since they won’t fail due to problems with those external services.
Unit tests, rather than being codeless tests, are created with code. You can think of unit tests as small programs that exercise your application, interacting with tiny portions of it. Each unit test is like a specification or example of how that tiny portion—i.e., a unit—behaves under a specific scenario. By executing the suite of tests, developers can get immediate feedback when they change the code.
Why Write Unit Tests?
When it comes to software testing, there are many ways to go about it, both manually and in an automated fashion. So, you can interpret the question “why write unit tests?” in at least two ways. First, what are the benefits you can expect from using unit tests? Second, why unit tests as opposed to other types of tests? We’ll answer both questions.
The Benefits of Unit Tests
What follows is a non-comprehensive list of the benefits you get from adopting unit testing:
- Unit tests help you find and fix bugs earlier. Organizations that incorporate unit testing into their development process and start testing as early as possible in the lifecycle are able to detect and fix issues earlier.
- Your suite of unit tests becomes a safety net for developers. A comprehensive suite of unit tests can act as a safety net for developers. By frequently running the tests, they can assure their recent modifications to the code haven’t broken anything. In other words, unit tests help prevent regressions.
- Unit tests can contribute to higher code quality. This item is a natural consequence of the previous one. Since unit tests act as a safety net, developers become more confident when changing the code. They can refactor the code without fear of breaking things, driving the general quality of the codebase up.
- Unit tests might contribute to better application architecture. We’ll go back to this later. For now, know that if you can add unit tests easily to a codebase, that’s usually a good sign regarding the quality of the app’s architecture. So, the drive to write testable code can be an incentive for better architecture. This is why using TDD (test-driven development) can be so effective.
- Unit tests can act as documentation. Unit tests are examples of how to use the code under test. So, you can also think of them as executable specifications or documentation.
- Unit tests help detect code smells in your codebase. If ease of adding unit tests to a codebase is a good sign, the opposite is also true. Having a hard time creating unit tests for a given piece of code might be a sign of code smells in the code—e.g. functions that are too complex.
Unit Tests’ Role in a QA Strategy
There are so many types of software testing. Why should you bother with unit testing? What can unit tests give you that other types of tests can’t?
The main advantage of unit tests over other types of software tests is their laser-sharp focus. Unit tests give super precise feedback. If you have a unit test that verifies the behavior of a function, and the test fails, then in the vast majority of cases you can be sure that the problem is in the function. Another big advantage of unit tests is their speed. Since they’re fast, they’re executed more often, making them a source of nearly constant, valuable feedback.
Other types of testing—for instance, end-to-end testing—aren’t so precise in their feedback. Say you have a test that interacts with a given feature. By doing so, the test exercises all layers in the application, causing it to communicate with the database, the filesystem, and a REST API. If this test fails, you could have a hard time tracing the root cause of the problem, since it could be in any of the application layers or the external dependencies.
On the other hand, end-to-end testing—and similar forms of testing—interact with the application in a way that’s much closer to a real user. That way, they can provide more realistic feedback. Unit tests verify how well-isolated units work. They can’t check how well those units integrate with different units or whether they play nicely with external dependencies. Integration testing, end-to-end testing, and other similar types of testing can do that, even if they have to pay the price in terms of speed and simplicity.
We can sum all this up by saying that unit tests in a QA strategy play the role of providing early, fast, and constant feedback. You can’t rely only on unit tests, though. You must supplement them with the other types of tests that excel in the areas where unit tests lack. A good concept to have in mind when balancing the different testing needs of your app is the testing pyramid, which states you should have a large number of unit tests and fewer of the other types of tests.
Who Creates Unit Tests?
This is probably the main difference between unit tests and most other types of software tests: who performs them. Software engineers write unit tests, mostly for their—and other engineers’—benefit. If this sounds like it contradicts what I’ve said before, bear with me.
Engineers make assumptions all the time when coding. When these assumptions match the real code, everything is fine. However, when they don’t, a bug is born. Because of that, a widely-known “truism” in software development is that engineers should document their assumptions when writing code.
As it turns out, unit tests are a great tool to document such assumptions. If an assumption turns out to be false—or becomes false after a while—the test fails. So, we can say that unit tests, besides helping ensure the overall quality of the application, double as a tool for developers to document their assumptions and have trust in the code they write.
Some people go as far as to say that unit tests aren’t really tests, and that they should have a different name—something along the lines of automatic documentation, or executable specifications. While I don’t fully subscribe to this view, I agree that unit tests are a confidence-booster for developers first and “tests” later.
The Difference Between Unit Tests and Integration Tests
We do have an entire post dedicated to that, but we’ll answer the question in a short way here as well.
The line between unit tests and integration tests can be hard to see for beginners. Both unit tests and integration tests can be written using the same tools. Also, both forms of testing consist of writing “examples” of interactions to a piece of code. They might even look very alike, or even identical. So, what’s the difference?
It’s all about the level of isolation. While unit tests have to be completely isolated, integration tests don’t shy away from using external dependencies. Also, integration tests often test several modules of the application in an integrated manner—hence the name. So, integration tests offer a more high-level view of the application than unit tests do. Because of that, the feedback they provide is both more realistic and less focused. Due to their reliance on external dependencies, they can be significantly slower and have a more difficult setup.
How to Achieve Testable Code
We say that a given piece of code is testable when it’s easy to test with unit tests. On the other hand, a given piece of code is untestable when it’s hard or impossible to add unit tests to.
So, how do you ensure code is testable? A complete answer to this question would be worth a post of its own. The short answer is: avoid the enemies of testability. To fight side effects and nondeterminism, adopt pure functions. Against too many dependencies, use dependency injection. Prefer smaller and more focused modules over gigantic ones that do too much. Keep cyclomatic complexity at bay.
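Here’s a hypothetical sketch of dependency injection in action. The `isBusinessHours` function and its injected clock are invented for illustration; the point is that the version reaching for the real clock is untestable, while the version receiving the clock as a parameter is trivial to test:

```javascript
// Hard to test: reaches out to the real system clock, so the result
// depends on when the test happens to run (nondeterminism).
function isBusinessHoursUntestable() {
  const hour = new Date().getHours();
  return hour >= 9 && hour < 17;
}

// Testable: the dependency (a clock) is injected, so tests can pass
// a fake clock and get deterministic results. Production code can
// still call it with no arguments and get the real clock.
function isBusinessHours(clock = () => new Date()) {
  const hour = clock().getHours();
  return hour >= 9 && hour < 17;
}

// In a test, inject fixed dates instead of the real clock:
const tenAm = () => new Date(2023, 0, 2, 10, 0, 0);
const midnight = () => new Date(2023, 0, 2, 0, 0, 0);
console.log(isBusinessHours(tenAm));    // true
console.log(isBusinessHours(midnight)); // false
```

The same pattern applies to databases, HTTP clients, random number generators, and any other dependency that makes code hard to pin down in a test.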
9 Essential Unit Test Best Practices
We’ve covered a lot of ground by talking about the fundamentals of unit testing. After learning the basics of unit testing, you’re now ready for the main part of the post, in which we’ll walk you through nine best practices you can use to get the most out of your unit testing.
1. Tests Should Be Fast
I just can’t stress that enough: do everything within your power to make your tests fast. If they’re slow, developers won’t run them as often as they should. That defeats the whole purpose of having a suite of unit tests in the first place, which is to boost the developers’ confidence to make changes to the code. The tests can’t work as the safety net they’re supposed to be if they’re not run often.
But how fast is “fast”? While it might be somewhat subjective, there are objective ways to define this metric. For instance, the Mocha testing framework, by default, considers any test that takes more than 75ms to run a slow test.
Finally, how do you make your tests as fast as possible? Here are some tips:
- Make them as simple as possible.
- Don’t make them depend on other tests.
- Mock external dependencies.
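That last tip deserves a sketch. In this hypothetical example (`getUserName` and the client’s shape are invented for illustration), the unit receives its HTTP client as a dependency, so the test can hand it a fake and skip the network entirely:

```javascript
// The unit under test receives its HTTP client as a dependency.
// (getUserName and the client's interface are hypothetical examples.)
async function getUserName(userId, httpClient) {
  const response = await httpClient.get(`/users/${userId}`);
  return response.name;
}

// In the test, a hand-rolled fake replaces the real client: no network
// call is made, so the test runs in microseconds and never flakes.
const fakeHttpClient = {
  get: async (url) => ({ name: 'Alice', requestedUrl: url }),
};

getUserName(42, fakeHttpClient).then((name) => {
  console.log(name); // prints "Alice"
});
```

Mocking libraries (Sinon, Jest’s built-in mocks, etc.) automate this, but as the example shows, nothing stops you from rolling a simple fake by hand.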
2. Tests Should Be Simple
Whenever I’m teaching unit testing to beginners, one of the questions that comes up frequently is “how do I know whether my tests are right?” That’s a great question. After all, test code is no different from production code, except for its purpose. It can have bugs as well.
There are several techniques we can apply to gain a high degree of confidence in the correctness of our tests. One of those is to keep your tests’ cyclomatic complexity low. Cyclomatic complexity is a code metric that indicates the number of possible execution paths a given method can follow. A piece of code with lower complexity is easier to understand and maintain, which means developers are less likely to introduce bugs when working on it.
So, how do you keep your tests simple? Simple: measure the cyclomatic complexity of your tests (using, for instance, a linter tool) and do your best to keep it low.
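If you use ESLint, its built-in `complexity` rule can enforce this automatically on both production and test code. A possible configuration fragment (the threshold of 4 is just an illustrative choice, not a universal recommendation):

```json
{
  "rules": {
    "complexity": ["warn", { "max": 4 }]
  }
}
```

With this in place, any function whose cyclomatic complexity exceeds the threshold triggers a lint warning, keeping overly clever tests from sneaking into the codebase.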
3. Tests Shouldn’t Duplicate Implementation Logic
This item is sort of a continuation of the previous one. Tests that fail to follow this best practice are highly likely to disrespect the previous one as well. So, what does “duplicate implementation logic” mean, and why is it a red flag?
To understand that, consider the code below:
```javascript
function add(stringNumbers) {
  if (stringNumbers.trim().length === 0) return 0;
  return stringNumbers
    .trim()
    .split(',')
    .map(n => parseInt(n, 10))
    .reduce((a, b) => a + b);
}
```
The code represents a solution for the famous StringCalculator kata, proposed by Roy Osherove. The challenge consists of writing a function that takes a string containing a list of comma-separated numbers and returns their sum. There are further rules (ignore numbers greater than 1000, throw if negative numbers are passed, etc.), but we’re ignoring those for simplicity’s sake.
Now, take a look at some of the tests written for the code above.
```javascript
describe("String Calculator", function() {
  it("should return zero when an empty string is passed", function() {
    expect(add('')).toEqual(0);
  });
  it("should return zero when a white-space string is passed", function() {
    expect(add(' ')).toEqual(0);
  });
  it("should return the same number when a single number is passed", function() {
    expect(add('1')).toEqual(1);
  });
  it("should return the sum when two numbers are provided", function() {
    expect(add('1, 2')).toEqual(3);
  });
  it("should return the sum when three numbers are provided", function() {
    expect(add('1, 2, 3')).toEqual(6);
  });
});
```
As you can see, the test cases are really repetitive. This could be remedied using something like parametrized tests, but let’s say we’re impatient and decide to take a step further. Here is the new version of the tests:
```javascript
it("should return the sum when three numbers are provided", function() {
  var numbers = '1,2,3';
  var sum = numbers.split(',').map(x => parseInt(x, 10)).reduce((a, b) => a + b);
  expect(add(numbers)).toEqual(sum);
});
```
Someone might think that the new tests are better than the old versions. After all, we’ve gotten rid of duplication, and duplication is evil, right? Well, not in this case.
The code above is problematic because the test code is now almost a carbon copy of the implementation code. If the same person wrote both the test and the implementation, it’s possible they made the same errors in the two spots. But since the tests mirror the implementation, they might still pass, which puts you in a terrible situation: the implementation is wrong, but the tests might fool you into thinking otherwise.
We can summarize both this item and the previous one with a single sentence: resist the urge to make your tests fancy. Keep them dead simple, and your testing suite will be better for it.
4. Tests Should Be Readable
This best practice overlaps a little bit with the one about keeping your tests simple. If tests are hard to read, developers are more likely to misunderstand them and introduce bugs. But that’s not the only reason we advocate for test readability.
Test cases double as a form of documentation. And they are the best type of documentation since they’re executable and don’t get out of sync with what they’re documenting. But in order for the team to be able to reap the benefits of these executable specifications, they obviously need to be readable.
You might argue that readability is in the eye of the beholder, and I agree, to some extent. However, there are some objective steps you can take if you want your test cases to be more readable. Here they are:
- Use the Arrange, Act, Assert pattern to clearly define the phases of your test case.
- Alternatively, adopt BDD-style test cases (the Given/When/Then pattern).
- Stick to one logical assertion per test.
- Don’t use magic numbers or strings in your test cases. Instead, employ variables or constants to document the values you’re using.
5. Tests Should Be Deterministic
If you walk away from this article remembering just one of the tips we’ve shown, make it this one: make your tests deterministic. “Deterministic” is just a fancy-sounding word meaning that a test should always present the same behavior if no changes were made to the code under test.
Suppose you have a function “a().” Then you write a test for it, and the test is passing. If you don’t change a(), the test should continue to pass, no matter how many times you run it. The opposite is also true: imagine you go and change the function, and that causes the test to fail. No matter if you run the test one, ten, or a thousand times, it should continue failing until you or someone else goes and fixes the function.
Why is it important for our tests to be deterministic, and how can we make them so?
If a test sometimes passes and sometimes fails, without the code that it tests undergoing any change, people will perceive it as being arbitrary or random. In other words, if your tests aren’t deterministic, developers won’t trust them.
In order for a test to be deterministic, it has to be completely isolated. You can’t have your tests depend on:
- other test cases
- environmental values, such as the current time, or language settings of the computer it’s running on
- external dependencies, such as the file system, network, APIs, etc
In short: a test case should only switch its outcome (from passing to failing or vice versa) due to a change in the production code it targets (with the exception of changes in the test itself, in case it’s buggy and needs fixing).
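A classic offender is `Math.random()`. This hypothetical sketch (the `rollDie` functions are invented for illustration) shows the same injection trick from earlier turning a flaky dependency into a deterministic one:

```javascript
// Nondeterministic: depends on Math.random(), so a test of this
// function could pass or fail between runs with no code change.
function rollDieFlaky() {
  return Math.floor(Math.random() * 6) + 1;
}

// Deterministic: the source of randomness is injected, and defaults
// to the real one in production.
function rollDie(random = Math.random) {
  return Math.floor(random() * 6) + 1;
}

// Tests pin the dependency down, so they behave identically every run.
const alwaysHighest = () => 0.999;
const alwaysLowest = () => 0;
console.log(rollDie(alwaysHighest)); // 6
console.log(rollDie(alwaysLowest));  // 1
```

The same approach works for the other enemies of determinism listed above: inject the clock, the locale, or the filesystem access instead of reaching for them directly.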
6. Make Sure They’re Part of the Build Process
Since we’re talking about automated testing, it makes sense to automate the whole process of running the tests and taking some action when they fail. So, make sure your build process executes your unit tests and marks the build as broken when the tests fail.
Sure, developers can and should run the tests on their development machines as often as they can. If they’re following the TDD approach, they necessarily are. But that’s not enough. Making the tests part of the build process will add an extra layer of safety. Even if a developer forgets to test on their machine, the CI server won’t, and that will prevent buggy code from getting to the customers.
7. Distinguish Between The Many Types of Test Doubles and Use Them Appropriately
When writing unit tests, you’ll inevitably have to deal with the external dependencies your code interacts with. The usual way to go about this is to use mechanisms that replace the external dependencies during testing. So, instead of making a call to the real API, the code under test will be communicating with something that pretends to be the API, while remaining none the wiser. The test will work just as well but will remain fast and reliable.
Though many people use the word mock to refer to those mechanisms, that’s not really accurate. The correct generic term for something that replaces a dependency during testing is test double. Test doubles come in different types. To name a few: stubs, spies, and, yes, mocks. It’s important to understand the difference between the different test doubles and use the ones better suited to your current needs. A good resource to learn about all of this is this great article by Martin Fowler.
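To make the distinction concrete, here’s a hypothetical hand-rolled sketch (`registerUser` and the mailer interface are invented for illustration) showing a stub and a spy side by side:

```javascript
// The unit under test notifies users through an injected mailer.
// (registerUser and the mailer interface are hypothetical examples.)
function registerUser(name, mailer) {
  // ...persisting the user omitted for brevity...
  mailer.send(name, 'Welcome aboard!');
  return { name };
}

// Stub: provides canned answers; used when the test only needs the
// dependency to exist and return something plausible.
const stubMailer = { send: () => true };

// Spy: records how it was called so the test can inspect the
// interaction afterward.
function makeSpyMailer() {
  const calls = [];
  return { calls, send: (to, body) => calls.push({ to, body }) };
}

const spy = makeSpyMailer();
registerUser('Alice', spy);
console.log(spy.calls.length); // 1
console.log(spy.calls[0].to);  // Alice
```

A mock goes one step further than a spy: it carries built-in expectations about how it should be called and fails the test itself when they aren’t met, which is typically where a library like Sinon or Jest comes in.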
8. Adopt a Sound Naming Convention for Your Tests
This tip could’ve been included in tip #4, but I decided to give it its own dedicated section given its importance. Naming things is hard, but it pays off. Since tests are also documentation, your tests should have names that reflect the scenario they’re testing.
The specifics of the naming convention you’ll adopt depend on varying factors, including the unit test framework of your preference. You might want to use BDD as an inspiration for names that are clear even to non-tech members of the organization, for instance. What really matters, in the end, is that you adopt a naming convention that makes sense for you and the members of your team, is easy to understand, and clearly communicates what the test is about.
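As one possible convention among many, here’s a "should <expected behavior> when <scenario>" sketch. The tiny `it` helper below is a stand-in for a real framework’s, and the cart scenario is invented for illustration:

```javascript
// A minimal stand-in for a test framework's `it`, just to show naming.
function it(name, testFn) {
  testFn();
  console.log(`ok - ${name}`);
}

// One possible convention: "should <expected behavior> when <scenario>".
// Read together, the names form a specification of the unit.
it('should return zero when the cart is empty', () => {
  const total = [].reduce((sum, price) => sum + price, 0);
  if (total !== 0) throw new Error('expected an empty cart to total 0');
});

it('should sum the item prices when the cart has items', () => {
  const total = [10, 20].reduce((sum, price) => sum + price, 0);
  if (total !== 30) throw new Error('expected the cart to total 30');
});
```

Notice that someone reading only the test names, without the bodies, can still tell what the unit is supposed to do in each scenario.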
9. Don’t Couple Your Tests With Implementation Details
One of the obstacles in the way of teams trying to adopt software testing is test maintenance. When tests—unit and otherwise—are too fragile and fail all the time due to the slightest change to the codebase, maintaining the tests becomes a burden.
When it comes to unit tests, you should prevent them from becoming too coupled to the internals of the code they’re testing. That way, they’re more resilient in the face of change, allowing developers to change internal implementation and refactor when needed, while still providing valuable feedback and a safety net.
Unit Testing Is a Must…But You Must Do It Right
Unit testing is one of the most important types of automated testing—some smart people go as far as saying it’s the most important type. That’s true for all programming languages and platforms, and JavaScript is no exception. It doesn’t matter if you’re using JavaScript in the backend, or using a front-end framework, or even just writing vanilla JavaScript: unit testing is a must.
Many software teams struggle with unit testing, though. They start off committing mistakes that could be avoided through education on the subject. Quickly, their unit testing strategy descends into a mess, and the team decides it’s no longer worth the trouble and gives up on the effort. That’s a shame, since unit testing doesn’t have to be hard if you do it right, following best practices from the start.
Today’s post was our attempt at helping teams start off their unit testing journey by providing some best practices that will help them avoid the most common pitfalls of unit testing.
Thanks for reading.
This post was written by Carlos Schults. Carlos is a .NET software developer with experience in both desktop and web development, and he’s now trying his hand at mobile. He has a passion for writing clean and concise code, and he’s interested in practices that help you improve app health, such as code review, automated testing, and continuous build.