In my previous post I talked about the necessity of testing. In this post I’ll talk about how we can implement TDD in our projects.
When we talk about testing, there are a lot of tests that we can perform. A non-exhaustive list:
- Unit testing
- Integration testing
- System testing
- Regression testing
- Acceptance testing (done by our customer)
- Load testing
- Performance testing
- Stress testing
- Installation testing
Each of these tests is important, but in this section we’ll only talk about unit testing.
What is unit testing?
As the name implies, unit testing tests a small part of the code (a unit) in isolation. This unit is usually a function, and in this post we’ll use a function as our unit. So we take the smallest piece of testable software in the application (the function) and try to isolate it from the rest of the code. This implies that we have to create clean functions that each do one thing well. We need to know what the input for each function will be (which follows from its signature), and what the output will be. We then write one or more tests that verify that the function behaves correctly.
A unit test should test one thing. It can be tempting to write one test and pile more test cases into it. But when an assertion in such a test fails, why is that? Is it because of a defect in unit 1, in unit 2, or in both? So each unit test must involve only one function under test. On the other hand, one function can be covered by multiple unit tests: you may write a test for the happy path, some tests for edge cases and some tests for expected exceptions.
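To make this concrete, here is a minimal sketch in Python (the `parse_age` function is a hypothetical example): one small function, covered by several unit tests, each testing exactly one behavior.

```python
# A hypothetical function under test: one clear job,
# a known input type and a known output type.
def parse_age(text: str) -> int:
    """Parse a non-negative age from a string."""
    value = int(text)  # raises ValueError for non-numeric input
    if value < 0:
        raise ValueError("age cannot be negative")
    return value

# One function, multiple unit tests: happy path, edge case, expected exception.
def test_parse_age_happy_path():
    assert parse_age("42") == 42

def test_parse_age_edge_case_zero():
    assert parse_age("0") == 0

def test_parse_age_negative_input_raises():
    try:
        parse_age("-1")
    except ValueError:
        pass  # the expected exception was raised
    else:
        raise AssertionError("expected ValueError")
```

Each test exercises one behavior of one function, so a red test points straight at the defect.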
Your set of unit tests will grow over time. You try to cover as much of the functionality of your functions as possible in your initial unit tests, but there will be things that you forgot. When those bugs surface, you’ll add more tests to cover them.
Don’t treat unit tests as an afterthought, because then they will only be additional code that you need to write, taking up your time. This takes us to TDD.
Test Driven Development
Instead of writing your tests after you have implemented the code, you can also reverse this. When you’re about to write a function:
- Write the function signature, and do something like
throw new NotImplementedException();
- Write one or more tests for the most trivial cases for your function (usually the positive paths).
- Run these tests. They will fail, but this gives you a baseline for your testing. From now on things can only get better!
- Implement the function, with the tests in mind.
- Run the tests again, and correct the function if necessary.
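The steps above can be sketched as follows, here in Python for brevity (Python’s NotImplementedError plays the role of C#’s NotImplementedException; the `is_leap_year` function is a hypothetical example):

```python
# Step 1: write the function signature, and do nothing but throw.
def is_leap_year(year: int) -> bool:
    raise NotImplementedError

# Step 2: write a test for the most trivial (positive) cases.
def test_is_leap_year_trivial():
    assert is_leap_year(2000) is True
    assert is_leap_year(1999) is False

# Step 3: run the test -- it fails, which is our baseline.
try:
    test_is_leap_year_trivial()
except NotImplementedError:
    print("red: test fails, as expected")

# Step 4: implement the function, with the test in mind.
def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 5: run the test again, and correct the function if necessary.
test_is_leap_year_trivial()
print("green: test passes")
```

From the failing baseline on, things can only get better.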
The develop-test cycle is very short. You create the function, run the tests, and repeat until everything works as expected. Then you run all the tests in your project / solution and hope that everything is green. This can also be seen as a regression test: we changed some code, and we have a whole test suite to prove that everything still works. So when our code is moved to production, we don’t have to pray for a quiet evening.
Red / Green
The previous example shows how TDD works:
- Understand the requirements.
- Red: create a test and make it fail. This can easily be done by throwing some exception in your function.
- Green: Write the code that makes the test pass. Keep this code as simple as possible. All you want to do is make it pass the test(s) you have already written. Think of the YAGNI principle (You Aren’t Gonna Need It). If new functionality is needed, you just add another test. You then add the new functionality and run all your tests again. If all is well the previous tests will still be green.
- Refactor. The code is now complete, and all requirements are fulfilled. If necessary, optimize your code by refactoring it. Of course, after each change you can run the tests again.
- Repeat this for all your other units.
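A minimal sketch of the green and refactor steps, again in Python (the pricing function and the discount requirement are hypothetical): write the simplest code that passes, and only add functionality when a new test demands it.

```python
# Green: the simplest code that passes the test we already have.
def total_price(quantity: int, unit_price: float) -> float:
    return quantity * unit_price

def test_total_price():
    assert total_price(3, 2.0) == 6.0

test_total_price()

# New functionality is needed? First add a test for it...
def test_total_price_bulk_discount():
    # hypothetical new requirement: every 10th item is free
    assert total_price(10, 2.0) == 18.0

# ...then extend the function and run all the tests again.
def total_price(quantity: int, unit_price: float) -> float:
    free_items = quantity // 10  # every 10th item is free
    return (quantity - free_items) * unit_price

test_total_price()                # the previous test is still green
test_total_price_bulk_discount()  # and the new one is green as well
```

If a refactoring breaks the old behavior, the first test turns red immediately.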
So what is a good unit test?
Unit tests will be run often. In the previous section we saw that a set of unit tests is run each time we want to test our function. If we need to wait 10 minutes for the tests to terminate, they don’t serve their purpose anymore!
Removes external dependencies
One of the things that may take a lot of time is calling external services. Database calls can take quite some time, calling web services can be slow as well. So we need to remove these dependencies from the tests.
In a unit test we don’t want to test if the web service that we call is correct. Normally that service is tested and should be correct already. The same goes for the database. Of course, later in the process we’ll perform integration tests, and they will perform the whole flow.
Dependencies can be removed in several ways. Let’s take the example where we want to access data for a certain user: in the function we first obtain the current username, and we pass that into the query to the database.
Testing this code manually may work, because we’re authenticated with our Windows account, and that account can be retrieved inside the function (Windows authentication). But the test runner may run under another account, which already causes problems. If another developer runs your test code, the results may be different as well. And it will be difficult to test for other user accounts, because you would have to log on with their credentials.
A simple solution may be to pass the username as a string into the function. Now your function can be tested without depending on the currently logged-on user.
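A sketch of this refactoring in Python (the function and query are hypothetical, and a real query should of course be parameterized):

```python
import getpass

# Hard to test: the function decides for itself who the user is,
# so the result depends on whoever happens to run the test.
def load_user_settings_implicit() -> str:
    username = getpass.getuser()  # the current OS account
    return f"SELECT * FROM settings WHERE user = '{username}'"

# Easy to test: the caller passes the username in.
def load_user_settings(username: str) -> str:
    return f"SELECT * FROM settings WHERE user = '{username}'"

# The test no longer depends on the logged-on account,
# and we can test for any user we like.
def test_load_user_settings_builds_query_for_given_user():
    query = load_user_settings("alice")
    assert "alice" in query
```

The testable version also happens to be the more reusable one: any caller can now decide which user it wants data for.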
Other solutions involve dependency injection, which I will talk about in a later post.
Tests should be automatically run before you move your code into DEV. This prevents moving code into your source control system that doesn’t work properly. This is a good reason to remove external dependencies. It will also reduce the “works on my box” syndrome!
Tests should not depend on the state of external systems. The example with the user name already illustrates this, but this becomes even worse when a database is involved. You cannot rely on a certain status of the database. If you are going to modify data during your tests then you face some problems:
- You can’t be sure about the state of your database when you start your tests. You don’t know if other users have changed something, or if a previous test has already modified the database. Unit tests are not run in a guaranteed order, so you never know which of your tests have already run. And there is parallelism as well: tests can run simultaneously.
- You don’t want to mess up your database if your tests fail. One of the reasons for testing is that things may fail, so you should expect this!
- As said before, databases are slow.
- Maybe your database is not accessible in DEV.
I will talk about stubbing and faking in later posts, and one of the specific cases will be how to handle the dependency on databases.
Very limited in scope / AAA pattern
Test functions are simple. They do three things:
- Arrange. Set up everything that you need for the test: objects are instantiated, variables are declared and initialized.
- Act. Invoke the method under test.
- Assert. Run one or more checks on the return values to see if they are what you expected.
If you keep these three sections in mind your tests should be easy to implement. As you can see, writing a test doesn’t take much time.
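The three sections look like this in practice (a Python sketch; the `apply_discount` function is a hypothetical example):

```python
def apply_discount(price: float, percent: int) -> float:
    # hypothetical method under test
    return price - price * percent / 100

def test_apply_discount_ten_percent():
    # Arrange: set up everything the test needs
    price = 200.0
    percent = 10

    # Act: invoke the method under test
    result = apply_discount(price, percent)

    # Assert: check the return value against the expectation
    assert result == 180.0
```

Keeping the three sections visually separated makes the test easy to read months later.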
Clearly communicate intent
Give your tests a good name. Instead of calling a test “TestGCD”, call it something like “GCDPositiveCase” or “GCDNegativeInputShouldThrowException”. When you see some red tests in the list of executed tests you know immediately what went wrong.
It is also a good idea to comment the test functions, if the name alone doesn’t show the intent of the test clearly enough.
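In Python’s naming convention the same idea looks like this (the `gcd` implementation is a hypothetical example; the intent matches “GCDPositiveCase” and “GCDNegativeInputShouldThrowException”):

```python
def gcd(a: int, b: int) -> int:
    """Hypothetical function under test (Euclid's algorithm)."""
    if a < 0 or b < 0:
        raise ValueError("inputs must be non-negative")
    while b:
        a, b = b, a % b
    return a

# The name alone tells you what broke when a test turns red:
def test_gcd_positive_case():
    assert gcd(12, 18) == 6

def test_gcd_negative_input_should_throw_exception():
    try:
        gcd(-4, 6)
    except ValueError:
        pass  # the expected exception was raised
    else:
        raise AssertionError("expected ValueError")
```

A list of red tests with names like these reads as a bug report all by itself.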
Maintaining your tests
When you write new functions, you’ll write new tests. Obviously.
When bugs are detected that aren’t covered in your current unit tests, you’ll need to add tests as well.
When requirements change, and the behavior of your function must change, you’ll have to maintain your tests too.
In this post we saw how we can use TDD to speed up development, and at the same time improve our code quality.
In the next post we’ll see how to set up unit testing in Visual Studio, and we will finally get our hands dirty!