We corrected our function to calculate the GCD so that it now runs correctly. But is the function correct in all cases? Have we really tested everything? It would be nice if there were a tool that could tell us that, or even better, generate the tests for us. But let's first talk about code coverage.
When we unit test our code, we aim for as much code coverage as possible. This means that when all our tests have been executed, every line of code in our function (or unit) under test has been run. Aiming for 100% code coverage is good, but in practice, if you reach 80% you're already doing very well.
When your code is just a sequence of instructions, you'll obtain 100% code coverage every time. But usually you'll have some if-statements, switches, loops, etc. For each if-statement there are two paths that need to be executed to obtain full code coverage. When you start nesting ifs, you can see that the number of paths grows exponentially. If you want to run through all your code, you'll need to craft the right parameters to pass into the function. And whenever the flow of your function changes, this work needs to be done again.
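To make the path explosion concrete, here is a hypothetical function (my own example, not from the GCD code) with just two independent if-statements. Each one can be taken or skipped, so full path coverage already requires 2 × 2 = 4 different inputs, and every additional nested if doubles that again.

```csharp
// Hypothetical example: two independent if-statements create 2 x 2 = 4
// execution paths, so full path coverage already needs four different inputs.
public static class PathDemo
{
    public static int Classify(int x, int y)
    {
        int score = 0;
        if (x > 0)        // branch 1: taken or not
            score += 1;
        if (y > 0)        // branch 2: taken or not
            score += 2;
        return score;     // 0, 1, 2 or 3: one distinct value per path
    }
}
```

A test suite that only calls `Classify(1, 1)` reports every line as covered, yet three of the four paths were never exercised; line coverage and path coverage are not the same thing.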
With the current version of our function to calculate the GCD this is easy, because the function is small and compact.
public int CalcGCD3(int x, int y)
{
    if (y == 0)
        return x;
    return CalcGCD3(y, x % y);
}
There aren’t many flows here: either y == 0 or y != 0, this is an easy case.
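Two hand-written test cases, one per flow, are enough here. A sketch of what those could look like in MSTest (the `Calc` stub is repeated inside the snippet only so it compiles on its own; in the real project the tests would target the existing `Calc` class):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Stub of the class under test, included so this sketch is self-contained.
public class Calc
{
    public int CalcGCD3(int x, int y) => y == 0 ? x : CalcGCD3(y, x % y);
}

[TestClass]
public class CalcGCD3Tests
{
    [TestMethod]
    public void CalcGCD3_YIsZero_ReturnsX()      // covers the y == 0 branch
        => Assert.AreEqual(12, new Calc().CalcGCD3(12, 0));

    [TestMethod]
    public void CalcGCD3_YIsNotZero_Recurses()   // covers the y != 0 branch
        => Assert.AreEqual(6, new Calc().CalcGCD3(12, 18));
}
```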
But if we calculate the GCD using the “Binary GCD algorithm” then this becomes harder.
public int CalcGCDBinary(int u, int v)
{
    // simple cases (termination)
    if (u == v)
        return u;
    if (u == 0)
        return v;
    if (v == 0)
        return u;

    // look for factors of 2
    if ((u & 1) == 0) // u is even
    {
        if ((v & 1) == 1) // v is odd
            return CalcGCDBinary(u >> 1, v);
        else // both u and v are even
            return CalcGCDBinary(u >> 1, v >> 1) << 1;
    }
    if ((v & 1) == 0) // u is odd, v is even
        return CalcGCDBinary(u, v >> 1);

    // reduce larger argument
    if (u > v)
        return CalcGCDBinary((u - v) >> 1, v);
    return CalcGCDBinary((v - u) >> 1, u);
}
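To get a feel for the control flow, it helps to trace one call by hand (the input values are my own, chosen for illustration):

```csharp
// Hand trace of CalcGCDBinary(48, 18):
//   (48, 18)  both even         -> CalcGCDBinary(24, 9) << 1
//   (24, 9)   u even, v odd     -> CalcGCDBinary(12, 9)
//   (12, 9)   u even, v odd     -> CalcGCDBinary(6, 9)
//   (6, 9)    u even, v odd     -> CalcGCDBinary(3, 9)
//   (3, 9)    both odd, u < v   -> CalcGCDBinary((9 - 3) >> 1, 3) = CalcGCDBinary(3, 3)
//   (3, 3)    u == v            -> 3
// Result: 3 << 1 = 6, which is indeed gcd(48, 18).
// Note: even this trace never touches the u == 0, v == 0 or u > v branches,
// so full coverage needs several carefully chosen input pairs.
```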
If we want to craft parameters that pass through each part of this function, things get a lot more complex. Obtaining 100% code coverage becomes very hard. So let's use IntelliTest and see how far we get.
Let's start with the simple case: CalcGCD3(…). Right-click on the function and select "Run IntelliTest". This is a bit counterintuitive, because I would have expected that in order to run something you'd first have to create it, but this is how it works. The result is a list of 5 tests, and surprisingly 2 of them failed:
It seems that tests 3 and 5 throw an OverflowException, which we hadn't caught in our manually created tests. This means we now have to go back to our function and decide whether we want to do something about it or not. Also notice that IntelliTest tests with zeroes and negative values, and even with int.MinValue. These are cases that we didn't think of (actually I did, of course!). Typically these test cases only get added after you discover problems. But IntelliTest finds them for you before the problems happen!
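One input pair that can trigger such an overflow (an illustration of the mechanism, not taken from the IntelliTest report) is x = int.MinValue, y = -1: the recursion then evaluates int.MinValue % -1, and per the C# specification that remainder operation throws a System.OverflowException, because the intermediate result of negating int.MinValue does not fit in an int.

```csharp
using System;

public static class OverflowDemo
{
    // Returns true when int.MinValue % -1 throws, demonstrating the edge
    // case that the recursive step x % y can run into.
    public static bool ThrowsOnMinValueRemainder()
    {
        try
        {
            int x = int.MinValue;
            int y = -1;
            int r = x % y;   // throws System.OverflowException
            return r != r;   // not reached
        }
        catch (OverflowException)
        {
            return true;
        }
    }

    public static void Main()
        => Console.WriteLine(ThrowsOnMinValueRemainder());
}
```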
Now you can select one or more of the test cases and click the save button. This generates tests for the selected cases in a separate test project, which keeps everything nicely separated. Now you have a baseline for your tests: you can refactor your code and run these saved tests again.
Maybe the OverflowExceptions are not a problem. When you click on the line with the exception, you can click the "Allow" button in the toolbar. If you then run your tests again, this condition will no longer make the test fail (and all tests turn green).
So now let’s run IntelliTest against the Binary GCD Algorithm. It turns out that we only need 9 tests to cover all the code of this function. And they all pass!
Show me the code
When we saved the tests for CalcGCD3(), IntelliTest created a new test project. In that project a partial class CalcTest is generated. This means there is one part that we can touch, and another part that belongs to IntelliTest, which will be overwritten every time a new set of tests is generated.
[PexAllowedExceptionFromTypeUnderTest(typeof(ArgumentException), AcceptExceptionSubtypes = true)]
public partial class CalcTest
{
    [PexMethod(MaxConstraintSolverTime = 2)]
    public int CalcGCD3([PexAssumeUnderTest]Calc target, int x, int y)
    {
        PexAssume.IsTrue(x >= 0); // added by me
        PexAssume.IsTrue(y >= 0); // added by me
        int result = target.CalcGCD3(x, y);
        Assert.IsTrue(result == 0 || x % result == 0); // added by me
        Assert.IsTrue(result == 0 || y % result == 0); // added by me
        // TODO: add assertions to method CalcTest.CalcGCD3(Calc, Int32, Int32)
        return result;
    }
}
This is ours to change (if we want). This is the function that will be called by all 5 test cases, so it is the central point. I added two assertions to this function to verify the results: any nonzero result must divide both inputs, which is exactly what a GCD must do.
Adding the two PexAssume lines tells IntelliTest that it shouldn't generate negative parameters. As a result, when I run IntelliTest on the function again, the cases with negative parameters are no longer generated.
Important: don't forget to save the tests again; just running IntelliTest will not change the generated tests. And this is where the partial class becomes important. IntelliTest leaves CalcTest.cs alone and only updates (more precisely: overwrites) CalcTest.CalcGCD3.g.cs. So that is the file we must leave alone, or we will lose our changes when the tests are regenerated.
Previously we told IntelliTest that an OverflowException was allowed. You can see this reflected in, for example, the PexAllowedException attribute. If you like, you can add more restrictions here.
Code coverage results
Right-click on the tests and select "Analyze code coverage for the selected tests". This brings up the following window:
Looking at the results, you can see that CalcGCD3(…) has a coverage of 100%, which means that all the code has been run at least once, and no code blocks were ignored by our tests.
IntelliTest generates good test cases for your functions. Generating the tests can take a while, but the results are worth it. You can tune the results by fixing certain warnings (with the "Fix" button) or by adjusting the test class in the CalcTest.cs file (in our case). Don't touch the other part of the partial class, because your changes will be lost when you regenerate the tests by saving them in the "IntelliTest Exploration Results" window.
Of course IntelliTest doesn't know what your function is supposed to do. That's why I added some extra asserts in the test code. So in my opinion this is a tool that supplements your regular tests, one that can reveal test cases you didn't think of (yet).
The generated tests verify that the function still returns the same result as when the tests were generated. So if you refactor your function and then run the same tests again, you can be sure that the function's behavior hasn't changed.
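For example, we could later replace the recursion with a loop. This is my own sketch of such a refactoring (shown in a separate class so the snippet compiles on its own; in the real project you would change Calc.CalcGCD3 in place), and the saved IntelliTest cases would then act as a regression net for it:

```csharp
// Iterative rewrite of CalcGCD3: same inputs must produce the same outputs,
// so the previously saved IntelliTest cases double as regression tests.
public class Calc2
{
    public int CalcGCD3(int x, int y)
    {
        while (y != 0)       // same condition the recursive version tested
        {
            int r = x % y;   // same step the recursion performed
            x = y;
            y = r;
        }
        return x;            // same base case: gcd(x, 0) == x
    }
}
```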
The combination of data-driven tests and IntelliTest becomes very powerful!
If you like TDD, you'll see that this is more of an "after the fact" testing tool. But don't let that stop you from using it! I advise you to check out the links below; they contain more information.