Unit Testing your Repositories–the code

In the previous post I started to write about how to set up your unit tests for repository code. The title was (maybe a bit misleading) “Unit testing your Repositories”. So I promised to write an article about the code side of the tests as well.

The Problem

As explained in the other post, we only want to test the non-trivial functions in our repositories. We also don’t want to depend on the current state of the database, because that state is never guaranteed. So we want to mock parts of our database. This is not the solution for everything, but in many cases it will save us. Look at my previous post for possible problems with this approach.

The Example

I have a real life example of a database containing sales orders. The database is not completely normalized and the sales order codes are actually a concatenation of a couple of fields. We want to write a function that will calculate the next code for a certain sales representative for a certain season. As you can imagine already, this will involve some string manipulation, conversions, etc.

Currently the code is written as a user function in the database, so we have to convert this to C# using Entity Framework 6 (or later). The following approach won’t work with older versions.

The T-SQL User Function

ALTER FUNCTION [dbo].[GetNextSalesOrderCode]
(
       @Season nvarchar(3),
       @Prefix nvarchar(3),
       @RepPrefix nvarchar(2)  -- RepresentativePrefix
)
RETURNS nvarchar(50)
AS
BEGIN
       DECLARE @Code as nvarchar(50)
       DECLARE @PrevCode as nvarchar(50)
       DECLARE @MinSoCode as int

       SELECT TOP 1 @MinSoCode = C.MinSoCode
       FROM   Computers C
       INNER JOIN Representatives R ON C.Representative = R.Id
       WHERE  (R.Prefix = @RepPrefix) AND (C.ComputerName = N'ERP')

       SELECT TOP 1 @PrevCode = RIGHT(SO.Code, 5)
       FROM   SalesOrders SO
       INNER JOIN Representatives R ON SO.Representative = R.Id
       WHERE  SUBSTRING(SO.Code, 4, 3) = @Season
         AND  R.Prefix = @RepPrefix
         AND  CAST(RIGHT(SO.Code, 5) AS int) >= @MinSoCode
       ORDER BY RIGHT(SO.Code, 5) DESC

       IF @PrevCode IS NULL
              SET @MinSoCode = 0
       ELSE
              SET @MinSoCode = CONVERT(int, @PrevCode) + 1

       SET @Code = @Prefix + '.' + @Season + '-' + @RepPrefix + FORMAT(@MinSoCode, '00000')
       RETURN @Code
END

 

This function returns the next sales order code, using non-trivial logic. The main problem is that the database isn’t completely normalized, which explains why in this case we need some more logic in our repository.

The repository code

    public class SalesordersRepository : Repository, ISalesordersRepository
    {
        public async Task<string> GetNextSalesOrderCode(string season, string prefix, string representativePrefix)
        {
            Representative repr = await _db.Representatives.SingleAsync(r => r.Prefix == representativePrefix);
            int rPrefix = repr.Id;
            RepresentativeComputer comp = await _db.RepresentativeComputers.SingleAsync(c => c.RepresentativeComputer_Representative == rPrefix && c.ComputerName == "ERP");
            int minSoCode = comp.MinSoCode;

            int prevCode = await GetPrevCode(season, rPrefix, minSoCode);

            return $"{prefix}.{season}-{representativePrefix}{prevCode.ToString("00000")}";
        }

        // Other methods
    }

 

Because C# as a language is more powerful than SQL, we can write this function a bit more concisely (and clearly). It still contains enough logic to justify writing a test for it. We also use the function GetPrevCode; its actual implementation is kept out of scope here, but a possible sketch follows below. Of course testing it would be done in exactly the same way!
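For completeness, here is a sketch of how GetPrevCode could look. This is not the code from the post; it mirrors the second query of the T-SQL function, and the entity and property names are assumptions based on the test data shown later (it also assumes using System.Data.Entity for ToListAsync). Note that the T-SQL above falls back to 0 when no previous code is found, while the expected test value below (…02001) suggests the C# version falls back to MinSoCode instead; the sketch follows the test.

        private async Task<int> GetPrevCode(string season, int representativeId, int minSoCode)
        {
            // SUBSTRING(Code, 4, 3) in T-SQL is 1-based; Substring(3, 3) is the C# equivalent.
            // The candidate codes are brought into memory before parsing, so we don't rely
            // on the EF provider translating int.Parse.
            List<string> codes = await _db.SalesOrders
                .Where(so => so.SalesOrder_Representative == representativeId
                          && so.Code.Substring(3, 3) == season)
                .Select(so => so.Code.Substring(so.Code.Length - 5))    // RIGHT(Code, 5)
                .ToListAsync();

            // Highest existing sequence number for this representative and season.
            int prev = codes.Select(int.Parse)
                            .Where(n => n >= minSoCode)
                            .DefaultIfEmpty(-1)
                            .Max();

            // Nothing found yet: start at MinSoCode; otherwise continue from the previous number.
            return prev < 0 ? minSoCode : prev + 1;
        }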

Testing

We follow all the known steps to create a test project, hook it up with the assembly under test, and write a test for the method. As a first attempt we just use the database in its current state. Of course this is bad for several reasons, but it’s a start anyway:

[TestMethod()]
public void GetNextSalesOrderCodeTest()
{
    ISalesordersRepository repo = new SalesordersRepository();
    string next = repo.GetNextSalesOrderCode("151", "S0", "09").Result;
    System.Diagnostics.Debug.WriteLine("next: " + next);
    Assert.AreEqual("S0.151-0902001", next);
}

We are lucky in one way: the method doesn’t change the state of the database, so running this test will not have any side effects. But we do depend on the current state of the database, which can (and will) be different when we run the test again later, and of course our unit test is not fast, fast, fast! The test code also depends on the connection string, which may be correct for DEV, but probably not in the TEST environment.

Mocking the Database

We want to mock our database, preferably without too much code. Mocking the database means in this case mocking some known state in the relevant database tables, and then injecting this “in-memory” database (SalesOrdersEntities) into the repository. I have created a base class Repository that provides the means to inject a SalesOrdersEntities implementation. By default it will use the real database via EF; when testing we can inject the mocked database using the second constructor (if you want more info on this, see the other articles on my blog). I just give the class here without further explanation:

public class Repository : IDisposable
{
    protected SalesOrdersEntities _db;

    public Repository()
    {
        _db = new SalesOrdersEntities();
    }

    /// <summary>
    /// Make DI possible for testing
    /// </summary>
    /// <param name="db"></param>
    public Repository(SalesOrdersEntities db)
    {
        _db = db;
    }

    public void Dispose()
    {
        if (_db != null)
            _db.Dispose();
        _db = null;

        GC.SuppressFinalize(this);
    }

    ~Repository()
    {
        Dispose();
    }
}

All my repositories derive from this class, always giving me the possibility to inject a mocked database for testing.

Setting up for mocking

I like to use Moq as a mocking framework. There are many other mocking frameworks out there that are equally good, but I’m used to this one. So in my test project I install the Moq package:

[Screenshot: installing the Moq package in the NuGet Package Manager Console]

Don’t forget to set the default project to your test project.

As all the repositories derive from the Repository class, it seems like a good idea to implement a RepositoryTests class that sets up all the common stuff. That way we don’t repeat ourselves all the time. In this class we will set up the mock for the SalesOrdersEntities, and some of the tables that it contains.

    [TestClass]
    public class RepositoryTests
    {
        protected static Mock<SalesOrdersEntities> _dbMock;
        protected static Mock<DbSet<Representative>> _representativesMock;
        protected static Mock<DbSet<RepresentativeComputer>> _representativeComputersMock;
        protected static Mock<DbSet<SalesOrder>> _salesOrdersMock;

        public static void Init()
        {
            SetupRepresentatives();
            SetupSalesOrders();
            _dbMock = new Mock<SalesOrdersEntities>();
            _dbMock.Setup(db => db.Representatives).Returns(_representativesMock.Object);
            _dbMock.Setup(db => db.RepresentativeComputers).Returns(_representativeComputersMock.Object);
            _dbMock.Setup(db => db.SalesOrders).Returns(_salesOrdersMock.Object);
        }

        private static void SetupRepresentatives()
        {
            _representativesMock = new Mock<DbSet<Representative>>();
            _representativesMock.Object.AddRange(new Representative[]
                {
                    new Representative { Id = 1, Prefix = "1" },
                    new Representative { Id = 2, Prefix = "2" },
                    // other entities, left out for brevity
                    new Representative { Id = 105, Prefix = "15" },
                });

            _representativeComputersMock = new Mock<DbSet<RepresentativeComputer>>();
            _representativeComputersMock.Object.AddRange(new RepresentativeComputer[]
                {
                    new RepresentativeComputer { Id = 1, ComputerName = "ThnkPad", MinSoCode = 1, MaxSoCode = 2000, RepresentativeComputer_Representative = 9 },
                    // other entities, left out for brevity
                    new RepresentativeComputer { Id = 19, ComputerName = "ERP", MinSoCode = 2001, MaxSoCode = 4000, RepresentativeComputer_Representative = 5 },
                });
        }

        private static void SetupSalesOrders()
        {
            _salesOrdersMock = new Mock<DbSet<SalesOrder>>();
            _salesOrdersMock.Object.AddRange(new SalesOrder[]
                {
                    new SalesOrder { Id = 21910342, Code = "SO.151-0402009", SalesOrder_Representative = 4 },
                    // other entities, left out for brevity
                    new SalesOrder { Id = 26183, Code = "SO.151-0402001", SalesOrder_Representative = 4 },
                });
        }
    }

 

In the test base class I first declare four Mock objects: one to mock the SalesOrdersEntities and three to mock the DbSets (the collections of entities). Then I create two methods that set up the Representatives (and their computers) and the sales orders. As you can see I’m adding the records hard-coded in these functions. This would involve a lot of typing without the help of our friend Excel.

Intermezzo: Using Excel to generate the code

I used SQL Server Management Studio to obtain some records for each table. I then copied these records into an Excel spreadsheet and used a formula to generate the code that instantiates the entities. I only fill in the fields that are necessary now (YAGNI), but having it all in Excel allows me to easily add more fields when needed. In the screenshots here I removed all the data that could make this recognizable (privacy).

[Screenshot: Excel sheet with the table data and a [New Object] column that generates the C# initializers]

The column [New Object] contains the following formula:

="new Representative { Id = " & [@Id] & ", Prefix=""" & [@Prefix] & """},"

As you can see I can easily add more rows if I want to execute more test scenarios. You may want to keep this spreadsheet in your source code control system and treat it like your other source code.

This isn’t rocket science, but it has helped me on several occasions 🙂

The modified test

    [TestClass()]
    public class SalesordersRepositoryTests : RepositoryTests
    {
        [ClassInitialize]
        public static void Init(TestContext context)
        {
            RepositoryTests.Init();
        }

        [TestMethod()]
        public void GetNextSalesOrderCodeTest()
        {
            ISalesordersRepository repo = new SalesordersRepository(_dbMock.Object);
            string next = repo.GetNextSalesOrderCode("151", "S0", "09").Result;
            System.Diagnostics.Debug.WriteLine("next: " + next);
            Assert.AreEqual("S0.151-0902001", next);
        }
    }

 

Two things have changed in this test class:

  • I call the base class’s Init() method to initialize _dbMock.
  • I pass _dbMock.Object into the repository constructor (DI).

So let’s run our test and see what happens. This should be good…

Bummer

Running the test gives an unexpected exception:

[Screenshot: the test fails with an exception about IDbAsyncQueryProvider]

The problem is that the DbSet mocks don’t implement the IDbAsyncQueryProvider interface, which makes sense because we are not using a real database here. In the repository we use the async / await pattern a lot, which depends on this interface, so we need to find a workaround.

Following the indicated link brought me to this great article: IQueryable doesn’t implement IDbAsyncEnumerable. I copied the code with the TestDbAsync classes into my project and referenced it in my mocks (as described in the article), so I won’t copy all of it in this post; the gist of the async enumerator is sketched below. I did change my test base class in the following ways:
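To give an idea of what those helper classes do, the async enumerator from the linked article is essentially the following (slightly condensed; the matching TestDbAsyncQueryProvider is longer and can be found in the article):

using System.Collections.Generic;
using System.Data.Entity.Infrastructure;
using System.Threading;
using System.Threading.Tasks;

internal class TestDbAsyncEnumerator<T> : IDbAsyncEnumerator<T>
{
    private readonly IEnumerator<T> _inner;

    public TestDbAsyncEnumerator(IEnumerator<T> inner)
    {
        _inner = inner;
    }

    public void Dispose()
    {
        _inner.Dispose();
    }

    // Delegates the async MoveNext to the synchronous in-memory enumerator.
    public Task<bool> MoveNextAsync(CancellationToken cancellationToken)
    {
        return Task.FromResult(_inner.MoveNext());
    }

    public T Current
    {
        get { return _inner.Current; }
    }

    object IDbAsyncEnumerator.Current
    {
        get { return Current; }
    }
}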

Creating the InitializeMock<T> method

For each dataset to be mocked the following code must be executed:

var mockSet = new Mock<DbSet<Blog>>();
mockSet.As<IDbAsyncEnumerable<Blog>>()
    .Setup(m => m.GetAsyncEnumerator())
    .Returns(new TestDbAsyncEnumerator<Blog>(data.GetEnumerator()));
mockSet.As<IQueryable<Blog>>()
    .Setup(m => m.Provider)
    .Returns(new TestDbAsyncQueryProvider<Blog>(data.Provider));
mockSet.As<IQueryable<Blog>>().Setup(m => m.Expression).Returns(data.Expression);
mockSet.As<IQueryable<Blog>>().Setup(m => m.ElementType).Returns(data.ElementType);
mockSet.As<IQueryable<Blog>>().Setup(m => m.GetEnumerator()).Returns(data.GetEnumerator());

 

I created a generic method to avoid copying / pasting this code everywhere:

private static Mock<DbSet<T>> InitializeMock<T>(IQueryable<T> data) where T : class
{
    var mockSet = new Mock<DbSet<T>>();
    mockSet.As<IDbAsyncEnumerable<T>>()
        .Setup(m => m.GetAsyncEnumerator())
        .Returns(new TestDbAsyncEnumerator<T>(data.GetEnumerator()));
    mockSet.As<IQueryable<T>>()
        .Setup(m => m.Provider)
        .Returns(new TestDbAsyncQueryProvider<T>(data.Provider));
    mockSet.As<IQueryable<T>>().Setup(m => m.Expression).Returns(data.Expression);
    mockSet.As<IQueryable<T>>().Setup(m => m.ElementType).Returns(data.ElementType);
    mockSet.As<IQueryable<T>>().Setup(m => m.GetEnumerator()).Returns(data.GetEnumerator());

    return mockSet;
}

This allows me to write the SetupXXX methods like this:

private static void SetupSalesOrders()
{
    var data = new List<SalesOrder>
    {
        new SalesOrder { Id = 21910342, Code = "SO.151-0402009", SalesOrder_Representative = 4 },
        // other entities, left out for brevity
        new SalesOrder { Id = 26183, Code = "SO.151-0402001", SalesOrder_Representative = 4 },
    }.AsQueryable<SalesOrder>();

    _salesOrdersMock = InitializeMock<SalesOrder>(data);
}

 

The actual SalesOrdersRepositoryTests class remains unchanged. And in case you wondered: yes, my test turns green now.

[Screenshot: the test now passes]

Conclusion

Writing unit tests for repositories can be done. It requires some work, but not as much as one would expect. With the help of Excel (or some other tool) you can generate the test data in an easy way. I hope this post has given you a framework for your EF unit testing.

I want to warn again that not everything can be tested using mocks, so you will eventually need to run integration tests as well. But if you can already fix a lot of bugs (and prevent them from coming back later) using some clever unit tests, that is a quick win.

References

Testing with a mocking framework (EF6 onwards)

IQueryable doesn’t implement IDbAsyncEnumerable


Unit testing your Repositories

The problem

We have created our data layer, and of course we want to test it.

It is very hard to use a (physical) database to test your code. There are some problems with this (commonly used) approach:

  • Unit tests must be fast, fast, fast. If you include database access in the tests then the execution time of your unit tests will be much slower, hence your tests won’t be run regularly anymore and they become less useful.
  • You don’t know the state of your test database. Maybe somebody has run some tests before you, and the database is not in the expected state for your tests. Or maybe the order that your tests are executed in is not always the same. Adding or removing tests can mess this order up easily. So your tests are not deterministic and again pretty much useless now.
  • You don’t want to change the state of your database during the testing. Maybe someone in your team is using the same database at the time that you are running your tests, and they may not be pleased with it.

Of course there are solutions for these problems:

  • From time to time you may want to run some integration tests. For this you can set up a dedicated database to work with. Before running your tests you can then restore the database in a known state so at least that part will be deterministic. You can then run your tests in a specific order (or cleanup after each test, which requires more code, but is more foolproof). There are (again) some problems with this approach:
    • Your databases will evolve. You will probably add tables, procedures and other objects to your database. The database that you have backed up will not have these changes yet, so some maintenance is required.
    • It can be slow to run all your tests. Typically integration tests are executed before going live with your project, and often this is done automatically at night, giving you a detailed report in the morning. If possible, it is a good idea to execute the integration tests every night. You can then discuss the problems during your daily standups.
    • You do need integration tests anyway, because sometimes your database will act different from your mocked objects and you can’t afford to find this out in production.
  • From EF7 on, an in-memory database will be provided that allows you to test your database code fast and deterministically. This article will not cover that (because otherwise I wouldn’t have anything left to write about anymore 😉)
  • You can mock your data tables to unit test your repositories. That will be the subject of my next post.

What (not) to test?

In this post I will assume that you use Entity Framework (EF) to implement your repositories. Some of the code in your repository will be very straightforward, and only involve basic EF functionality (like LINQ queries, …). EF is a third-party (Microsoft) library that has been tested by Microsoft, so you don’t have to test EF functionality. I usually use a very simple rule: do I get a method working correctly the first (or second 😉) time? Then it probably doesn’t require a test.

So we want to test the more complex methods in the repository: the ones that perform calculations, update multiple tables, …

Setting up your repository

Design for testing (SoC)

Let’s say that you want to perform some wild calculations over a table. You get the rows from that table and then with the entities you perform your calculation.

But your calculation has nothing to do with the database; all it needs is a collection of objects (entities) to perform its work. So it may be a good idea to add a (private) function that performs the calculations on an IEnumerable<T>. Testing the calculation is now easy (you just create some lists in your tests and run the calculations on them). Testing the whole function may have become unnecessary.

Separation of concerns, grasshopper 😉
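To make the idea concrete, here is a minimal sketch of such a split inside a repository. All names here (GetTotalOrderValue, Order, and so on) are hypothetical, not from the post:

public decimal GetTotalOrderValue(int customerId)
{
    // Data access: fetch the rows; nothing interesting to unit test here.
    List<Order> orders = _db.Orders.Where(o => o.CustomerId == customerId).ToList();
    return CalculateTotal(orders);
}

// The actual calculation works on any IEnumerable<Order>, so a test can
// simply feed it an in-memory list; no database needed.
internal static decimal CalculateTotal(IEnumerable<Order> orders)
{
    return orders.Sum(o => o.Quantity * o.UnitPrice);
}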

Design for testing (Dependency Injection – DI)

Initially we will write the repository like:

    public class EventsRepository : IEventsRepository
    {
        private readonly PlanningEntities db = new PlanningEntities();
        // …
    }

This will tie the EF context hard to the repository. The code is simple and clean, but very hard to test. So a better way is to inject the EF context, and provide a default when you don’t inject one. Example:

    public class EventsRepository : IEventsRepository
    {
        private readonly PlanningEntities _db;

        public EventsRepository()
        {
            _db = new PlanningEntities();
        }

        public EventsRepository(PlanningEntities db)
        {
            _db = db;
        }
        // …
    }

In this example there are two constructors. The default constructor will be used in your controller and instantiates the repository with the real EF context. The second constructor can be used in your test code to pass the mocked context into the repository. Nothing further needs to be modified in the repository, unless you use more exotic functionality (like stored procedures, etc.). In the functions using the repository nothing needs to be modified either (thanks to the default constructor).

Of course using a dependency injection container like Unity can help a lot with setting up your DI. Whether you need one depends on your project; usually when your project grows, you will need it at some point.
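For example, with Unity the wiring could look like this (a minimal sketch, assuming the Unity NuGet package is installed; the namespace is Microsoft.Practices.Unity in the classic packages):

// Register the interface once at application startup...
var container = new UnityContainer();
container.RegisterType<IEventsRepository, EventsRepository>();

// ...and resolve it wherever a repository is needed; Unity picks a suitable constructor.
IEventsRepository repo = container.Resolve<IEventsRepository>();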

Caveats

You can’t mock stored procedures, user functions, triggers, … By nature these are written in some form of SQL (T-SQL, PL/SQL, …) and they usually contain multiple statements that are executed directly against the database engine. If you use them, you’ll need to integration test your code. This doesn’t mean that stored procedures are bad and should not be used, just that you need to be aware of this limitation.

You can’t mock default values for fields, calculated fields, … Identity fields (sequences in Oracle, AutoNumber in MS Access) are a good example of this: they self-increment with each newly created record. The same goes for unique identifiers (GUIDs), of course. There are also the typical fields like [CreationDate] that get the current date and time by default when a record is created. You’ll need to write some extra code in your mocks to cope with these.

Sometimes fields are calculated when a record is updated. Typically there will be a [LastUpdated] field that will be set by an update trigger when the record is updated. For performance reasons some tables may be denormalized, and maintained by triggers. This will also require extra work in your mocks.
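As an illustration, one way to simulate an identity column and a [CreationDate] default in a Moq DbSet mock (a sketch; the Contact entity and its properties are hypothetical):

int nextId = 1;
var contactsMock = new Mock<DbSet<Contact>>();
contactsMock.Setup(s => s.Add(It.IsAny<Contact>()))
            .Returns<Contact>(c =>
            {
                c.Id = nextId++;               // simulate the identity column
                c.CreationDate = DateTime.Now; // simulate the column default / trigger
                return c;
            });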

Foreign keys may be a problem as well. If you don’t have all the tables in your model, and a table has a foreign key to one of these not-included tables, your mocks can’t catch this either.

Conclusion

Writing testable code requires some architecture. You must think about how your code will be tested, and how this can be done as efficiently as possible. You must also accept that you can’t mock everything, so integration tests will remain necessary.

I actually started this post with the idea of introducing Effort to you, a nice library to mock your EF classes. But while writing I decided to postpone this to next week’s post.

References

https://en.wikipedia.org/wiki/Separation_of_concerns

Introduction to Unity


Creating an OData V4 service

I’m writing an agenda application, where the data can be delivered to the calendar control using either REST or OData V4. I chose the latter because it allows for much more flexibility afterwards.

I already have an MVC 5 project containing the calendar control, so now I want to create the OData services.

NuGet is your friend

I first need to add the OData NuGet package to the project. I suppose that you know how to use NuGet, but just to be sure: Tools > NuGet Package Manager > Package Manager Console.

In the console type

Install-Package Microsoft.AspNet.Odata

This will do all the work for you:

PM> Install-Package Microsoft.AspNet.Odata
Attempting to gather dependencies information for package 'Microsoft.AspNet.Odata.5.9.0' with respect to project 'SyncfusionMvcApplication1', targeting '.NETFramework,Version=v4.6.1'
// Removed a lot of NuGet output here

Added package 'Microsoft.AspNet.OData.5.9.0' to 'packages.config'
Successfully installed 'Microsoft.AspNet.OData 5.9.0' to SyncfusionMvcApplication1

As you can see the OData package has some dependencies, which are nicely resolved by NuGet. The net result is that two assemblies have been added to the project: Microsoft.OData.Core and Microsoft.OData.Edm.

Creating the Entity Framework classes

I already have an existing database, so I will create my EDM from that database (call me lazy…). Right-click on the Models folder and select “ADO.NET Entity Data Model”. Pick the first choice (EF Designer from database).

[Screenshot: Entity Data Model Wizard, EF Designer from database]

Then choose your data connection. In my case it is a SQL Server database, running on my development machine:

[Screenshot: Choose Your Data Connection dialog]

In the next dialog I choose the necessary tables. For this example I’ll only need 1 table:

[Screenshot: Choose Your Database Objects, selecting the table]

Clicking “Finish” takes me to the EventsContext.edmx diagram. Under the Models folder some new classes have been generated to make it easy to work with the data.

Of course it is also possible to work “code first”, in the end all we need is a way to retrieve and update data. For the OData example this could even be a static list of events, but there is no fun in that!

Create the OData endpoint

If you created your project with the Web API option, you should be fine. You’ll have a WebApiConfig.cs file under the App_Start folder which contains the method Register() that is called from within the Application_Start() method in Global.asax.

If this is not the case then you have some manual work to do:

Create the WebApiConfig class under the App_Start folder

Right-click on the App_Start folder, and then add a class (Add > Class). Name the class WebApiConfig and open the generated WebApiConfig.cs file. Mind that when you create a class under a folder, the namespace of this class will contain the folder name, so remove “.App_Start” from the namespace name.

The name of the class (and its namespace) is not important, but to remain compatible it is a good idea to use the standard naming conventions.

In our case the class looks like

using System.Web.OData.Builder;
using System.Web.OData.Extensions;
using System.Web.Http;
using SyncfusionMvcApplication1.Models;

namespace SyncfusionMvcApplication1     //.App_Start => removed
{
    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            ODataModelBuilder builder = new ODataConventionModelBuilder();
            builder.EntitySet<Events>("Events");
            config.MapODataServiceRoute(
                routeName: "ODataRoute",
                routePrefix: "OData",
                model: builder.GetEdmModel());
        }
    }
}

We first create a builder that will contain the model to be exposed by the OData services. This is done by creating an ODataConventionModelBuilder object, which derives from ODataModelBuilder. This class generates an EDM using the same entity and property names as in the model classes. This happens in the next line:

            builder.EntitySet<Events>("Events");

The documentation says that the method EntitySet(…) registers an entity set as a part of the model. So the model now contains an Events entity set that can be returned to our client. In the next lines:

            config.MapODataServiceRoute(
                routeName: "ODataRoute",
                routePrefix: "OData",
                model: builder.GetEdmModel());

 

I set up a route that will be prefixed with OData. So the URL for the OData services will be something like http://myservice/OData/Events. OData URLs are case-sensitive. Notice that the builder that we just set up is passed here as the last parameter.

You may need to verify that this method is called from Global.asax.cs. Check the Register() function for the following line:

            WebApiConfig.Register(config);

If the line isn’t there you’ll need to add it. Make sure that you add this line at the end of the Register function, otherwise you’ll get a very confusing exception saying: “ValueFactory attempted to access the Value property of this instance.”

So now we have set up the project to use OData, next thing we need to do is to create an OData controller. It turns out that this is the easy part.

Adding the OData Controller

Given that a good developer is lazy (but a lazy developer is not necessarily a good one) I searched a bit for OData V4 Scaffolding. We’re lucky, because this exists:

Create a folder named OData in your project, right-click on it and select Add > Controller. In VS 2015 you’ll find a list of controller types that you can add, including some OData V3 controllers.

Click on the “Click here to go online and find more scaffolding extensions” link below the dialog.

[Screenshot: Add Scaffold dialog with the link to find more scaffolding extensions]

In the “Extensions and Updates” dialog type “OData V4” in the “Search Visual Studio Gallery” text box. In the list you’ll find “OData v4 Web API Scaffolding”, which is the one you want to download. Allow the installer to make changes to your system. Unfortunately you’ll need to restart Visual Studio, so let’s do that now and have some coffee 🙂

After the restart of Visual Studio open your project and go back to the (still empty) OData folder. Go through the same motions as before: right-click > Add > Controller. Two more choices are presented now:

Microsoft OData v4 Web API Controller. This creates an OData controller with all the CRUD actions. This clearly indicates that we don’t need Entity Framework (EF) to create OData controllers. So it is possible (and often preferable) to implement the Repository pattern and then use the repositories to obtain or modify the data. If you plan to do this, then beware of the caveats that we’ll encounter later in this post (see the part on IQueryable).

Microsoft OData v4 Web API Controller Using EF. This does the same, but from an EF data context. Given that we’re using EF in this example, let’s try this one. A dialog box pops up; fill in the fields like this:

[Screenshot: Add Controller dialog, with the model class and data context filled in]

and click “Add” to generate your controller.

Running your application will take you to your homepage, add /OData/Events to the URL and you’ll get the list of all the events in the database.

Reminder: OData is case sensitive, so for example /OData/events will NOT work. This is by design.

Show me the code

It is nice to have all the code scaffolded, but when you want to modify something you must know where to do it. So let’s go over the generated code in EventsController.cs.

Class declaration

public class EventsController : ODataController

{

    private Planning365Entities db = new Planning365Entities();

The first thing to notice is that the EventsController class derives from ODataController. The ODataController class derives from ApiController, which makes this service a REST service with a bit more functionality (ODataController adds the protected methods Created and Updated to return action results for the respective operations).

Get the Events

An xxxEntities object db is created to access the database using Entity Framework.

// GET: odata/Events
[EnableQuery]
public IQueryable<Events> GetEvents()
{
    return db.Events;
}

The actual code is simple: return db.Events;

This returns an IQueryable, which is important because there is also the [EnableQuery] attribute. The combination of these two means that we can run lots of different queries against this method. You can try HTTP GET requests like the following:
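(These example URLs are illustrative, not from the original post; Id is the only Events property shown in this article.)

/OData/Events                      (all events)
/OData/Events?$top=10              (only the first 10 events)
/OData/Events?$filter=Id gt 100    (events with Id greater than 100)
/OData/Events?$orderby=Id desc     (events ordered by Id, descending)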

We could have written this function like this:

[EnableQuery]
public List<Events> GetEvents()
{
    return db.Events.ToList();
}

The results are the same, but…

  • db.Events is materialized by the ToList() function. This means that I also had to adapt the signature to become a List<Events>.
  • So if we query a table with 100,000 rows and use the $filter clause in the HTTP request to return only 10 records, ToList() will first retrieve all 100,000 records, and then the LINQ Where clause will be applied to this (in-memory) list.
  • In the first (correct) version, we returned an IQueryable, which means that LINQ lets the database do the work (a SQL WHERE clause is added to the query and only the 10 relevant records are retrieved from the database). Needless to say, this is a lot more efficient!

I’m raising this issue because often a repository implementation returns a collection instead of an IQueryable, which causes this (subtle) bug. This also shows that it is a good idea to test your code with large datasets, so you can catch this kind of error before you go to production!

Get an event by its key

// GET: odata/Events(5)
[EnableQuery]
public SingleResult<Events> GetEvents([FromODataUri] long key)
{
    return SingleResult.Create(db.Events.Where(events => events.Id == key));
}

Again the actual code is simple: db.Events.Where(events => events.Id == key)

db.Events.Find(key) would be more efficient, but we are returning a SingleResult class. This is actually an IQueryable with zero or one records in it, so we need to pass it an IQueryable object, which is exactly what the Where method returns.

The EnableQuery attribute is used again here, to allow for more than just the simplest queries. We can try, for example:
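(Again illustrative URLs, not from the original post.)

/OData/Events(5)              (the event with key 5)
/OData/Events(5)?$select=Id   (only the Id property of that event)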

Updating an event

For Updates we typically use the PUT verb. This will then replace the current event with the new event.

It is also possible to update entities using the PATCH or MERGE verbs. Both verbs mean the same and are allowed. The difference with PUT is that they don’t replace the entity, but only do a partial update: only the fields that have been changed will be modified in the data store.

// PUT: odata/Events(5)
public async Task<IHttpActionResult> Put([FromODataUri] long key, Delta<Events> patch)
{
    Validate(patch.GetEntity());

    if (!ModelState.IsValid)
    {
        return BadRequest(ModelState);
    }

    Events events = await db.Events.FindAsync(key);
    if (events == null)
    {
        return NotFound();
    }

    patch.Put(events);

    try
    {
        await db.SaveChangesAsync();
    }
    catch (DbUpdateConcurrencyException)
    {
        if (!EventsExists(key))
        {
            return NotFound();
        }
        else
        {
            throw;
        }
    }

    return Updated(events);
}

As a first remark: the class Events represents a single event. I did not singularize the name when the Entity Framework data model was created, hence this name. So don’t let that throw you off.

In the function signature we see that the patch parameter is of type Delta<Events> instead of just Events. This type tracks the changes for the Events class, allowing the line patch.Put(events) to do its work: overwrite the fields of the found event with the fields of the patch object.

The rest of the function is straightforward.

If you look at the Patch() method, you’ll see exactly the same code except for the line

patch.Patch(events);

The Patch method will only update the changed fields.

The Post( ) and Delete( ) methods should be clear by now.

Conclusion

Using the OData V4 templates it is easy to create OData services. There are some things that are good to know, but most can be inferred from the code.

Some references

http://www.asp.net/web-api/overview/odata-support-in-aspnet-web-api/odata-v4/create-an-odata-v4-endpoint

https://blogs.msdn.microsoft.com/davidhardin/2014/12/17/web-api-odata-v4-lessons-learned/


Using the repository pattern

In most applications database access is necessary. In .NET there are two main ways of doing this: ADO.NET or the Entity Framework. Of course there are great third-party libraries like NHibernate that do the trick as well, and if you are courageous you can use native libraries to access your database.

We usually need data from other places as well. We may need to access a directory service (like Active Directory) to obtain additional user info, or call web services to access remote data. Also, not all the data will come from a SQL Server database; other database management systems such as Oracle, PostgreSQL, etc. are possible too. And then there are the good old XML files, CSV files and flat files.

I’m sure you can think of other data sources than those that I have just summed up. But for your application it isn’t important where data is stored or where data comes from. We want to be able to think about the data in a more abstract way.

The repository

When we use ADO.NET, we’re close to the database. We may need to write SQL to obtain or manipulate our data. We’re close to the tables as well. Sometimes this is an advantage because we know exactly what we’re doing, but when things start to evolve this may become more complex.

With Entity Framework we are one step further. The database is largely abstracted away, and we can use inheritance and other OO mechanisms in our data models. But we’re still talking to a database, and we depend on it.

So let’s think what we really want to do in our application. We want to get a list of customers, we want to be able to insert, update, delete customers, and probably we want to filter some data left and right. But we don’t care where this data comes from.

So we need another abstraction layer, which we call a repository. Repositories can be implemented in many ways. Some people like to use generics in their repositories (which saves a lot of repetitive work), others like to create “pinpointed” repositories for the job at hand. And of course we can start from the generic one and then add some pinpointed code.

Contacts example

Let’s create a simple contacts application. We want to show a list of contacts, be able to insert a new contact, update contacts and delete contacts. So we can create a class like:

    public class ContactsRepository
    {
        public IEnumerable<Contact> GetContacts()
        { … }
        public Contact GetContactByID(int contactID)
        { … }
        public Contact Insert(Contact ins)
        { … }
        public Contact Update(Contact upd)
        { … }
        public Contact Delete(int contactID)
        { … }
    }

This class can be implemented using Entity Framework, or ADO.NET, or XML files, or … The user of the class doesn’t care, as long as the class behavior is right. So we effectively abstracted away the way of storing our data.

This screams for interfaces… Using VS2015 we right-click on the class name > Quick Actions > Extract Interface > OK. The IContactsRepository interface is generated for us.

[Screenshot: the Extract Interface quick action]

    public interface IContactsRepository
    {
        Contact Delete(int contactID);
        Contact GetContactByID(int contactID);
        IEnumerable<Contact> GetContacts();
        Contact Insert(Contact ins);
        Contact Update(Contact upd);
    }

This interface can be made generic. We just need to specify the class name for the entity type. If you have standards that say that all primary keys will be integers then that will be enough. Otherwise you’ll need to make the data type for the primary key generic as well. In this example we’ll do the latter:

    public interface IRepository<T, K>
    {
        T Delete(K key);
        T GetByID(K key);
        IEnumerable<T> Get();
        T Insert(T ins);
        T Update(T upd);
    }

So now the IContactsRepository interface becomes simple:

    public interface IContactsRepository : IRepository<Contact, int>
    {
        // additional functionality
    }

If you need more specific functionality you can extend the IRepository<T, K>  interface and add the additional methods. Then you implement this interface:

    public class ContactsRepository : IContactsRepository
    {
        // function implementations
    }
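As an illustration, a possible EF6-based implementation of the generic members could look like this. This is a sketch, not the code from the post: ContactsEntities is an assumed context name exposing a DbSet<Contact>, and the members follow the generic IRepository<Contact, int> shown above.

using System.Collections.Generic;
using System.Data.Entity;

public class ContactsRepository : IContactsRepository
{
    private readonly ContactsEntities _db = new ContactsEntities();

    public IEnumerable<Contact> Get()
    {
        return _db.Contacts;
    }

    public Contact GetByID(int key)
    {
        return _db.Contacts.Find(key);
    }

    public Contact Insert(Contact ins)
    {
        Contact added = _db.Contacts.Add(ins);
        _db.SaveChanges();
        return added;
    }

    public Contact Update(Contact upd)
    {
        // Attach the detached entity and mark it as modified.
        _db.Entry(upd).State = EntityState.Modified;
        _db.SaveChanges();
        return upd;
    }

    public Contact Delete(int key)
    {
        Contact existing = _db.Contacts.Find(key);
        if (existing != null)
        {
            _db.Contacts.Remove(existing);
            _db.SaveChanges();
        }
        return existing;
    }
}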

Code Organization

Let’s say that we implement this repository using Entity Framework. In whichever way you use it (database first, code first, model first), you’ll end up with some classes that will reflect the database tables. In our example this is the Contact class (the entity class). It may be tempting to use these classes for anything else as well, such as sending retrieved data in a WCF web service, or displaying data in an MVC application, but this is generally a bad idea.

Using entities in WCF

When we create a WCF service GetCustomers( ) that returns a list of Customer objects, we’ll need to specify the [DataContract] attribute on the data class that we want to return, and the [DataMember] attribute on all the properties that we want to serialize with it. You could update your entity classes to add these attributes, but when you regenerate your classes from the database your modifications will be overwritten. And that is not even the biggest problem. The biggest problem is that you have violated the Separation of Concerns principle. You are using entity classes to return web service data. If this is the only thing you intend to do with your entity classes, this may be “acceptable” (but certainly not future-proof); but if you also want to show them in an MVC application, with its own data annotations, then things will become messy.

For this reason you should put the entities and the repositories in a separate assembly, which you can name Contacts.Data. In that way you have a project which will only handle data access and will only expose the entity classes and the IRepository interfaces. Internally the interfaces will be implemented by using the Entity Framework (or something else). This assembly will be a class library, so you only need to reference it in your other projects to use it.

In the WCF project we reference the Contacts.Data project so we have access to the data. We then define our own DataContract classes, which may be a copy of the entity classes (with all the necessary attributes); or not.

Show me the code

The WCF service will not return Contacts, but Customers. Here is the definition of the Customer:

    [DataContract]
    public class Customer
    {
        [DataMember]
        public int CustomerID { get; set; }
        [DataMember]
        public string Name { get; set; }
    }

 

As you can see the class name is different and the ID property is now called CustomerID.

The interface is a plain vanilla WCF interface:

    [ServiceContract]

    public interface ICustomersService

    { 

        [OperationContract]

        IEnumerable<Models.Customer> GetCustomers();

    }

 

Notice the [ServiceContract] and [OperationContract] attributes, which make sure that our web service exposes the GetCustomers() method.

The implementation contains few surprises as well. The only thing is that we need to convert the Contact to a Customer, something that LINQ is very suitable for:

    public class CustomersService : ICustomersService
    {
        private readonly IContactsRepository _repo;

        public CustomersService()
        {
            _repo = new ContactsRepository();
        }

        public IEnumerable<Customer> GetCustomers()
        {
            var contacts = _repo.GetContacts();
            var qry = from c in contacts
                      select new Customer { CustomerID = c.ID, Name = c.Name };

            return qry.ToList();    // Don't forget ToList()
        }
    }

 

This is a trivial conversion. Sometimes this logic may be more complex, maybe also calculating some fields. And sometimes it may be just a member-wise copy, where libraries like AutoMapper can help you reduce code.
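A minimal AutoMapper sketch, using the classic static API (newer AutoMapper versions use a MapperConfiguration instead):

// Configure the mapping once at startup; CustomerID is filled from Contact.ID.
Mapper.Initialize(cfg => cfg.CreateMap<Contact, Customer>()
    .ForMember(d => d.CustomerID, opt => opt.MapFrom(s => s.ID)));

// Then a conversion becomes a one-liner.
Customer customer = Mapper.Map<Customer>(contact);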

If the code for the conversion is used in many places, then you can make a function for it. I sometimes create a utility class only for the conversions.

Some caveats using EF in your repository

As you know, tables are represented as DbSets in Entity Framework. So if you have a context with a property called Contacts, then you can use it like

context.Contacts.

But this will not obtain the data from the database until you call a method like ToList() or ToArray() on it.

So in your repository you can return context.Contacts. The advantage is that in your client code (the WCF code in our example) you can chain other methods like Where, OrderBy, … to it, and only when you call a method that actually retrieves data (First, Single, Any, ToList, …) will the query be generated and sent to the database. This is a very efficient way of working with Entity Framework, but it will tie you to it. If that doesn’t bother you, then this is a good solution.

Another way to implement this is by returning

context.Contacts.ToList().

In this case you obtain the list of entities from the database and return them as a collection. The advantage is clear: You don’t depend on Entity Framework now, you just get a list of Contacts that you can work with. The problem however is that subtle errors can emerge:

int nrOfContacts = repo.GetContacts().Count();

If you have 100,000 records in your database, then all the records will be loaded into memory, and the count is calculated on the records in memory.

If you use the previous method (returning the DbSet), then a SELECT COUNT(*) will be sent to the database, resolving your query with indexes in the database and returning only the integer containing your count.

So choose wisely!

Implementing the MVC application

In the MVC application we can call the web service. To do that we’ll first create a proxy to make our life easy, and then obtain the customers. Again it would be possible to use the Customer class directly to display the records, but this poses the same “Separation of Concerns” problem. So we create a ViewModel class to handle all the communication with the user. The idea is that everything that has to do with the data is handled by the entity classes, and everything that has to do with the representation of the data, and getting data from the user, is handled by the ViewModel classes. The only additional code to write is again the trivial conversion between the entities and the ViewModels.

Conclusion

It may seem like we are creating a lot of classes that make no sense. But separating the concerns like this makes our code easier. In the data library we only work with the entity classes. In the web services we only use the entity classes to talk with the data library, and then transform them into what we want to return to the client. And in the MVC application we do the same. This gives a lot of freedom, but it also makes things much more testable. I know that you have been waiting for the T-word. I will cover tests for this flow in another post.


Mocking functionality using MOQ

What is wrong with this code?

class Program
{
    static void Main(string[] args)
    {
        DateTime dob = new DateTime(1967, 7, 9);
        int days = CalcDays(dob);
        Console.WriteLine($"Days: {days}");
    }

    private static int CalcDays(DateTime dob)
    {
        return (DateTime.Today - dob).Days;
    }
}

Well, the code will work. But testing will become a problem because DateTime.Today is a non-deterministic function. That means that it is possible that the function will return a different value when called at different times. In this case it is clear that tomorrow the function will return something else than today or yesterday. So if you want to write a test for this function you’ll need to change it every day. And actually, changing a test to make it pass isn’t exactly the idea of unit testing…

How can we solve this?

There is some work involved here. Let me draw what we’re going to implement:

[Diagram: the Date class gets an IClockService injected; the real implementation returns DateTime.Today]

I have shown this pattern in previous posts already. We are injecting the IClockService into the Date class. In the actual implementation we then implement the CurrentDate property as DateTime.Today.
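The post doesn’t show these types, but a minimal sketch (assuming the names from the diagram) could look like this:

public interface IClockService
{
    DateTime CurrentDate { get; }
}

// The real implementation simply wraps DateTime.Today.
public class ClockService : IClockService
{
    public DateTime CurrentDate
    {
        get { return DateTime.Today; }
    }
}

public class Date
{
    private readonly IClockService _clock;

    public Date(IClockService clock)
    {
        _clock = clock;
    }

    // Days between the injected "today" and the date of birth.
    public int CalcDays(DateTime dob)
    {
        return (_clock.CurrentDate - dob).Days;
    }
}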

So now we can create our test project and write some tests. In the tests we will mock the IClockService so that CurrentDate becomes deterministic and we don’t have to modify our tests every day.

Our tests could look something like:

class FakeClockService : IClockService
{
    public DateTime CurrentDate
    {
        get
        {
            return new DateTime(1980, 1, 1);
        }
    }
}

[TestClass()]
public class DateTests
{
    [TestMethod()]
    public void CalcDaysTest()
    {
        // arrange
        Date dt = new Date(new FakeClockService());
        DateTime dob = new DateTime(1967, 7, 9);

        // act
        int days = dt.CalcDays(dob);

        // assert
        Assert.AreEqual(4559, days);
    }
}

As you can see I have implemented a FakeClockService that will always return a fixed date. So testing my code becomes easy.

But if I want to test my code with different fake values for CurrentDate, I will need to implement the interface for each of these different values. In this simple case the interface contains one simple member, so besides messing up my code that is not a big problem. But when your interface becomes larger this becomes a lot of work.

Enter MOQ

According to their website, MOQ is “The most popular and friendly mocking framework for .NET”. I tend to agree with this.

Setting up MOQ

Starting to use MOQ is easy: install the MOQ NuGet package in the test project. This will set up your project to use MOQ. There are two possible ways to do this.

Using the GUI

In Visual Studio open the NuGet Package Manager (Tools > NuGet Package Manager > Manage NuGet Packages for Solution…), find MOQ, and install the latest stable version.

Using the NuGet Package Manager Console

If you like typing then bring up the NuGet Package Manager Console (Tools > NuGet Package Manager > Package Manager Console). Make sure that your test project is selected as the Default Project and then type

install-package MOQ

[Screenshot: installing MOQ from the Package Manager Console]

You will now find 2 additional references in your project references: Mocking and Moq.

Using MOQ in your tests

[TestMethod()]
public void CalcDaysTestNegative()
{
    // arrange
    var mock = new Mock<IClockService>();
    mock.Setup(d => d.CurrentDate).Returns(new DateTime(1960, 1, 1));

    Date dt = new Date(mock.Object);
    DateTime dob = new DateTime(1967, 7, 9);

    // act
    int days = dt.CalcDays(dob);

    // assert
    Assert.AreEqual(-2746, days);
}

The Mock class resides in the Moq namespace so don’t forget to add

using Moq;

in your list of usings.

I set up the mock by calling the Setup method. In this method I actually say: “When somebody asks you for the CurrentDate, always give them January the first of 1960.”

The variable mock now contains an implementation of IClockService. I defined the CurrentDate property to return new DateTime(1960, 1, 1) in two lines of code. This keeps my code clean because I don’t need to implement another fake class for this case, and I can easily test all my border cases (such as when the two dates are equal).

To be complete: mock itself is not an IClockService, but mock.Object is. So I pass mock.Object into new Date(). Now I have a deterministic function for my tests!

You will appreciate MOQ even more when your interfaces become larger. When you have an interface with 20 methods, and you only need 3 methods, then you just have to mock these 3 methods, not all 20. You can also mock classes with abstract or virtual functions in the same way.

Conclusion

This is by no means a complete tutorial on how to use MOQ. You can find a tutorial at https://github.com/Moq/moq4/wiki/Quickstart written by the MOQ guys themselves. There you can find more of the power of MOQ.

I did want to show that using a mocking framework like MOQ will allow you to write simpler tests, so you have one less excuse for not writing tests!

Also now you know my date of birth, so don’t forget to send me a card!

One more thing

I wanted to implement a simple GetAge(…) function, which should be easy enough. But then I read this thread on StackOverflow and I decided to keep things simple and just go with CalcDays(…). Call me a coward 😉

Happy testing!!


Handling a “Microservices” Project

I’m currently working on a big application that has been built up from microservices: it is composed of small(ish) apps that together form the whole. The idea is that these small apps work together to create the final application.

This greatly limits the complexity of the application, by separating it into small parts – divide and conquer. These parts can live on their own, or have some parts in common.

So we have some actual apps, which have user interfaces and a back-end, and then we have some “helper” apps (let’s call them components) that serve mainly as a service for the other apps.

We’re setting things up so that every app can be accessed by the other apps. This means that adding a new app sometimes comes down to going “shopping” among the components, and possibly among the other apps. The actual work for the new app is then relatively limited.

How do we start with a new app?

I have given away the answer a bit already, but I worked out a “cookbook recipe” for implementing a new app in a growing application. I am assuming that the functional analysis has been done already. Also, I don’t talk about database structures etc. to keep the post lighter. Here are the steps:

Analyze the data that will be needed.

Find out if the data is already available in other (external) data sources. Try to find a single source of truth for your data. There is a good chance that the data already exists somewhere in your application, so you can access it from there.

Sometimes you may want to extend an existing app to provide this additional data. That is better than copying the data into your own little database and starting all over again. Reuse is key!

Analyze what is left. This should be the data that is specific for your service. If this is not the case: rinse and repeat!

Analyze the functionality

Find out if the new functionality already (partially) exists in other services or components. Same story: reuse.

What is left should again be specific for this service. If this is not the case: rinse and repeat!

Describe where and how you’re going to implement this functionality.

Create the necessary REST services

Describe the resources that you need. These will be the URIs for your REST services.

Describe the needed functionality. This will become the verbs for your services.

Create an initial Swagger file that contains these descriptions. This will help you later in the process to verify if you covered everything. Make sure that you only add resources and functionality that you need. For example: if you don’t want to be able to DELETE all your customers, then don’t provide a DELETE /api/Customers. Keep the YAGNI principle in mind.

Analyze the UX

Verify that you can cover all the data needs for the pages (or screens) that you want to create. The Swagger file that we just created is a great help for this. Also verify that the Swagger file contains no more than is needed; otherwise you’ll be implementing too much.

Verify if there are reusable UX components. For example if you’re using Angular, then there are some good chances that the way to input a data / time / customer / … have been standardized already, and that there is a library of standard “UX components” that you can use. If all is well there is a repository with all the reusable components.

Verify if some of the UX functionality that you need can be created as a reusable component. In that case: describe it in the repository and implement it.

Notice that so far we haven’t implemented anything yet. These steps may seem a lot of overhead, but in the end all that we’re doing here is promoting reuse at different levels. This means that now we know

  • which components we should modify
  • which components must be created

so we can work as a team to implement this new functionality in time.

Implement the REST interfaces

We have the Swagger files in place, and they have been verified against the UX. So now is the time to implement them. Or not?

First create some simple stub implementations. Once these are ready another team can start working on the UI while in parallel we create the REST services. This also forces us to think about the APIs first, so we know what we’re building.
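A hypothetical Web API stub could be as simple as this (all names are invented for the example); it returns canned data so the UI team can start immediately:

using System.Collections.Generic;
using System.Web.Http;

public class CustomersController : ApiController
{
    // GET api/Customers: canned data so the UI team can start right away;
    // the real implementation replaces this later.
    public IEnumerable<Customer> Get()
    {
        return new[]
        {
            new Customer { Id = 1, Name = "Stub customer 1" },
            new Customer { Id = 2, Name = "Stub customer 2" }
        };
    }
}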

Once the stubs are in place, start by creating your unit tests (you knew this was coming!), then implement the REST services one by one and unit test them. Once a service is ready, use a tool like Postman, Fiddler, SoapUI, … to exercise it. Make sure that you can run all the tests automated, so you can run them as often as needed with very little effort.

Implement the UX

Once the REST stubs are ready UX can start using them. This doesn’t mean that we can’t do some useful work before, but having the stubs will allow us to implement the features completely, knowing that the back-end implementation will follow.

Use the UX components that we have found before as Lego blocks in your pages, reusing as much as possible.

And again: write unit tests (for Angular you can for example use Protractor, https://angular.github.io/protractor/#/, or another testing framework of your choice) and make them pass. When your functionality is ready, test it completely and make sure that everything works as expected.

Now you can move on to the next user story!

Perform integration tests

Run all the automated tests that have been created so far. If you extended other services then run their automated tests as well, making sure that all the tests pass.

Find out if some of the functional scenarios can be automated with tools like Selenium. Scripting the tests will save you a lot of manual work afterwards. And yes I know, when the UI changes you’ll need to adapt your scripted tests, but in most cases these changes won’t be dramatic and you’ll benefit from the automated UI tests more than they cost you.

So now what is left is testing those scenarios that you couldn’t automate. Don’t forget this step, you may miss some important problems.

Done

So now you can merge your code and hope for the best. QA should take it from here.

There is always tension between QA and DEV because as a DEV we try to make sure that QA doesn’t find any bugs. As QA we know that DEV tried to write the code as perfect as possible and still we’ll need to find some bugs. This should be a positive tension!

Conclusion

We can see that in this whole flow development is not the main part. There is some preparatory work involved (sometimes referred to as technical analysis 😉), and there is a lot of testing involved. This guarantees that we don’t rewrite stuff, and that what we write is as good as possible. It should be easy to see that this saves a lot of work. Most of the preparatory work is usually done by an application architect, and then overseen by a lead developer.

This is of course just a framework that you’ll need to fill in for your own needs. Creating some flowchart may help visualize this for your team and your project leader.


Grasshopper Unit testing

“I have created my code, split out some functionality and written the tests. But the mocks have become a real mess, and much more work than I had thought it would be. Can you help me, Sensei?”

“Of course. Show me your code, grasshopper”

The code is a validator class with different validation methods. The public functions in this class would perform the validations, and in turn call some private methods to retrieve data from the database.

“How can I test these private methods, sensei?”

“Usually, if you have private methods that need to be tested, that’s a code smell already. It isn’t always bad, but it could indicate more problems. Using the PrivateObject class can help you with this.”

The validators would first call some of the private methods, then perform some trivial additional checks and then return the validation result as a Boolean. To test the actual validation methods, the private methods had been made public (smell) and then stubbed. So something was going wrong here. But my young apprentice came up with a very good answer:

“But sensei, if stubbing out methods in a class to test other methods in the same class smells so hard, wouldn’t it be a good idea then to move these methods into a separate class?”

Now we’re talking! The Validation class was actually doing 2 separate things. It was

  1. validating input
  2. retrieving data from the database

This clearly violates the “Separation of Concerns” principle. A class should do one thing. So let’s pseudo-code our example:

public class Validator
{
    public bool CanDeleteClient(Client x)
    {
        bool hasOrders = HasClientOrders(x);
        bool hasOpenInvoices = HasOpenInvoices(x);
       
        return !hasOrders && !hasOpenInvoices;
    }
   
    public bool CanUpdateClient(Client x)
    {
        bool hasOpenInvoices = HasOpenInvoices(x);
       
        return !hasOpenInvoices;
    }
   
    public bool HasClientOrders(Client x)
    {
        // Get orders from db
        // …
    }
   
    public bool HasOpenInvoices(Client x)
    {
        // Get invoices from db
        // …
    }
}

In the tests the HasClientOrders and HasOpenInvoices functions were stubbed so no data access would be required. They were actually put public to make it possible to test them.

So splitting this code out in 2 classes makes testing a lot easier. Here is a drawing of what we want to achieve:

[Diagram: Validator depends on IValidatorRepo; ValidatorRepo implements it against the database]

Show me the code

public interface IValidatorRepo
{
    bool HasClientOrders(Client client);
    bool HasOpenInvoices(Client client);
}

public class ValidatorRepo : IValidatorRepo
{
    public bool HasClientOrders(Client client) { … }
    public bool HasOpenInvoices(Client client) { … }
}

public class Validator
{
    IValidatorRepo _repo;
    public Validator()
    {
        _repo = new ValidatorRepo();
    }
   
    public Validator(IValidatorRepo repo)
    {
        _repo = repo;
    }

    public bool CanDeleteClient(Client x)
    {
        bool hasOrders = _repo.HasClientOrders(x);
        bool hasOpenInvoices = _repo.HasOpenInvoices(x);
       
        return !hasOrders && !hasOpenInvoices;
    }
   
    public bool CanUpdateClient(Client x)
    {
        bool hasOpenInvoices = _repo.HasOpenInvoices(x);
       
        return !hasOpenInvoices;
    }
}

What have we achieved? We now have two classes and one interface instead of just one class. There seems to be more code, and it looks more complex…

But the class Validator violated the “Separation of Concerns” principle. Instead of only validating, it was also accessing the data. And this we now have fixed. The ValidatorRepo class does the data access, and it is a very simple class. The Validator class just checks if a client has orders or open invoices, but it doesn’t care how this is done.

Notice that there are two constructors: the default constructor instantiates the actual ValidatorRepo, and the second version takes the IValidatorRepo interface. So now in our test program we can create a class that just returns true / false in any combination that we like, and then instantiate the Validator with this.

In the Validator we then just call the methods on the interface, again “not caring” how they are implemented. So we can test the Validator implementation without needing a database. We don’t have to stub functions in our function under test, so the air is clean again (no more smells).
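With Moq (as shown in an earlier post), such a test could look like this. This is a sketch: the test name is invented, and Client is assumed to have a parameterless constructor.

[TestClass]
public class ValidatorTests
{
    [TestMethod]
    public void CanDeleteClient_ReturnsFalse_WhenClientHasOrders()
    {
        // arrange: this client has orders, but no open invoices
        var repoMock = new Mock<IValidatorRepo>();
        repoMock.Setup(r => r.HasClientOrders(It.IsAny<Client>())).Returns(true);
        repoMock.Setup(r => r.HasOpenInvoices(It.IsAny<Client>())).Returns(false);

        var validator = new Validator(repoMock.Object);

        // act
        bool result = validator.CanDeleteClient(new Client());

        // assert: a client with orders may not be deleted
        Assert.IsFalse(result);
    }
}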

“Thank you sensei. But it seems that this is something very useful, so I would think that somebody would have thought about it already?”

“Yes, this principle is called Dependency Injection, and it is one of the ways to achieve testable classes, by implementing Separation of Concerns.”
