Architecture of a Polyglot Azure Application

Introduction

I started working on a C# project that will send requests to several different partners and receive feedback from them. Each partner receives requests in its own way. This means that sending requests can (currently) be done by

  • calling into a REST service,
  • preparing a file and putting it on an FTP share,
  • sending a mail (SMTP).

Needless to say, the formats of these requests are never the same, which complicates things even more.

Receiving feedback can also be done in different ways:

  • receiving a file through FTP. These files can be CSV files, JSON files, XML files, each in their own format,
  • polling by calling a web service on a schedule.

So we need an open architecture that is able to send a request and store the feedback received for that request. This feedback consists of changes in the state of a request. I noticed that this is a stand-alone application that can easily be moved into the cloud. We use Microsoft Azure.

Here is a diagram for the current specifications:

Current specifications

First observations

When I analyzed this problem, I immediately noticed some things that could make our lives easier. And when I can make things simpler, I’m happy!

The current flow

Currently everything is handled in the same application, which is just a plain C# solution in which a couple of the protocols are implemented. This is OK because there are currently only 2 partners, but this will be extended to 20 partners by the end of the year.

There are adapters that transform the request into the right format for the corresponding partner, and then send it through a REST service. So we already have a common format to begin with. If the “PlaceOrder” block can receive this common format we know at least what comes in. And we know what we can store in the “Feedback Store” as well; this will be a subset of the “PlaceOrder request.”

PlaceOrder will then have to switch per partner to know which data format to transform the request into, and send it to that partner.
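The per-partner switch can be sketched as an adapter registry that maps each partner to its formatter. The project itself is C#; this is an illustrative Python sketch, and the partner names, field names, and formats are hypothetical.

```python
import json
from typing import Callable, Dict

def to_partner_a(request: dict) -> str:
    # Hypothetical partner expecting JSON with its own field names.
    return json.dumps({"order_id": request["id"], "qty": request["quantity"]})

def to_partner_b(request: dict) -> str:
    # Hypothetical partner expecting a semicolon-separated line.
    return f'{request["id"]};{request["quantity"]}'

# Registry mapping partner name to its formatter; adding a partner
# means adding one entry here instead of growing a switch statement.
FORMATTERS: Dict[str, Callable[[dict], str]] = {
    "partner_a": to_partner_a,
    "partner_b": to_partner_b,
}

def format_request(partner: str, request: dict) -> str:
    try:
        return FORMATTERS[partner](request)
    except KeyError:
        raise ValueError(f"Unknown partner: {partner}")
```

Each formatter takes the common request format as input, so the rest of the pipeline never needs to know partner specifics.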

On the feedback side, we know that feedback comes in several formats, over several channel types. So in the feedback handler we need to normalize this data so that we can work with it in a uniform way. Also, some feedback will come as a file (SFTP) containing several feedback records, while some comes one record at a time (for example, when polling). This needs to be handled as well.
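The normalization step can be sketched as a set of small parsers that all emit the same record shape. This is a Python sketch under assumptions: the normalized record (a request id plus a state) and the partner field names are invented for illustration.

```python
import csv
import io
import json
from typing import List

def normalize_csv(text: str) -> List[dict]:
    # Hypothetical partner CSV with "id" and "status" columns.
    reader = csv.DictReader(io.StringIO(text))
    return [{"request_id": row["id"], "state": row["status"]} for row in reader]

def normalize_json(text: str) -> List[dict]:
    # Hypothetical partner JSON with "requestId" and "state" fields.
    return [{"request_id": str(r["requestId"]), "state": r["state"]}
            for r in json.loads(text)]
```

A single record retrieved by polling is normalized into the same shape, so downstream code handles files and polls uniformly.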

So now we can think about some more building blocks. The green parts are new:

[diagram]

  • The “Initiator Service” will receive a request from the application (and in the future from multiple applications). All it does is transform the request into a standard format and put it on the “Request Queue.” Some common validations can already be done here. Creating a separate service allows future applications to use this functionality as well.
  • We introduce the “Request Queue”, which will receive the standardized request.
  • And now we can create the “PlaceOrder Queue Handler,” which will wake up when a request arrives on the queue and then handle all the messages on the queue.
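The two sides of the queue can be sketched with an in-memory queue standing in for the Azure Storage queue. This is an illustrative Python sketch, not the actual services; the validation and message shape are assumptions.

```python
import json
import queue

request_queue: "queue.Queue[str]" = queue.Queue()

def initiator_service(raw_request: dict) -> None:
    # Validate, standardize to JSON, and enqueue; the caller returns
    # immediately without waiting for the request to be processed.
    if "id" not in raw_request:
        raise ValueError("request must have an id")
    request_queue.put(json.dumps(raw_request))

def place_order_queue_handler() -> list:
    # Wakes up and drains the queue, handling one message at a time.
    handled = []
    while not request_queue.empty():
        message = json.loads(request_queue.get())
        handled.append(message["id"])  # real handler would format & send
        request_queue.task_done()
    return handled
```

The only contract between the two functions is the message format on the queue, which is exactly what makes them independently testable.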

Advantages of adding queues

  • Separation. A nice (and simple) separation between the caller (Application -> “Initiator Service“) and the callee (the “PlaceOrder Queue Handler“).
  • Synchronization. In the queue handler we only need to worry about one request at a time. Everything is nicely synchronized for us.
  • Elasticity. When needed we can add more Queue Handlers. Azure can handle this automatically for us, depending on the current queue depth.
  • Decoupled load. Big loads will never slow down the calling applications, because all they have to do is put a message on the queue. So each side of the queue can work at its own pace.
  • Testing. Initiating the Queue Handler means putting a message on the queue. This can be done using tools such as the Storage Explorer. This makes testing a lot easier.
    • Testing the “Initiator Service“: call the service with the right parameters, and verify if the message on the Request Queue is correct.
    • Testing the “Queue Handler“: put a request in the correct format on the queue (for example, with Storage Explorer) and take it from there.
    • Both are effectively decoupled now.

We can do the same for the feedback handler. Each partner application can receive feedback in its own way, and then send the feedback records one by one to the Feedback Queue in a standard format. This takes away a lot of the complexity again. The feedback programs just need to interpret the feedback from their partner and put it in the right format on the Feedback Queue. The Feedback Queue Handler just needs to handle these messages one by one.

To retrieve the feedback status we’ll need a REST service to answer all the queries. You’ll never guess the name of this service: “Feedback Service.“ I left this out of scope for this post. In the end it is just a REST service that will talk to the data store via the “Repository Service.”

I also don’t want to access the database directly, so a repository service is created as well. Again, this is a very simple service to write.

But there is still a lot of complexity

[diagram]

The “Place Order Queue Handler” handles each request by formatting the message and sending it to the specific partner. Having all of this in one application doesn’t seem wise because

  • This application will be quite complex and large.
  • When a new partner needs to receive calls, we need to update (and test, and deploy) this application again.
  • This is actually what we do currently, so there would be little or no advantage in putting all this effort into it if we stopped here.

So it would be nice to find a way to extend the application by just adding some assemblies to a folder. The first idea was to use MEF (the Managed Extensibility Framework) for this. Using MEF we can dynamically load the modules and use them, effectively splitting out the complexity per module. Again, each module has only one responsibility: formatting and sending the request.
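MEF is .NET-specific, but the drop-in-module idea can be sketched in any language with dynamic loading. Here is a Python analogue using `importlib`: each partner module dropped into a package is discovered at runtime, with the convention (an assumption for illustration) that every module exposes a `send_request` function.

```python
import importlib
import pkgutil
from types import ModuleType
from typing import Dict

def load_partner_modules(package_name: str) -> Dict[str, ModuleType]:
    # Discover every module under the given package, so adding a new
    # partner means dropping in a file, not redeploying the host app.
    package = importlib.import_module(package_name)
    modules: Dict[str, ModuleType] = {}
    for info in pkgutil.iter_modules(package.__path__):
        modules[info.name] = importlib.import_module(
            f"{package_name}.{info.name}")
    return modules
```

In MEF the equivalent would be composing exports from assemblies in a directory catalog; the principle of one module per partner is the same.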

The same would apply (more or less) for the feedback part.

But thinking a bit further, I realized that this is actually nothing but a workflow application (comparable to BizTalk). And Azure provides us with Logic Apps, which are created to handle workflows. So let’s see how we can use this in our architecture…

[diagram]

I left out the calling applications from this diagram. A couple of things have been modified:

  • DLQ. For each queue I also added a Dead Letter Queue (DLQ). This is sometimes also called a poison queue. The “Initiator Service” puts a request on the queue to be handled. But if the Queue Handler has a problem (for example, the partner web service sends back a non-recoverable error code), we can’t let the Initiator Service know that. So we’ll put those failed messages on the DLQ to be handled by another application. One possible way to handle them is to send an e-mail to a dedicated address so the problem can be resolved manually.
  • Logic App. The “Request Q Handler” is now a Logic App. This is a workflow that Azure can execute automatically when a trigger fires. In our case the trigger is that one or more requests are waiting on the “Request Queue.” In this post I won’t go into the contents of this Logic App in detail, but this is the main logic:
    • Parse the content of the request message as JSON
    • Store the relevant parts of the message in the database with a “Received” status.
    • Execute the partner specific logic using Logic App building blocks, and custom made blocks.
    • Update the status of the request in the database to “Sent”
    • When something goes wrong put the request on the DLQ.
  • Configuration. The nice thing is that this can all be done using available building blocks in the Logic App, so no “real” programming is needed – only configuration. Adding a new partner just requires adding a new branch in the switch and implementing the partner logic.
  • The database is accessed by a REST service, and there are Logic actions that can communicate with a REST service. So accessing the database can be done in a very standard way.
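The DLQ behavior described above can be sketched with two in-memory queues: a failed message is moved aside with its error attached instead of being lost or retried forever. This is an illustrative Python sketch of the pattern, not the Azure implementation.

```python
import queue

request_queue: "queue.Queue[dict]" = queue.Queue()
dead_letter_queue: "queue.Queue[dict]" = queue.Queue()

def handle_with_dlq(send_to_partner) -> None:
    # Process every waiting request; on a non-recoverable error the
    # message goes to the DLQ, annotated with the failure reason, so
    # another application (or a human) can deal with it later.
    while not request_queue.empty():
        message = request_queue.get()
        try:
            send_to_partner(message)
        except Exception as error:
            message["error"] = str(error)
            dead_letter_queue.put(message)
        finally:
            request_queue.task_done()
```

In Azure, Service Bus queues provide dead-lettering as a built-in sub-queue; with plain Storage queues you add a second queue yourself, as sketched here.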

The feedback part is also a bit simpler now

  • One Logic App will poll every hour for the partners that work this way. This means that this App will have a block per “polling partner” which retrieves the states for the open requests, transforms them into a standard format, and puts them on the Feedback Queue. So the trigger for this Logic App is just a schedule.
  • Some partners communicate their feedback by putting a file on an FTP location. This is the trigger and the handling is a bit different:
    • Interpret the file contents and transform them into JSON.
    • For each row in the JSON collection execute the same logic as before.
    • Delete the file.
    • Again, these are all existing blocks in a Logic App, so no “real” programming!

The “Feedback Q Handler” is again simple. Because the FTP Logic Apps (notice the plural!) make sure that the feedback records are stored one by one on the “Feedback Queue,“ all we have to do is update the status in the database and possibly execute a callback web service.

Conclusion

Thanks to Microsoft Azure I was able to easily split the application into several small blocks that are easy to implement and to test. In the end we reduced a programming problem to a configuration problem. Of course some programming remains to be done, for example the “Repository Service” and possibly some other services to cover more exotic cases.

About Gaston

MCT, MCSD, MCDBA, MCSE, MS Specialist