How to use Microsoft Azure Key Vault

Introduction

In this post I will describe how to set up and use an Azure key vault to store your secret values.

Sometimes we see secrets like storage keys and connection strings written as literals in the code of a project, such as

public static class Secrets
{
  public const string ApiKey = "MyAppKey";
  // ...
}

This doesn’t seem too bad, because

  • It is the fastest way to obtain a key.
  • The key probably won’t change very often.

But there are some serious drawbacks to this way of working as well:

  • If the key does change, code needs to be adapted and redeployed.
  • The key is plainly visible in the code.
  • The key stays forever in the source control history, maybe even in a public repository.
  • When you change the environment (from DEV to ACC to PROD), the key will probably change as well. This becomes a problem with a hard-coded key.

It would be nice to store the key elsewhere, but what are the options?

  • The key can be stored in a configuration file. This is already better, but the file will still be readable by developers (and may still end up on the public repo).
  • The key can be stored in Azure. This is what we’re going to talk about in this article.

Prerequisites for this article

If you want to follow along with the examples, you’ll need an Azure subscription. On the Azure home page you can find the steps to create a free subscription, which will be valid for 3 months.

Introducing Azure Key Vault

We can store the following items in a Key Vault, for later use:

  • Secrets. Many types of data can be stored here, such as tokens, passwords, keys, …
  • Keys. Encryption keys go here; they can be referenced later to encrypt / decrypt your data.
  • Certificates.

These items are stored securely in the vault; only users (or processes) with the right access rights will be able to retrieve them. Access is monitored, so you can see who accessed what, and how the Key Vault is performing.


Creating an Azure Key Vault

In the Microsoft Azure portal


  • Click on the “Create a resource” button at the top left.
  • In the blade that appears enter “Key Vault” in the search box and select “Key Vault” from the list below.


Click “Create” and fill in the necessary parameters:

  • Name: a unique name for the key vault
  • Subscription: the subscription that will contain your key vault
  • Resource group: here you can either select an existing resource group or create a new one. For this example, you may want to create a new resource group so you can clean up everything easily when you are done “playing”.
  • Location
  • Pricing tier: Standard, unless you want HSM-backed keys (which require the Premium tier).
  • Access policies: by default the current user will be the owner of the key vault. You can add or remove permissions here.
  • Click on “Create” and the key vault will be created for you. This can take some time.

Inserting values in the Key Vault

  • Find your new key vault in Azure, and click on it. If your subscription contains a lot of objects, you may first select the resource group that the key vault is in.
  • You now see the overview page, with some useful information.

    • The most important piece of information here is the DNS Name (top right). You will need this to connect to the key vault from your code.
    • You can also see the number of requests, the average latency, and the success ratio.
    • Pro tip: make a note of the average latency as a baseline value for future requests.
  • On the left side click on “Secrets”. You will see all the currently stored secrets. If you just created the key vault, this will be empty.
  • Click on “Generate/Import” to create a new secret:
    • Upload options: manual
    • Name: Password   (for our example)
    • Value: My Secret
    • Content type: leave this empty
    • If you wish you can also set an activation date and an expiration date for this secret. We will leave this empty for our example.
    • Make sure that “enabled” is set to yes and click “Create”.

When you click on the “Secrets” button on the left again, you will now see an entry for this secret.

If you prefer to do this by scripting, the next section is for you.

Setting up the key vault using Azure Cloud Shell

Using a script to create an Azure object makes it repeatable. If you have multiple tenants, you can compose a script that will create the necessary objects for each tenant. This will save you time because

  • Speed: executing a script is obviously faster than creating each object by hand.
  • Consistency: if everything is scripted, you can be sure that all the objects are created identically for each tenant. This can save you hours of hunting configuration bugs.
  • Versioning: you can keep the scripts in source control, which allows you to version them as well.

Open Cloud Shell


At the top, click the “Cloud Shell” icon. If this is the first time that you open the cloud shell, a wizard will be shown to set up the shell. You can choose the scripting language to use (PowerShell or Linux Bash), and then Azure will create some storage for you. There is also a fair warning that the storage will cost you some money.

For this example I will use Linux Bash.

# variables used throughout the script
RESOURCE_GROUP='CodeProject'
LOCATION='WestEurope'
KEY_VAULT='CPKeyVault666'

# create the resource group and the key vault inside it
az group create --name $RESOURCE_GROUP --location $LOCATION
az keyvault create --resource-group $RESOURCE_GROUP --name $KEY_VAULT
az keyvault list

# create the 'Password' secret and verify that it is there
az keyvault secret set --vault-name $KEY_VAULT --name Password --value 'My Secret'
az keyvault secret list --vault-name $KEY_VAULT
az keyvault secret show --vault-name $KEY_VAULT --name Password --query value --output tsv

Using Azure Key Vault in your .NET project

Project setup


Using Visual Studio 2019, create a new .NET Core console app and name it ‘KeyVault’.

NuGet packages

To use Azure Key Vault, you’ll first need to add 2 NuGet packages to your project:

  • Microsoft.Azure.KeyVault
  • Microsoft.Azure.Services.AppAuthentication

Open the “Package Manager Console” (Tools > NuGet Package Manager > Package Manager Console…) and type the following statements:

install-package Microsoft.Azure.KeyVault
install-package Microsoft.Azure.Services.AppAuthentication

In your source file you will need the following using statements:

using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;

Reading a string from the Key Vault

To separate the concerns in the application it is best to create a separate class for this, such as:

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.KeyVault.Models;
using Microsoft.Azure.Services.AppAuthentication;

namespace KeyVault
{
  public class KeyvaultUtilities : IKeyvaultUtilities
  {
     private readonly IKeyVaultClient _keyVaultClient;
     private readonly string _vaultBaseUrl;

     public KeyvaultUtilities(string keyvaultName)
     {
        _vaultBaseUrl = $"https://{keyvaultName}.vault.azure.net";
        AzureServiceTokenProvider azureServiceTokenProvider = new AzureServiceTokenProvider();
        _keyVaultClient = new KeyVaultClient(
           new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
     }

     /// <summary>
     /// Get the value for a secret from the key vault.
     /// </summary>
     /// <param name="keyname">The name of the secret.</param>
     /// <returns>The value of the secret.</returns>
     /// <exception cref="KeyNotFoundException">Thrown when the secret is not found in the key vault.</exception>
     public async Task<string> GetSecretAsync(string keyname)
     {
        try
        {
           var secret = await _keyVaultClient.GetSecretAsync(_vaultBaseUrl, keyname)
                                             .ConfigureAwait(false);
           return secret.Value;
        }
        catch (KeyVaultErrorException kvex)
        {
           throw new KeyNotFoundException($"Keyname '{keyname}' does not seem to exist in this key vault", kvex);
        }
     }
  }
}

The purpose is to read a secret from the key vault, so that is the only method that I have implemented. You can add other key vault related methods in the class when needed.
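
The IKeyvaultUtilities interface itself isn’t shown in this post; a minimal version, assuming only this single method, could look like this:

using System.Threading.Tasks;

namespace KeyVault
{
  // Minimal interface for the utilities class; extend it as you add more key vault related methods.
  public interface IKeyvaultUtilities
  {
     Task<string> GetSecretAsync(string keyname);
  }
}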

Using this class is easy. Instead of passing the key vault name as a string literal, you can get it from a settings file. That also allows you to move easily between your environments.

Notice that we never created a secret with a name “xyz”. Trying to retrieve this value will throw a KeyNotFoundException.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace KeyVault
{
  class Program
  {
     static async Task Main(string[] args)
     {
        Console.WriteLine("Hello World!");
        IKeyvaultUtilities util = new KeyvaultUtilities("cpkeyvault666");

        string pwd = await util.GetSecretAsync("Password");
        Console.WriteLine("Password: " + pwd);

        try
        {
           // This secret was never created, so GetSecretAsync will throw a KeyNotFoundException.
           string xyz = await util.GetSecretAsync("xyz");
           Console.WriteLine("xyz: " + xyz);
        }
        catch (KeyNotFoundException ex)
        {
           Console.WriteLine(ex.Message);
        }
     }
  }
}

Cleanup in Azure

On the Azure Portal go back to the Cloud Shell. Delete the ‘CodeProject’ resource group:

RESOURCE_GROUP='CodeProject'
az group delete --name $RESOURCE_GROUP --yes

This will delete the ‘CodeProject’ resource group, with all of its contents. Don’t worry if you don’t perform this step; the key vault only costs you a whopping 3 cents per 10,000 operations. You can calculate your costs here: https://azure.microsoft.com/en-us/pricing/calculator/.

You can also delete the resource group through the Azure portal.

First retrieval of the secret can be (very) slow

Retrieving the first secret can take several seconds. If you are not sure that you will always need a secret from the key vault, you may consider using the class Lazy<T>.

The next retrievals are fast.

For this reason you may consider registering the KeyvaultUtilities as a singleton and injecting it instead of recreating it each time. How you do this will depend on the type of application that you are creating.
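
As a sketch (reusing the class and vault name from the example above), deferring the first retrieval with Lazy<T> could look like this:

using System;
using System.Threading.Tasks;

namespace KeyVault
{
  // Holds a single, lazily created KeyvaultUtilities instance for the whole application.
  public static class VaultAccess
  {
     private static readonly Lazy<IKeyvaultUtilities> _vault =
        new Lazy<IKeyvaultUtilities>(() => new KeyvaultUtilities("cpkeyvault666"));

     // The instance is only constructed on first use, so the slow first retrieval
     // only happens when a secret is actually needed.
     public static Task<string> GetSecretAsync(string keyname) => _vault.Value.GetSecretAsync(keyname);
  }
}

In an ASP.NET Core application you would more likely register the class as a singleton in the service collection, for example with services.AddSingleton<IKeyvaultUtilities>(new KeyvaultUtilities("cpkeyvault666")).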

References

https://docs.microsoft.com/en-us/azure/key-vault/key-vault-overview

https://docs.microsoft.com/nl-be/azure/key-vault/quick-create-net

https://docs.microsoft.com/en-us/azure/key-vault/tutorial-net-create-vault-azure-web-app


Creating multiple identical VMs in Microsoft Azure

Introduction

I am preparing a course for 5 people. They will all need a virtual machine (VM) with Visual Studio 2017 and some files on it to perform the exercises. I could create each machine one by one and perform the same installation on every one of them, but that would not be very productive, and it would be error-prone. Instead I want to create the virtual machines from a “master” image, from which I can easily create the other VMs.

Prerequisites

If you don’t have an Azure account yet, there are some ways to get a free test account. You can surf to https://portal.azure.com, where you will be asked for your credentials. If you don’t have an account yet, you can click on “Create one!” and Microsoft Azure will gladly guide you through creating a new account. This account will be free for the first 3 months and will provide you with a free (limited) budget to allow you to test Microsoft Azure.

Creation of the first VM

I want to create a Windows VM image that will contain Visual Studio 2017, and all the necessary course files. I already took these steps to organize my Azure resources:

  • Created a new resource group called “courses”.
  • In that resource group created a new VM called “vs2017-2”.
  • Once the VM was running, installed all the needed software and downloaded the needed files.

This is outside the scope of this article, so I won’t describe it here. It would make a boring blog post…

Preparing the VM to be used as a template image

Now that the VM is installed the way we like it, let’s destroy it …

We are going to prepare the image in a way that it can be deployed on multiple computers (or VMs in our case). These don’t necessarily have the same configuration, so we need a tool to prepare for this cloning process. Enter sysprep.exe.

Sysprep can strip the image to the minimum, allowing it to be used to create other VMs. Each VM that we will create using this image will have the same software installed, with the same data files, settings, …

You can find sysprep in this folder: %windir%\System32\Sysprep. On https://blogs.technet.microsoft.com/danstolts/2014/05/how-to-sysprep-sysprep-is-a-great-and-powerful-tool-and-easy-too-if-you-know-how-step-by-step/ you can find the use of Sysprep described very well.

Sysprep can be used with parameters (when you know what you are doing), or without parameters, which will pop up a little form. These are the values to fill in:

  • Out of Box Experience. Select “Enter System Out-of-Box Experience (OOBE)” as the system cleanup action.
  • Generalize. This checkbox will change your image so that it can run on a different computer. All the hardware-specific settings will be removed.
  • Shutdown. With the previous 2 settings you’re going to make a clean image of your computer, which will only be useful to create other images from. So you don’t want this image to reboot.

The command-line equivalent of these settings is sysprep.exe /oobe /generalize /shutdown.

Click OK when you’re certain that all the security data can be wiped from this VM. Sysprep will clean up your VM, and then execute the generalize step. This can take several minutes to run. When it is ready, we can go back to the Azure portal to capture the image.

Capturing the VM in Azure

As said before, the next step is now to capture the VM, so that we can clone it later. This is done on the blade for the VM itself. To go there: open the “courses” resource group, then open the VM that you just created and generalized. On the top menu you’ll find the “Capture” button.

Clicking this button takes us to the “Create image” page, which will show some warnings to start with. Here you give your new image a name, assign it to a resource group, and you get the possibility to delete the VM that you are capturing. This makes sense, because that VM will not be useful anymore. I have created an image called “vs2017-image” in the resource group “courses” and decided to clean up (Automatically delete).

Clicking on the “Create” button …

  • Stops the VM. When you have shut down the VM from within Windows, it is still allocated in Azure (and still costs money). If you’re not going to use a VM for a while, don’t forget to also stop it in the Azure portal. Warning: when you restart the VM in the portal, it will have another IP address. If you downloaded the RDP file to access this machine, you’ll need to adapt the IP address in the RDP file, or download it again. For the course I will only use the 5 VMs for 4 days, and only between 8:00 and 17:00. Therefore I also create a policy on each VM that makes it stop automatically at 18:00.
  • Generalizes the VM further.
  • Creates the image.
  • Deletes the VM, as requested.

Even though the VM is deleted, the other elements are not automatically deleted, so this needs to be done manually. These items don’t cost a lot in MS Azure, but it is good practice to remove what you don’t need anymore.

  • Public IP address. Click on the Public IP address to open its blade. Click on the “Dissociate” button to remove it from its network interface (and confirm). Now you can click the “Delete” button to make the final kill!
  • Network interface. Open the blade by clicking on the name, then click “Delete” (and confirm).
  • Network security group. Open the blade by clicking on the name, then click “Delete” (and confirm).
  • The Disk. Open the blade by clicking on the name, then click “Delete” (and confirm).

The order of deletion is important, because some resources depend on others.

Creating a new virtual machine from the image

Now it is time to profit from the work we did before. In the Azure portal, click on the image that you created (in this example “vs2017-image”). In the blade that appears, click on “Create VM”. This will take you through a wizard-like series of pages to enter all the necessary parameters. The important parameters are:

  • Resource group. You can select an existing resource group, or create a new one for the VM.
  • VM name. This must be a unique name for your VM.
  • Image. This will be pre-filled with the image name that you just created.
  • Size. The size for your new VM. This can be modified afterwards if needed.
  • Username / password.
  • Inbound port rules. If you want to access the VM over RDP, you need to add an inbound rule for it here. You can specify these rules on the first page of the wizard, or on the networking tab.
  • Most of the other fields will depend on your specific needs.

When you’re done, click on the “Create” button. The VM is now created from the image. You can test the VM by starting it and connecting to it, to verify that everything works correctly. When the VM has just been created it is already running, so you can connect to it immediately.

To create additional VMs, you don’t have to wait for the first VM to finish creation.

Conclusion

When you need one or two VMs it may not be worth setting up an image to clone the VMs from. But when you need more than that, you’ll save a lot of time using the sysprep / capture combo. In the end the steps to create an image are quite simple:

  • Create a VM that will serve as the master template. Install all the necessary software on it, together with all the data files that you may need. When everything works, remove the temporary files that you left behind while testing the VM. If needed, also remove MRU lists (e.g. in Visual Studio: recently used files and projects) and other user state.
  • Run the sysprep tool on this VM.
  • Once sysprep is done, and the VM is shut down, capture the VM in the Azure portal. It is not necessary to remove all the left-overs from the master VM, but it is good practice.
  • When the capture is done, you can create new VMs from the created image.


Why would you use Common Table Expressions?

Introduction

In this article I assume that you already have a good understanding of SQL. I will introduce some concepts very briefly before moving on to Common Table Expressions.

Below you can find the relevant database diagram of the database that I will use in this article:

[database diagram]

How is SQL processed by SQL Server?

When we look at a basic SQL statement, the general structure looks like this:

SELECT <field list>
FROM <table list>
WHERE <row predicates>
GROUP BY <group by list>
HAVING <aggregate predicates>
ORDER BY <field list>

As a mental picture we see the order of execution as:

First determine where the data will come from. This is indicated in the <table list>. This list can contain zero or more tables. When there are many tables, they can be joined using inner or outer join operators, and possibly also cross join operators. At this stage we consider the Cartesian product of all the rows in all the tables.

select count(*) from [HR].[Employees]                    -- 9
select count(*) from [Sales].[Orders]                    -- 831
select count(*) from [HR].[Employees], [Sales].[Orders]  -- 7479
select 9 * 831                                           -- 7479

In the third query we combine the tables, without a join operator. The result will be all the combinations of employees with orders, which explains the 7479 rows. This can escalate quickly.

As a side remark: this is valid SQL, but when I encounter it in a code review it makes me suspicious. One way to make clear that you want all these combinations is the CROSS JOIN operator:

select count(*) from [HR].[Employees] cross join [Sales].[Orders]    -- 7479

This will be handled exactly the same as query 3, but now I know that this is on purpose.


Once we know which data we are talking about, we can filter using the <row predicates> in the where clause. This makes sure that the number of rows is limited early in the process. Most join operators also have a condition (inner join T1 on <join condition>), which would be applied here, again limiting the number of rows.

select count(*) 
from [HR].[Employees] E 
inner join [Sales].[Orders]    O on E.empid = O.empid    -- 831

The predicate E.empid = O.empid will make sure that only the relevant combinations are returned.

If there is a group by clause, that happens next, followed by the filtering on aggregated values.

Then finally SQL looks at the <field list> to determine which fields / expressions / aggregates to make available, and then the order by clause is applied.

Of course this is all just a mental picture

Imagine a join between 3 tables, each containing 1000 rows. The resulting virtual table would contain 1,000,000,000 rows, from which SQL would then have to select the right ones. Through the use of indexes SQL Server will only obtain the relevant row combinations. Each DBMS (Database Management System) contains a query optimizer that will intelligently use indexes to obtain the rows in the <table list>, combined with the <row predicates> from the where clause, and so on. So, if the right indexes are created in the database, only the necessary data pages will be retrieved.

Inner queries

The table list can also contain the result of another SQL statement. The following is a useless example of this:

select count(*) 
from (select * from [HR].[Employees]) E

This example will first create a virtual table named E as the result of the inner query, and use this table to select from. We can now use E as a normal table that can be joined with other tables (or inner queries).

Tip: It is mandatory to give the inner select statement an alias, otherwise it will be impossible to work with it. Even if this is the only data source that you use, an alias is still needed.

As an example I want to know the details of the 3 orders that gave me the highest revenue. To start with, I first find those 3 orders:

select top 3 [orderid], [unitprice] * [qty] as LineTotal
from [Sales].[OrderDetails]
order by LineTotal desc

This gives us the 3 biggest orders:

orderid  LineTotal
10865    15810,00
10981    15810,00
10353    10540,00

Now I can use these results in a query like

select *
from [Sales].[OrderDetails]
where orderid in (10865, 10981, 10353)

which will give the order details for these 3 orders, at this point in time. I can use the result of the previous query in the where condition to make the query work at any point in time:

select *
from [Sales].[OrderDetails]
where orderid in 
(
    select top 3 [orderid]
    from [Sales].[OrderDetails]
    order by [unitprice] * [qty] desc 
)

This query will give me the correct results. I just had to adapt some things from the initial query: the IN clause requires a list of single values, so the inner query can only return 1 column (the [orderid]), and the order by clause then needs to use the full expression. Don’t worry, no more calculations than needed will be done. Trust the optimizer!

To further evolve this query we can now use an inner join instead of WHERE … IN. The resulting execution plan will be the same again, and the results too.

select *
from [Sales].[OrderDetails] SOD
inner join (select top 3 [orderid]
    from [Sales].[OrderDetails]
    order by [unitprice] * [qty] desc) SO 
on SO.orderid = SOD.orderid

Common Table Expressions

With all this we have gently worked toward CTEs. A first use is to separate the inner query from the outer query, making the SQL statement more readable. Let’s start with another senseless example to make the idea of CTEs clearer:

;with cte as
(
    select top 3 [orderid]
    from [Sales].[OrderDetails]
    order by [unitprice] * [qty] desc
)
select * from cte

What this does is to create a (virtual) table called cte, that can then be used in the following query as a normal data source.

Tip: the semicolon at the front of the statement is not needed if you execute this statement on its own. If the “with” statement follows another SQL statement, then both must be separated by a semicolon. Putting the semicolon in front of the CTE makes sure you never have to search for this problem.

The CTE is NOT a temporary table that you can use. It is part of the statement that it belongs to, and it is local to that statement. So later in the script you can’t refer to the CTE again. Given that the CTE is part of this statement, the optimizer will use the whole statement to make an efficient execution plan. SQL is a declarative language: you define WHAT you want, and the optimizer decides HOW to do it. The CTE will not necessarily be executed first; that depends on the query plan.

Let’s make this example more useful:

;with cte as
(
    select top 3 [orderid]
    from [Sales].[OrderDetails]
    order by [unitprice] * [qty] desc
)
select *
from [Sales].[OrderDetails] SOD 
inner join cte on SOD.orderid = cte.orderid

Now, for us humans, we have split the query into 2 parts: we first calculate the 3 best orders, then we use the results of that to select their order details. This way we can show the intent of our query.

In this case we use the CTE only once, but if you used it multiple times in the query it would become even more useful.

Hierarchical queries

[screenshot: the HR.Employees table]

In this table we see a field empid, and a field mgrid. (Almost) every employee has a manager, who can have a manager, … So clearly we have a recursive structure.

This kind of structure often occurs with

  • compositions
  • Categories with an unlimited level of subcategories
  • Folder structures
  • etc

So let’s see how things are organized:

select [empid], [firstname], [title], [mgrid]
from [HR].[Employees]

This gives us the following 9 rows:

[query results]

We can see here that Don Funk has Sara Davis as a manager.

If we want to make this more apparent, we can join the Employees table with itself to obtain the manager info (self-join):

select E.[empid], E.[lastname], E.[firstname], 
       E.[title], E.[mgrid],
       M.[empid], M.[lastname], M.[firstname]
from [HR].[Employees] E
left join [HR].[Employees] M on E.mgrid = M.empid

Notice that a LEFT join operator is needed because otherwise the CEO (who doesn’t have a manager) would be excluded.

[query results]

We could continue this with another level until the end of the hierarchy. But if a new level is added, or a level is removed, this query wouldn’t be correct anymore. So let’s use a hierarchical CTE:

;with cte_Emp as
(
select [empid], [lastname] as lname, [firstname], [title], 
       [mgrid], 0 as [level]
from [HR].[Employees]
where [mgrid] is null

union all

select E.[empid], E.[lastname], E.[firstname], E.[title], 
       E.[mgrid], [level] + 1 
from [HR].[Employees] E 
inner join cte_Emp M on E.mgrid = M.empid
)
select *
from cte_Emp

I’ll first give the result before explaining what is going on:

[query results]

As explained before we start with a semicolon, to avoid frustrations later.

We then obtain the highest level of the hierarchy:

select [empid], [lastname], [firstname], [title], 
       [mgrid], 0 as [level]
from [HR].[Employees]
where [mgrid] is null

This is our starting point for the recursion. Using UNION ALL we now obtain all the employees that have Sara as a manager. This is added to our result set, and then for each row that is added, we do the same, effectively implementing the recursion.

To make this more visual I added the [level] field, so you can see how things are executed. Row 1 has level 0, because it comes from the anchor part of the query (0 as [level]). Then, for each pass of the recursive part, the level is incremented. This shows nicely how the query is executed.

Conclusion

Common Table Expressions are one of the more advanced query mechanisms in T-SQL. They can make your queries more readable, or perform queries that would otherwise be impossible, such as outputting a hierarchical list. In this case the real power is that a CTE can reference itself, making it possible to handle recursive structures.

Reference

https://docs.microsoft.com/en-us/sql/t-sql/queries/with-common-table-expression-transact-sql


Creating a Visio Add-In in VS2017

Problem statement

A good friend asked me the following question:

How can I, in Visio, change the color of a shape to a previously selected color just by selecting it?

Sounds simple enough, but there are some caveats, so here is my attempt to tackle this problem. The main caveats were:

  • hooking up the SelectionChanged event,
  • keeping and accessing the state in the Ribbon (for the default color),
  • setting the color of the selected shape.

The code for this project can be found at https://github.com/GVerelst/ActOnShapeSelection.

Visio Add-In – Preparation

I decided to use Visual Studio 2017 to create a Visio Add-In. To do this we need to install the “Office/SharePoint development” workload. Since Visual Studio 2017, the installer allows for modular installation, so we just need to add this workload to our installation.

Start Visual Studio Installer (Start -> type “Visual Studio Installer”). In the installer window select “More > Modify”:

[screenshot: Visual Studio Installer]

After a little while this takes us to the workloads selection screen. Select Office/SharePoint development and then click “Modify”.


When you launch Visual Studio again you’ll find a bunch of new project templates.

Creating the Visio Add-in

In VS create a new project (File > New > Project…) like this:

[screenshot: New Project dialog]

As you can see there are new project templates for “Office/SharePoint”. I chose the Visio Add-in project and gave it an appropriate name: “ActOnShapeSelection”.

The result is a project with one C# file (ThisAddIn.cs). This is where we will initialize our events. As a test we show a message box when starting our add-in, and another one when closing it:

public partial class ThisAddIn
{
    private void ThisAddIn_Startup(object sender, System.EventArgs e)
    {
        MessageBox.Show("Startup ActOnShapeSelection");
    }

    private void ThisAddIn_Shutdown(object sender, System.EventArgs e)
    {
        MessageBox.Show("Shutdown ActOnShapeSelection");
    }

    // …
}

Remark: by default the namespace System.Windows.Forms is not included in the using list, so we will need to add it. An easy way to do this is by clicking on the red-underlined MessageBox and typing ctrl+; (control + semicolon that is). Now we can choose how to solve the “using problem”.

Starting the application (F5) will now start Visio and our first message box is indeed shown. Closing Visio shows the second message box. No rocket science so far, but this proves that our add-in is loaded properly into Visio.

Convincing Visio to do something when a shape is selected is just a bit harder.

Wiring the SelectionChanged event

public partial class ThisAddIn
{
    private void ThisAddIn_Startup(object sender, System.EventArgs e)
    {
        Application.SelectionChanged += Application_SelectionChanged;
    }

    private void Application_SelectionChanged(Visio.Window Window)
    {
        MessageBox.Show("SelectionChanged ActOnShapeSelection");
    }
// …
}

The SelectionChanged event must be wired in the startup event. Later we will do the same for the ShapeAdded event. Once you know this “trick” things become easy.

When running this code, each time we select something in Visio we see our fancy message box. So the event wiring works. Now we want to be able to only execute code when a shape is selected. Let’s investigate if we can find out something about the selected object(s) in the “Window” parameter:

[screenshot: inspecting the Window parameter in the debugger]

As expected this is a dynamic object. Visio is accessed over COM. Luckily the debugger allows us to expand the dynamic view members. Looking through the members of this object we find a property “Selection”. This looks promising! Looking a bit further, “Selection” is an implementation of the interface IVSelection. And this interface inherits from IEnumerable.

So Selection is actually an enumerable collection of all the selected items, hence it can be handled using a standard foreach( ). Let’s try this:

private void Application_SelectionChanged(Visio.Window Window)
{
    //MessageBox.Show("SelectionChanged ActOnShapeSelection");
    Visio.Selection selection = Window.Selection;
    foreach (dynamic item in selection)
    {
        Visio.Shape shp = item as Visio.Shape;
        if (shp != null)
        {
            shp.Characters.Text = "selected";
        }
    }
}

We run the add-in again (F5) and add 2 shapes on the page. When we select the shapes, they get the text “selected”. So now we are capable of knowing which shapes are selected and doing something useful with them. Let’s add a new ribbon with actions to perform on our shapes. After all, that is the purpose of this exercise.

Adding a ribbon

This can easily be done by right-clicking on the project > Add > New Item…, then selecting Office/SharePoint > Ribbon (Visual Designer).


Name this ribbon “ActionsRibbon.”

Opening the ribbon, we can now use the visual designer. Make sure that the toolbox window is visible (View > Toolbox).

Now we add 3 ToggleButtons on the design surface, named btnRed, btnGreen, and you guessed it: btnBlue.  For each of the buttons we add a Click event by double-clicking on the button. Using the GroupView Tasks we also add a DialogBoxLauncher. This will open a ColorDialog for selecting a custom color.


Double-click the group to implement the “DialogLauncherClick” event, which will handle this.

The ActionsRibbon will contain its own data, being the 3 components for a color (Red, Green, Blue):

public byte Red { get; private set; }
public byte Green { get; private set; }
public byte Blue { get; private set; }

Each of the toggle buttons will toggle its own color component:

private void btnRed_Click(object sender, RibbonControlEventArgs e)
{
    Red = (byte)(255 - Red);
}

private void btnGreen_Click(object sender, RibbonControlEventArgs e)
{
    Green = (byte)(255 - Green);
}

private void btnBlue_Click(object sender, RibbonControlEventArgs e)
{
    Blue = (byte)(255 - Blue);
}

Remark: this code works fine as long as there is no possibility to pick a custom color. When a custom color is selected, the values will become something other than 0 or 255 and will no longer correspond to the UI. I leave it as an exercise to the reader to make a better implementation.
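
One possible improvement is sketched below, assuming the designer-generated toggle buttons are used through their Checked property (which a RibbonToggleButton exposes): derive each component from the toggle state instead of flipping the previous value.

private void btnRed_Click(object sender, RibbonControlEventArgs e)
{
    // Derive the component from the toggle state, so the UI and the stored value
    // cannot drift apart after a custom color has been chosen in the dialog.
    Red = btnRed.Checked ? (byte)255 : (byte)0;
}

The custom color handler below could then synchronize the buttons with the chosen color, for example with btnRed.Checked = Red > 127 (and likewise for green and blue).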

Choosing a custom color:

private void group1_DialogLauncherClick(object sender, RibbonControlEventArgs e)
{
    ColorDialog dlg = new ColorDialog { Color = Color.FromArgb(Red, Green, Blue) };

    if (dlg.ShowDialog() == DialogResult.OK)
    {
        Red = dlg.Color.R;
        Green = dlg.Color.G;
        Blue = dlg.Color.B;
    }
}

In the SelectionChanged event of our AddIn class we now need to refer to the RGB values from the Ribbon. Here is the full code for the event handler:

private void Application_SelectionChanged(Visio.Window Window)
{
    ActionsRibbon rib = Globals.Ribbons.ActionsRibbon;
    Visio.Selection selection = Window.Selection;
    foreach (dynamic item in selection)
    {
        Visio.Shape shp = item as Visio.Shape;
        if (shp != null)
        {
            shp.Characters.Text = "selected";
            shp.CellsSRC[(short)Visio.VisSectionIndices.visSectionObject,3, 0].FormulaU = $"THEMEGUARD(RGB({rib.Red}, {rib.Green}, {rib.Blue}))";
        }
    }
}

There is some Visio-fu involved in setting the color. Consider this a cookbook recipe: when you do this, you’ll get the desired result. Visio is not always straightforward, one could say.

Now we have a working Visio add-in that does what was asked. BUT when we add a new shape, it will automatically receive the selected color. To solve this we add another event handler:

Application.ShapeAdded += Application_ShapeAdded;

We also add a boolean indicating if we are adding a shape or not.

bool _isAddingAShape = false;

In the ShapeAdded event we set its value to true:

private void Application_ShapeAdded(Visio.Shape Shape)
{
    _isAddingAShape = true;
}

And we modify the SelectionChanged event to do nothing when a shape was added. This event will be called when a shape is selected, AND when a shape is added (which indeed selects it). The code:

private void Application_SelectionChanged(Visio.Window Window)
{
    if (! _isAddingAShape)
    {
        ActionsRibbon rib = Globals.Ribbons.ActionsRibbon;
        Visio.Selection selection = Window.Selection;
        foreach (dynamic item in selection)
        {
            Visio.Shape shp = item as Visio.Shape;
            if (shp != null)
            {
                shp.Characters.Text = "selected";
                shp.CellsSRC[(short)Visio.VisSectionIndices.visSectionObject, 3, 0].FormulaU = $"THEMEGUARD(RGB({rib.Red}, {rib.Green}, {rib.Blue}))";
            }
        }
    }
    _isAddingAShape = false;
}

Conclusion

Using Visual Studio it took us about 100 lines of code to implement the desired behavior. It would not be hard to add more actions to the ribbon. But in that case it would be wise to move the code that performs the action into the ActionsRibbon class, instead of using the values of this class in the AddIn class.

The hardest part was finding out how to obtain the newly created ribbon from within the AddIn class. All the rest is just a straightforward implementation.


Architecture of a Polyglot Azure Application

Introduction

I started working on a C# project that will communicate requests to several different partners, and receive feedback from them. Each partner will receive requests in their own way. This means that sending requests can (currently) be done by

  • calling into a REST service,
  • preparing a file and putting it on an FTP share,
  • sending a mail (SMTP).

Needless to say, the formats of these requests are never the same, which complicates things even more.

Receiving feedback can also be done in different ways:

  • receiving a file through FTP. These files can be CSV files, JSON files, XML files, each in their own format,
  • polling by calling a web service on a schedule.

So we need an open architecture that is able to send a request, and store the feedback received for this request. This feedback consists of changes in the state of a request. I noticed that this is a stand-alone application that can easily be moved into the cloud. We use Microsoft Azure.

Here is a diagram for the current specifications:

[diagram: current specifications]

First observations

When I analyzed this problem, I immediately noticed some things that could make our lives easier. And when I can make things simpler, I’m happy!

The current flow

Currently everything is handled in the same application, which is just a plain simple C# solution. In this solution a couple of the protocols are implemented. This is OK because currently there are only 2 partners. But this will be extended to 20 partners by the end of the year.

There are adapters that transform the request into the right format for the corresponding partner, and then send it through a REST service. So we already have a common format to begin with. If the “PlaceOrder” block can receive this common format we know at least what comes in. And we know what we can store in the “Feedback Store” as well; this will be a subset of the “PlaceOrder request.”

PlaceOrder then will have to switch per partner to know in which data format to transform the request, and send it to that partner.

On the feedback side, we know that feedback comes in several formats, over several channel types. So in the feedback handler we need to normalize this data so that we can work with it in a uniform way. Also, some feedback will come as a file (SFTP) with several feedback records, and some will come one record at a time (for example when polling). This needs to be handled as well.

So now we can think about some more building blocks. The green parts are new:

[diagram: the new building blocks]

  • The “Initiator Service” will receive a request from the application (and in the future from multiple applications). All it does is transform the request into a standard format (see the sketch after this list) and put it on the “Request Queue”. Some common validations can already be done here. Creating a separate service allows future applications to use this functionality as well.
  • We introduce the “Request Queue”, which will receive the standardized request.
  • And now we can create the “PlaceOrder Queue Handler”, which will wake up when a request arrives on the queue, and then handle all the messages on the queue.
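
To make the idea of a “standard format” a bit more concrete, the message that the “Initiator Service” puts on the “Request Queue” could be modeled like the sketch below. All names in it are hypothetical illustrations; the real format would be whatever subset of the PlaceOrder request you decide to standardize on.

using System;

// Hypothetical shape of a standardized request message on the Request Queue.
public class OrderRequestMessage
{
    public Guid RequestId { get; set; }        // correlation id, also used to match feedback later
    public string PartnerCode { get; set; }    // identifies the partner that must receive the order
    public DateTime CreatedUtc { get; set; }   // when the request entered the system
    public string Payload { get; set; }        // the partner-neutral order data, e.g. serialized JSON
}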

Advantages of adding queues

  • Separation. A nice (and simple) separation between the caller (Application -> “Initiator Service“) and the callee (the “PlaceOrder Queue Handler“).
  • Synchronization. In the queue handler we only need to bother about 1 request at a time. Everything is nicely synchronized for us.
  • Elasticity. When needed we can add more Queue Handlers. Azure can handle this automatically for us, depending on the current queue depth.
  • Throughput. Big loads will never slow down the calling applications, because all they have to do is put a message on the queue. So each side of the queue can work at its own pace.
  • Testing. Initiating the Queue Handler means putting a message on the queue. This can be done using tools such as the Storage Explorer. This makes testing a lot easier.
    • Testing the “Initiator Service“: call the service with the right parameters, and verify if the message on the Request Queue is correct.
    • Testing the “Queue Handler”: put a request in the correct format on the queue in some way (e.g. with the Storage Explorer, or from code as in the sketch after this list) and take it from there.
    • Both are effectively decoupled now.
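
To illustrate how simple such a test can be, here is a sketch that drops a test message on the queue from code instead of from the Storage Explorer. It assumes Azure Storage queues and the Azure.Storage.Queues package; the connection string, queue name and message body are placeholders.

using Azure.Storage.Queues;

class QueueTest
{
    static void Main()
    {
        // Placeholders: use your own storage account connection string and queue name.
        var queue = new QueueClient("<storage-connection-string>", "requests");
        queue.CreateIfNotExists();

        // The body would be a request in the agreed standard format.
        queue.SendMessage("{ \"requestId\": \"123\", \"partnerCode\": \"P01\" }");
    }
}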

We can do the same for the feedback handler. Each partner application can receive feedback in its own way, and then send the feedback records one by one to the Feedback Queue in a standard format. This takes away a lot of the complexity again. The feedback programs just need to interpret the feedback from their partner and put it in the right format on the Feedback Queue. The Feedback Queue Handler just needs to handle these messages one by one.

To retrieve the feedback status we’ll need a REST service to answer to all the queries. You’ll never guess the name of this service: “Feedback Service“. I left this out of scope for this post. In the end it is just a REST service that will talk to the data store via the “Repository Service.”

I also don’t want to access the database directly, so a repository service is created as well. Again, this is a very simple service to write.

But there is still a lot of complexity


The “Place Order Queue Handler” handles each request by formatting the message and sending it to the specific partner. Having this all in 1 application doesn’t seem wise because

  • This application will be quite complex and large
  • When a new partner needs to receive calls we need to update (and test, deploy) this application again.
  • This is actually what we do currently, so there would be little or no advantage in putting all this effort into it if we stopped here.

So it would be nice to find a way to extend the application by just adding some assemblies in a folder. The first idea was to use MEF for this. Using MEF we can dynamically load the modules and use them, effectively splitting out the complexity per module. Again, each module has only 1 responsibility (formatting & sending the request).

The same would apply (more or less) for the feedback part.

But thinking a bit further, I realized that this is actually nothing but a workflow application (comparable to BizTalk). And Azure provides us with Logic Apps, which are created to handle workflows. So let’s see how we can use this in our architecture…

[diagram: architecture with Logic Apps]

I left out the calling applications from this diagram. A couple of things have been modified:

  • DLQ. For each queue I also added a Dead Letter Queue (DLQ). This is sometimes also called a poison queue. The “Initiator Service” puts a request on the queue to be handled. But if the Queue Handler has a problem (for example, the Partner web service sends back a non-recoverable error code), we can’t let the Initiator Service know that. So we’ll put those failed messages on the DLQ to be handled by another application. A possible handling could be to send an e-mail to a dedicated address to resolve the problem manually.
  • Logic App. The “Request Q Handler” is now a Logic App. This is a workflow that can be executed automatically by Azure when a trigger is fired. In our case the trigger is that one or more requests are waiting on the “Request Queue”. In this post I won’t go into detail about the contents of this Logic App, but this is the main logic:
    • Parse the content of the request message as JSON
    • Store the relevant parts of the message in the database with a “Received” status.
    • Execute the partner specific logic using Logic App building blocks, and custom made blocks.
    • Update the status of the request in the database to “Sent”
    • When something goes wrong put the request on the DLQ.
  • Configuration. The nice thing is that this all can be done using available building blocks in the Logic App, so no “real” programming is needed – only configuration. Adding a new partner requires just adding a new branch in the switch and implementing the partner logic.
  • The database is accessed by a REST service, and there are Logic actions that can communicate with a REST service. So accessing the database can be done in a very standard way.

The feedback part is also a bit simpler now

  • One Logic App will poll every hour for those partners who work like that. This means that this App will have a block per “polling partner” which will retrieve the states for the open requests, transform them into a standard format and put them in the Feedback Queue. So the trigger for this Logic App is just a schedule.
  • Some partners communicate their feedback by putting a file on an FTP location. This is the trigger and the handling is a bit different:
    • Interpret the file contents and transform them into JSON.
    • For each row in the JSON collection execute the same logic as before.
    • Delete the file.
    • Again, these are all existing blocks in a Logic App, so no “real” programming!

The “Feedback Q handler” is again simple. Because the FTP Logic Apps (Notice the plural!) make sure that the feedback records are stored one by one on the “Feedback Queue“, all we have to do is to update the status in the database, and possibly execute a callback web service.

Conclusion

Thanks to MS Azure I was able to easily split the application into several small blocks that are easy to implement and to test. In the end we reduced a programming problem to a configuration problem. Of course some programming remains to be done, for example the “Repository Service” and possibly some other services to cover more exotic cases.


Areas in ASP.NET Core

Introduction

In a default MVC application everything is organized by Controllers and Views. The controller name determines the first part of your URL, and the controller method the second part. By default the view that will be rendered has the same name as the method in the controller, although this isn’t required.

So when you create a HomeController in your application with a method About( ), you have defined the URL Home/About for your application. Easy. For small web applications this is sufficient, but when things start to get bigger you may want to add another level to your URL.

Areas

This is done by creating separate areas in the application. You can create as many areas as you like, and you can consider each area as a separate part of your application, with its own controllers, views and models. So now you can make an “Admin” area for user management and other “admin stuff.” The nice thing is that now your URL will have an additional level, such as Admin/Users/Create.

This allows organizing your project in a logical way, but don’t exaggerate. I have seen projects where areas only contain 1 controller. In that case the advantage of using an area is gone, and worse yet: you haven’t simplified your application, but added an extra layer for nothing. The KISS principle is still one of the most important principles in software engineering!

The problem

In the old ASP.NET MVC all you have to do is

  1. Right-click on the project level, select “Add area”
  2. Enter the name of the area
  3. Everything is done for you: you get a nice solution folder for your area, routing is set up, …

Looking for this menu item in ASP.NET Core was disappointing: it is not there anymore. I couldn’t imagine that areas had disappeared, so I consulted my friend Google. This led me to this page in the Microsoft Docs: https://docs.microsoft.com/en-us/aspnet/core/mvc/controllers/areas.

So how does it work in ASP.NET Core?

I want to create a new area in my application, called “Reports”. We already know that right-clicking doesn’t work anymore, so here are the steps.

Create a folder “Areas”

Right-click on the project > Add > Folder, and enter “Areas”.

MVC will by default search for /Areas/… which is why you need this folder. If you want to give it a different name you also need to configure your RazorViewEngineOptions. More info on https://docs.microsoft.com/en-us/aspnet/core/mvc/controllers/areas.

Now right-click the Areas folder and add a new folder called “Reports”. And under Reports, create the 3 folders “Controllers”, “Models” and “Views”.

Caveat

The views under your area don’t share the _ViewImports.cshtml and _ViewStart.cshtml files. This means that your site layout will not be automatically applied to your area’s views. The solution is simple: copy both files into the corresponding Views folder under the area.

The standard _ViewStart.cshtml looks like this:

@{
    Layout = "_Layout";
}

If you want to use the same layout in your areas you should change the copied file to

@{
    Layout = "~/Views/Shared/_Layout.cshtml";
}

Of course, if you want your area to have a different layout you don’t have to do this. You can then create a “Shared” folder under the Views folder and create a new _Layout.cshtml there.

We’re ready to add some code now.

Create a HomeController in the Reports Area

Right-click on the newly created Controllers folder > Add > Controller. This takes you to the familiar “Add Scaffold” dialog box; we choose to add an empty controller.


Name the controller “HomeController” and let VS2017 do its scaffolding work for you. We now have a controller with the Index( ) method already implemented. This controller is created under the area’s folder structure, but for ASP.NET Core this isn’t enough: we need to indicate which area it belongs to. This is easy:

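The controller then looks roughly like this (the namespace and the returned text are illustrative):

using Microsoft.AspNetCore.Mvc;

namespace MyWebApp.Areas.Reports.Controllers   // namespace is illustrative
{
    [Area("Reports")]   // tells MVC that this controller belongs to the Reports area
    public class HomeController : Controller
    {
        // Returning a string instead of an IActionResult keeps the demo simple: no view needed.
        public string Index()
        {
            return "Hello from the Reports area!";
        }
    }
}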

The [Area("Reports")] attribute does the trick. This means that areas and the folder structure are now decoupled.

As you may notice I also changed the return type of Index( ) to string, and I return … a string. This string will be returned literally to the browser when this page is requested. Of course we could have gone through the trouble of creating a view, but let’s keep things simple in this demo.

Inform ASP.NET Core that areas are involved

MVC determines which controller to instantiate, and which method in the controller to call by means of routing. Routing is implemented by templated routing tables, as you can see below. By default there is 1 route template defined:

routes.MapRoute(
    name: "default",
    template: "{controller=Home}/{action=Index}/{id?}");

In the template we see {controller=Home}/{action=Index}. The first URL segment determines which controller is instantiated: for a URL such as http://localhost:12345/Test/Index, the class TestController will be instantiated. The second part is easy to explain too: the method Index( ) will be called. That is basically how routing works.

When we start the site we don’t want (our users) to type http://localhost:12345/Home/Index, which is why default values are defined: when we just type http://localhost:12345 the default HomeController will be instantiated, and its default Index( ) method will be called.

URLs are matched against the routes in the routing table, and the first match wins. This means that the “areaRoute” best comes first in the routing table. This is all defined in the Startup.cs file in the project folder. Go to the Configure method, find where the routes are mapped, and add the area route shown below:

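This is roughly what the mapping looks like (assuming the classic UseMvc routing shown earlier; the exact surrounding code in your Configure method may differ):

app.UseMvc(routes =>
{
    // The area route must come before the default route, so it is matched first.
    routes.MapRoute(
        name: "areaRoute",
        template: "{area:exists}/{controller=Home}/{action=Index}/{id?}");

    routes.MapRoute(
        name: "default",
        template: "{controller=Home}/{action=Index}/{id?}");
});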

Now we can check whether our work has paid off:

  1. Start the application (ctrl + F5). This will show the default home page (no areas involved).
  2. Browse to http://localhost:12345/Reports/Home/Index. Of course 12345 depends on your configuration. We now see the string that we returned from the area controller. And of course http://localhost:12345/Reports/Home/ and http://localhost:12345/Reports/ return the same result, because Home and Index are the default values in the area route template.

Generating a link to the Reports/Home controller

Somewhere in the application we’ll want to refer to the newly created controller. This is typically done from the _Layout.cshtml view, which serves as a template for all your pages. By default a top menu is created for easy navigation between your pages.

We don’t want to hard-code links, because then part of the advantage of using the MVC framework disappears (and we have to construct the link ourselves, which always leaves room for error). In the navigation we find links like this:

<ul class="nav navbar-nav">
    <li><a asp-area="" asp-controller="Home" asp-action="Index">
		Home
	</a>
    </li>
    <!--   other links  -->
</ul>

The TagHelpers clearly indicate the intent of this line: a link to Home/Index is created.

So for our Reports home page we just need to fill in the area:

<li><a asp-area="Reports" asp-controller="Home" asp-action="Index">
	Reports
    </a>
</li>

This will do the trick. We have created a new (top-level) menu item that will open our great Reports page. The link will be http://localhost:12345/Reports. The /Home/Index part is left out because MVC knows from its routing tables that these are the default values.

Conclusion

Adding an area is slightly more complex now, but the documentation was quite clear. I will need to do this more than once, hence this post.

References

https://docs.microsoft.com/en-us/aspnet/core/mvc/controllers/areas

https://docs.microsoft.com/en-us/aspnet/core/mvc/views/tag-helpers/intro

https://docs.microsoft.com/en-us/aspnet/core/mvc/views/layout


Automating the creation of NuGet packages with different .NET versions

Introduction

I created a couple of straightforward libraries to be used in almost every project. So evidently these libraries are a good candidate for NuGet. This will decouple the libraries from the projects that they are used in. It also enforces the Single Responsibility Principle, because every NuGet package can be used on its own, with only dependencies on (very few) other NuGet packages.

Creating the packages for one version of .NET is quite straightforward, and well-documented. But the next request was: “Can you upgrade all our projects from .NET 4.5.2 to .NET 4.6.1, and later to .NET 4.7?”

The plan

We have over 200 projects that should preferably all be compiled against the same .NET version. So opening each project, changing its version, and recompiling it by hand is not really an option…

<PropertyGroup>
  <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
  <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
  <ProjectGuid>{21E95439-7A66-4C75-ACC5-1B9A5FF4A32D}</ProjectGuid>
  <OutputType>Library</OutputType>
  <AppDesignerFolder>Properties</AppDesignerFolder>
  <RootNamespace>MyProject.Clients</RootNamespace>
  <AssemblyName>MyProject.Clients</AssemblyName>
  <TargetFrameworkVersion>v4.6.1</TargetFrameworkVersion>
  <FileAlignment>512</FileAlignment>
  <TargetFrameworkProfile />
</PropertyGroup>


Investigating the .csproj files I noticed that there is one instance of the <TargetFrameworkVersion> element that contains the .NET version. When I change it, the .NET version property in Visual Studio is effectively changed. So this is easy: using Notepad++ I replace this in all *.csproj files and recompile everything. This works, but …
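
As an aside, if you prefer not to rely on a manual find-and-replace, a small throwaway program can do the same thing. This is only a sketch; the root folder and the versions are examples to adapt to your own situation.

using System;
using System.IO;

class RetargetProjects
{
    static void Main()
    {
        // Example values; adjust the root folder and the versions to your own situation.
        string root = @"C:\Source\MySolution";
        string oldVersion = "<TargetFrameworkVersion>v4.5.2</TargetFrameworkVersion>";
        string newVersion = "<TargetFrameworkVersion>v4.6.1</TargetFrameworkVersion>";

        foreach (string csproj in Directory.EnumerateFiles(root, "*.csproj", SearchOption.AllDirectories))
        {
            string content = File.ReadAllText(csproj);
            if (content.Contains(oldVersion))
            {
                File.WriteAllText(csproj, content.Replace(oldVersion, newVersion));
                Console.WriteLine("Updated " + csproj);
            }
        }
    }
}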

What about the NuGet packages?

The packages that I created work for .NET 4.5.2, but now we’re at .NET 4.6.1. So this is at least not optimal, and the assemblies will possibly not link together properly. So I want to update the NuGet packages to contain both versions. That way developers whose solutions are still at 4.5.2 will automatically use that version, and developers at 4.6.1 theirs. Problem solved. But how …

Can this be automated?

Creating the basic NuGet package

This is explained quite well on the nuget.org website. These are the basic steps:

Technical prerequisites

Download the latest version of nuget.exe from nuget.org/downloads, saving it to a location of your choice. Then add that location to your PATH environment variable if it isn’t already.
Note:  nuget.exe is the CLI tool itself, not an installer, so be sure to save the downloaded file from your browser instead of running it.

I copied this file to C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\Tools, which is already in my PATH variable (Developer command prompt for VS2015). So now I have access to the CLI from everywhere, provided that I use the Dev command prompt of course.
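To quickly check that the CLI is reachable, you can run for example

nuget help

from that prompt; it should list the available commands.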

So now we can use the NuGet CLI, described here.

Nice to have

https://github.com/NuGetPackageExplorer/NuGetPackageExplorer

From their website:

NuGet Package Explorer is a ClickOnce & Chocolatey application that makes it easy to create and explore NuGet packages. After installing it, you can double click on a .nupkg file to view the package content, or you can load packages directly from nuget feeds like nuget.org, or your own Nuget server.

This tool will prove invaluable when you are trying some more exotic stuff with NuGet.

It is also possible to change a NuGet package using the package explorer. You can change the package metadata, and also add content (such as binaries, readme files, …).

image

Prerequisites for a good package

An assembly (or a set of assemblies) is a good candidate for a package when it has as few dependencies as possible. For example, a logging package would only do logging, and nothing more. That way the NuGet package can be used everywhere, without special conditions. When dependencies are necessary, they should preferably be on other NuGet packages.

Creating the package

In Visual Studio, create a project of your choice. Make sure that it compiles well.

Now open the DEV command prompt and enter

nuget spec

in the folder containing your project file. This will generate a template .nuspec file that you can use as a starting point. This is an example .nuspec file:

<?xml version="1.0"?>
<package xmlns="http://schemas.microsoft.com/packaging/2013/05/nuspec.xsd">
  <metadata>
    <!-- The identifier that must be unique within the hosting gallery -->
    <id>Diagnostics.Logging</id>

    <!-- The package version number that is used when resolving dependencies -->
    <version>1.1.0</version>

    <!-- Authors contain text that appears directly on the gallery -->
    <authors>Gaston</authors>

    <!-- Owners are typically nuget.org identities that allow gallery
         users to easily find other packages by the same owners.  -->
    <owners>Gaston</owners>

    <!-- License and project URLs provide links for the gallery -->
<!--
    <licenseUrl>http://opensource.org/licenses/MS-PL</licenseUrl>
    <projectUrl>http://github.com/contoso/UsefulStuff</projectUrl>
-->
    <!-- The icon is used in Visual Studio's package manager UI -->
<!--
    <iconUrl>http://github.com/contoso/UsefulStuff/nuget_icon.png</iconUrl>
-->
    <!-- If true, this value prompts the user to accept the license when
         installing the package. -->
    <requireLicenseAcceptance>false</requireLicenseAcceptance>

    <!-- Any details about this particular release -->
    <releaseNotes>Added binaries for .NET 4.6.1</releaseNotes>

    <!-- The description can be used in package manager UI. Note that the
         nuget.org gallery uses information you add in the portal. -->
    <description>Logging base class </description>
    <!-- Copyright information -->
    <copyright>Copyright ©2017</copyright>

    <!-- Tags appear in the gallery and can be used for tag searches -->
    <tags>diagnostics logging</tags>

    <!-- Dependencies are automatically installed when the package is installed -->
    <dependencies>
      <!--<dependency id="EntityFramework" version="6.1.3" />-->
    </dependencies>
  </metadata>

  <!-- A readme.txt will be displayed when the package is installed -->
  <!--
  <files>
    <file src="readme.txt" target="" />
  </files>
  -->
</package>

Now run

nuget pack

in your project folder, and a NuGet package will be generated for you.
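By convention the output file is named {id}.{version}.nupkg, so with the example .nuspec above you would get Diagnostics.Logging.1.1.0.nupkg.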

Verifying the package

If you want to check whether the contents of your package are correct, use NuGet Package Explorer to open it.

image

Here you see a package that I created. It contains some metadata on the left side, and the package content for the 2 .NET versions on the right side. You can use this tool to add more folders and to change the metadata. This is all well and good, but not very automated. For example, how can we create a NuGet package like this one, containing 2 .NET versions of the libraries?

Folder organization

I wanted to separate the creation of the package from the rest of the build process. So I created a NuGet folder in my project folder.

I moved the .nuspec file into this folder as a starting point, and then I created a batch file that solves the following problems:

  1. Create the necessary folders
  2. Build the binaries for .NET 4.5.2
  3. Build the binaries for .NET 4.6.1
  4. Pack both sets of binaries in a NuGet package

I also wanted this script to be easily configurable, so I used some variables.

The script

Initializing the variables

set ProjectLocation=C:\_Projects\Diagnostics.Logging
set Project=Diagnostics.Logging

set NugetLocation=%ProjectLocation%\NuGet\lib
set ProjectName=%Project%.csproj
set ProjectDll=%Project%.dll
set ProjectNuspec=%Project%.nuspec
set BuildOutputLocation=%ProjectLocation%\NuGet\temp

set msbuild="C:\Program Files (x86)\MSBuild\14.0\bin\msbuild.exe"
set nuget="C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\Tools\nuget.exe"

The first 2 variables are the real parameters. All the other variables are built from these 2.

The %msbuild% and %nuget% variables allow running these commands without having to modify the PATH. Thanks to these 2 lines the script will run in any “DOS prompt”, not just in the Visual Studio Command Prompt.

Setting up the folder structure

cd /d %ProjectLocation%\NuGet
md temp
md lib\lib\net452
md lib\lib\net461
copy /Y %ProjectNuspec% lib
copy /Y readme.txt lib

image

In my batch file I don’t want to rely on the existence of a specific folder structure, so I simply create it. I know that I could first test whether a folder exists before trying to create it, but the end result is the same.

Notice that I created Lib\Lib. The first level contains the necessary “housekeeping” files to create the package, the second level will contain the actual content that goes into the package file. The 2 copy statements copy the “housekeeping” files.
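To make this concrete, the resulting layout for the example project looks roughly like this (names taken from the variables and copy statements above):

NuGet\
    temp\                          (build output, %BuildOutputLocation%)
    lib\                           (%NugetLocation%, the “housekeeping” level)
        Diagnostics.Logging.nuspec
        readme.txt
        lib\
            net452\                (assemblies built for .NET 4.5.2)
            net461\                (assemblies built for .NET 4.6.1)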

Building the project in the right .NET versions

%msbuild% "%ProjectLocation%\%ProjectName%" /t:Clean;Build /nr:false /p:OutputPath="%BuildOutputLocation%";Configuration="Release";Platform="Any CPU";TargetFrameworkVersion=v4.5.2
copy /Y "%BuildOutputLocation%"\%ProjectDll% "%NugetLocation%"\lib\net452\%ProjectDll%

%msbuild% "%ProjectLocation%\%ProjectName%" /t:Clean;Build /nr:false /p:OutputPath="%BuildOutputLocation%";Configuration="Release";Platform="Any CPU";TargetFrameworkVersion=v4.6.1
copy /Y "%BuildOutputLocation%"\%ProjectDll% "%NugetLocation%"\lib\net461\%ProjectDll%

The secret is in the /p switch

When we look at a .csproj file, we see that there are <PropertyGroup> elements containing a lot of child elements. Here is an extract:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="12.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Import Project="$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\Microsoft.Common.props" Condition="Exists('$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\Microsoft.Common.props')" />
  <PropertyGroup>
    <!--  …   -->
    <OutputType>Library</OutputType>
    <!--  …   -->
    <TargetFrameworkVersion>v4.6.1</TargetFrameworkVersion>
    <!--  …   -->
  </PropertyGroup>

Each element under the <PropertyGroup> element is a property that can be set, typically in Visual Studio (Project settings). So compiling for another .NET version is as simple as changing the <TargetFrameworkVersion> element and executing the build.

But the /p flag makes this even easier:

%msbuild% "%ProjectLocation%\%ProjectName%" 
          /t:Clean;Build /nr:false 
          /p:OutputPath="%BuildOutputLocation%";Configuration="Release";Platform="Any CPU";TargetFrameworkVersion=v4.5.2

In this line MSBuild is executed, and the properties OutputPath, Configuration, Platform and TargetFrameworkVersion are set using the /p switch. This makes building for different .NET versions easy. You can find more information about the MSBuild switches here.

So now we are able to script the compilation of our project in different locations for different .NET versions. Once this is done we just need to package and we’re done!

cd /d "%NugetLocation%"
%nuget% pack %ProjectNuspec%

Conclusion

We automated the creation of NuGet packages with an extensible script. In the script as much as possible is parameterized so it can be used for other packages as well.

It is possible to add this to your build scripts, but be careful not to rebuild and redeploy your NuGet packages when nothing in them has changed. This is why I like to keep the script handy and run it only when the packages are actually modified.

References

MSBuild Reference

NuGet CLI Reference
