Contributing to open source, part II

In my previous post I detailed what open source projects we have contributed code to; this post will highlight what projects we have made publicly available.

We host all our open source projects on GitHub; feel free to browse our repositories. I would like to highlight just one project in this post: AzureWorkers.

AzureWorkers is a project we started back in August to create a framework for running multiple worker threads within a single Azure Worker Role. It allows us to quickly spin up more workers to do different things. I have already posted on how to use this project, so I won’t repeat myself, but instead talk about why we chose to open source it rather than keep it closed off.

Looking at GitHub president Tom Preston-Werner’s blog post about this same issue, he basically makes the points for me! So, thank you Tom! I would like to highlight two things though:

  1. We do not open source business critical parts
  2. Open sourcing parts of our stack makes it possible and legal for us to use code written at work for hobby/home projects

Business critical parts

For us, all code related to risk management in any way is business critical; it is the essence of UXRisk and what Proactima can bring to the software world. This needs to be protected, so we do not open source it. So far it has been very easy to determine whether something is business critical or not; presumably it will get harder as we develop more code in the gray area between technology and risk knowledge.

Use in hobby/home projects

In most employment contracts it is specified that all work carried out during office hours belongs to your employer, which is only to be expected! But if you open source that work, then you are free to use the code/result in other projects too! A colleague of mine, Anders Østhus, is using our AzureWorkers in his latest project (to be published, I hope!). This would have been hard to do, legally, if we had not open sourced that project.

In summary I would like to thank my employer for allowing me to not only blog about my work, but also to share the fruits of our labor with the world. So thank you Proactima!

Ninject and Factory Injection

We have a requirement to instantiate a service facade based on a single parameter; this parameter is then used to load the appropriate configuration settings. A fellow team member reminded me that this looked like a factory requirement, so I started to look at Ninject Factory. At first I didn’t quite get the examples, but decided to just try it out and see what would work. Turns out to be pretty simple! This post is mostly a reminder to myself and perhaps a bit of guidance for others looking at doing the same.

There are three requirements for using Ninject Factory:

  1. Install the NuGet package (Ninject.Extensions.Factory)
  2. Create a factory interface
  3. Create a binding to the interface

Point number two looked like this for me:
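
A minimal sketch of such a factory interface (the interface and method names here are placeholders, not the original code):

public interface IServiceFacadeFactory
{
    IServiceFacade Create(string prefix);
}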

The IServiceFacade interface and concrete implementation look like this:
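
Again as a sketch with placeholder member names; the important part is that the ServiceFacade constructor takes the prefix string plus an IService that Ninject resolves through its regular bindings:

public interface IServiceFacade
{
    void DoSomething();
}

public class ServiceFacade : IServiceFacade
{
    private readonly string _prefix;
    private readonly IService _service;

    public ServiceFacade(string prefix, IService service)
    {
        _prefix = prefix;   // used to load the appropriate configuration settings
        _service = service; // resolved by Ninject through the normal bindings
    }

    public void DoSomething()
    {
        // ... use _prefix and _service here
    }
}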

Tying it all together is the Ninject Module:
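
A sketch of what the module could look like (SomeService stands in for the concrete IService implementation):

using Ninject.Extensions.Factory;
using Ninject.Modules;

public class FacadeModule : NinjectModule
{
    public override void Load()
    {
        // ToFactory() makes Ninject generate a proxy implementation of the factory interface.
        Bind<IServiceFacadeFactory>().ToFactory();

        Bind<IServiceFacade>().To<ServiceFacade>();
        Bind<IService>().To<SomeService>();
    }
}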

The ToFactory() binding will make Ninject create a proxy that calls the constructor on ServiceFacade, with the prefix input and the IService implementation (bound further down in the same module). To make use of the factory I did this:
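
A sketch of the consuming code (FacadeConsumer is just an illustration):

public class FacadeConsumer
{
    private readonly IServiceFacadeFactory _facadeFactory;

    // The factory interface is injected like any other dependency.
    public FacadeConsumer(IServiceFacadeFactory facadeFactory)
    {
        _facadeFactory = facadeFactory;
    }

    public void Handle(string prefix)
    {
        // This call goes through the Ninject-generated proxy, which news up a
        // ServiceFacade with the given prefix and the bound IService.
        var facade = _facadeFactory.Create(prefix);
        facade.DoSomething();
    }
}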

Here I inject the factory interface and call the Create method. The magic happens inside that Create method, since it is a proxy generated for me by Ninject! The really cool thing is that all regular bindings still apply, so if you look at the constructor for ServiceFacade, it takes the prefix string and an interface (IService) that I bind in my module. Stepping into (F11) the Create method in debug mode, I end up in the constructor for ServiceFacade, perhaps as expected, but very cool!

Also worth mentioning: you could have more inputs to the Create method, and only the names matter; ordering does not. So if I needed both prefix and postfix I could have them in any order between the factory interface and the constructor; as long as the names match, it’s OK. And finally, you are not restricted to one method on the factory interface, so I could have had a CreateWithPrefix and a CreateWithPostfix method, etc.

The complete source for this is not available, as it’s part of a much bigger project known as UXRisk and it is not open source.

Contributing to open source, part I

Proactima values knowledge, and part of that is sharing it. One way of sharing, in software development, is to contribute to open source projects. So in UXRisk we have contributed to open source projects that we use:

Ninject.Extensions.Azure

We have contributed minor fixes to package versions and the ability to inject on Azure Worker Roles. These were just small changes that we needed in our work, but it was very rewarding to be able to fix them ourselves.

Semantic Logging Application Block

We are using ElasticSearch (ES) for our logging needs and there was no built-in sink to store logs in ES, so we created our own implementation. We basically reused the code from the Azure Table Storage sink and adapted it to our needs. In cooperation with another member of the Semantic Logging Application Block (SLAB) CodePlex site, our code was accepted into the master branch of SLAB.

Experiences with our first Windows Store publish

We just published our app to the Windows Store for the first time, and except for a few minor issues that I’ll mention here, it went really smoothly :)

We failed two times before passing, and the issues were minor.

We’ve been running the Windows Store App certification checks a few times during development to make sure that we were not too far off. So when we were ready to publish, we did so again. The only issue we had was related to having too-long assembly names (more specifically the Application Insights DLL), so that we exceeded the 256-character path limit (I know fixing this limit will break a lot of stuff, but I would still prefer that they fix it). So, to get around that issue (for now at least), we disabled NGEN by adding a nongen.txt file to our project. All green.

So, our first submission passed all the technical parts, but failed on the Content part. The reason being twofold:

  • We had specified 3+ as the age rating. This was done because we don’t have any content that should be offensive to anybody. Apparently, since we do require you to log in to use the app, the age rating needs to be 12+.
  • We didn’t provide a register/sign-up link. This is because we’re in closed beta and want to keep control of the users. The requirement is that you must say something about how to get access, so we ended up adding a couple of lines in the description along with an email address to contact. That was enough :)

Ok, so then we tried again, and again we failed on content. This time the issue was the sign-in process:

  • We’re using ADAL and WAAD (Windows Azure Active Directory) to handle authentication. The way this works in a WinRT app is that it uses the WebAuthenticationBroker to show the login page. The WebAuthenticationBroker has a RedirectURI that is generated from app identity components (publisher name, app name, certificate and so on), and it needs to match the application configured in WAAD. When we made the package, the process updated our publisher name and app name to align with what we had in the Windows Store Dev Center. This means that the RedirectURI changed without us checking what the new URI was, which caused the content certification to fail since the testers couldn’t log in.
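
One way to catch this before submission is to dump the callback URI the broker actually uses and compare it to the redirect URI registered on the WAAD application. GetCurrentApplicationCallbackUri is the standard WinRT API for this; the logging around it is just illustrative:

using System.Diagnostics;
using Windows.Security.Authentication.Web;

// The broker derives this URI from the package identity (publisher, app name, certificate),
// so it changes whenever those change. Check it after every packaging change.
var callbackUri = WebAuthenticationBroker.GetCurrentApplicationCallbackUri();
Debug.WriteLine("Current callback URI: " + callbackUri);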

We fixed that, submitted again, and YAY! It passed. Sweet :)

So, to close, I have to give proper credit to the process from Microsoft. It was smooth, and the feedback loop was fast. They do say how long the certification process is expected to take, but in our experience it went a lot faster.

The first two attempts were done on a Friday, and from the first submission to the second failure it took about 4 hours (assuming you submit during the working hours of the people who handle this).

The third attempt was done on a Monday, and that submission took about 4 hours (I would assume this could be because a lot of people submit through the weekend to get a new version out on the following Monday). But all in all, a very positive experience :)

Unit testing WinRT UI controls

When unit testing UI controls you might run into issues with async calls, proper initialization, and messy setup code that pollutes your unit tests with noise.

Since UITestMethod doesn’t support await, you are basically forced to use the TestMethod attribute and run the test via the Dispatcher (as DependencyObjects can only be created on the UI thread).

So, let’s just skip to the gooey, yummy center and have a look at the extension method:
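
The sketch below shows the rough idea. It is written as an extension on CoreDispatcher, with a TaskCompletionSource standing in for the EventAsync helper mentioned below; the names, the receiver and the use of Window.Current to host the control are all assumptions rather than the original code:

using System;
using System.Threading.Tasks;
using Windows.UI.Core;
using Windows.UI.Xaml;

public static class UiControlTestExtensions
{
    // Creates the control on the UI thread, waits for Loaded, runs the supplied test,
    // and propagates any exception (including failed asserts) back to the awaiting test.
    public static async Task RunControlTestAsync<TControl>(
        this CoreDispatcher dispatcher, Func<TControl, Task> test)
        where TControl : FrameworkElement, new()
    {
        var completion = new TaskCompletionSource<object>();

        await dispatcher.RunAsync(CoreDispatcherPriority.Normal, async () =>
        {
            try
            {
                var control = new TControl();

                // Wait for the Loaded event before poking at the control.
                var loaded = new TaskCompletionSource<object>();
                control.Loaded += (s, e) => loaded.TrySetResult(null);
                Window.Current.Content = control;
                await loaded.Task;

                await test(control);
                completion.SetResult(null);
            }
            catch (Exception ex)
            {
                completion.SetException(ex);
            }
        });

        await completion.Task;
    }
}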

In short, this method takes an action (which is your test), instantiates the control for you, and makes sure it is initialized before running the test (waiting for the Loaded event to fire), which is done using the EventAsync class from the WinRT XAML Toolkit.

It then runs the supplied assertions after your action completes.

This enables us to write an easier-to-read unit test like this:
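
A hypothetical example built on the sketch above; MyControl, its HeaderText property and the HeaderTextBlock element name are placeholders for whatever control is under test:

[TestMethod]
public async Task Header_IsShown_WhenHeaderTextIsSet()
{
    var dispatcher = CoreApplication.MainView.CoreWindow.Dispatcher;

    await dispatcher.RunControlTestAsync<MyControl>(async control =>
    {
        control.HeaderText = "Hello";
        await Task.Yield(); // give the bindings a chance to update

        var headerTextBlock = (TextBlock)control.FindName("HeaderTextBlock");
        Assert.AreEqual("Hello", headerTextBlock.Text);
    });
}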

And that should hopefully be a cleaner way of doing WinRT UI control testing.
Sorry about the code formatting; will need to get a proper template for that.

Transient Fault Handling Application Block

In our project we use ElasticSearch as our search backend. We started seeing some unable-to-connect exceptions and decided to add retry logic to our queries. To do so in an orderly manner we use the Transient Fault Handling Application Block (“topaz”) and a custom transient error strategy.

As we were implementing “topaz” we also decided to add a timeout value for our queries so that the client would not be stuck on long-running queries. Basically, if the query fails after X amount of time we return an error message and the client can then retry.

Combined, we wanted simple retry logic with a timeout on each retry. This is what our calling code looks like:
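
As a rough sketch (the retry and timeout numbers, the SearchQuery/SearchResult types and the ExecuteSearchAsync method are placeholders):

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Practices.EnterpriseLibrary.TransientFaultHandling;

public async Task<SearchResult> QueryWithRetryAsync(SearchQuery query)
{
    // Retry up to 3 times, waiting 2 seconds between attempts, using the custom
    // transient error detection strategy shown below.
    var retryPolicy = new RetryPolicy<ElasticSearchTransientErrorStrategy>(
        new FixedInterval(3, TimeSpan.FromSeconds(2)));

    return await retryPolicy.ExecuteAsync(async () =>
    {
        // Each attempt gets its own timeout so the client is never stuck on a long query.
        using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(10)))
        {
            return await ExecuteSearchAsync(query, cts.Token);
        }
    });
}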

Our custom retry strategy is super simple:
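
A sketch of such a strategy; exactly which exception types should count as transient will depend on the ElasticSearch client in use, so the list below is an assumption:

using System;
using System.Net;
using Microsoft.Practices.EnterpriseLibrary.TransientFaultHandling;

public class ElasticSearchTransientErrorStrategy : ITransientErrorDetectionStrategy
{
    public bool IsTransient(Exception ex)
    {
        // Connection problems and per-attempt timeouts are worth retrying;
        // everything else bubbles up to the caller immediately.
        return ex is WebException
            || ex is TimeoutException
            || ex is OperationCanceledException;
    }
}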

Azure Workers

In my new project (UXRisk) we had a requirement to do a lot of background processing, based on messages passed over Azure Service Bus. We did a lot of research online, found good examples of how to do this properly, and the outcome was a project we call AzureWorkers.

AzureWorkers makes it super easy to run multiple workers in one Azure Worker Role, all async and safe. As the implementor you basically just have to inherit from one of three base classes (depending on whether Service Bus or Storage Queue is used or not), and that class will run in its own thread and will be restarted if it fails.

There are four supported scenarios:

  • Startup Task – Will only be executed when the Worker Role starts. Implement IStartupTask to enable this scenario.
  • Base Worker – Will be called continuously (basically every second), for you to do work and control the timer. Inherit from BaseWorker to enable this scenario.
  • Base Queue Worker – Will call the Do method with messages retrieved from the Azure Storage Queue. Inherit from BaseQueueWorker to enable this scenario.
  • Base ServiceBus Worker – Will call the Do method whenever a message is posted to the topic specified in the TopicName overload. Inherit from BaseServiceBusWorker to enable this scenario.
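
As a rough sketch, a Service Bus worker might look something like the code below. The exact base-class member signatures (TopicName, Do and the message type) are assumptions here, so check the example project on GitHub for the real API; OrderWorker and ProcessOrderAsync are placeholders.

using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

public class OrderWorker : BaseServiceBusWorker
{
    // Messages posted to this topic are handed to Do.
    public override string TopicName
    {
        get { return "orders"; }
    }

    public override async Task Do(BrokeredMessage message)
    {
        // Do the actual work; completing/deleting the message is left to the
        // implementer (by intent, as noted further down).
        await ProcessOrderAsync(message.GetBody<string>());
        await message.CompleteAsync();
    }
}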

On GitHub an example project is included to document these scenarios. To get started using AzureWorkers you can get the NuGet package.

Please note

AzureWorkers depends on these projects:

  • Ninject – version 3.0.2-unstable-9038
  • Ninject.Extensions.Azure – version 3.0.2-unstable-9009
  • Ninject.Extensions.Conventions  – version 3.0.2-unstable-9010
  • Microsoft.WindowsAzure.ConfigurationManager – version 2.0.1.0
  • WindowsAzure.ServiceBus – version 2.2.1.0

Some of the code has been borrowed from a blog post by Mark Monster and a blog post by Wayne Walter Berry.

There are at least two issues with the code right now:

  • If the processing of a message fails, it will re-post the message directly and retry with no waiting time (Service Bus Worker).
  • The implementer has to delete messages manually (by intent).
  • It depends on Ninject, not a generic IoC framework

We will accept PRs to alleviate these issues.

Sending Post request to external dashboard server in TFService build

At work we wanted to have an external dashboard to show our build status. Mainly because it’s cool, but also because I don’t like the build status events in the Team Foundation team room. It gets really messy when running CI :\

NB! I am not taking responsibility for screwups in this process, but I will try to help you if you have issues!

So, to start with, I created a basic dashboard. It’s a simple Azure WebSite, using Twitter Bootstrap, Knockout.js and SignalR.

The basic setup is:

  • Basic website with Knockout.js for Model-View bindings (I love not having to do manual code to update the UI)
  • Azure Table store to save the build statuses in (BuildName, Initiated by, Status)
  • WebApi endpoint for the dashboard to populate the initial view from
  • WebApi endpoint to post buildstatus to
  • SignalR to update the dashboard when a new status is received (SignalR is awesome, if you’re not using it, you probably should :) )
  • As a bonus, we also added a check to the post status, so that if the build being posted is the CI deploy of the dashboard and the status is OK, we refresh the dashboard. It’s awesome.

The requirements for the Post status stuff:

  • One script to work across builds
  • Easy to implement in new builds
  • Easy to extend
  • Should just work…

I’ve never worked with the Team Foundation Build service before (nor with MSBuild, for that matter), so it took some time to figure out how to do it and where to get the data from. That’s why I’m putting it down here; there might be others wanting to do the same, or similar :)

So, first of all, I had to make a script that would post the data to the endpoint. The endpoint looks something like this:

public class StatusController : ApiController
{
    private readonly IBuildStatusHandler _statusHandler;

    public StatusController(IBuildStatusHandler statusHandler)
    {
        _statusHandler = statusHandler;
    }

    [HttpPost]
    public void Post(BuildStatus status)
    {
        _statusHandler.SetStatus(status);
    }
}

public class BuildStatus
{
    public string BuildName { get; set; }
    public BuildStatusCode Status { get; set; }
    public string Initiator { get; set; }
    public DateTimeOffset StatusTime { get; set; }
}

public enum BuildStatusCode
{
    Ok,      // 0
    Running, // 1
    Failed   // 2
}

A very simple endpoint to accept data. No security or anything. The BuildStatusHandler handles the Azure Table Store part and notifies the clients through SignalR; it goes something like this:

public class BuildStatusHandler : IBuildStatusHandler
{
    private readonly IBuildStatusContext _statusContext;

    public BuildStatusHandler(IBuildStatusContext statusContext)
    {
        _statusContext = statusContext;
    }

    public void SetStatus(BuildStatus status)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<BuildsHub>();
        status.StatusTime = DateTimeOffset.UtcNow;

        context.Clients.All.UpdateBuilds(status);

        if (status.BuildName.ToLower().Equals("Web.Dashboard-PROD-Publish-CI".ToLower()) && status.Status == BuildStatusCode.Ok)
            context.Clients.All.RefreshPage();

        _statusContext.AddOrUpdate(status);
    }

    public IEnumerable<BuildStatus> GetAllBuildStatuses()
    {
        return _statusContext.GetAllStatuses();
    }
}

In the above code, the BuildStatusContext is our abstraction of the actual Azure Table Service code.
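
Purely as an illustration (the real implementation is not shown here), an Azure Table Storage backed version could look roughly like this; the table name, the partition/row key choices and the entity mapping are all assumptions:

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public class BuildStatusEntity : TableEntity
{
    public BuildStatusEntity() { }

    public BuildStatusEntity(BuildStatus status)
    {
        PartitionKey = "builds";
        RowKey = status.BuildName; // one row per build definition
        Status = (int)status.Status;
        Initiator = status.Initiator;
        StatusTime = status.StatusTime;
    }

    public int Status { get; set; }
    public string Initiator { get; set; }
    public DateTimeOffset StatusTime { get; set; }
}

public class BuildStatusContext : IBuildStatusContext
{
    private readonly CloudTable _table;

    public BuildStatusContext(string connectionString)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        _table = account.CreateCloudTableClient().GetTableReference("BuildStatus");
        _table.CreateIfNotExists();
    }

    public void AddOrUpdate(BuildStatus status)
    {
        _table.Execute(TableOperation.InsertOrReplace(new BuildStatusEntity(status)));
    }

    public IEnumerable<BuildStatus> GetAllStatuses()
    {
        return _table.ExecuteQuery(new TableQuery<BuildStatusEntity>())
            .Select(e => new BuildStatus
            {
                BuildName = e.RowKey,
                Status = (BuildStatusCode)e.Status,
                Initiator = e.Initiator,
                StatusTime = e.StatusTime
            });
    }
}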

The SetStatus works like this:

  • We first get the SignalR hub.
  • Then we set the current time as the status time (we sort the dashboard by time).
  • Then we send out a SignalR message to all connected clients.
  • We then check if the build name is the same as our Dashboard deploy build and that the status is OK.
    • If so, we notify all connected clients that they should refresh, since a new version has been deployed ;)
  • Finally, we write the status to Table Store

With the dashboard all working, we now need to send data to the Status endpoint. We will be using a PowerShell script along with the awesome Invoke-RestMethod cmdlet, and it goes something like this:

Param(
  [string]$buildName,
  [string]$initiator,
  [int]$status
)

$payload = @{BuildName=$buildName;Initiator=$initiator;Status=$status} | ConvertTo-Json

Invoke-RestMethod http://YOURENDPOINTHERE/api/status -ContentType application/json -Method POST -Body $payload

The script above is very straightforward. No error checking or anything, it just runs straight through :) But it does the job.

We’ve put the script in “$/XXX/BuildFramework”, but you can put it wherever you like. I would still recommend putting it outside your solution branches, since it is actually independent of them.

So, the last step is to configure a custom build template. The reason for doing it this way is simply that I found it to be the simplest and most generic approach. In your “$/XXX/BuildProcessTemplates” you probably have either DefaultTemplate.11.1.xaml or AzureContinuousDeployment.11.xaml. I’ve applied this process to both of those templates, since we use both, but if you can, start with the DefaultTemplate, since it’s the shorter of the two.

So, first of all, make a copy of the template you want to update, and work on the copy!

To easily edit the templates, I’ve created an empty Class Library solution where I’ve removed all code files and configured it not to build. Then I’ve added the template I want to work on as a link, so that the actual file in “$/XXX/BuildProcessTemplates” is updated when you save. The solution looks like this:
[Screenshot: BuildProcessEdit solution structure]

You also need to add a reference to System.Management.Automation, which can be found in: C:\Program Files (x86)\Reference Assemblies\Microsoft\WindowsPowerShell\3.0\System.Management.Automation.dll

So, open the referenced Build Process Template, and you’ll probably be a bit overwhelmed. It’s not really easy to explain the next steps, because there is a lot of information in the template, but I’ll try my best.

First, we need to add some variables to hold the data we need. Click on “Collapse All” in the top right of the window to make everything a bit more visible. Then select the “Run on Agent” box and click on the “Variables” tab in the bottom left. Add the 3 variables I’ve highlighted in the screenshot below.
[Screenshot: Build variables]

The variable PowerShellResult needs to be of the type PSObject[]. To manage that, in the “Variable Type” dropdown select “Array of [T]”, then select “Browse for Types…” and search for “PSObject”.
Also remember to fill in the Default value on two of the variables, and set the scope of all of them to “Run on Agent”.

Ok, now that we’ve got the data we need available, let’s make sure the script itself is available. Double-click on the icon of “Run on Agent” to enter that box. Then double-click on “Initialize Workspace”. Open your toolbox on the left and search for “Download”, as shown below:
[Screenshot: DownloadFiles in the toolbox]

Drag the DownloadFiles activity (not DownloadFile) over to the workflow and put it right after Create Workspace. Highlight it and fill in the properties, just like below:
[Screenshot: DownloadFiles properties for the PostStatus script]

Then, from your toolbox, find the InvokeProcess activity and drag it onto the workflow just after DownloadFiles. Select it and set the following properties on it:
[Screenshot: InvokeProcess properties for the PostStatus script]

Just in case, here is the argument used in that image:

String.Format(" ""& '{0}\src\PostStatus.ps1' -buildName {1} -initiator {2} -status 1"" ", BuildDirectory, BuildDefinitionName, BuildRequestedBy)

Then double-click on the InvokeProcess you just added, find the WriteBuildMessage activity in your toolbox and drag it to the first activity slot. Set its Message property to stdOutput. Find and drag the WriteBuildError activity to the other activity slot, and set its Message property to errOutput.

All done. Now the build process will post a status just after the build starts. But we would also like to capture the build completing or failing, right? The process is almost the same, except that we need some logic as well. Navigate up to the main build execution (top left, click Run on Agent). Then enter Try Compile, Test, …. Then click Expand All in the top right. Now you’ll see the whole main build process :O

Navigate to the bottom, and then work your way up until you find If a Compilation Exception Occurred. Find the If activity in your toolbox and add it to the workflow just before If a Compilation Exception Occurred. Set the condition to: BuildDetail.CompilationStatus = Microsoft.TeamFoundation.Build.Client.BuildPhaseStatus.Succeeded. In the Else branch, add a post-status InvokeProcess just as we did before, but set the status to 2 (or whatever you want to represent a failure).

In the Then branch, add a new If. On this If, set the condition to: BuildDetail.Status = Microsoft.TeamFoundation.Build.Client.BuildStatus.Failed. In its Then branch add the same as you did before (status 2), and in the Else branch add the same, only this time with a build-passed status (0). Below is a screenshot of our setup:
[Screenshot: final PostStatus workflow setup]

Then save and close the workflow, and check it into TFS along with the PostStatus script if you haven’t done so yet. Edit one of the build definitions and select the build process template you just made (you’ll need to click New on the Process page). Run the build, and if everything is set up correctly, you should get a status posted.

Moving forward, my next objective for this is to also include Code Coverage, but we’ll leave that for the next time :)

Class Library + Team Foundation Service + OctoPack + MyGet = Win

We are at the beginning of a new project at work, and in the process of getting up and running we needed to compile, package and publish a NuGet package of a Class Library. This should of course happen automagically on commits to the branch we want to build from.

The basic setup is as follows:

  • TFService
  • OctoPack (Build targets for creating NuGet packages and publishing them)
  • MyGet (Awesome NuGet package hosting) – Create an account at once, you’ll need it a bit later

So start off by creating a regular Class Library. Add some code if you want. Install the OctoPack NuGet package to the solution.

After you’ve installed OctoPack, open the project file for the Class Library, and verify that you have the following line in it (probably at the bottom):

<Import Project="$(SolutionDir)\.octopack\OctoPack.targets" />

You also need to make sure the .octopack\OctoPack.targets file is checked in to TFS.

Then you need to add a .nuspec file. You can either create it manually or run “nuget spec” in the project folder to get a template.

Anyhow, make sure it’s checked in. Below I’ve posted an example .nuspec file. Also, the .nuspec file must be named after your Class Library:
if your Class Library is My.Awesome.Dll, then the .nuspec file must be named My.Awesome.Dll.nuspec.

<?xml version="1.0"?>
<package >
  <metadata>
    <id>My.Awesome.Dll</id>
    <version>1.0.0</version>
    <title>TITLE</title>
    <authors>AUTHOR</authors>
    <owners>OWNER</owners>
    <licenseUrl>LICENSEURL</licenseUrl>
    <projectUrl>PROJECTURL</projectUrl>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>DESCRIPTION</description>
    <releaseNotes>RELEASE NOTES</releaseNotes>
    <copyright>COPYRIGHT</copyright>
    <dependencies>
      <dependency id="WindowsAzure.Storage" version="2.1.0" />
      <dependency id="Microsoft.WindowsAzure.ConfigurationManager" version="1.8.0" />
    </dependencies>
  </metadata>
  <files>
    <file src="..\..\bin\My.Awesome.Dll.dll" target="lib\net45" />
  </files>
</package>

So, the main points to note are the <dependencies> and <files> elements. I’ve listed two example dependencies, but you’ll need to change them to match any NuGet package dependencies you might have. The <files> element specifies which files from the build to include in the package. List all the files you want to include here, and set the target (that is, the folder inside the package) to match the .NET version you support.

In your AssemblyInfo.cs, set these properties to what you want:

[assembly: AssemblyVersion("1.0.*")]
[assembly: AssemblyFileVersion("1.0.*")]

Then set up a normal TFService build with the following exceptions:

On the Process page

  • Use the Default Template
  • Select the solution to build
  • Set up to run tests if you use them (you should)
  • Under Advanced, MSBuild Arguments, add the following: /p:RunOctoPack=true /p:OctoPackPublishPackageToHttp=YOURMYGETURL /p:OctoPackPublishApiKey=YOURMYGETAPIKEY

Save the build definition and run it. It should be working now.