Database Integrations for MicroServices

09 Jun 2016

This is a follow-up post after seeing Michal Franc's NDC talk on migrating away from monolithic architectures.

One point raised was that Database Integration points are a terrible idea - and I wholeheartedly agree. However, there are a number of situations where a Database Integration is the best or only way to achieve the end goal. The reason can be technical - say, a tool does not support querying an API (looking at you, SSRS) - or cultural - the other team doesn't have the willingness, time, or power to learn how to query an API.

One common situation is a reporting team, who either cannot query an API (e.g. they are stuck using SSRS), or don't have the time or desire to learn how to query one.

There are two approaches that can make a Database Integration an altogether less painful prospect, both with a common starting point: a separate login to the database, with read-only access to a very small set of tables and views.

Views can be used to create a representation of the service's data in a manner which makes sense to external systems, for example de-normalising tables, or converting integer based enumerations into their string counterparts.

Tables can be used to expose a transformed version of the service's data, for example a readmodel from an event stream.

Event Sourcing source data

For example, one of our services uses Event Sourcing. It uses projections to construct readmodels as events are stored (we use the Ledger library, with a SQL Server backend, for this). To provide a Database Integration point, we have a second set of projections which populate a set of tables specifically for external querying.
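As a sketch of what such a second projection does (the class and member names here are illustrative, not the Ledger API), it flattens phoneNumberAdded events into one row per aggregate:

```csharp
using System.Collections.Generic;

public class PhoneNumberAdded
{
    public int AggregateID { get; set; }
    public string Number { get; set; }
    public string Type { get; set; }
}

public class ContactRow
{
    public int ID { get; set; }
    public string HomePhone { get; set; }
    public string MobilePhone { get; set; }
}

public class ExternalContactsProjection
{
    // stands in for the integration table; the real projection upserts rows in SQL
    private readonly Dictionary<int, ContactRow> _rows = new Dictionary<int, ContactRow>();

    public void Handle(PhoneNumberAdded e)
    {
        ContactRow row;
        if (_rows.TryGetValue(e.AggregateID, out row) == false)
            _rows[e.AggregateID] = row = new ContactRow { ID = e.AggregateID };

        if (e.Type == "home")
            row.HomePhone = e.Number;
        else if (e.Type == "mobile")
            row.MobilePhone = e.Number;
    }

    public ContactRow RowFor(int aggregateID)
    {
        return _rows[aggregateID];
    }
}
```

The key point is that this projection writes to a completely separate structure from the internal readmodel, so the internal shape can change without breaking external consumers.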

If the following event was committed to the store:

  {
    "eventType": "phoneNumberAdded",
    "aggregateID": 231231,
    "number": "01230 232323",
    "type": "home"
  }

The readmodel table, which has just two columns (id:int and json:varchar(max)), would get updated to look like this:

id      | json
231231  | {
            "id": 231231,
            "name": "Andy Dote",
            "phones": [
              { "type": "mobile", "number": "0712345646" },
              { "type": "home", "number": "01230 232323" }
            ]
          }

The external integration table, which is a denormalised view of the data, would get updated to look like this:

id      | name      | home_phone   | mobile_phone
231231  | Andy Dote | 01230 232323 | 0712345646

Non-SQL Systems

While I have not needed to implement this yet, there is a plan for how to do it: a simple regular job which pulls the data from the service's main store, transforms it, and inserts it into the SQL store.
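A minimal sketch of such a job, with the pull, transform, and insert steps pulled apart (all the types and names here are hypothetical - the real source store, transformation, and SQL writing will vary):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// hypothetical document from the service's main (non-SQL) store
public class CustomerDocument
{
    public string ID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// hypothetical flattened row for the SQL integration table
public class CustomerRow
{
    public string ID { get; set; }
    public string FullName { get; set; }
}

public class IntegrationSyncJob
{
    private readonly Func<IEnumerable<CustomerDocument>> _readSource;
    private readonly Action<IReadOnlyList<CustomerRow>> _writeRows;

    public IntegrationSyncJob(
        Func<IEnumerable<CustomerDocument>> readSource,
        Action<IReadOnlyList<CustomerRow>> writeRows)
    {
        _readSource = readSource;
        _writeRows = writeRows;
    }

    // invoked on a schedule (a timer, cron, etc): pull, transform, insert
    public void Run()
    {
        var rows = _readSource().Select(Transform).ToList();
        _writeRows(rows);
    }

    public static CustomerRow Transform(CustomerDocument doc)
    {
        return new CustomerRow
        {
            ID = doc.ID,
            FullName = doc.FirstName + " " + doc.LastName,
        };
    }
}
```

Keeping the transform as a pure function makes the job trivial to test without either store present.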

Relational Systems

A relational system can be done in a number of ways:

  • In the same manner as the Non-SQL system: with a periodic job
  • In a similar manner to the Event Sourced system: Updating a second table at the same time as the primary tables
  • Using SQL triggers: on insert, add a row to the integration table etc.

I wouldn't recommend the third option, as you end up with more and more logic living in larger and larger triggers. The important point for all of these methods is that the integration tables are kept separate from the main tables: you do not want to expose your internal implementation to external consumers.

code, net, microservices, integration, eventsourcing


CQS with Mediatr

19 Mar 2016

This article contains some extra thoughts I had on API structure after reading Derek Comartin's post.

Aside from the benefits that Derek mentions (no fat repositories, thin controllers), there are a number of other advantages to this style of architecture.

Ease of Testing

By using Command and Queries, you end up with some very useful seams for writing tests.
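The examples below assume a query and result shaped roughly like this (a sketch - the real types will vary; IRequest is MediatR's marker interface, re-declared here so the snippet is self-contained):

```csharp
using System.Collections.Generic;
using System.Security.Principal;

// MediatR's marker interface, re-declared for the sketch
public interface IRequest<out TResponse> { }

public class Address
{
    public string Line1 { get; set; }
    public string PostCode { get; set; }
}

// the query carries everything the handler needs - here, the current user
public class GetAllAddressesQuery : IRequest<IEnumerable<Address>>
{
    public IPrincipal User { get; set; }

    public GetAllAddressesQuery()
    {
    }

    public GetAllAddressesQuery(IPrincipal user)
    {
        User = user;
    }
}
```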

For Controllers

With controllers, you typically use dependency injection to provide an instance of IMediator:

public class AddressController : ApiController
{
    private readonly IMediator _mediator;

    public AddressController(IMediator mediator)
    {
        _mediator = mediator;
    }

    public IEnumerable<Address> Get()
    {
        return _mediator.Send(new GetAllAddressesQuery(User));
    }
}

You can now test the controller's actions return as you expect:

public void When_requesting_all_addresses()
{
    var mediator = Substitute.For<IMediator>();
    var controller = new AddressController(mediator);
    controller.User = Substitute.For<IPrincipal>();

    var result = controller.Get();

    mediator
        .Received()
        .Send(Arg.Is<GetAllAddressesQuery>(q => q.User == controller.User));
}

This is also useful when doing integration tests, as you can use Microsoft.Owin.Testing.TestServer to test that all the serialization, content negotiation etc. works correctly, and still use a substituted mediator so you have known values to test with:

public async Task Addresses_get_should_return_an_empty_json_array()
{
    var mediator = Substitute.For<IMediator>();
    mediator.Send(Arg.Any<GetAllAddressesQuery>()).Returns(Enumerable.Empty<Address>());

    var server = TestServer.Create(app =>
    {
        var api = new Startup(mediator);
        api.Configuration(app);
    });

    var response = await server
        .CreateRequest("/addresses")
        .AddHeader("content-type", "application/json")
        .GetAsync();

    var json = await response.Content.ReadAsStringAsync();

    Assert.Equal("[]", json);
}


For Handlers

Handlers are now isolated from the front end of your application, which means testing is a simple matter of creating an instance, passing in a message, and checking the result. For example, the GetAllAddressesQuery handler could be implemented like so:

public class GetAllAddressesQueryHandler : IRequestHandler<GetAllAddressesQuery, IEnumerable<Address>>
{
    public IEnumerable<Address> Handle(GetAllAddressesQuery message)
    {
        if (message.User == null)
            return Enumerable.Empty<Address>();

        return new[]
        {
            new Address { Line1 = "34 Home Road", PostCode = "BY2 9AX" }
        };
    }
}
And a test might look like this:

public void When_no_user_is_specified()
{
    var handler = new GetAllAddressesQueryHandler();
    var result = handler.Handle(new GetAllAddressesQuery());

    Assert.Empty(result);
}


Multiple Front Ends

The next advantage of using Commands and Queries is that you can support multiple front ends without code duplication. This ties in very nicely with a Hexagonal architecture. For example, one of my current projects has a set of commands and queries which are used by a WebApi, a WebSocket connector, and a RabbitMQ adaptor.

This sample also makes use of RabbitHarness, which provides a small interface for easy sending, listening and querying of queues and exchanges.

public class RabbitMqConnector
{
    private readonly IMediator _mediator;
    private readonly IRabbitConnector _connector;

    public RabbitMqConnector(IMediator mediator, IRabbitConnector connector)
    {
        _mediator = mediator;
        _connector = connector;

        _connector.ListenTo(new QueueDefinition { Name = "AddressQueries" }, OnMessage);
    }

    private bool OnMessage(IBasicProperties props, GetAllAddressesQuery message)
    {
        // in this case, the message sent to RabbitMQ matches the query structure
        var addresses = _mediator.Send(message);

        // reply on the queue the sender specified, echoing their correlation id
        _connector.SendTo(
            new QueueDefinition { Name = props.ReplyTo },
            replyProps => replyProps.CorrelationId = props.CorrelationId,
            addresses);

        return true;
    }
}

Vertical Slicing

This is a soft advantage of Commands and Queries I have found: you can have many more developers working in parallel on a project, adding commands and queries, before you start treading on each other's toes... and the only painful part is all the *.csproj merges you need to do! Your mileage may vary on this one.


In a large project, you can end up with a lot of extra classes, which can be daunting at first - one of my current projects has around 60 IRequest and IRequestHandler implementations. As long as you follow a good naming convention, or sort them into namespaces, it is not that much of a problem.


Overall I like this pattern a lot - especially as it makes transitioning towards Event Sourcing and/or full CQRS much easier.

How about you? What are your thoughts and experiences on this?

code, net, cqs, cqrs, mediatr


RabbitMQ integration tests in XUnit

18 Mar 2016

Quite a number of my projects involve talking to RabbitMQ, and to help check things work as expected, I often have a number of integration tests which talk to a local RabbitMQ instance.

While this is fine for tests being run locally, it does cause problems with the build servers - we don't want to install RabbitMQ on them, and we don't typically want the build to be dependent on RabbitMQ.

To solve this I created a replacement FactAttribute which can check if RabbitMQ is available, and skip tests if it is not.

This attribute works with a single host, and will only check that the host is actually there on the first connection.

public class RequiresRabbitFactAttribute : FactAttribute
{
  private static bool? _isAvailable;

  public RequiresRabbitFactAttribute(string host)
  {
    if (_isAvailable.HasValue == false)
      _isAvailable = CheckHost(host);

    if (_isAvailable == false)
      Skip = $"RabbitMQ is not available on {host}.";
  }

  private static bool CheckHost(string host)
  {
    var factory = new ConnectionFactory
    {
      HostName = host,
      RequestedConnectionTimeout = 1000
    };

    try
    {
      using (var connection = factory.CreateConnection())
        return connection.IsOpen;
    }
    catch (Exception)
    {
      return false;
    }
  }
}

I was planning on using a dictionary, keyed by host to store the availability, but realized that I always use the same host throughout a test suite.

The reason for passing the host name in via the ctor rather than using a constant is that this usually resides within a generic "rabbitmq helpers" type assembly, and is used in multiple projects.

code, net, rabbitmq, xunit


Generating AssemblyInfo files with Gulp

19 Nov 2015

When changing a project's build script over to Gulp, I ran into a problem with one step: creating an AssemblyInfo.cs file.

My projects have their version number in the package.json file; I read that at compile time, pull in some information from the build server, and write it all to an AssemblyVersion.cs file. This file is not tracked by git, and I don't want it showing up as a modification if you run the build script locally.

The problem is that the gulp-dotnet-assembly-info package doesn't support generating files, only updating them. To get around this, I used the gulp-rename package to read a template file and write out the non-tracked AssemblyVersion.cs file.


First, create an AssemblyVersion.base file, and save it somewhere in your repository. I usually put it next to the gulpfile, or in the project's Properties directory, depending on whether the project has multiple assemblies or not. This file can be added and tracked by git - it won't get changed.

using System.Reflection;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;

[assembly: AssemblyVersion("0.0.0")]
[assembly: AssemblyFileVersion("0.0.0")]
[assembly: AssemblyDescription("Build: 0, Commit Sha: 0")]

Next, install the two gulp modules:

npm install gulp-rename --save
npm install gulp-dotnet-assembly-info --save

Then require them in your gulpfile:

var rename = require('gulp-rename');
var assemblyInfo = require('gulp-dotnet-assembly-info');

In the gulp file, read the package.json file and the environment variables. I do this once at the beginning of my gulpfile and use the config all over the place.

var fs = require('fs');

var project = JSON.parse(fs.readFileSync("./package.json"));

var config = {
  version: project.version,
  commit: process.env.APPVEYOR_REPO_COMMIT || "0",
  buildNumber: process.env.APPVEYOR_BUILD_VERSION || "0",
};

Then add a task to create a new AssemblyVersion.cs file. Change the src parameter to match where you saved the AssemblyVersion.base file.

gulp.task('version', function() {
  return gulp
    .src('./' + project.name + '/Properties/AssemblyVersion.base')
    .pipe(assemblyInfo({
      version: config.version,
      fileVersion: config.version,
      description: "Build: " + config.buildNumber + ", Sha: " + config.commit
    }))
    .pipe(rename('AssemblyVersion.cs'))
    .pipe(gulp.dest('./' + project.name + '/Properties'));
});

Don't forget to reference the AssemblyVersion.cs file in your csproj!
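For reference, with a package version of 1.2.0, an AppVeyor build number of 45, and a commit sha of abc123 (all illustrative values), the generated AssemblyVersion.cs would look something like this:

```csharp
using System.Reflection;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;

[assembly: AssemblyVersion("1.2.0")]
[assembly: AssemblyFileVersion("1.2.0")]
[assembly: AssemblyDescription("Build: 45, Sha: abc123")]
```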

You can see a full gulpfile with this in here: Magistrate gulpfile.

code, net, gulp


Posting PlainText to Asp WebApi

21 Sep 2015

Recently I have been writing a WebApi project which needs to accept plaintext via the body of a PUT request, so I did the logical thing and used the FromBodyAttribute:

public HttpStatusCode PutKv([FromBody]string content, string keyGreedy)
{
  return HttpStatusCode.OK;
}

This didn't work, failing with the useful error message of "Unsupported media type".

It turns out that to bind a value type with the FromBody attribute, you have to prefix the body of your request with an =. As I am emulating another API's interface, this is not an option, so I set about figuring out how to remove this requirement.
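For illustration, the = prefix convention looks like this from a client's point of view - a sketch, where the BuildBody helper is hypothetical:

```csharp
using System.Net.Http;
using System.Text;

public static class EqualsPrefixExample
{
    // WebApi's default binding reads a simple [FromBody] type from a
    // form-urlencoded style body of the shape "=<value>"
    public static StringContent BuildBody(string value)
    {
        return new StringContent("=" + value, Encoding.UTF8, "application/x-www-form-urlencoded");
    }
}
```

A caller would send the result of BuildBody("some plain text") as the PUT request's content - workable when you control the client, but not when emulating someone else's API.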

In the end I discovered that providing a new MediaTypeFormatter which handles plaintext is the answer:

public class PlainTextMediaTypeFormatter : MediaTypeFormatter
{
  public PlainTextMediaTypeFormatter()
  {
    SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/plain"));
  }

  public override Task<object> ReadFromStreamAsync(Type type, Stream readStream, HttpContent content, IFormatterLogger formatterLogger)
  {
    var source = new TaskCompletionSource<object>();

    try
    {
      using (var memoryStream = new MemoryStream())
      {
        readStream.CopyTo(memoryStream);
        var text = Encoding.UTF8.GetString(memoryStream.ToArray());

        source.SetResult(text);
      }
    }
    catch (Exception e)
    {
      source.SetException(e);
    }

    return source.Task;
  }

  public override Task WriteToStreamAsync(Type type, object value, Stream writeStream, HttpContent content, System.Net.TransportContext transportContext, System.Threading.CancellationToken cancellationToken)
  {
    var bytes = Encoding.UTF8.GetBytes(value.ToString());
    return writeStream.WriteAsync(bytes, 0, bytes.Length, cancellationToken);
  }

  public override bool CanReadType(Type type)
  {
    return type == typeof(string);
  }

  public override bool CanWriteType(Type type)
  {
    return type == typeof(string);
  }
}

This can then be added to the config.Formatters collection:

public static class WebApiConfig
{
  public static void Register(HttpConfiguration http)
  {
    http.Formatters.Add(new PlainTextMediaTypeFormatter());
  }
}

It really seems like something which should be supplied out of the box with WebApi to me, but at least it wasn't as complicated to implement as I was expecting it to be :)

code, net, webapi