DDD: Refactoring toward deeper insight

A good question came up on the DDD/CQRS group earlier today, and I thought I’d publish my response here.

The questioner was asking how they should model deletions in their domain, where they might have a “Delete[Entity]Command”.

If you read Eric Evans’ DDD book, you’ll find it often talks about “refactoring toward deeper insight”. This basically means that when you’re not sure which way to go when modelling the domain, you should go back and talk to your domain experts. Keep talking as the information soaks in and you’ll find yourself picking up on seemingly throwaway phrases and bits of information here and there. The experts don’t think these details are particularly special because they’re so used to them, but to you these little facts are incredibly important. It’s like panning for gold.

In this case, “delete” might not be a use-case your domain experts need. In fact, unless your domain experts are in the domain of computers and file systems, I’d go as far as to say it’s highly unlikely. A Domain Model is just that – a model of the domain. Since most domains are in the real world, “delete” doesn’t really exist. You can’t “delete” a stock item, or a financial transaction. You can perhaps mark a stock item as “lost” or create a reciprocating financial transaction though.

Take an example of sales orders (modelled by a SalesOrder AR). If you ask your domain experts “what happens when you delete an order?” they’ll likely respond “Oh no – you can’t delete orders!”. You explain that’s not quite what you meant, and discover that orders can be “cancelled” or “completed”, in which case you can’t add any more line items.

In this example “completed” and “cancelled” are the key words, and you’d implement the appropriate invariants in your SalesOrder aggregate root (AR). Of course that implementation may end up as a state machine, but then it’s often the case that an AR works like a state machine (i.e. favour a single “state” variable rather than a heap of boolean flags).
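To illustrate (this is just a sketch with made-up names, not code from a real project), those invariants might look something like:

```csharp
using System;
using System.Collections.Generic;

public enum SalesOrderState { Open, Completed, Cancelled }

public class SalesOrder
{
    private readonly List<string> _lines = new List<string>();

    public SalesOrder()
    {
        State = SalesOrderState.Open;
    }

    public SalesOrderState State { get; private set; }

    public void AddLine(string lineItem)
    {
        // Invariant: line items can only be added to an open order.
        if (State != SalesOrderState.Open)
            throw new InvalidOperationException("Cannot add line items to a " + State + " order.");

        _lines.Add(lineItem);
    }

    public void Complete()
    {
        // Invariant: only an open order can be completed.
        if (State != SalesOrderState.Open)
            throw new InvalidOperationException("Only an open order can be completed.");

        State = SalesOrderState.Completed;
    }

    public void Cancel()
    {
        // Invariant: only an open order can be cancelled.
        if (State != SalesOrderState.Open)
            throw new InvalidOperationException("Only an open order can be cancelled.");

        State = SalesOrderState.Cancelled;
    }
}
```

Note there’s no Delete() anywhere – the aggregate only knows the transitions the domain experts described.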

In fact in general you should be wary of terms like “create”, “update” and “delete” when modelling your domain. If these are the only verbs in your ubiquitous language you should probably not be using DDD for that system.

Remember that DDD is allowing the Domain to Drive your Design. The Domain Experts know it best so they’re your best tool. Your domain model should reflect behaviours and rules required and defined by the domain experts and only those behaviours and rules.

Session Submission Nerves

People who know me will know that I talk a lot. They would not be surprised to hear that my school reports often included comments from teachers about being talkative to the point of distracting my classmates. I think I’m fairly personable, and enjoy being in company.

Yet I’m still nervous about the fact that last week I submitted a session for DDD10.

Don’t get me wrong; I’m very excited and will be gutted if I don’t get picked to speak (although that’s what voting is all about), but now it’s real. Now I really might have to go through with it.

I’ve got my fingers crossed that I’ll get the gig (and won’t die on my arse). Hopefully everything will go to plan and it’ll all be worth it if I’ve enthused at least a few people about my chosen subject. I’ll be putting a fair amount of preparation in, and I’m fortunate enough to be able to perform live testing on humans at the ShropshireNET user group later this month, but still it’s a new beginning for me and I just hope I’m worthy.

If you fancy finding out a bit more about the practical side of actually doing things with CQRS and Event Sourcing, please feel free to come to the user group session and/or vote for me at DDD10.

Here’s to trying new things and getting out of your comfort zone!


Disclaimer: I know those of you who’ve done computer science or software engineering at university will already know how to do this, and know the name for the pattern, but in case we don’t use it I wanted to show it off somewhere. 🙂

A colleague and I just had a code-off without realising it; we were both thinking about the same problem at the same time. That problem was a way to take a list of things and get a list of all the combinations of them.

So { “P1”, “P2”, “P3” } should result in:

    { “P1” },
    { “P2” },
    { “P3” },
    { “P1”, “P2” },
    { “P1”, “P3” },
    { “P2”, “P3” },
    { “P1”, “P2”, “P3” }

I remembered a trick an old boss of mine taught me for finding combinations of items in a series, using bits. If you think of iterating a series of bytes you see the usual pattern:

  • 1 = 00000001
  • 2 = 00000010
  • 3 = 00000011
  • 4 = 00000100

So this means that iterating a numeric value (i.e. 1 to 255) and converting the loop variable to a sequence of bits on each iteration will generate all the combinations of true and false for a series of 8 boolean flags. That’s the behaviour we’re looking for. Of course, 8 is quite a limitation, but if we use an int rather than a byte we get 32 bits, which is more than enough (in fact I get OutOfMemoryExceptions with a series of 23 items on my 8gig Quad-Xeon machine).

Here’s my implementation. Notice I’m using the trick, but I’m not iterating all the “powers of 2”, I’m iterating the items in a list, and only taking the ones where the bit representing their position in the list is set:

using System;
using System.Collections.Generic;

namespace ConsoleApplication13
{
    public class Combinator
    {
        public IList<List<T>> AllCombinationsOf<T>(IList<T> items)
        {
            if (items == null) throw new ArgumentNullException("items");
            if (items.Count > 32) throw new ArgumentException("Only 32 values are supported.", "items");

            int top = GetTop(items.Count);

            var combinations = new List<List<T>>();
            for (int combinationId = 1; combinationId <= top; combinationId++)
            {
                AddCombination(combinations, combinationId, items);
            }

            return combinations;
        }

        private static void AddCombination<T>(List<List<T>> combinations, int filter, IEnumerable<T> items)
        {
            var combination = new List<T>();

            int bitIndex = 1;
            foreach (var item in items)
            {
                // Take the item only if the bit for its position is set in the filter.
                if ((filter & bitIndex) == bitIndex)
                {
                    combination.Add(item);
                }

                bitIndex <<= 1;
            }

            combinations.Add(combination);
        }

        private static int GetTop(int count)
        {
            // Builds a mask of 'count' set bits, e.g. 3 items => 111 binary = 7.
            int result = 0;
            for (int i = 0; i < count; i++)
            {
                result = (result << 1) + 1;
            }
            return result;
        }
    }
}
Simplify string and path operations in FinalBuilder with PowerShell

At work we use FinalBuilder as our continuous integration server. Essentially it works like CruiseControl etc, but has software you use to build the project files rather than eating your XML raw. The basis of FinalBuilder is assembling “actions” into a build script that is executed either in the FinalBuilder software, or on a build server running FinalBuilder Server.

Now typically, performing path and string manipulation is tricky, because you need to use FinalBuilder actions like “String Trimming”, “String Replace” and “String Pos”. All of which work on the basis that they take the value of a global variable defined in the project, and set the result to another global variable defined in the project. If you have a lot of string work to do, this can quickly become unwieldy.

So instead, I propose ignoring the built-in string and path manipulation actions, and swapping them all for one or two “Run Script” actions with PowerShell scripts. In my case, I have a URL to a Mercurial repository hosted on a Kiln server passed-in to my project, and I want to apply a convention to work out what the local repository path for me to clone to and build from should be. I do this by:

  1. Adding a single “Run Script” action at the top of my project
  2. Selecting it
  3. In the “Script Editor” window (View->Script Editor), select “PowerShell” as the scripting language
  4. In the script editor window, add the following:

$WorkingCopiesLocation     = $FBVariables.GetVariable("_RepositoriesLocation") # Global variable configured on FB Server
$RepositoryUrl             = $FBVariables.GetVariable("RepositoryUrl") # Passed-in at runtime
$uri                       = New-Object -type System.Uri -argumentlist $RepositoryUrl

$repositoryName            = $uri.Segments[$uri.Segments.Length - 1].Trim('/') # Parse the repo name
$projectName               = $uri.Segments[$uri.Segments.Length - 3].Trim('/') # Parse the Kiln project name

$workingCopyRoot = [System.IO.Path]::Combine($WorkingCopiesLocation, $projectName)
$workingCopyRoot = [System.IO.Path]::Combine($workingCopyRoot, $repositoryName)

$FBVariables.SetVariable("WorkingCopyRoot", $workingCopyRoot) # Set the global variable for subsequent actions to use

As you can see, this obtains the value passed-in to the project via the RepositoryUrl variable, breaks it up and re-arranges it to produce a local path for the URL. There’s some other stuff about the working copies all living under a common root location, but that’s all there is to it.

I’ve recently gone a bit mad for this approach. How about this method of establishing the solution file to build in any given Hg repository, for example?:

$workingCopyRoot = $FBVariables.GetVariable("WorkingCopyRoot")
$solutionFileFullName = Get-ChildItem $workingCopyRoot -Filter *.sln | Select-Object -ExpandProperty FullName -First 1
$FBVariables.SetVariable("SolutionFileFullName", $solutionFileFullName)

Happy, erm, “PowerShelling”… 🙂

Windows 8 – The end of an error?

So I hear there’s some news about a new Windows, and people are worried by the 5-minute Windows 8 press release because it mentions HTML5. Some people are really worried. I’m not in tears myself just yet, though I would be upset if the scare-mongers are proved right.

Personally I’m just (finally) starting out in WPF. I really like it and if I’m honest I’m not a great fan of HTML/CSS because of the inconsistencies between browsers. I’m aware I’m not alone in that respect. My worry isn’t about historical investment in WPF, but the fact that I’m just starting out. I hope I’m not writing the new Betamax for my new apps.

However, if one takes a deep breath, relaxes and looks at it again, one could surmise that it’s unlikely .NET will be dropped totally. MS do have a good history (often to their own detriment) of backwards-compatibility, and I reckon that in the fullness of time there will be "layers" of apps:

  1. HTML5/CSS3 for tiles and "widgets", though SL might be part of the "tile" story.
  2. LOB apps that want to talk to local databases and/or webservices etc but still solve the business problems in a RAD-fashion will be Silverlight and WPF (WinForms will surely be supported but possibly discouraged for new apps and relegated to the “legacy” UI that so closely resembles Win7 in the video).
  3. Device drivers and those apps that need to get down to nitty-gritty close-to-the-metal stuff or require super-duper high performance will be for C/C++ devs with brains far larger than mine.

It’s not much different from the decision to make WP7 apps totally SL-based. They’re trying to tidy up a long-established line of inconsistent apps and UI tech to give "mom and pop" users a better experience. My Dad loves his iPhone but still struggles with the fact that Windows isn’t the Pit of Success when it comes to usability and stability.

Let’s face it, advanced users (application/IT support, testing teams, DBAs, developers) will not use this new HTML5 veneer all that much, because it’s not meant for them. This is MS taking a look at their customer base, comparing it with the iPhone customer base, and realising they need a simpler OS UI that allows people to watch videos, check emails, mess with their pictures etc. It’s simply moving to a "task-based UI" on a grander scale.

Of course, tooling goes a long way to calm .NET devs in these situations. At the moment many may be worried by the prospect of using Notepad to write their Windows apps and struggle with debugging and implementation inconsistencies. However I’m sure that companies like JetBrains and DevExpress will be there to help.

It will be fine, don’t worry. 🙂

Visual Studio Screen Real Estate

I have to say first that I am a total keyboard-freak. I use keyboard shortcuts for all Visual Studio and ReSharper commands. Over Christmas, my main dual-24”-screen development machine developed a fault, and I fell back to my (excellent and highly recommended) Lenovo ThinkPad X201. Of course screen real estate was suddenly an issue, and I decided to do something radical.

I got rid of all the toolbars.

Yep – all of them.

Literally, I right-clicked the toolbar, and un-checked every single one, in both design mode and debugging mode. I then found and installed the Hide Main Menu plugin for Visual Studio.

Now Visual Studio looks like this:


This is good, I like this. Then I fixed my desktop PC, and didn’t want to lose the goodness, but also wanted to make use of VS2010’s improved multi-monitor support. I exported All Settings->General Settings->Window Layouts options to SingleMonitor.vssettings, put that file into my shared DropBox folder to sync it to the desktop, imported it, and customised VS to look like this:


I’ve exported those window layouts to MultiMonitor.vssettings, so I can easily switch between them. Sometimes I move to a single screen even on the desktop in order to read documents/websites on the secondary monitor, or when screen sharing with my colleagues.

If you want my window layout files, you can find them on my box.net page.

Additional Tips:

  1. Don’t forget when customising your window layouts to also customise them when debugging, as VS will switch between layouts as you start and finish debugging.
  2. Try closing toolwindows too. If you learn the shortcuts to get them back (or alternatives) it makes for a much cleaner feel.
  3. Try replacing the solution explorer with ReSharper’s “CTRL+T” command. It’s faster and doesn’t take up space.
  4. Try working in full-screen mode when actually coding (toggle using SHIFT+ALT+ENTER).

I appreciate I may have taken this as far as I could without using vi or something, but hopefully it serves as a little inspiration.


Rabbit in the headlights (Windows Identity Foundation woes)

I’m working on a new project at work, and part of my role as architect is to decide how we’re going to build authentication and authorisation. This will be a desktop smart client application deployed via ClickOnce that will use web services (hosted by us at our datacentre) to operate, with a dash of local caching for offline operation where applicable. We already have an ASP.NET website that has auth/authz behind it, so ideally I’d like to build on that.

After much internetting, I re-discovered Windows Identity Foundation (WIF), having heard a little about it on a DotNetRocks episode some time back. I like the concepts – separating auth/authz from your applications and instead obtaining tokens containing the claims the user has.

Sounds great. In theory.

In practice, it appears WIF suffers from “Over-engineering Microsoft Giddyness” syndrome. I’ve watched various WCF Pluralsight videos (which are excellent, by the way) to try and get a basis of understanding for WIF, but when I really got into WIF itself, it was too much for my small brain to cope with.

Essentially I’ve worked out that I want to have an “Identity Provider” that my desktop app can authenticate with, and that will return a security token in exchange for a valid username and password. I then want to be able to pop that token into subsequent calls to the other WCF services, which will then investigate the claims supplied in that token to establish whether the user can carry out those operations or not. However it seems writing my own Identity Provider that works off our user store is, ahem, “non-trivial”. There are a few existing ones out there in the world, but it all seems like overkill to me.

So I’m going to steal the idea and Build My Own. I’ll have a WCF service that is accessed via SSL that will return a token containing various information (“claims” in WIF parlance) about the user in exchange for a valid username and password. This token will then be made available in the SOAP headers of calls to our other web services (also accessed via SSL, so it’s okay 🙂).
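To make that concrete, here’s the rough shape I have in mind for the service contract. All the names here are illustrative only – this is a sketch of the design, not a finished implementation:

```csharp
using System;
using System.Runtime.Serialization;
using System.ServiceModel;

[ServiceContract]
public interface ISecurityTokenService
{
    // Exchange a valid username/password for a token, over SSL.
    [OperationContract]
    SecurityToken IssueToken(string username, string password);
}

[DataContract]
public class SecurityToken
{
    [DataMember] public Guid TokenId { get; set; }
    [DataMember] public DateTime ExpiresUtc { get; set; }

    // Simple name/value claims, e.g. "CanApproveOrders" = "true".
    [DataMember] public ClaimDto[] Claims { get; set; }
}

[DataContract]
public class ClaimDto
{
    [DataMember] public string Type { get; set; }
    [DataMember] public string Value { get; set; }
}
```

The other services would then pull the SecurityToken out of the incoming SOAP headers and inspect its Claims before doing any work.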

I know what some of my readers will be thinking – that there’s a reason for the engineering that’s gone into WIF. The truth is that this approach should work just as well. I’ll need to have a think about possibly signing the token and encrypting it so that the web services can be confident the data hasn’t been tampered with or otherwise intercepted, but I’m willing to bet my small brain comes up with The Simplest Thing That Could Possibly Work. That’s a key tenet of Domain Driven Design, and something else I’m going to strive for on this project.

The Big Rewrite

I know the big rewrite is almost never the right answer, but in the case of a stalled hobby project that started a year ago with Linq to SQL and ASP.NET MVC 1.0, maybe it’s okay. 🙂

I’ve decided to start again totally from scratch. This is for a few reasons:

  • I want to start again using ASP.NET MVC 3 with Razor view engine, and make use of the many other improvements.
  • I want to use EF code first, rather than Linq to SQL.
  • I attended DDD9 today, and saw a session with a lot of useful information about CQRS that I want to try and apply here.
  • I want to remove the product name from the code. I think I’ll still be branding it as “Workflo”, but since I have this overall idea of a project called “Genrsis” that’ll include other products, the root namespace for this particular product will probably be something more abstract like “Genrsis.WorkItemTracking”.
  • I want to host this project on codeplex.
  • There is a lot of stuff I can reuse from the old code: my custom ASP.NET MembershipProvider, logo, CSS etc.

So I’m going to get started on this now. The sooner I can start dogfooding the better. 🙂

Yet Another Bug Tracking System

Other titles I thought of included “If you build it, they will come” and “Building a better mousetrap”.  What can I say, SEO isn’t a skill of mine.

I needed a hobby project that was a thought experiment (which is my caveat for all my bad architectural decisions) that could go somewhere.  That wasn’t just a name and a few lines of half-finished code.  Something that I could take on a laptop to job interviews in the future and say “No, I can’t actually work out an algorithm on the spot that calculates the distance in yards from London to Calais, but look at this neat thing I’ve done…”.  Not that I’m looking for work right now, but it’s good to have something to show for one’s skills that one owns the IPR to.

Amongst other things, I wanted to try ASP.NET MVC (which was v1 at the time), jQuery and Linq to SQL (I know it’s out of date now, but I have my reasons…).

I decided to be different, and write a bug-tracking system.  My motivation came in the form of difficulties using FogBugz at work.  FogBugz is very capable, but only if you use it the way the guys at Fog Creek expect you to – which means conforming to the way they think bug tracking should be done.  We’re trying to use it in a different process than it was designed for, and it’s caused friction.  Being the jumped-up know-it-all that I am, I thought I could do better (well, different) so, full of Great Ideas, I started planning.

This was many months ago.  I’ve put quite a few hours in here and there.  Probably a few weeks FTE.

Then yesterday I deleted much of the code I’ve written.  I’ve kept infrastructure stuff (like ASP.NET Membership providers – yuck!), but basically this is a re-write of what little I’d actually managed to create.

I’ve learnt much with my tinkering over the last year, but now I’m going to give it a proper go.  Hopefully this may even end up as something we can use at work.  It’s called “Workflo”.  Yes, I know that name exists but I have a project codename to put it under and “Buggr” just didn’t seem appropriate.  It has a few key features:

  1. A fully-customisable workflow.  Not using MS Workflow Foundation, but something home-grown (it’s a thought-experiment, see).
  2. Estimates and time-tracking per status.  So developers estimate the development, testers estimate the testing, etc etc.  Then you see how long those things actually took and do amazing reports like how good people are at estimating, either for themselves or on behalf of others, and break down how much effort goes into testing vs. actual coding.

That’s kind of it, for now.  I have lots of wonderful ideas floating around, and the plan is to use this blog to chronicle the various design decisions and implementation tribulations as they happen.  Hopefully I will learn something, even better would be to teach something.  We’ll see.  I’ve got a little more re-jigging to do, and then I’ll be posting the code on Codeplex.

Wish me luck!