Developing on Staxmanade

Testing Asynchronous Code with MochaJS and ES7 async/await


A JavaScript project I'm working on recently underwent a pretty big refactor. Many of the modules/methods in the application worked in a synchronous fashion, which meant their unit tests were also generally synchronous. This was great, because synchronous code is pretty much always easier to test: it's simpler and easier to reason about.

However, even though I knew early on that I would likely have to turn a good number of my synchronous methods into asynchronous ones, I held off on that as long as I possibly could. I was in a mode of prototyping as much of the application as possible before I wanted to be worried about the asynchronous aspects of the code base.

Part of why I held off on this was because I was pretty confident I could use the new proposed ES7 async/await syntax to turn the sync code into async code relatively easily. While there were a few bumps along the way, the refactor actually went extremely well.

An example of one bump I ran into was replacing items.forEach(item => item.doSomethingNowThatWillBecomeAsyncSoon()) with something that worked asynchronously, and I found this blog post immensely helpful. Basically: don't try to await a forEach; instead, build a list of promises you can await.

Another one I ran into was dealing with async mocha tests, which is what the rest of this post is about.

MochaJS is great because asynchronous testing support has been there from the beginning. If you've done (see what I did there?) any asynchronous testing with MochaJS, then you already know you can signal to Mocha that an asynchronous test is done by calling the test's async callback method.

Before we look at how to write asynchronous Mocha tests leveraging the new ES7 async/await syntax, let's first take a little journey through some of the various asynchronous testing options in Mocha.

Note: you will see example unit tests that use the expect(...).to.equal(...) style assertions from ChaiJS.

How do you create an asynchronous MochaJS test?

If you look at a normal synchronous test:

it("should work", function(){
    console.log("Synchronous test");
});

all we have to do to turn it into an asynchronous test is add a callback function as the first parameter of the Mocha test function (I like to call it done), like this:

it("should work", function(done){
    console.log("Synchronous test");
});

But that's an invalid asynchronous test.

Invalid basic async mocha test

This first async example test we showed is invalid because the done callback is never called. Here's another example, using setTimeout to simulate proper asynchronicity. This will show up in Mocha as a timeout error because we never signal back to Mocha by calling our done method.

it("where we forget the done() callback!", function(done){
    setTimeout(function() {
        // notice we never call done() here, so Mocha times out
    }, 200);
});

Valid basic async mocha test

When we call the done method it tells Mocha the asynchronous work/test is complete.

it("Using setTimeout to simulate asynchronous code!", function(done){
    setTimeout(function() {
        done();
    }, 200);
});

Valid basic async mocha test (that fails)

With asynchronous tests, the way we tell Mocha the test failed is by passing an Error or string to the done(...) callback:

it("Using setTimeout to simulate asynchronous code!", function(done){
    setTimeout(function() {
        done(new Error("This is a sample failing async test"));
    }, 200);
});

Invalid async with Promise mocha test

If you were to run the below test it would fail with a timeout error.

it("Using a Promise that resolves successfully!", function(done) {
    var testPromise = new Promise(function(resolve, reject) {
        setTimeout(function() {
            resolve("Hello!");
        }, 200);
    });

    testPromise.then(function(result) {
        expect(result).to.equal("Hello World!");
        done();
    }, done);
});

If you were to open up your developer tools you may notice an error printed to the console:

    Uncaught (in promise) i {message: "expected 'Hello!' to equal 'Hello World!'", showDiff: true, actual: "Hello!", expected: "Hello World!"}

The problem here is that the expect(result).to.equal("Hello World!"); above throws before we can signal to Mocha via done() with either an error or a completion, which causes a timeout.

We can update the above test with a try/catch around our expectations that could throw exceptions, so that we can report any errors to Mocha if they happen.

it("Using a Promise that resolves successfully with wrong expectation!", function(done) {
    var testPromise = new Promise(function(resolve, reject) {
        setTimeout(function() {
            resolve("Hello!");
        }, 200);
    });

    testPromise.then(function(result) {
        try {
            expect(result).to.equal("Hello World!");
            done();
        } catch(err) {
            done(err);
        }
    }, done);
});

This will correctly report the error in the test.

But there is a better way with promises. (mostly)

Mocha has built-in support for async tests that return a Promise. However, I ran into troubles with async and promises in the hook functions like before/beforeEach/etc. So if you keep reading you'll see a helper function that I've not had any issues with (besides it being a bit more work...).

Thanks to a comment from @syrnick below, I've extended this write-up...

Async tests can be accomplished in two ways. The first is the already-shown done callback. The second is returning a Promise object from the test. This is a great building block. The above example test has become a little verbose with all the usages of done and the try/catch - it just gets a little cumbersome to write.

If we wanted to re-write the above test, we can simplify it to just return the promise.

IMPORTANT: if you want to return a promise, you have to remove the done callback or Mocha will assume you'll be using the callback and won't look for a returned promise. Although I've seen comments in Mocha's GitHub issues list where some people depend on it working with both a callback and a promise - your mileage may vary.

Here's an example of returning a Promise that correctly fails the test with the readable error message from Chaijs.

it("Using a Promise that resolves successfully with wrong expectation!", function() {
    var testPromise = new Promise(function(resolve, reject) {
        setTimeout(function() {
            resolve("Hello!");
        }, 200);
    });

    return testPromise.then(function(result){
        expect(result).to.equal("Hello World!");
    });
});

The great thing here is we can remove the second error promise callback (where we passed in done), as Mocha will catch any Promise rejections and fail the test for us.

Running the above test will result in the following easy to understand error message:

AssertionError: expected 'Hello!' to equal 'Hello World!'

Turn what we know above into async/await.

Now that we know there are some special things we need to do in our async mocha tests (done callbacks and try/catch or Promises) let's see what happens if we start to use the new ES7 async/await syntax in the language and if it can enable more readable asynchronous unit tests.

The beauty of the async/await syntax is we get to reduce the .then(callback, done) mumbo jumbo and turn it into code that reads as if it were happening synchronously. The downside of this approach is that it's not actually happening synchronously, and we can't forget that when we're reading and writing code in this style. But overall it is generally easier to reason about.

The big changes from the above Promise-style test to the transformed async test below are:

  1. Place the async keyword in front of the test function: async function(){.... This tells the compiler that inside this function we may (or may not) use the await keyword, and that in the end the function is turned into a Promise under the hood.
  2. Replace the .then(function(result){...}) promise chaining with the await keyword, assigning the awaited value to result so we can run our expectations against it afterwards.
  3. Remove the done callback. If you aren't aware, async/await is a fancy compiler trick that under-the-hood turns the code into simple Promise chaining and callbacks, so we can lean on what we learned above and let Mocha handle the Promise our async function returns.

If we apply the three notes listed above, we see that we can greatly improve the test readability.

it("Using a Promise with async/await that resolves successfully with wrong expectation!", async function() {
    var testPromise = new Promise(function(resolve, reject) {
        setTimeout(function() {
            resolve("Hello!");
        }, 200);
    });

    var result = await testPromise;

    expect(result).to.equal("Hello World!");
});


Notice the async function(){ part above turns this into a function that will (under-the-hood) return a promise that should correctly report errors when the expect(...) fails.

Handling errors with async/await

One interesting implementation detail of async/await is that exceptions and errors are handled just as you would handle them in synchronous code, using a try/catch, while under-the-hood the errors turn into rejected Promises.
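
Here's a tiny sketch of that detail in plain JavaScript (no Mocha): a throw inside an async function surfaces as a rejected Promise, which is why a regular try/catch around an await behaves just like synchronous error handling.

```javascript
// An async function that throws...
async function failingWork() {
    throw new Error("boom");
}

// ...can be handled with an ordinary try/catch around the await.
async function caller() {
    try {
        await failingWork();
        return "no error";
    } catch (err) {
        // we land here, just like a synchronous try/catch
        return "caught: " + err.message;
    }
}

// Calling the async function directly hands us the rejected Promise itself.
var rejected = failingWork();
rejected.catch(function () { /* swallow so nothing is unhandled */ });

var result = caller(); // a Promise that resolves to "caught: boom"
```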

NOTE: Your mileage may vary with async/await and Mocha tests with promises. I tried playing around with async in Mocha hooks like before/beforeEach but ran into some troubles.

Since there may or may not be issues with Mocha hook methods, one work-around is to leverage a try/catch and the done callback to manually handle exceptions. You may run into this, so I'll show examples of how to avoid relying on Mocha to trap errors.

Below is the (failing) alternative approach, not returning a Promise but using the done callback instead.

it("Using a Promise with async/await that resolves successfully with wrong expectation!", async function(done) {
    var testPromise = new Promise(function(resolve, reject) {
        setTimeout(function() {
            resolve("Hello!");
        }, 200);
    });

    try {
        var result = await testPromise;

        expect(result).to.equal("Hello World!");

        done();
    } catch(err) {
        done(err);
    }
});

Removing the test boilerplate

Once I started seeing the pattern of try/catch boilerplate showing up in my async tests, it became apparent that there had to be a more terse approach that could help me avoid forgetting the try/catch needed in each async test. I would often remember the async/await syntax changes for my async tests but forget the try/catch, which often resulted in timeout errors instead of proper failures.

Here's another example with async/await and the try/catch boilerplate:

it("Using an async method with async/await!", async function(done) {
    try {
        var result = await somethingAsync();

        // expectations against result go here...

        done();
    } catch(err) {
        done(err);
    }
});

So I refactored that to reduce the friction.

And the mochaAsync higher order function was born

This simple little guy takes an async function, which looks like async () => {...}. It returns a higher-order function, also asynchronous, that wraps your test function in a try/catch and takes care of calling Mocha's done in the proper place (either after your test asynchronously completes, or errors out).

var mochaAsync = (fn) => {
    return async (done) => {
        try {
            await fn();
            done();
        } catch (err) {
            done(err);
        }
    };
};

You can use it like this:

it("Sample async/await mocha test using wrapper", mochaAsync(async () => {
    var x = await someAsyncMethodToTest();
    // expectations against x go here...
}));

It can also be used with the mocha before, beforeEach, after, afterEach setup/teardown methods.

beforeEach(mochaAsync(async () => {
    await someLongSetupCode();
}));

In closing.

This post may have seemed like quite a journey just to get to the little (poorly named) mochaAsync helper, or to learn to use Mocha's Promise support, but I hope it was helpful. I can't wait for the async/await syntax to become mainstream in JavaScript, but until then I'm thankful we have transpiling tools like Babel so we can take advantage of these features now. ESNext-pecially in our tests...

Happy Testing!

Red - Possibly the Most Important Step in Red, Green, Refactor


If you do any sort of test driven development, you've likely heard of the following steps

  • Red: Write a test first, run it, and see the red (meaning a failing test).
  • Green: This is where you actually write a tiny bit of production code to see the red test from above go green and pass.
  • Refactor: Now that you have a passing test (or suite of tests) you can presumably safely refactor and apply any common code cleanup practices to the code.

The importance of the Red step.

I don't want to go in to each of these steps in detail today, but I did want to drill into the Red step by giving you a short little example that happened to me today.

If you are interested in a little more detail on the subject check out:

If you're new to any sort of TDD/BDD/anything-DD you may not quite get how important this first step is, but hopefully the rest of this post helps...

Make sure it fails

The Red step is one of the most important, but one that a new practitioner may skip out of laziness. It takes discipline to write the failing test, run it and verify that it fails. It is so easy to write the test code AND the production code together, run it and see it pass that skipping the red step is something even seasoned veterans in the field can fall prey to.

However, if you don't see the test fail, how do you know it will ever fail?

I can tell you of numerous occasions in the past where I have written both production code and test code together, run it, seen it pass, then - just in case - commented out the production code and STILL seen the test pass. Wait, what?

If you are not careful you may have created a bad test. If you run this Red step first and you don't see it turn red, you likely have a problem.

It could be a problem with the test itself, or possibly something you put in the test that is triggering something deeper in the system. It doesn't matter what the problem is, you first need to get the test to turn red before you write any production code to make it turn green.

Make sure it fails FOR THE RIGHT REASON

While it's easy to see a red bar and move on, it's also good to review the exact reason it failed. Did you get a FileNotFoundException exception when you were expecting a NullReferenceException? Or did you get an integer value of 10 when you were thinking at that moment it would have failed because it returned a string?

If you're writing proper tests, your red step will include a true failure case that not only fails, but fails for the reason you would expect it to fail - at least until you go to write the production code that satisfies the test's intent.

Now a little example.

In the example below I was behaving myself and I DID run the red step first. I am using plain-ish JavaScript (I say ish here because I'm using ES6 with babel compiler).

It's much easier to make the type of mistake I'm going to highlight below with plain JavaScript than if you were using a statically typed language. You can try something like TypeScript or Flow, as these provide a static type checker over your JavaScript. Compilers are a great first test...

Alas, I'm not doing that at the moment.

So here is what I did...

First I wrote a test:

import FeedData from '../app/feed.js';

describe("feed cache", function() {

  var feedCache;

  before(function() {
    feedCache = FeedData.loadCache();
  });

  it("Feed Cache should have two items", function() {
    expect(feedCache.length).to.equal(2);
  });
});

When I tried to run it I got an error saying TypeError: FeedData.loadCache is not a function. This was great and made total sense, because I hadn't written the loadCache() function yet.

Next I opened up my FeedData.js file and added the loadCache() function.

export default class FeedData {
  loadCache() {
  }
}

I left the implementation blank for now and re-ran my tests. Same error TypeError: FeedData.loadCache is not a function as above.

That was odd, because I knew I had added the function, but apparently it didn't think so... some scratching... looking... hmm... Ahh ha - I had imported from feed.js, not feedData.js.

It's subtle, but in my app feedData and feed are different things. So I moved the function to the correct ES6 class and re-ran the tests. I was certain this time that it might still fail, but at least it would fail for the right reason (not a missing function).


Again I got the same error TypeError: FeedData.loadCache is not a function. Ok, that was weird. Now I'm wondering if I have a caching problem in my browser, but before I try to debug Chrome's caching (kidding there, if I ever have to go that far I'm really having a rough day) I better have another look at my code.

It didn't take long in this case to realize where my issue was. FeedData is an ES6 class and I'm calling what I thought was a static function on the class from within the test, however it wasn't declared as static in the implementation.

Adding the static keyword below turned this function into what my test was originally expecting.

export default class FeedData {
  static loadCache() {
  }
}

Now, all this work just to get the first part of my red test. What a journey it's been for something as silly as declaring a function.

It was a good reminder just how important the red step in red, green, refactor is.

Happy Testing!

In App Unit test example with JSPM, React, WinJS and Mocha


A while back I wrote about In App Unit Tests and have been having too much fun creating little starter plunks on Plunker. So...

As a simple demonstration of in-app unit tests, I've thrown together a little plunk that shows in-app tests within a WinJS Pivot. To add to the layers of abstraction, I'm using react-winjs, which I'm LOVING over normal WinJS development.

Link to: JSPM/React/WinJS/Mocha Plunk

If you like this starter, I have it and a few more linked here for re-use.

I'd like to highlight this particular starter a bit more in this post, not only because there are a few more concepts to this basic plunk, but also because I'm fairly happy with the MochaJS React component that I've now copied around and re-used a few times in some small projects where I use In App Unit Tests.

Plunker file overview

  • index.html - The index file is a very basic JSPM bootstrap page that loads the app.jsx react component.
  • app.jsx - Defines a WinJS Pivot control where we render the in-app MochaTests.jsx React component. This also defines the test file(s) and using MochaJS's global detection we can tell the react MochaTests component what test files to load and run as well as what globals are allowed to exist.
  • config.js - This is JSPM's config that defines what version of React we're using and to use babel for transpilation.
  • tests.js - our Mocha set of unit tests. We can have multiple test files if we want; we just have to define which files to load in app.jsx.

Lastly, the MochaTests.jsx component, which I'll include in full below so it's easier to copy-paste for myself in the future:

import mocha from 'mocha';
import React from 'react';

export default class MochaTests extends React.Component {

    static get propTypes() {
        return {
            testScripts: React.PropTypes.array.isRequired,
            allowedGlobals: React.PropTypes.array
        };
    }

    constructor(props) {
        super(props);
    }

    componentDidMount() {

        var testScripts = this.props.testScripts;
        var runTests = this.runTests.bind(this);

        // for some reason importing mocha with JSPM and ES6 doesn't
        // place the mocha globals on the window object. The below
        // handles that for us - as well as setting up the rest of the
        // test scripts for the first run
        mocha.suite.on('pre-require', context => {
            var exports = window;

            exports.afterEach = context.afterEach || context.teardown;
            exports.after = context.after || context.suiteTeardown;
            exports.beforeEach = context.beforeEach || context.setup;
            exports.before = context.before || context.suiteSetup;
            exports.describe = context.describe || context.suite;
   = || context.test;
            exports.setup = context.setup || context.beforeEach;
            exports.suiteSetup = context.suiteSetup || context.before;
            exports.suiteTeardown = context.suiteTeardown || context.after;
            exports.suite = context.suite || context.describe;
            exports.teardown = context.teardown || context.afterEach;
            exports.test = context.test ||;

            // now use SystemJS to load all test files
            Promise.all( {
                console.log("Adding Mocha Test File: ", testScript);
                return System.import(testScript);
            })).then(runTests, function(err) {
                console.error("Error loading test modules");
                console.error(err);
            });
        });
    }

    runTests() {
        var allowedGlobals = this.props.allowedGlobals || [];

        // clear out any previous test results since mocha just appends
        this.refs.mocha.getDOMNode().innerHTML = "";

        mocha.globals(allowedGlobals);;
    }

    render() {
        return (
            <div>
                <style>{"\
                  #mocha-stats em { \
                      color: inherit; \
                  } \
                  #mocha-stats { \
                    position: inherit; \
                  } \
                  #mocha pre { \
                      color: red; \
                  } \
                "}</style>

                <button onClick={this.runTests.bind(this)}>Rerun Tests</button>

                <div id="mocha" ref="mocha"></div>
            </div>
        );
    }
}

Usage example of this React MochaTests component.

// Define what test files get loaded by the MochaTests component
var testScripts = [
  './tests.js'
];

// Declare what globals are allowed to be created during any test runs.
var allowedTestGlobals = [
];

// Usage of MochaTests in a react render() method.
<MochaTests testScripts={testScripts} allowedGlobals={allowedTestGlobals} />

I'm not expecting to see a large up-tick in WinJS apps out there with in-app unit tests that run in the browser, however hopefully the MochaTests.jsx React Component is of value to you and can be utilized outside of WinJS within almost any React app.

Please drop a line if you end up using it, or if it could be adapted to work better for you.

Known Issue

If the number of tests starts to go beyond the height of the pivot in this sample, it has an issue where the WinJS Pivot cuts off at the bottom not allowing you to scroll and see the rest of the test output. I haven't dug into it yet because I've been clicking the failures: X link and it filters the U.I. to just the erroring tests.

If you happen to come up with a good solution, drop me a note - I'd love it. Thanks in advance!

Happy Testing!

JSPM/SystemJS Starter Plunker


I'm writing this post more for myself as a quick way to get going with JSPM, but if you find it useful please feel free.

Since I've recently been playing with JSPM, I've found it useful to kick-start my prototyping with a plunk. I've created a number of starter plunks below (and continue to add to the list).

Full list of JSPM starter plunks

JSPM Starter

JSPM, React


JSPM, React, MochaJS

JSPM, React, MochaJS, (Sample jsx Component test)

JSPM, React, WinJS

JSPM, React, WinJS, MochaJS

How to use:

  1. Click one of the links above
  2. Review the various files. Open and edit the app.js file
  3. Start writing code!

Be sure to open your developer tools and monitor any console output for errors...

If you want to take more of a test driven approach to your prototype take a look at MochaJS with JSPM or go directly to the JSPM/MochaJS Plunk

Happy Prototyping!

Browser only MochaJS tests using SystemJS


I've been poking at SystemJS (you may have heard of it through JSPM), and one of the first things I like to set up when playing with a new JS framework is a way to run MochaJS unit tests, which lets me test-drive my prototypes. In this case the best part is we don't have to do any local command line installations or crazy gulp/grunt builds. We can start writing plain ES6 right in the browser and prototype using a library from npm or GitHub.

SystemJS is a very exciting project, with its ability to import modules/code right from your JS code itself. You can write your JS using ES6 import syntax and in many cases SystemJS will magically import code via npm or GitHub directly.


If you want to skip the details below and just see the finished plunk, go right ahead!

How to run our first Mocha test

First we need to get a simple web page set up. I'm going to use Plunker, as it allows me to specify multiple files. This will allow me to more easily componentize my code for cleaner extraction into a project, gist, or other...

Once you have a basic Plunker set up, go ahead and delete everything except index.html for now.

Now we're ready to start throwing our code in here... But before you do, open your browser's developer tools. I'm using Chrome on the Mac, so Cmd+Option+j will do it. We need to be able to see the JavaScript console in case there are any errors with SystemJS loading of modules.

index.html <- paste the below in for your Plunker index.html.

<!DOCTYPE html>
<html>
<head>
  <script src="[email protected]"></script>
  <script type="text/javascript">
    System.import('./testInit.js');
  </script>
</head>
<body>
</body>
</html>

With the above in the index.html you should see some errors printed to the console as SystemJS is trying to load ./testInit.js (but we haven't created it yet).

Before we create the testInit.js file let's first create a couple sample MochaJS test files that we want to test.

Here's our first test file: name it mochaTest1.js

Something cool about this test is once we get mocha wired up correctly, this test shows how seamlessly you can take a dependency on a 3rd party library like chaijs for help with assertions.

import { expect } from 'chai';

describe("This is a describe", function() {
  it("sample test that should pass", function() {
    expect(true).to.equal(true);
  });
  it("sample test that should fail", function() {
    expect(true).to.equal(false);
  });
});

Create another test file mochaTest2.js

import { expect } from 'chai';

describe("This is another describe", function() {
  it("sample test that should pass", function() {
    expect(true).to.equal(true);
  });
  it("sample test that should fail", function() {
    expect(true).to.equal(false);
  });
});

Creating two test files allows this sample to show how you can easily create and test multiple modules.

The meat and potatoes

Now is the juicy part on how to get Mocha to play nicely with this setup and run our tests.

Create a file and call it testInit.js (the same name we used in our index.html and referenced via System.import('./testInit.js')) and paste in the below.

Feel free to read through it as I commented it thoroughly.

// This tells SystemJS to load the mocha library
// and allows us to interact with the library below.
import mocha from 'mocha';

// This defines the list of test files we want to load and run tests against.
var mochaTestScripts = [
  './mochaTest1.js',
  './mochaTest2.js'
];

// If you have a global or two that get exposed from your
// tests that is expected you can include them here
var allowedMochaGlobals = [
];

// Mocha needs a <div id="mocha"></div> for the browser
// test reporter to inject test results in to the U.I.
// Below just injects it at the bottom of the page. (You can get fancy here)
// Maybe you create a button in your website and allow anyone to run tests.
// Check out for more on the thought
var mochaDiv = document.createElement('div'); = "mocha";
document.body.appendChild(mochaDiv);

// Use the BDD style interface (describe/it/etc...)
mocha.setup('bdd');

// Importing mocha with JSPM and ES6 doesn't expose the usual mocha globals.
// I found this is one way to manually expose the globals, however if you know of a better way please let me know...
mocha.suite.on('pre-require', function(context) {
  var exports = window;

  exports.afterEach = context.afterEach || context.teardown;
  exports.after = context.after || context.suiteTeardown;
  exports.beforeEach = context.beforeEach || context.setup;
  exports.before = context.before || context.suiteSetup;
  exports.describe = context.describe || context.suite; = || context.test;
  exports.setup = context.setup || context.beforeEach;
  exports.suiteSetup = context.suiteSetup || context.before;
  exports.suiteTeardown = context.suiteTeardown || context.after;
  exports.suite = context.suite || context.describe;
  exports.teardown = context.teardown || context.afterEach;
  exports.test = context.test ||; =;

  // now use SystemJS to load all test files
  Promise.all( {
      return System.import(testScript);
    })).then(function() {
      mocha.globals(allowedMochaGlobals);;
    }, function(err) {
      console.error("Error loading test modules");
      console.error(err);
    });
});

Please let me know if you know of an easier way to get access to the mocha globals using SystemJS. The above works, but is a bit uncomfortable.

MochaJS Tests Right in the Browser...

How awesome is this? A couple bits of bootstrap code, and we can go author whatever we want right in the browser using ES6 (err, EcmaScript 2015) and we're off and running.

warning NOT FOR PRODUCTION WORKFLOWS (yet)! warning

This approach is primarily for quick prototyping. Don't implement a complete app like this and then expect any performance. SystemJS can potentially download a large number of dependencies, so you should read up on JSPM production workflows.

Happy Browser-Only Testing.

Running in-app mocha tests within WinJS


I described a while back a scenario about running in-app unit tests. So if you'd like some background on the subject have a look there before reading here.

This post is going to give some specifics about how to run MochaJS tests within a Microsoft Windows JavaScript application (WinJS).


These steps assume you already have a WinJS application, possibly using the universal template or other. In the end it doesn't matter, as long as you have a project that has a default.html file and the ability to add js & css files.

Get Mocha

You can acquire the mocha library however you want: Bower, npm, or download it manually from the site (

Reference Mocha Within Project & App

However you get the mocha source, you need to add references to the mocha js and css files in your project file, either in a *.jsproj file or, if using a universal shared app, in the shared project.

Then you need to include a reference to the code in your default.html file.

In my example below you can see I used bower to download the MochaJS library.

The mocha.setup('bdd') tells mocha to use the BDD style of tests which defines the describe(...), it(...), etc functions.

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />

    <!-- WinJS references -->
    <link href="/js/bower_components/winjs/css/ui-light.css" rel="stylesheet" />
    <script src="/js/bower_components/winjs/js/base.js"></script>
    <script src="/js/bower_components/winjs/js/ui.js"></script>

    <!-- TESTS-->
    <link href="/js/bower_components/mocha/mocha.css" rel="stylesheet" />
    <script src="/js/bower_components/mocha/mocha.js"></script>
    <script>
        mocha.setup('bdd');
    </script>

    <!-- My Project js/css files below... -->

...Rest of default.html file excluded

Create a WinJS Control for hosting Mocha reporting.

Below is a sample WinJS control that can be used to host the mocha html report.

<!DOCTYPE html>
<html>
<head>
    <style>
        #mocha {
            height: 800px;
            width: 600px;
            overflow: scroll;
        }
    </style>
</head>
<body>
    <div class="fragment section1page">
        <section aria-label="Main content" role="main">

            <!-- define a button that we can use to manually run or re-run tests -->
            <button id="mochaTestsRun">Run Tests</button>

            <!-- define a checkbox that allows us to toggle auto-run
                 of the tests when we start up the app -->
            Auto start <input type="checkbox" id="mochaTestsRunOnStart" />

            <!-- this is a blank div we use to inject U.I. related tests -->
            <div id="testElementContainer"></div>

            <!-- mocha throws the html output in the below div -->
            <div id="mocha"></div>
        </section>
    </div>
</body>
</html>
(function () {
    "use strict";

    var runMochaTests = function (element) {
        // clear any current test results since mocha just appends.
        element.querySelector('#mocha').innerHTML = "";

        // If you want your tests to verify nothing is leaked globally...
        mocha.checkLeaks();

        // specify any globals that you are OK with your tests leaking
        mocha.globals([]);

        // start the test run;
    };

    var ControlConstructor = WinJS.UI.Pages.define("/js/tests/tests.html", {
        // This function is called after the page control contents
        // have been loaded, controls have been activated, and
        // the resulting elements have been parented to the DOM.
        ready: function (element, options) {
            options = options || {};

            // get our possibly already cached value of whether to run the tests on startup.
            var runOnStart = (JSON.parse(localStorage.getItem("mochaTestsRunOnStart") || "true"));

            // checkbox to manage auto-run state
            var runOnStartCheckbox = element.querySelector('#mochaTestsRunOnStart');
            runOnStartCheckbox.addEventListener('change', function () {
                localStorage.setItem("mochaTestsRunOnStart", runOnStartCheckbox.checked);
            });

            // button to manually trigger a test run
            var mochaTestsRunButton = element.querySelector('#mochaTestsRun');
            mochaTestsRunButton.addEventListener('click', function () {
                runMochaTests(element);
            });

            runOnStartCheckbox.checked = runOnStart;

            if (runOnStart && !window.hasRunMochaTestsAtLeastOnce) {
                // this value is used to avoid extra test runs as we navigate around the app.
                // EX: if the test control is on a home pivot - we nav away and come back home
                // we probably don't want to auto-run the tests again. (use the menu button
                // instead if you want another test run)
                window.hasRunMochaTestsAtLeastOnce = true;

                runMochaTests(element);
            }
        }
    });

    // The following lines expose this control constructor as a global.
    // This lets you use the control as a declarative control inside the
    // data-win-control attribute.
    WinJS.Namespace.define("MyApp.Testing.Controls", {
        TestsControl: ControlConstructor
    });
})();
Handle global exceptions

If you write any async code (difficult not to these days), an exception or assertion failure will not be trapped by the internal try/catch mechanisms of Mocha in a Windows WinJS environment.

We have to give Mocha a hint on how to hook into global exceptions.

Mocha tries to attach to the browser's global window.onerror method, and since a WinJS app doesn't use this same handler, we have to forward the exceptions on to Mocha's attached window.onerror handler.

In your default.js (or wherever you configure the app) you can attach to WinJS.Application.onerror, and after some exception massaging we can hand the exceptions to Mocha so that when a test fails it is reported correctly.

    WinJS.Application.onerror = function (ex) {

        var errorMessage, errorLine, errorUrl;
        if (ex.detail.errorMessage) {
            errorMessage = ex.detail.errorMessage;
            errorLine = ex.detail.errorLine;
            errorUrl = ex.detail.errorUrl;
        } else if (ex &&
            ex.detail &&
            ex.detail.exception &&
            ex.detail.exception.stack) {
            errorMessage = ex.detail.exception.stack;
            errorLine = null;
            errorUrl = null;
        }

        // if window.onerror exists, assume mochajs is here and call its error handler
        // This may be a poor assumption because 3rd party libraries could also attach
        // their handlers, but it's working for me so far...
        if (window.onerror && errorMessage) {
            // note: the browser's onerror signature is (message, url, line)
            window.onerror(errorMessage, errorUrl, errorLine);

            // return true signalling that the error's been
            // handled (keeping the whole app from crashing)
            return true;
        }

        // if we get here, assume mochajs isn't running
        // let's log out the errors...
        try {
            var exMessage = JSON.stringify(ex, null, '  ');
            console.error(exMessage);
        } catch (e) {
            // can't JSON serialize exception object here (probably circular reference)
            // log what we can...
            console.error(ex.detail.errorMessage);
        }

        // I like to be stopped while debugging to possibly
        // poke around and do further inspection
        debugger;
    };

Reference the test control.

While developing out the application, I like to throw it front and center in the first hub of my apps home page hub control.

Here's an example of how to reference the above control:

    <meta charset="utf-8" />

    <link href="/pages/hub/hub.css" rel="stylesheet" />
    <script src="/js/tests/tests.js"></script>
</head>

...rest of head details trimmed for brevity...

            <div class="hub" data-win-control="WinJS.UI.Hub">

                <div class="sectionTests"
                     data-win-options="{ isHeaderStatic: true }"
                     data-win-res="{ winControl: {'header': 'TestSection'} }">
                    <div id="sectionTestscontenthost" data-win-control="MyApp.Testing.Controls.TestsControl"></div>
                </div>

            </div>

...rest of page details trimmed for brevity...

If you manage to get everything wired up correctly above, when you run the app (F5) your mocha tests should be all set and run automatically. Oh wait, let's not forget to add a mocha test :)...

describe('Mocha test spike', function(){

    it("should report a passing test", function(){
        // doing nothing should be a passing test
    });

    it("should fail on a synchronous exception", function(){
        throw new Error("Some test error to make sure mocha reports a test failure");
    });

    it("should fail on an asynchronous exception", function(done){
        setTimeout(function(){
            throw new Error("Some test error to make sure mocha reports an async test failure");
        }, 100);
    });
});

Save the above as your first test file, include it so it runs on startup and verify your tests run and report correctly within the test control.

Happy testing!

In-App Unit Tests


How often do you run your unit tests? Do you even have a test project? Is running your test suite automatic or does it only run when you choose to run them? If the latter is true, then how often do you run them?

For most, running tests is as simple as a command line tool like gulp test or rake test, or a built-in function in your IDE. You may even have your system covered by a Continuous Integration server, or, if you want to catch issues earlier, possibly even a git pre-commit hook.

One thing we all probably know is the more difficult it is to run your tests, the less often they get run.

What if you're building in an environment that does not allow for easy test automation? What if there are no current command line task runners for the environment you're developing in?

You could spend way too much of your free time/life building a tool that can get the job done for you (cough - long live StatLight/Silverlight - cough). I would not necessarily recommend this approach unless you have the spare cycles to contribute a project like this to the broader community.

So, given that there are no external tools to rely on, and you don't want to spend however long it will take to create one for the development environment you are in, how about finding a way to integrate some sort of test run within your existing application?

I have always wanted to set up some unit tests that could run inside of an existing application. If you developed an app that had to ship on lots of devices (thinking Android or iOS) and you could ask your users out in the wild to go to the settings page of your app and press a "run tests" button, how awesome would it be if this test run could collect data and send the test results back to you?

This is not a new idea, nor terribly unique, but I've pondered this for quite some time and until recently had not had a project where I was forced to execute on it... until now...

I'm playing with an app that is currently living inside a WinJS (Windows 8) (Metro - err do they call it that anymore?) environment and there are not any CLI tools out there that can be used to automate tests. However, it was easy enough to get Mocha running within the app.

With MochaJS in place, I proceeded to set up a WinJS Hub in my hub-based application. After making this testing hub the first hub in my project, I now have tests that run after every F5 (run) of the application.

This will eventually have to be something I move out of the main landing U.I. when I get closer to shipping my app, but I'm liking the idea of being able to toggle my test U.I. and have them run right within my application, instantly on startup.

A few benefits of this approach:

  • It allows me to actually write tests in an application that doesn't natively support a testing runner.
  • Fewer test gotchas caused by environmental differences
  • Instant feedback on each run
  • It's an easy reminder to keep using and driving my development through tests, because the workflow is enabling it (and encouraging it).
  • (this could be considered a down side but...) since the tests are living within the actual application, I have to be careful to construct my components not to depend too heavily on the environment or configuration, as we wouldn't want unit tests to mess up the running application. While it could be seen as a bit of a pain, I'm really liking how this is forcing me to write loosely coupled, highly composable components that are easy to swap in/out.

Will I Actually Ship In-App Tests?

This one is up in the air, but so far I'm of the mind-set that I'll hide the unit-test run from the main hub, place a button deep in some about or settings page, and modify the reporter to send any failed test results, along with some metadata about the environment or system it's running on, like versions, screen size, or whatever is needed...

Happy Testing!

Approval Tests - Command Line Tool (CLI)


In my previous post I introduced Approval Tests using the Approvals.NodeJS variant of the tool.

In this post I'd like to go over how you can use the command line version of Approvals.NodeJS for several different scenarios.

First things first (How to Install)

Globally install approvals via npm.

npm install -g approvals

Now that you have it installed, let's go over some scenarios that you can use the approvals tool.

Scenario 1: Compare JSON files downloaded from a web server.

Let's say you want to see a quick file diff between two api requests.

You can use curl to download the file and pipe (|) it to the approvals CLI tool. We give it a name parameter, which is used to generate the file names it saves to.

So if you were to run:

curl | approvals githubOrg

This would generate two files:

githubOrg.received.txt which at the time of this writing would look like:

  "login": "approvals",
  "id": 36907,
  "url": "",
  "repos_url": "",
  "events_url": "",
  "members_url": "{/member}",
  "public_members_url": "{/member}",
  "avatar_url": "",
  "description": null,
  "name": null,
  "company": null,
  "blog": "",
  "location": null,
  "email": null,
  "public_repos": 13,
  "public_gists": 0,
  "followers": 0,
  "following": 0,
  "html_url": "",
  "created_at": "2008-11-27T06:03:58Z",
  "updated_at": "2014-12-28T03:02:33Z",
  "type": "Organization"

and an empty githubOrg.approved.txt file.

Note: when you first run this command you are presented with a diff of the received file against an empty approved file; however, on an initial run you can use the --forceapproveall argument to skip the diff step and force the contents of the received file into the approved file.

Now if the remote file were to change on you and you run the below command again:

curl | approvals githubOrg

You would get a diff between the originally approved file and the newly downloaded file.

Scenario 2:

Ok, well I actually have another great scenario for using the approvals CLI, but I believe it deserves its own post, as I'm going to introduce some nifty configuration on a Mac that I've used to set up my own development servers automatically.

Until next time...

Approval Tests - Overview


I first started using the .Net version of Approval Tests and found so much value in it, I created a port of the tool that I can use in Node.JS. This post is intended as a rough overview of Approval Tests, and will be using the NodeJS port in my examples below.

There is support for other programming languages, so head over to the Approval Tests site and check them out.

What are Approval Tests?

At its core, Approval Tests is a very different way to execute the assertion step of a unit test.

Think of this as another tool in the bag and not a replacement for the good old-fashioned assert.equal(...) or your favorite assertion tool. But for certain types of tests, it IS the best tool to grab.

Typically, when we create assertions in our unit tests, we assert on very specific things. This property equals that value.

With Approval Tests, you can take whatever output you're wanting to assert against, turn it into either a string representation or an image (say, a screenshot of the app), and use your favorite diff tool to compare it with the previously approved version.

The diff tool is a great way to visualize changes between failing/approved data, which can help raise the level of abstraction of your test. Instead of comparing a single value in an assertion, we can serialize an entire object graph to a string and use the diff tool to review any changes against the previously approved version.

The work to find and start up your favorite diff tool comparing previously "approved" files is where the libraries provided by Approval Tests come in handy.

Let's walk through an example

If you've done any testing before, by now you have probably heard of the AAA (Arrange, Act, Assert) style of tests. Below I've contrived a sample test using the AAA style in JavaScript.

var assert = require('assert');
describe("when testing something", function(){
    it("should do something special", function(){

        // Arrange
        // setup the initial state for the test
        var obj = { valueA: "test", valueB: 1234 };

        // Act - Do some business logic on the object
        obj.valueC = true;

        // Assert - verify the state of the item under test
        assert.equal(obj.valueA, "test");
        assert.equal(obj.valueB, 1234);
        assert.equal(obj.valueC, true);
    });
});

At a high level, if you are writing tests with an object, and have a way to translate that object's state into a text or image representation, wouldn't it be great if you could save that state into a file, essentially locking down the state of the test?

This would allow future runs of the test to easily detect a change by comparing the previous state with the new state (strings of course) - but using the power of our diff tools to quickly highlight what is different.

Let's turn our AAA test above into an Approval Test

var approvals = require('approvals');
describe("when testing something", function(){
    it("should do something special", function(){

        // Arrange
        // setup the initial state for the test
        var obj = { valueA: "test", valueB: 1234 };

        // Act - Do some business logic on the object
        obj.valueC = true;

        // Assert - verify the state of the item under test
-        assert.equal(obj.valueA, "test")
-        assert.equal(obj.valueB, 1234);
-        assert.equal(obj.valueC, true);
+        approvals.verifyAsJSON(__dirname, "sampleTest", obj);
    });
});

Notice how the 3 asserts turned into 1 approvals.verifyAsJSON?

How Approval Tests Work

Approval Tests work by taking the object or value you're trying to verify, serializing it to a text file (or image), and saving it to a file labeled with the name of your test and *.received.txt. It will then try to compare this *.received.txt file with a *.approved.txt file. If the approved file doesn't exist, we will see a diff tool open with our received file on the left and an empty approved file on the right.

At this point we have to choose between a number of options:

  • We could take the received output and copy/save it to the approved file. Essentially approving the test.
  • We may only want parts of the received file, and copy over just the parts we want to the approved file.
  • Or we do nothing as we want to start over...

Now we run our test again, and if it passes we know we've finished, as what was re-generated in the received file matches the approved file. But if our diff tool appears, we can analyze the differences and determine whether we need to adjust the approved file or the code that generates the received file.

If the above explanation still isn't clear, I'd recommend watching @LlewellynFalco's videos on the topic. He does a good job describing the concepts.

Next Steps!

Browse the github org, watch videos, or read the documentation.

Happy Approving!

You are responsible for making that feature work. Write a test. Just do it…

Today the PM of a project I am working on sent an email with a small list of issues that we needed to get resolved before shipping an early build to the customer for a weekly review. In his list of issues MY NAME was tagged next to a feature that I KNOW was working. I dev’d it, tested it, saw it work.

Now the project I’m working on encourages unit-tested code, which is a fantastic project to be on since I am a big proponent of Unit/Integration/Automated tests. Heck, I wrote a tool to help run them more easily (StatLight).

What I did.

The problem was this: I dev’d a feature out in like 5 minutes, and took about 2 seconds to decide whether I should write a unit test to prove my feature worked, and this is where I failed. I manually verified my change, checked the production code in without its test, and hurriedly moved on to the next task.

About 20 minutes later I got a quick I.M. from a co-worker saying he had a small merge conflict in the file I had just checked in. I quickly told him how to get around his merge issue, not realizing (until after he checked in) that my “quick 5 min dev task” was accidentally removed in the merge.

What I SHOULD have done!

What I should have done was write the 2 lines of test code first (you can argue test-after/test-first; I prefer test-first), proven my code wasn’t working by running the test, and then implemented the 5 min feature, making the test pass. Then, when my co-worker ran into his merge issue...

The test would have failed, telling him his merge didn’t go as planned.

This would have also avoided

  • PM wouldn’t have had to discover the issue, screenshot it, and write up an email.
  • I wouldn’t have had to peruse source control history to understand why my “working” feature wasn’t working.
  • I wouldn’t have had to willingly confess my sins in this post.

If your feature doesn’t have a coinciding automated test, how do you know it’s still working?

Happy Testing!

StatLight – Goes Open Source

Although I made a very minor attempt at making StatLight a “for-sale” product, I knew when I started that open-source was most likely going to be my long term path for StatLight.

What is it? (Silverlight Testing Automation Tool)

StatLight is a tool developed for automating the setup, running, and gathering results of Silverlight unit tests. StatLight helps to speed up the feedback cycles while practicing TDD/BDD/(insert your test style here) during Silverlight development.

Where can I get StatLight?


Happy Coding !!!

StatLight (Silverlight Test Automation Tool) V0.8 Released


A new version of the StatLight tool has been released.


Major Feature Updates:

  • Support for 64bit
  • Back support for previous Microsoft.Testing.Silverlight assemblies
    • By giving the tool a specific -v=[December2008 | March2009 | July2009], StatLight can now run the asynchronous tests supported by the Microsoft Testing library.
  • Xml report output.
    • By giving the tool the -r=<FilePath.xml> option, StatLight will write out an xml report of the test run.
  • Update to the Silverlight 3.0 Runtime – Should be compatible with Silverlight 2.0 assemblies.

Minor Updates:

  • Support for auto-closing modal Silverlight MessageBox dialogs.
  • Updated feedback to the end user for when access to run the StatLight web service is not allowed (added better feedback on how to get it up and running).
  • Added a timeout monitor to detect when the StatLight browser has not communicated with the StatLight server within a reasonable amount of time. StatLight will stop running if communication (a test message) has not arrived within 5 minutes. (Will look into making this configurable)
Also to note:

Silverlight Unity & Moq backed AutoMocker

I’ve pushed the Silverlight Unity/Moq AutoMocker into the moq-contrib project.

Well, the other day I needed the automocker and ended up running into a bug that was in my old-old-old pre-release version of Moq for Silverlight, so I took the time to get the released version and re-compile the automocker.

I published it to the moq-contrib project. I haven’t taken the time to create a binary release, but you should be able to get the latest build, execute build.Silverlight.cmd, and find Moq.Contrib.UnityAutoMocker.Silverlight.dll in drops\Current-Silverlight.


StatLight – V0.5 released!

A new version of StatLight has been released!

Major Features Added:

  • NUnit support
  • Further MSTest support
    • In the previous version (0.4) you could only use StatLight with the March 2009 build of Microsoft.Silverlight.Testing.dll. It will now work with any version.
      (Update: the Silverlight Asynchronous testing style is not currently supported)

Minor Tweaks:

  • The debug.assertion creates a failure test event
    • It used to write a message to the console. It will now create a failure event, similar to a test that fails.
  • Support for [Ignore]’d tests for each supported framework (MSTest, NUnit, Xunit)
Soon to come:

Introducing StatLight (Silverlight Testing Automation Tool)


What is it?

StatLight is your first class TDD/BDD/(Insert your testing style here) tool for Silverlight. Its main focus is developer productivity.

(The website is young, however the tool is ready to go… Download StatLight here)

In the realm of Silverlight, the tooling support has not had a chance to catch up to what you might have come to expect from normal .net development. When developing Silverlight applications in a test-driven style, the current tooling can make things a little more tedious than one might like.

A number of testing methodologies have sprung up to enable some sort of testing for Silverlight developers; however, few actually run the tests in the browser.

Below is a short list of features StatLight currently has.


  1. In browser runner to test how your code will “actually” run in the wild.
  2. TeamCity Continuous Integration support
  3. Smooth Console Runner
    1. One time runner (screenshot below)
    2. Continuous Integration runner
      1. This runner will start up and watch for any re-builds of your test xap file. Every time you re-build, it will pick up the new test xap and immediately start running the tests.
      2. This feature gives the biggest boost to developer productivity.
      3. I will put out a video to show how best to use it soon…
  4. Tag/Filtering support (to narrow down the tests run at a given time)
  5. XUnit support leveraging XUnit (Light)
  6. MSTest support. (March 09 build)

On the way:

  1. Better Documentation and how to/help videos
  2. NUnit support leveraging Jeff Wilcox’s NUnit port
  3. support
  4. MSBuild

Download StatLight here

(-2) for [Fluent] Specification Extensions

If you read my previous posts about [Fluent] Specification Extensions then you know that I'm still in an experimental phase of this idea.

  1. Fluent Specification Extensions
  2. (+1) for [Fluent] Specification Extensions

There are two more things I've found that are going against the specification extensions idea. The first one below is related to all specification extensions, not just the fluent flavored specification extensions.

-1. If you have a test that for some reason has an assertion in the middle of some code.
public void MockAsExistingInterfaceAfterObjectSucceedsIfNotNew()
{
    var mock = new Mock<IBag>();

    mock.As<IFoo>().SetupGet(x => x.Value).Returns(25);

    Assert.Equal(25, ((IFoo)mock.Object).Value);

    var fm = mock.As<IFoo>();

    fm.Setup(f => f.Execute());
}

It's a little difficult to tell where the assertion is because you don't get the syntax highlighting help from the static class name "Assert".

Assert.Equal(25, ((IFoo)mock.Object).Value);

However, I don't think the reason above is enough to stop using the pattern, for a couple of reasons:

  1. How often do you have assertions in the middle of your code?
  2. If you are doing your tests right, it doesn't take long to scan the above code for the ".Should..." extension method.
-1. Daniel from the Moq project tweeted Scott Bellware about his SpecUnit project's SpecificationExtensions.cs

@bellware shouldn't specunit ShouldNotEqual/etc. return the actual object rather than the expected? To chain other calls

@bellware and ShouldBeNull/NotBeNull could also return the object: obj.ShouldNotBeNull().ShouldEqual(...), right?

with Scott's response being...

@kzu no because that would contribute to crowding many observations into a single method

To that I say: true, if you are doing pure Context/Specification based unit testing. However most of us aren't actually doing that (maybe we should?), so why not allow the test to say


So for now... I'm going to continue to go with the Fluent Specification Extensions.

(+1) for [Fluent] Specification Extensions

If you read my previous post about Fluent Specification Extensions then you know that I'm still in an experimental phase of this idea.

I'd like to share one more positive I found by using the specification extensions in my testing framework. This benefit is there whether you use standard specification extension methods or try the fluent specification extensions. The idea is very basic, but I didn't even realize its benefit until I ran into it directly.

And the great benefit is... (drum roll... errr... actually it's not mind blowing). By using extension methods to provide assertion extensions on the objects we're testing, we've abstracted the actual testing framework's assertion process. (Told you it wasn't mind blowing, but read along and see an example of how this abstraction can be good.)

Now I know most times you won't ever change testing frameworks, however I just ran into this when attempting to port the Castle.DynamicProxy2 (DP2) to Silverlight. Their test project leveraged the NUnit testing framework, which hasn't officially been ported. You can find a quick port by Jeff Wilcox that will run in the Microsoft Silverlight Unit Testing Framework. However when I was porting the DP2 that hadn't been done, and I didn't feel like porting NUnit at the time.

So, by providing this abstraction layer (through the extension methods), you could then go in and easily swap which testing framework your project is leveraging to do its assertions.

NOTE: the port from NUnit to MSFT wouldn't have been that easy, as [TestMethod] in MSFT is sealed, so I couldn't create a [Test] attribute that inherited from [TestMethod] to get the SL testing framework to run the tests without changing the DynamicProxy test code... aside from that issue...

Let's take a concrete example of this abstraction benefit.

Notice how the Assert.IsInstanceOfType() in both NUnit and Microsoft's testing framework have the parameters reversed.





If you were trying to switch from NUnit to MSFT or vice versa, a simple search and replace of [Test] with [TestMethod] would suffice for the majority of the needed port. However, the Assert.IsInstanceOfType() would fail at compile time because of the parameter order (and who knows what else exactly is different).

If you could provide that layer of abstraction for the assertions, then switching between NUnit and MSFT (or vice versa) would remain very simple, as you would only have to make the framework-specific changes once.

Fluent Specification Extensions

FYI: If you're familiar with extension methods and how to use them in testing scenarios... the interesting part of this post is at the bottom, starting at: "Ok, on to the point..."

C# extension methods give some amazing power when it comes to extending the functionality of objects we don't own, and I've spotted a pattern on several blogs and in example unit testing snippets, especially in the Context/Specification style testing areas, that I find interesting.

The concept is to basically use the C# extension methods within a unit testing environment to give the system under test (SUT) more readability/understandability within the test code itself.

Here's an example of how you might normally write a unit test given the following SUT.

public class SystemUnderTest
{
    public SystemUnderTest() { SomeStringProperty = "Hello World!"; }
    public string SomeStringProperty { get; set; }
    public bool SomeBoolProperty { get; set; }
}

You might write some unit tests that might look like...

var sut = new SystemUnderTest();

Assert.AreEqual(sut.SomeStringProperty, "Hello World!");

Now, the assertions above are small enough that it's pretty easy to tell what's going on; however, when you think about what you're looking at, it doesn't actually present the best readability.

Let's take the string's AreEqual assertion for example... You first read "AreEqual", so now you have to allocate some (undefined as of yet) space in your head to store some data points that need to be evaluated all at once. (Maybe I'm getting lazy as I get old, but the less I have to think when reading tests, the more time I can spend understanding the domain being tested...)

Again, the example is over simplified, but I think you get the point.

What if you could make the test syntax read and flow in a very readable and understandable manner?

That's what the specification extensions give you. Given the two tests above and a couple of helper extension methods living in the testing library, I could write something like...

var sut = new SystemUnderTest();
sut.SomeStringProperty.ShouldEqual("Hello World!");

It may just be me, but that just feels better and is more understandable, and the great thing is I didn't have to impact my domain objects to support this style of test...

Another great benefit is you don't have to type "Assert.xxxx(YYzzz)" each time you want to create an assertion. You can just type sut.SomeThing. {this is where you get help from IntelliSense}, giving you some great context-based assertion options.

I googled for a library that had a pre-built set of extension assertions and ended up finding the SpecUnit project by Scott Bellware. If you dig into the source of the project, you can find a helper class called SpecificationExtensions.cs, which basically gives you all the "Should..{your assertion here}" extension methods.

Ok, on to the real point (sorry it's taken so long).

After downloading and playing with the extension specifications from Spec Unit, I thought what if we made that more fluent?

So I gave it a quick spike and instead of writing some tests that look like...

sut.SomeStringProperty.ShouldEqual("Hello World!");

You could have less wordy code and still retain all the meaning and readability with a set of fluent specification extensions.

.ShouldEqual("Hello World!");

I haven't figured out what sorts of bad things this style of assertion could bring... but we'll experiment for a while...
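For readers outside .NET, here's a minimal sketch of the same fluent idea in JavaScript. JavaScript lacks extension methods, so this uses a wrapper function instead; the `should` helper and its behavior are my own illustration, not part of SpecUnit or any library mentioned in this post:

```javascript
// An illustrative fluent assertion helper: each check returns a wrapper
// around the value it verified, so further assertions can chain off of it.
function should(actual) {
  return {
    equal: function (expected) {
      if (actual !== expected) {
        throw new Error('Expected "' + expected + '" but was "' + actual + '"');
      }
      return should(actual); // returning the wrapper enables chaining
    },
    notBeNull: function () {
      if (actual === null || actual === undefined) {
        throw new Error('Expected a non-null value');
      }
      return should(actual);
    }
  };
}

var sut = { someStringProperty: 'Hello World!' };

// reads left-to-right, and chains like the fluent C# version aims to
should(sut.someStringProperty).notBeNull().equal('Hello World!');
```

The key design point is the return value: because each assertion hands back the checked value, observations can be strung together instead of repeated on separate lines.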

Here's an example console app with the extensions included.

DISCLAIMER: I haven't tested all the extensions so if you notice any issues please feel free to let me know...


I just added this helpful extension method to my code. Thought of sharing it.

public static IEnumerable ShouldContain(this IEnumerable collection, Func expectedCriteria)
{
    // assert the collection contains an item matching expectedCriteria...
    return collection;
}
You might find this useful:

ShouldIt is an open source library of fluent specification extensions that can be used with any unit testing framework.
What's up Stax! Man, you are always deep into stuff I can't understand!

Silverlight AutoMocker with Unity container and Moq

A couple weeks back I threw out a Teaser about a Silverlight AutoMocker... Here's the follow up I promised.

UPDATE: Moq 3.0 released with full Silverlight support

I've been working on this off and on for a while now, and finally decided to sit down and finalize (at least this version) of both Moq for Silverlight and an AutoMocking container backed by the Unity IoC from Microsoft.

This isn't by any means "Official", but probably not far from what will be coming out when it's done...

Below are the necessary steps (that I can come up with) to get an official version of Moq.

  1. We need an "Official" build of the Castle.DynamicProxy2 dependency
    1. Create NAnt build scripts (Jonathon Rossi said he'd try to work on it this weekend 1/23). Thank God! I tried porting the NAnt build to Silverlight and each of the 3 times I got so lost I gave up... :(
    2. Fix the Castle.Core-Silverlight build (some changes have occurred to the main build without being pushed to the Silverlight version)
    3. Create official binaries of
      1. Castle.Core
      2. Castle.DynamicProxy
  2. Port Moq to Silverlight
    1. I supplied a patch to Daniel a while ago, but he said he wouldn't include it until #1 above is done.

But until the official versions become available, here's what I have so far!!!

UPDATE: oops, I realized I posted the files below as .7z. You can download and install the free open source zip utility here


This contains the Moq, Moq-Contrib (w/unity AutoMocker) source

This contains all the binaries you need to run Moq...


Please keep in mind I'm by no means an IoC pro, and the whole AutoMocker was a one-afternoon attempt at throwing one together that I could use in Silverlight. It's very basic, but seems to support most of what I need for now.
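For readers who haven't seen an auto-mocking container before, the core idea is small. Here's a simplified sketch of the concept using Moq and plain reflection; the real version delegates the wiring to Unity, and every class and method name below is my own illustration, not the actual posted source.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using Moq;

// Simplified AutoMocker concept sketch (no Unity dependency): it builds the
// class under test by satisfying its constructor parameters with Moq mocks.
public class AutoMocker
{
    private readonly Dictionary<Type, Mock> _mocks = new Dictionary<Type, Mock>();

    public T Create<T>() where T : class
    {
        // Pick the greediest public constructor.
        var ctor = typeof(T).GetConstructors()
            .OrderByDescending(c => c.GetParameters().Length)
            .First();

        // Supply a mock instance for every constructor parameter.
        var args = ctor.GetParameters()
            .Select(p => GetMock(p.ParameterType).Object)
            .ToArray();

        return (T)ctor.Invoke(args);
    }

    // Hands back the same mock the container injected, for Setup/Verify calls.
    public Mock<TService> GetMock<TService>() where TService : class
    {
        return (Mock<TService>)GetMock(typeof(TService));
    }

    private Mock GetMock(Type serviceType)
    {
        Mock mock;
        if (!_mocks.TryGetValue(serviceType, out mock))
        {
            // Build a Mock<TService> for the runtime type via reflection.
            var mockType = typeof(Mock<>).MakeGenericType(serviceType);
            mock = (Mock)Activator.CreateInstance(mockType);
            _mocks[serviceType] = mock;
        }
        return mock;
    }
}
```

A test then asks the container for the class under test and every constructor dependency arrives pre-mocked, e.g. var sut = mocker.Create<OrderProcessor>(); followed by mocker.GetMock<IPaymentGateway>().Verify(...) (those two types are hypothetical examples).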

Could an AutoMocking container backed by Moq and Unity be a project that the community needs?



I've been disconnected from Castle DynamicProxy for quite a while. (Last I knew Mr. Rossi was going to get it into the build process.) If that's true, building from source would be your best bet. If it's not possible through the NAnt script, then the way I originally did it was opening up the Silverlight project in VS and building that way... (I think you have to run the NAnt build at least once for it to set up the AssemblyInfo.cs stuff before you try the VS build.)

Hope this helps. (sorry I don't have more info than this)
Re: the error with generic constraints - I created a test for this and it is fixed on the main build; see here:

I'm not sure how to get this into a silverlight version - there doesn't seem to be a Silverlight build. Can you help?


Any chance you could send an example test?
Hi Jason,
This is really cool...thanks!

I thought you might want to know about an error I received. When mocking an interface containing generic parameter constraints, I receive the error...

System.BadImageFormatException: An attempt was made to load a program with an incorrect format.

...when running the test.

@Adam - There are two things you can do.

1. Moq-contrib at Google Code already has an AutoMocker that uses the Autofac IoC.


2. Take my source and recompile the AutoMocker against the full .NET framework (make sure to get the correct full-framework dependencies).

Hope that helps...
I would really like an Automocker for the *WPF* side of things.

Silverlight Moq AutoMocker Teaser...

Although we don't have an official Moq for Silverlight yet, you can go to the email list and pull the patch I submitted to get an early version up and running.

I have, in the queue of blog posts, a Silverlight AutoMocker leveraging the new Microsoft Unity container for Silverlight.

I'll wait till we get a full Silverlight version of Moq before posting.



The existing Moq automocker for the full .NET framework, over at Moq-contrib, leverages the Autofac container. When I threw my Silverlight version together Autofac hadn't been ported yet; however, I just read a blog today that said an early port is available.


Not sure what you mean about the "trick"... Mocking just works in Silverlight with Moq. There is one unit test that fails in the Silverlight version when trying to mock the System.Stream class. I didn't dig too far into the specific failure; however, I'm sure there are lots of things out there that you won't be allowed to mock because of the security restrictions...
Justin Chase
In a nutshell, what's the trick to mocking with the given security constraints (on reflection) in Silverlight?