Adventures in JavaScript Development

Writing Unit Tests for Existing JavaScript


My team at Bazaarvoice has been spending a lot of time lately thinking about quality and how we can have greater confidence that our software is working as it should.

We’ve long had functional tests in place that attempt to ask questions like “When a user clicks a button, will The Widget do The Thing?” These tests tell us a fair amount about the state of our product, but we’ve found that they’re brittle – even after we abstracted away the CSS selectors that they rely on – and that they take approximately forever to run, especially if we want to run them in all of the browsers we support. The quality of the tests themselves is all over the map, too – some of them are in fact unit tests, not really testing anything functional at all.

A few months ago we welcomed a new QA lead to our team as part of our renewed focus on quality. Having a team member who is taking a thoughtful, systematic approach to quality is a game-changer – he’s not just making sure that new features work, but rather has scrutinized our entire approach to delivering quality software, to great effect.

One of the things he has repeatedly emphasized is the need to push our tests down the stack. Our functional tests should be black-box – writing them shouldn’t require detailed knowledge of how the software works under the hood. Our unit tests, on the other hand, should provide broad and detailed coverage of the actual code base. In an ideal world, functional tests can be few and slow-ish, because they serve as an infrequent smoke test of the application; unit tests should be thorough, but execute quickly enough that we run them all the time.

Until now, our unit tests have been entirely focused on utility and framework code – do we properly parse a URL, for example? – not on code that’s up close and personal with getting The Widget to do The Thing. I’d told myself that this was fine and right and good, but in reality I was pretty terrified of trying to bolt unit tests onto feature code of incredibly varying quality, months or even years after it was first written.

A week or so ago, thanks to some coaxing/chiding from fellow team members, I decided to bite the bullet and see just how bad it would be. A week later, I feel like I’ve taken the first ten steps in a marathon. Of course, taking those first steps involves making the decision to run, and doing enough training ahead of time that you don’t die, so in that regard I’ve come a long way already. Here’s what I’ve done and learned so far.

Step 0

I was lucky in that I wasn’t starting entirely from scratch, but if you don’t already have a unit testing framework in place, don’t fret – it’s pretty easy to set up. We use Grunt with Mocha as our test framework and expect.js as our assertion library, but if I were starting over today I’d take a pretty serious look at Intern.

Our unit tests are organized into suites. Each suite consists of a number of files, each of which tests a single AMD module. Most of the modules under test when I started down this path were pretty isolated: they had few dependencies, and they rarely interacted with other modules at runtime. Almost all of the existing unit test files loaded a module, executed its methods, and inspected the return value. No big deal.
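
Those early tests had a simple shape: require a module, call a method, assert on what comes back. A hypothetical example in that style – the `parseQueryString` function here is invented for illustration; our real tests wrap this sort of thing in Mocha's `describe`/`it` with expect.js assertions:

```javascript
// A hypothetical utility of the kind our early suites covered:
// a pure function with no dependencies and a simple return value.
function parseQueryString(qs) {
  var params = {};
  qs.replace(/^\?/, '').split('&').forEach(function (pair) {
    if (!pair) { return; }
    var parts = pair.split('=');
    params[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1] || '');
  });
  return params;
}

// The whole test: execute the method, inspect the return value.
var parsed = parseQueryString('?foo=bar&baz=1');
// parsed.foo → 'bar'; parsed.baz → '1'
```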

Feature-related code – especially already-written feature-related code – is a different story. Views have templates. Models expect data. Models pass information to views, and views pass information to models. Some models need parents; others expect children. And pretty much everything depends on a global-ish message broker to pass information around.

Since the code was originally written without tests, it was guaranteed to be in various states of testability, but a broad rewrite for testability is of course off the table. We’ll rewrite targeted pieces, but doing so comes with great risk. For the most part, our goal will be to write tests for what we have, then refactor cautiously once tests are in place.

We decided that the first place to start was with models, so I found the simplest model I could:

define([
  'framework/bmodel',
  'underscore'
], function (BModel, _) {
  return BModel.extend({
    options : {},
    name : 'mediaViewer',

    init : function (config, options) {
      _.extend(this.options, options);
    }
  });
});

Why do we have a model that does approximately nothing? I’m not going to attempt to answer that, though there are Reasons – but for the sake of this discussion, it certainly provides an easy place to start.

I created a new suite for model tests, and added a file to the suite to test the model. I could tell you that I naively plowed ahead thinking that I could just load the module and write some assertions, but that would be a lie.

Mocking: Squire.js

I knew from writing other tests, on this project and projects in the past, that I was going to need to “mock” some of my dependencies. For example, we have a module called ENV that is used for … well, way too much, though it’s better than it used to be. A large portion of ENV isn’t used by any given module, but ENV itself is required by essentially every model and view.

Squire.js is a really fantastic library for doing mocking in RequireJS-land. It lets you override how a certain dependency will be fulfilled; so, when a module under test asks for 'ENV', you can use Squire to say “use this object that I’ve hand-crafted for this specific test instead.”

I created an Injector module that does the work of loading Squire, plus mocking a couple of things that will be missing when the tests are executed in Node-land.

define([
  'squire',
  'jquery'
], function (Squire, $) {
  return function () {
    var injector;

    if (typeof window === 'undefined') {
      injector = new Squire('_BV');

      injector.mock('jquery', function () {
        return $;
      });

      injector.mock('window', function () {
        return {};
      });
    }
    else {
      injector = new Squire();
    }

    return injector;
  };
});

Next, I wired up the test to see how far I could get without mocking anything. Note that the main module doesn’t actually load the thing we’re going to test – first, it sets up the mocks by calling the injector function, and then it uses the created injector to require the module we want to test. Just like a normal require, the injector.require is async, so we have to let our test framework know to wait until it’s loaded before proceeding with our assertions.

define([
  'test/unit/injector'
], function (injector) {
  injector = injector();

  var MediaViewer;

  describe('MediaViewer Model', function () {
    before(function (done) {
      injector.require([
        'bv/c2013/model/mediaViewer'
      ], function (M) {
        MediaViewer = M;
        done();
      });
    });

    it('should be named', function () {
      var m = new MediaViewer({});
      expect(m.name).to.equal('mediaViewer');
    });

    it('should mix in provided options', function () {
      var m = new MediaViewer({}, { foo : 'bar' });
      expect(m.options.foo).to.equal('bar');
    });
  });
});

This, of course, still failed pretty spectacularly. In real life, a model gets instantiated with a component, and a model also expects to have access to an ENV that has knowledge of the component. Creating a “real” component and letting the “real” ENV know about it would be an exercise in inventing the universe, and this is exactly what mocks are for.

While the “real” ENV is a Backbone model that is instantiated using customer-specific configuration data, a much simpler ENV suffices for the sake of testing a model’s functionality:

define([
  'backbone'
], function (Backbone) {
  return function (injector, opts) {
    injector.mock('ENV', function () {
      var ENV = new Backbone.Model({
        componentManager : {
          find : function () {
            return opts.component;
          }
        }
      });

      return ENV;
    });

    return injector;
  };
});

Likewise, a “real” component is complicated and difficult to create, but the pieces of a component that this model needs to function are limited. Here’s what the component mock ended up looking like:

define([
  'underscore'
], function (_) {
  return function (settings) {
    settings = settings || {};

    settings.features = settings.features || [];

    return {
      trigger : function () {},
      hasFeature : function (refName, featureName) {
        return _.contains(settings.features, featureName);
      },
      getScope : function () {
        return 'scope';
      },
      contentType : settings.contentType,
      componentId : settings.id,
      views : {}
    };
  };
});

In the case of both mocks, we’ve taken some dramatic shortcuts: the real hasFeature method of a component is a lot more complicated, but in the component mock we create a hasFeature method whose return value can be easily known by the test that uses the mock. Likewise, the behavior of the componentManager’s find method is complex in real life, but in our mock, the method just returns the same thing all the time. Our mocks are designed to be configurable by – and predictable for – the tests that use them.

Knowing what to mock and when and how is a learned skill. It’s entirely possible to mock something in such a way that a unit test passes but the actual functionality is broken. We actually have pretty decent tests around our real component code, but not so much around our real ENV code. We should probably fix that, and then I can feel better about mocking ENV as needed.

So far, my approach has been: try to make a test pass without mocking anything, and then mock as little as possible after that. I’ve also made a point of trying to centralize our mocks in a single place, so we aren’t reinventing the wheel for every test.

Finally: when I first set up the injector module, I accidentally made it so that the same injector would be shared by any test that included the module. This is bad, because you end up sharing mocks across tests – violating the “only mock what you must” rule. The injector module shown above is correct in that it returns a function that can be used to create a new injector, rather than the injector itself.
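
The difference is easy to see in miniature. This toy injector only records mocks – the real Squire does the actual dependency interception – but it shows why returning a factory matters:

```javascript
// Returning a factory, rather than a single module-level injector,
// means every test file gets its own isolated set of mocks.
function createInjector() {
  var mocks = {};
  return {
    mock : function (name, impl) { mocks[name] = impl; },
    mocks : mocks
  };
}

var testA = createInjector();
var testB = createInjector();

testA.mock('ENV', { fake : true });

// testB never sees testA's ENV mock; had both tests shared one
// injector, the mock would have leaked from one test to the next.
// testB.mocks.ENV → undefined
```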

Here’s what the final MediaViewer test ended up looking like:

define([
  // This properly sets up Squire and mocks window and jQuery
  // if necessary (for running tests from the command line).
  'test/unit/injector',

  // This is a function that mocks the ENV module.
  'test/unit/mocks/ENV',

  // This is a function that mocks a component.
  'test/unit/mocks/component'
], function (injector, ENVMock, component) {
  injector = injector();

  // This will become the constructor for the model under test.
  var MediaViewer;

  // Create an object that can serve as a model's component.
  var c = component();

  // We also need to mock the ENV module and make it aware of
  // the fake component we just created.
  ENVMock(injector, { component : c });

  describe('MediaViewer Model', function () {
    before(function (done) {
      injector.require([
        'bv/c2013/model/mediaViewer'
      ], function (M) {
        MediaViewer = M;
        done();
      });
    });

    it('should be named', function () {
      var m = new MediaViewer({
        component : c
      }, {});
      expect(m.name).to.equal('mediaViewer');
    });

    it('should mix in provided options', function () {
      var m = new MediaViewer({
        component : c
      }, { foo : 'bar' });

      expect(m.options.foo).to.equal('bar');
    });
  });
});

Spying: Sinon

After my stunning success with writing 49 lines of test code to test a 13-line model, I was feeling optimistic about testing views, too. I decided to tackle this fairly simple view first:

define([
  'framework/bview',
  'underscore',
  'hbs!contentAuthorProfileInline',
  'mf!bv/c2013/messages/avatar',
  'bv/util/productInfo',
  'framework/util/bvtracker',
  'util/specialKeys'
], function (BView, _, template, msgPack, ProductInfo, BVTracker, specialKeys) {
  return BView.extend({
    name : 'inlineProfile',

    templateName : 'contentAuthorProfileInline',

    events : {
      'click .bv-content-author-name .bv-fullprofile-popup-target' : 'launchProfile'
    },

    template : template,

    msgpacks : [msgPack],

    launchProfile : function (e) {
      // use r&r component outlet to trigger full profile popup component event
      this.getTopModel().trigger( 'showfullprofile', this.model.get('Author') );

      BVTracker.feature({
        type : 'Used',
        name : 'Click',
        detail1 : 'ViewProfileButton',
        detail2 : 'AuthorAvatar',
        bvProduct : ProductInfo.getType(this),
        productId : ProductInfo.getId(this)
      });
    }
  });
});

It turned out that I needed to do the same basic mocking for this as I did for the model, but this code presented a couple of interesting things to consider.

First, I wanted to test that this.getTopModel().trigger(...) triggered the proper event, but the getTopModel method was implemented in BView, not the code under test, and without a whole lot of gymnastics, it wasn’t going to return an object with a trigger method.

Second, I wanted to know that BVTracker.feature was getting called with the right values, so I needed a way to inspect the object that got passed to it, but without doing something terrible like exposing it globally.

Enter Sinon and its spies. Spies let you observe methods as they are called. You can either let the method still do its thing while watching how it is called, or simply replace the method with a spy.
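
Conceptually, a spy is just a function that records how it was called. This stripped-down version – real Sinon spies track far more: call counts, `this` values, return values, thrown exceptions – shows the idea:

```javascript
// A bare-bones spy: calling it does nothing except record the call.
function makeSpy() {
  function spy() {
    var call = { args : Array.prototype.slice.call(arguments) };
    spy.calls.push(call);
    spy.lastCall = call;
  }
  spy.calls = [];
  spy.lastCall = null;
  return spy;
}

var trigger = makeSpy();
trigger('showfullprofile', 'author');

// trigger.lastCall.args → ['showfullprofile', 'author']
```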

I solved the first problem by defining my own getTopModel method on the view instance, and having it return an object. I gave that object a trigger method that was actually just a spy – for the sake of my test, I didn’t care what trigger did, only how it was called. Other tests [will eventually] ensure that triggering this event has the desired effect on the targeted model, but for the sake of this test, we don’t care.

Here’s what the test looks like:

describe('#launchProfile', function () {
  var spy;
  var v;

  before(function () {
    spy = sinon.spy();

    v = new InlineProfile({
      // model and component are defined elsewhere
      component : component,
      model : model
    });

    model.set('Author', 'author');

    v.getTopModel = function () {
      return {
        trigger : spy
      };
    };
  });

  it('should trigger showfullprofile event on top model', function () {
    v.launchProfile();

    expect(spy.lastCall.args[0]).to.equal('showfullprofile');
    expect(spy.lastCall.args[1]).to.equal('author');
  });
});

I solved the second problem – the need to see what’s getting passed to BVTracker.feature – by creating a BVTracker mock where every method is just a spy:

// This is a mock for BVTracker that can be used by unit tests.
define([
  'underscore'
], function (_) {
  return function (injector, opts) {
    var BVTracker = {};

    injector.mock('framework/util/bvtracker', function () {
      _([
        'error',
        'pageview',
        'feature'
      ]).each(function (event) {
        BVTracker[event] = sinon.spy();
      });

      return BVTracker;
    });

    return BVTracker;
  };
});

My test looked at the BVTracker.feature spy to see what it got when the view’s launchProfile method was called:

it('should send a feature analytics event', function () {
  v.launchProfile();

  var evt = BVTracker.feature.lastCall.args[0];

  expect(evt.type).to.equal('Used');
  expect(evt.name).to.equal('Click');
  expect(evt.detail1).to.equal('ViewProfileButton');
  expect(evt.detail2).to.equal('AuthorAvatar');
  expect(evt.bvProduct).to.equal('RatingsAndReviews');
  expect(evt.productId).to.equal('product1');
});

I’ve barely touched on what you can do with spies, or with Sinon in general. Besides providing simple spy functionality, Sinon delivers a host of functionality that makes tests easier to write – swaths of which I haven’t even begun to explore. One part I have explored is its ability to create fake XHRs and to fake whole servers, allowing you to test how your code behaves when things go wrong on the server. Do yourself a favor and spend some time reading through the excellent docs.
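
The fake-server idea can be sketched without any library at all: hand the code under test a scripted stand-in for its transport, and let the test drive the failure path. The `syncBadge` function below is invented for illustration – Sinon’s fake server does this at the XHR level, so real code under test doesn’t need to accept its transport as an argument:

```javascript
// Hypothetical code under test: updates a status element based on
// the outcome of an ajax call.
function syncBadge(ajax, el) {
  ajax('/sync_status', {
    success : function (resp) { el.text = resp.status; },
    error : function () { el.text = 'sync failed'; }
  });
}

// The test scripts the "server": this transport always errors out,
// letting us assert on the failure behavior deterministically.
var el = { text : '' };
var failingAjax = function (url, opts) { opts.error(); };

syncBadge(failingAjax, el);
// el.text → 'sync failed'
```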

What to test … and not

I’ve written tests now for a tiny handful of models and views. Setting up the mocks was a bit of a hurdle – and there were plenty of other hurdles that are too specific to our project for me to talk about them in detail – but overall, the hardest part has been figuring out what, exactly, to test. I crafted the examples above to be pretty straightforward, but reality is a lot more complicated.

Writing tests for existing code requires first understanding the code that’s being tested and identifying interesting moments in that code. If there’s an operation that affects the “public” experience of the module – for example, if the value of a model attribute changes – then we need to write a test that covers that operation’s side effect(s). If there’s code that runs conditionally, we need to test the behavior of that code when that condition is true – and when it’s not. If there are six possible conditions, we need to test them all. If a model behaves completely differently when it has a parent – and this happens far too often in our code – then we need to simulate the parent case, and simulate the standalone case.
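
For instance, given a (hypothetical) model whose behavior hinges on a component feature flag, each branch of the condition gets its own test case:

```javascript
// Hypothetical module: its behavior depends on a component feature flag.
function makeModel(component) {
  return {
    canShowMedia : component.hasFeature('main', 'media')
  };
}

// Condition true: the component reports the feature as present.
var withMedia = makeModel({ hasFeature : function () { return true; } });

// Condition false: the component lacks the feature.
var withoutMedia = makeModel({ hasFeature : function () { return false; } });

// withMedia.canShowMedia → true; withoutMedia.canShowMedia → false
```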

It can be tempting to try to test the implementation details of existing code – and difficult to realize that you’re doing it even when you don’t mean to. I try to stay focused on testing how other code might consume and interact with the module I’m testing. For example, if the module I’m testing triggers an event in a certain situation, I’m going to write a test that proves it, because some other code is probably expecting that event to get triggered. However, I’m not going to test that a method of a certain name gets called in a certain case – that’s an implementation detail that might change.

The exercise of writing unit tests against existing code proves to be a phenomenal incentive to write better code in the future. One comes to develop a great appreciation of methods that have return values, not side effects. One comes to loathe the person – often one’s past self – who authored complex, nested conditional logic. One comes to worship small methods that do exactly one thing.

So far, I haven’t rewritten any of the code I’ve been testing, even when I’ve spotted obvious flaws, and even when rewriting would make the tests themselves easier to write. I don’t know how long I’ll be able to stick to this; there are some specific views and models that I know will be nearly impossible to test without revisiting their innards. When that becomes necessary, I’m hoping I can do it incrementally, testing as I go – and that our functional tests will give me the cover I need to know I haven’t gone horribly wrong.

Spreading the love

Our team’s next step is to widen the effort to get better unit test coverage of our code. We have something like 100 modules that need testing, and their size and complexity are all over the map. Over the coming weeks, we’ll start to divide and conquer.

One thing I’ve done to try to make the effort easier is to create a scaffolding task using Grunt. Running grunt scaffold-test:model:modelName will generate a basic file that includes mocking that’s guaranteed to be needed, as well as the basic instantiation that will be required and a couple of simple tests.
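
The generation itself is just string templating. Here’s a sketch of the kind of file such a task might write out – the module paths and suite layout are illustrative, not our actual ones:

```javascript
// Build the contents of a starter test file for a named model.
function scaffoldModelTest(modelName) {
  return [
    "define([",
    "  'test/unit/injector',",
    "  'test/unit/mocks/ENV',",
    "  'test/unit/mocks/component'",
    "], function (injector, ENVMock, component) {",
    "  injector = injector();",
    "  var c = component();",
    "  ENVMock(injector, { component : c });",
    "",
    "  describe('" + modelName + " Model', function () {",
    "    // TODO: load the module with injector.require and add tests",
    "  });",
    "});"
  ].join('\n');
}

// The Grunt task would write this string into the model test suite.
var contents = scaffoldModelTest('mediaViewer');
```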

There’s another senior team member who has led an effort in the past to apply unit tests to an existing code base, and he’s already warned me to expect a bit of a bumpy road as the team struggles through the inevitable early challenges of trying to write unit tests for existing feature code. I expect there to be a pretty steep hill to climb at first, but at the very least, the work I’ve done so far has – hopefully – gotten us to the top of the vertical wall that had been standing in our way.


Austin


In August 2002, it was a little more than a year since I’d left my job at my hometown newspaper. I had just sold my car and left my two jobs as a bartender. Between the tips in my pocket and the money I’d made from selling my car – a 1996 Neon with a probably cracked head gasket – I had about $2,000 to my name. I had a bicycle, camping gear, cooking gear, maps, a handheld GPS, a flip phone, two changes of bicycle clothing, and two changes of street clothes. I was in Camden, Maine, and my parents were taking my picture in front of a bicycle shop.

My destination was Austin. My plan was to ride to Savannah, GA – via Boston, New York, and the eastern shore of Maryland – and then turn right. I didn’t really have much of a plan beyond that, except that I hoped to crash with a friend of a friend when I got to Austin. I heard they had a good bus system. I figured I could sort out a job before my money ran out.

Three weeks and 1,000 miles later, I found myself outside of New Bern, NC, more tan and more fit than I’d ever been or would ever be again. I stopped at a grocery store and picked up food for the evening, tying a bag of apples off the side of my bike. I was planning to camp just south of town, but as I neared a park in the center of town, I found myself surrounded by cyclists setting up camp. They were there for a fund-raising ride, and no, no one would mind if I camped in the park with them instead of riding another 10 miles.

I pitched my tent. I followed them to the free dinner being served for them across the street.

I rode 150 miles – unencumbered by camping gear and all the rest – in the fund-raising ride for the next two days.

I made new friends. They invited me to come stay with them for a few days in Chapel Hill.

I lived with them for a month. I borrowed their 1990 Ford Festiva for a year.

I got a job painting a house. I got a job waitressing. I got a job doing desktop publishing. I got a job making web sites.

I got good at JavaScript. I traveled the world talking about it.

I met a girl. We bought a house. We adopted a baby.

I never made it to Austin, though life has taken me there a few days at a time more times than I can count. Finally, in 2013, I even got a job there. Since February, I’ve made the trek enough times that it’s truly become a home away from home. I’ve stopped using my phone to find my way around. Waitresses recognize me. People tell me about the secret back way to work, but I already know it. I have opinions about breakfast tacos.

It’s time to finish the story I started more than a decade ago, which brings me to the point: With much love for Durham, and for the irreplaceable people who have made our lives so full here, we’re moving to Austin this spring. At last.

Refactoring setInterval-based Polling


I came across some code that looked something like this the other day, give or take a few details.

App.Helpers.checkSyncStatus = function() {
  if (App.get('syncCheck')) { return; }

  var check = function() {
    $.ajax('/sync_status', {
      dataType: 'json',
      success: function(resp) {
        if (resp.status === 'done') {
          App.Helpers.reloadUser(function() {
            clearInterval(App.get('syncCheck'));
            App.set('syncCheck', null);
          });
        }
      }
    });
  };

  App.set('syncCheck', setInterval(check, 1000));
};

The code comes from an app whose server-side code queries a third-party service for new data every now and then. When the server is fetching that new data, certain actions on the front-end are forbidden. The code above was responsible for determining when the server-side sync is complete, and putting the app back in a state where those front-end interactions could be allowed again.

You might have heard that setInterval can be a dangerous thing when it comes to polling a server*, and, looking at the code above, it’s easy to see why. The polling happens every 1000 milliseconds, whether the request was successful or not. If the request results in an error, or fails, or takes more than 1000 milliseconds, setInterval doesn’t care – it will blindly kick off another request. The interval only gets cleared when the request succeeds and the sync is done.
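
A synchronous simulation makes the pile-up concrete. Suppose each response takes three interval ticks to arrive while setInterval fires a new request on every tick (the numbers are illustrative):

```javascript
// Track how many requests are outstanding at once when responses
// take 3 "ticks" but a new request fires every tick regardless.
var responses = [];      // ticks remaining until each response lands
var maxInFlight = 0;

for (var tick = 0; tick < 6; tick++) {
  responses.push(3);     // setInterval: fire, no questions asked
  maxInFlight = Math.max(maxInFlight, responses.length);

  // age the outstanding responses by one tick; completed ones drop off
  responses = responses
    .map(function (ttl) { return ttl - 1; })
    .filter(function (ttl) { return ttl > 0; });
}

// with 3-tick responses, three requests end up overlapping at once
// maxInFlight → 3
```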

The first refactoring for this is easy: switch to using setTimeout, and only enqueue another request once we know what happened with the previous one.

App.Helpers.checkSyncStatus = function() {
  if (App.get('syncCheck')) { return; }

  var check = function() {
    $.ajax('/sync_status', {
      dataType: 'json',
      success: function(resp) {
        if (resp.status === 'done') {
          App.Helpers.reloadUser(function() {
            App.set('syncCheck', null);
          });
        } else {
          setTimeout(check, 1000);
        }
      }
    });
  };

  App.set('syncCheck', true);
  setTimeout(check, 1000);
};

Now, if the request fails, or takes more than 1000 milliseconds, at least we won’t be perpetrating a mini-DoS attack on our own server.

Our code still has some shortcomings, though. For one thing, we aren’t handling the failure case. Additionally, the rest of our application is stuck looking at the syncCheck property of our App object to figure out when the sync has completed.

We can use a promise to make our function a whole lot more powerful. We’ll return the promise from the function, and also store it as the value of our App object’s syncCheck property. This will let other pieces of code respond to the outcome of the request, whether it succeeds or fails. With a simple guard statement at the beginning of our function, we can also make it so that the checkSyncStatus function will return the promise immediately if a status check is already in progress.

App.Helpers.checkSyncStatus = function() {
  var syncCheck = App.get('syncCheck');
  if (syncCheck) { return syncCheck; }

  var dfd = $.Deferred();
  App.set('syncCheck', dfd.promise());

  var success = function(resp) {
    if (resp.status === 'done') {
      App.Helpers.reloadUser(function() {
        dfd.resolve();
        App.set('syncCheck', null);
      });
    } else {
      setTimeout(check, 1000);
    }
  };

  var fail = function() {
    dfd.reject();
    App.set('syncCheck', null);
  };

  var check = function() {
    var req = $.ajax('/sync_status', { dataType: 'json' });
    req.then( success, fail );
  };

  setTimeout(check, 1000);

  return dfd.promise();
};

Now, we can call our new function, and use the returned promise to react to the eventual outcome of the sync:

App.Helpers.checkSyncStatus().then(
  // this will run if the sync was successful,
  // once the user has been reloaded
  function() { console.log('it worked'); },

  // this will run if the sync failed
  function() { console.log('it failed'); }
);

With a few more lines of code, we’ve made our function safer – eliminating the possibility of an out-of-control setInterval – and also made it vastly more useful to other pieces of the application that care about the outcome of the sync.

While the example above used jQuery’s promises implementation, there are plenty of other implementations as well, including Sam Breed’s underscore.Deferred, which mimics the behavior of jQuery’s promises without the dependency on jQuery.
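
For the curious, the machinery the refactored code relies on fits in a few lines. This toy deferred – real implementations also handle chaining, progress notifications, and much more – shows the resolve/reject/then flow:

```javascript
// A minimal deferred: callbacks registered via then() run when the
// deferred is resolved or rejected, even if they were attached first.
function Deferred() {
  var state = 'pending', value, okCbs = [], failCbs = [];

  function flush(cbs) {
    cbs.forEach(function (cb) { cb(value); });
  }

  return {
    resolve : function (v) {
      if (state !== 'pending') { return; }
      state = 'resolved'; value = v; flush(okCbs);
    },
    reject : function (v) {
      if (state !== 'pending') { return; }
      state = 'rejected'; value = v; flush(failCbs);
    },
    then : function (ok, fail) {
      if (state === 'resolved') { ok(value); }
      else if (state === 'rejected') { if (fail) { fail(value); } }
      else { okCbs.push(ok); if (fail) { failCbs.push(fail); } }
    }
  };
}

var dfd = Deferred();
var outcome;
dfd.then(function () { outcome = 'it worked'; },
         function () { outcome = 'it failed'; });
dfd.resolve();
// outcome → 'it worked'
```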

* WebSockets are a great way to eliminate polling altogether, but in the case of this application, they weren’t an option.

Onward


My friend IM’d me a link the other day to a document he and a colleague wrote at the end of 2011, listing all the things they wanted to make happen in the world of web development in 2012.

“We did almost all of it,” he said.

“Well shit,” I said. “I should write up something like this for 2013.”

“Why do you think I showed it to you?”


A year ago I was working at Toura, a startup based in New York that was developing software to make it easy to create content-centric mobile applications. I started there as a consultant back in 2010, helping them write a saner version of the prototype they’d developed in the months before.

I got the gig because apparently I spoke with their director of development, Matt, at a meetup in Brooklyn, though I actually have no recollection of this. By this time last year, I’d been there for more than a year, and Matt and I had convinced the company a few months before that the technology we’d developed – a JavaScript framework called Mulberry that ran inside of a Phonegap wrapper – was worth open-sourcing.

I spent much of January and February speaking at meetups and events – Vancouver, Boston, Austin, Charlotte – telling people why Mulberry was something they might consider using to develop their own content-centric mobile apps. By March, though, it was clear that Toura was headed in a direction that was different from where I wanted to go. As it turned out, Matt and I gave our notice on the same day.


April was the first time in almost 10 years that I purposefully didn’t work for a solid month. I spent almost two weeks in Europe, first in Berlin and then in Warsaw for Front Trends. I sold my car – it mostly just sat in the driveway anyway – to make Melissa feel a bit better about the part where I wasn’t making any money. Tiffany was a marvelous host; we took the train together from Berlin to Warsaw for the conference, barely talking the whole way as we worked on our respective presentations. Warsaw was a two-day whirlwind of wonderful people – Melanie, Milos, Chris, Alex, Frances – memorable for my terrible laryngitis and capped by endless hours of post-conference celebration in the hotel lobby, which was magically spotless when we made our way, bleary-eyed, to the train station early the next morning.

I flew home two days later; two days after that, I started at Bocoup.


Taking a job at Bocoup was a strategic change of pace for me. For 18 months, I had been immersed in a single product and a single codebase, and I was the architect of it and the expert on it. As fun as that was, I was ready to broaden my horizons and face a steady stream of new challenges in the company of some extremely bright people.

As it turned out, I ended up focusing a lot more on the training and education side of things at Bocoup – I spent the summer developing an updated and more interactive version of jQuery Fundamentals, and worked through the summer and fall on developing and teaching various JavaScript trainings, including a really fun two-day course on writing testable JavaScript. I also worked on creating a coaching offering, kicked off a screencasts project, and had some great conversations as part of Bocoup on Air. Throughout it all, I kept up a steady schedule of speaking – TXJS, the jQuery Conference, Fronteers, Full Frontal, and more.

Though I was keeping busy and creating lots of new content, there was one thing I wasn’t doing nearly enough of: writing code.


I went to New York in November to speak at the New York Times Open Source Science Fair, and the next day I dropped in on Matt, my old boss from Toura, before heading to the airport. He’s working at another startup these days, and they’re using Ember for their front-end. Though I was lucky enough to get a guided tour of Ember from Tom Dale over the summer, I’d always felt like I wouldn’t really appreciate it until I saw it in use on a sufficiently complex project.

As it turned out, Matt was looking for some JavaScript help; I wasn’t really looking for extra work, but I figured it would be a good chance to dig in to a real Ember project. I told him I’d work for cheap if he’d tolerate me working on nights and weekends. He gave me a feature to work on and access to the repo.

The first few hours with Ember were brutal. The next few hours were manageable. The hours after that were magical. The most exciting part of all, despite all the brain hurting along the way, was that I was solving problems with code again. It felt good.


With much love to my friends and colleagues at Bocoup, I’ve realized it is time to move on. I’ll be taking a few weeks off before starting as a senior software engineer at Bazaarvoice, the company behind the ratings and reviews on the websites of companies such as WalMart, Lowe’s, Costco, Best Buy, and lots more.

If you’re in the JS world, Bazaarvoice might sound familiar because Alex Sexton, of yayQuery, TXJS, and redhead fame, works there. I’ll be joining the team he works on, helping to flesh out, document, test, and implement a JavaScript framework he’s been prototyping for the last several months.

I’ve gotten tiny peeks at the framework as Alex and the rest of the team have been working on it, starting way back in February of last year, when I flew out to Austin, signed an NDA, and spoke at BVJS, an internal conference the company organized to encourage appreciation for JS as a first-class citizen. Talking to Alex and his colleagues over the last few weeks about the work that’s ahead of them, and how I might be able to help, has quite literally given me goosebumps more than once. I can’t wait.


I look back on 2012 with a lot of mixed emotions. I traveled to the UK, to Amsterdam, to Warsaw, and to Berlin twice. I broke a bone in a foreign country, achievement unlocked. I learned about hardware and made my first significant code contribution to an open-source project in the process. I met amazing people who inspired me and humbled me, and even made a few new friends.

What I lost sight of, though, was making sure that I was seeking out new challenges and facing them head-on, and that I was seeking opportunities to learn new things, even when they were hard, even when I didn’t have to. I didn’t realize until my work with Ember just how thoroughly I’d let that slip, and how very much I need it in order to stay sane.

And so while my friend probably has his list of things he will change in the world of web development in 2013, and while maybe I’ll get around to making that list for myself too, the list I want to be sure to look back on, 12 months or so from now, is more personal, and contains one item:

Do work that requires learning new things all the time. Even if that’s a little scary sometimes. Especially if that’s a little scary sometimes. In the end you will be glad.

Two Things About Conditionals in JavaScript

| Comments

Just a quick post, inspired by Laura Kalbag’s post, which included this gem:

We shouldn’t be fearful of writing about what we know. Even if you write from the most basic point of view, about something which has been ‘around for ages’, you’ll likely be saying something new to someone.

One: There is no else if

When you write something like this …

function saySomething( msg ) {
  if ( msg === 'Hello' ) {
    console.log('Hello there');
  } else if ( msg === 'Yo' ) {
    console.log('Yo dawg');
  }
}

… then what you’re actually writing is this …

function saySomething( msg ) {
  if ( msg === 'Hello' ) {
    console.log('Hello there');
  } else {
    if ( msg === 'Yo' ) {
      console.log('Yo dawg');
    }
  }
}

That’s because there is no else if in JavaScript. You know how you can write an if statement without any curly braces?

if ( foo ) bar(); // please don't do this if you want your code to be legible

You’re doing the same thing with the else part of the initial if statement when you write else if: you’re skipping the curly braces around the second if block, the one you’re providing to else. There’s nothing wrong with else if per se, but it’s worth knowing what’s actually happening.
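
To see the equivalence concretely, here’s a runnable sketch (the function names are my own, not from the examples above) showing that a chained else if and an explicitly nested if behave identically:

```javascript
// `else` accepts any single statement; when that statement is another `if`,
// we conventionally write it as `else if`. These two functions are equivalent.
function greetChained( msg ) {
  if ( msg === 'Hello' ) {
    return 'Hello there';
  } else if ( msg === 'Yo' ) {
    return 'Yo dawg';
  }
  return '';
}

function greetNested( msg ) {
  if ( msg === 'Hello' ) {
    return 'Hello there';
  } else {
    if ( msg === 'Yo' ) {
      return 'Yo dawg';
    }
  }
  return '';
}

console.log( greetChained('Yo') === greetNested('Yo') );       // true
console.log( greetChained('Hello') === greetNested('Hello') ); // true
console.log( greetChained('Hi') === greetNested('Hi') );       // true
```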

Two: return Means Never Having to Say else

Consider some code like this:

function howBig( num ) {
  if ( num < 10 ) {
    return 'small';
  } else if ( num >= 10 && num < 100 ) {
    return 'medium';
  } else if ( num >= 100 ) {
    return 'big';
  }
}

If the number we pass to howBig is less than 10, then our function will return 'small'. As soon as it returns, none of the rest of the function will run, so we can skip the else branches entirely; our code could look like this:

function howBig( num ) {
  if ( num < 10 ) {
    return 'small';
  }

  if ( num < 100 ) {
    return 'medium';
  }

  if ( num >= 100 ) {
    return 'big';
  }
}

But wait – if the first if statement isn’t true, and the second if statement isn’t true, then num must be at least 100, which means we will always return 'big'. That means the third if statement isn’t even required:

function howBig( num ) {
  if ( num < 10 ) {
    return 'small';
  }

  if ( num < 100 ) {
    return 'medium';
  }

  return 'big';
}
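
For good measure, here’s a standalone check of the boundaries, repeating the final version of the function so it runs on its own:

```javascript
function howBig( num ) {
  if ( num < 10 ) {
    return 'small';
  }

  if ( num < 100 ) {
    return 'medium';
  }

  return 'big';
}

// 10 is the smallest 'medium'; 100 is the smallest 'big'
console.log( howBig(9) );    // 'small'
console.log( howBig(10) );   // 'medium'
console.log( howBig(99) );   // 'medium'
console.log( howBig(100) );  // 'big'
```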

Note: this post was edited to improve a couple of the examples and to fix some typos.

This Is the Cigarette

This is the cigarette I smoked* on Wednesday after I got out of a meeting in Boston and went to my desk and read my messages and learned that our birthmother “match” had fallen through.

The last three weeks have been among the happiest, most exciting, most terrifying times I can remember. Saying that we are sad and disappointed and et cetera doesn’t really cover it, but, well, there it is. Our search will continue.

* Don’t worry, Mom, I don’t usually smoke. Desperate times, desperate measures.

On Choosing a Syntax Highlighting Scheme for Your Next Presentation

| Comments

This is a projector screen:

You will notice that it is white, or some reasonable approximation thereof. It is probably made of a reflective material that sparkles a bit when light shines on it. Still: white.

Do you know what color this screen is when you use a projector to display this image onto it?

It is still white. Crazy, I know! The thing is, projectors cannot project black; they can only withhold light from a region that you intend to be black.

Chances are you are reading this on an LCD screen of some sort, where the rules are completely different: they usually start out essentially black, not white, and pixels are brightened as required. The pixels that start out dark can generally stay pretty dark.

On a projection screen, on the other hand, the appearance of black is nothing more than an optical illusion, made possible by the projector projecting brightness everywhere else.

What does this mean? Lots of things, but in particular, it means that you should never, ever, ever use a color scheme with a dark background – no matter how high-contrast and good it looks on your monitor – if you will be presenting using a projector that is projecting onto a white screen. At least, assuming that you intend for your audience to be able to actually read the code.

Presentation Color Schemes That I Have Loved

  • Ben Alman’s TextMate Theme: Ben has tailored this to be incredible for presenting about JS code.
  • Tomorrow Theme: The light-background flavor is decent, but could probably stand to be higher-contrast, at least for some languages.

Show & Tell

| Comments

I spoke at the Times Open Source Science Fair a couple of weeks ago. I’ll admit that I was pretty skeptical of the concept when I was first asked, but as someone who used to work as an editor at a tiny newspaper in upstate New York, I wasn’t about to say no when the Times asked me to come say hi.

A few days before the event, I got an email asking me for information about what I’d be showing off at my booth. Booth? Wat? They weren’t kidding about the science fair thing, but what the heck was I going to show at a booth?

It turns out this is basically the best idea ever. I recruited my Bocoup colleague Rick Waldron to join me, and together we spent a whirlwind hour showing off robots powered by JavaScript to an endless stream of people walking up to our booth. Rick did a great job of setting up a demo that people could play with, and they took turns moving sliding potentiometers that controlled servos that moved an arm with a gripper at the end, trying to pick up Bocoup stickers. Ours was one of about a dozen booths showing off open-source projects, and the room was a wonderful madhouse.

After a break for dinner, Jeremy Ashkenas, Zach Holman, and I each gave 20-minute talks, but the talks were really just icing on the evening. The “science fair” format promoted such intentional interaction, in a way that traditional conferences just can’t, no matter how great the hall track or the parties may be. The format invited and encouraged attendees to talk to the presenters – indeed, if they didn’t talk to the presenters, there wasn’t much else for them to do. By the time the official talks came around, a super-casual, super-conversational atmosphere had already been established, and the energy that created was tangibly different from any event I’ve been to before.

I love conferences, and the sharing of knowledge that happens there, and there’s a whole lot to be said for their speaker-audience format – don’t get me wrong. But I’d also love to see more events figure out how to integrate this show and tell format. “Booths” don’t need to mean “vendors trying to sell things” – they can actually be a great opportunity to facilitate conversation, and to let open source contributors show off their hard work.

Recent Talks

| Comments

A post from Alex Russell reminded me that I’ve given a number of talks in the last few months, and some of them even have video on the internet.

I’ve been ridiculously spoiled to get to travel all over the place these last few months – San Francisco, New York, Amsterdam, Berlin, Brighton – and speak at some truly first-class conferences, sharing the stage, sharing meals, and sharing beers with some seriously amazing folks. My recent news means I’ll be doing a lot less travel for the next little bit, but I’m ever-so-grateful for the opportunities I’ve had and the people I’ve gotten to see and meet these last few months.

Writing Testable JavaScript

This is the first talk I’ve developed that I’ve managed to give several times in rapid succession: three times in six days, including at Full Frontal, the online JS Summit, and to a group of developers at the New York Times. There’s no video yet, but the slides are here, and there should be video soon, I think.

JS Minty Fresh

A fun talk at Fronteers about eliminating code smells from your JavaScript. The best feedback I got afterwards was from an attendee who said they felt at the beginning of the talk like the material was going to be too basic for them, and by the end of the talk, the material was nearly over their head. “I guess that makes you a good teacher,” he said. Aw!

Rebecca Murphey | JS Minty Fresh: Identifying and Eliminating Smells in Your Code Base | Fronteers 2012 from Fronteers on Vimeo.

Slides

If you like this, you should also check out the screencasts we released at Bocoup earlier this week.

Beyond the DOM: Sane Structure for JS Apps

An update of my code organization talk, delivered at the jQuery Conference in San Francisco. It’s fun for me to see how my thinking around code organization has evolved and improved since my first, now-almost-embarrassing talk at the 2009 jQuery Conference in Boston.

Slides

Johnny Five: Bringing the JavaScript Culture to Hardware

This one was from the New York Times Open Source Science Fair, a fun night of about a dozen folks presenting open-source projects at “booths,” followed by short talks about open source by Jeremy Ashkenas, me, and Zach Holman. The slides don’t necessarily stand on their own very well, but the short version is: use JavaScript to make things in the real world, because it’s ridiculously easy and ridiculously fun.

Getting Better at JavaScript

I put this together as a quickie for the Berlin UpFront user group – it was the first talk I gave with my broken foot, and the last talk I’d give for weeks because I lost my voice a couple of hours later. There’s not a whole lot here, but it was a fun talk and a fun group, and a topic that I get plenty of questions about. Again, no video, but here are the slides:

This Is the Cup of Coffee

| Comments

This is the cup of coffee I was making earlier this week when Melissa gave me a thumbs-up while she talked on the phone to a woman in Pennsylvania who had just finished telling Melissa that yes, indeed, after 10 weeks or three years of waiting depending on how you count, a 29-year-old woman who’s due to give birth in Iowa at the beginning of February has decided that Melissa and I should be so lucky as to get to be her baby girl’s forever family.

Most people get to post ultrasound pictures on Twitter at moments like these, but for now this will suffice to remind me of the moment I found out I would get to be a mom. My head is spinning, and while on the one hand it’s a little difficult to fathom that this is all just 10 weeks away, on the other hand I’m counting down the days.

Our adoption will be an open one; the meaning of “open” varies widely, but in our case it means we talked to the birth mother before she chose us, we’ll be meeting her in a few weeks, we’ll do our very best to be in Iowa for the delivery, and we’ll stay in touch with letters and pictures afterwards. Melissa and I are grateful that we’ll be able to adopt as a couple, though we are saddened that we have to adopt outside of our home state of North Carolina in order to do so. It’s important to us that our child have both of us as her legal parents, and I don’t hesitate to say that it’s downright shitty that we have to jump through significant legal and financial hoops – and stay in a hotel in Iowa with a newborn for an unknown number of days – to make it so. It is what it is, and good people are working and voting to make it better, and it can’t happen fast enough.

I’ve learned a lot about adoption these past few months, and I know a lot of people have a lot of questions, some of which they’re reluctant to ask. If you’re interested in learning more, I highly recommend In On It: What Adoptive Parents Would Like You to Know About Adoption. You’re also welcome to ask me questions if you see me in real life or on the internets – I can’t promise I’ll know the answers, but I promise to do my best.

In the meantime, wish us luck :)