Adventures in JavaScript Development

So You’re Going on a Podcast


I got a fair bit of experience recording a podcast back in the yayQuery days; when I decided to start another one, the technical side of it felt pretty familiar, minus the part where yayQuery was crazy enough to also record video. Back then, we were mostly talking to each other, so it was easy for us all to be on the same page about the technical requirements: headphones always, typing never (at least when you’re also talking), and buy a good microphone. We also had 20-some episodes to get good at working with each other.

I’ve been recording the TTL Podcast for a few months now; it’s a show with a different guest every week, so the technical challenges change, and each episode is unlike the last. It has been great fun, and I can’t believe how lucky I am: I get to ask all of these people to talk to me, and they keep saying yes.

I’ve learned a few things about how to be a good podcast guest along the way, but I haven’t always been good at sharing them ahead of time with guests. This, then, is really just an attempt to write them down in a single place, with the added benefit that maybe they will be useful to others. This is mostly focused on being a guest of a show; I have lots to say about being a host, but I feel like that’s a lot more complicated than this.

Technical

  • Wear headphones, preferably the best ones you own. The iPhone headphones aren’t nice, and actually leak noise like crazy. I alternate between using Klipsch and Shure (sorry, not sure of the model, so no link) in-ear headphones, both of which have a nice silicone seal to keep the sound I’m hearing in my ears and out of my microphone.
  • Use the best microphone you can. A MacBook’s built-in microphone is decent enough in a pinch, but it’s probably worth springing for an external microphone. I used the AT2020 for most of the yayQuery episodes, but I stepped up to a Shure SM7B to record TTL at the suggestion of Alex Sexton. The AT2020 is a USB mic, perfectly fine and very reasonably priced; the Shure sounds absolutely lovely but is a bit more of an investment. If you don’t want to spring for a mic, see if someone in your office has one you can borrow. If you have questions about audio gear, I am mostly clueless beyond what I’ve written above.
  • If you’re a guest, always plan to record your side of the conversation. (If you’re a host, always plan to record all sides of the conversation; I’ve lost an episode by failing to do this.) On a Mac, QuickTime has a simple audio recording feature. There’s also plenty of other software that will do the same.

Preparation

  • Listen to at least one episode of the show before you go on (and possibly before you even agree to go on).
  • Ask the host what they want to talk to you about, and try to have a decent sense of the outline of the conversation before you start. If the host doesn’t have great guidance – she’s almost certainly less familiar with your work than you are – it’s generally very welcome for you to propose an outline yourself.
  • If you have access to a soundproofed room, consider using it. Avoid large, echo-y rooms, or rooms that will be subject to a lot of hallway or construction noise.

The Show

  • Consider your biological needs before you start recording :) Except for a live show, you’re always welcome to pause if you need to step away, but you may find yourself distracted in the meantime. Make sure you have water nearby!
  • Silence phone notifications (no vibrating phones; silence means silent); on your computer, close Twitter, your mail client, etc.; option-click the Notification Center icon in your Mac menu bar to put it in do-not-disturb mode (thanks Ralph Holzmann for that tip).
  • Unless it’s a live show, feel free to pause and try again if you make a mistake or say something wrong. It’s important that you announce that you’re starting over, then pause, then start over – that way it’s easy to fix in post-production.
  • Remember that a podcast is a conversation, not a presentation. Unlike a presentation, you’re conversing with a host who knows the audience and can ask you the questions that will help that audience connect with you. Use a video chat so you can watch the host for visual cues that she might want to interject.

That’s my list, though undoubtedly I’ve left things out. If you have stuff to add, please share in the comments.

Browser Testing and Code Coverage With Karma, Tape, and Webpack


We recently set up a new project at Bazaarvoice for centralizing common UI modules. We started by using node-tap for unit tests, but given that these are UI modules, we quickly switched to using tape, because it has a fairly easy browser testing story with the help of Karma.

One thing that node-tap provided that tape did not provide out of the box was the ability to measure the code coverage of unit tests. Karma does provide this, but getting it hooked up while using Webpack – which is our build tool of choice these days – wasn’t quite as clear as I would have liked. If you’re looking to use Karma, tape, and Webpack, then hopefully this post will help you spend a bit less time than I did.

What You’ll Need

By the time it was all said and done, I needed to npm install the following modules:

  • karma
  • karma-phantomjs-launcher
  • karma-chrome-launcher
  • karma-tap
  • karma-webpack
  • karma-coverage
  • istanbul-instrumenter-loader
  • tape
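For reference, all of the above can be pulled in with a single dev-dependency install; something like this (exact versions omitted):

```shell
# Install the test stack as devDependencies in the project root.
npm install --save-dev karma karma-phantomjs-launcher karma-chrome-launcher \
  karma-tap karma-webpack karma-coverage istanbul-instrumenter-loader tape
```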

The directory structure was simple:

  • a root directory, containing karma.conf.js and package.json
  • a lib subdirectory, containing module files
  • a test/unit subdirectory, containing the unit tests

An example application file at lib/global/index.js looked like this:

/**
 *  @fileOverview Provides a reference to the global object
 *
 *  Functions created via the Function constructor in strict mode are sloppy
 *  unless the function body contains a strict mode pragma. This is a reliable
 *  way to obtain a reference to the global object in any ES3+ environment.
 *  see http://stackoverflow.com/a/3277192/46867
 */
'use strict';

module.exports = (new Function('return this;'))();

An example test in test/unit/global/index.js looked like this:

var test = require('tape');
var global = require('../../../lib/global');

test('Exports window', function (t) {
  t.equal(global, window);
  t.end();
});

Testing CommonJS Modules in the Browser

The applications that consume these UI modules use Webpack, so we author the modules (and their tests) as CommonJS modules. Of course, browsers can’t consume CommonJS directly, so we need to generate files that browsers can consume. There are several tools we can choose for this task, but since we’ve otherwise standardized on Webpack, we wanted to use Webpack here as well.

Since our goal is to load the tests in the browser, we use the test file as the “entry” file. Webpack processes the dependencies of an entry file to generate a new file that contains the entry file’s contents as well as the contents of its dependencies. This new file is the one that Karma will load into the browser to run the tests.

Getting this to happen is pretty straightforward with the karma-webpack plugin to Karma. The only catch was the need to tell Webpack how to deal with the fs dependency in tape. Here’s the initial Karma configuration that got the tests running:

var webpack = require('webpack');

module.exports = function(config) {
  config.set({
    plugins: [
      require('karma-webpack'),
      require('karma-tap'),
      require('karma-chrome-launcher'),
      require('karma-phantomjs-launcher')
    ],

    basePath: '',
    frameworks: [ 'tap' ],
    files: [ 'test/**/*.js' ],

    preprocessors: {
      'test/**/*.js': [ 'webpack' ]
    },

    webpack: {
      node: {
        fs: 'empty'
      }
    },

    webpackMiddleware: {
      noInfo: true
    },

    reporters: [ 'dots' ],
    port: 9876,
    colors: true,
    logLevel: config.LOG_INFO,
    autoWatch: true,
    browsers: ['Chrome'],
    singleRun: false
  })
};

However, as I mentioned above, I wanted to get code coverage information. Karma offers the karma-coverage plugin, but that alone was insufficient in Webpack land: it would end up instrumenting the whole Webpack output – including the test code itself! – and thus reporting highly inaccurate coverage numbers.

I ended up finding a karma-webpack issue showing that someone else had already solved this exact problem by creating a Webpack loader to instrument modules at build time. By adjusting our Webpack configuration to apply this loader only to application modules – not to test code or vendor code – the Webpack output ends up properly instrumented for the karma-coverage plugin to work with. Our final Karma config ends up looking like this:

var webpack = require('webpack');

module.exports = function(config) {
  config.set({
    plugins: [
      require('karma-webpack'),
      require('karma-tap'),
      require('karma-chrome-launcher'),
      require('karma-phantomjs-launcher'),
      require('karma-coverage')
    ],

    basePath: '',
    frameworks: [ 'tap' ],
    files: [ 'test/**/*.js' ],

    preprocessors: {
      'test/**/*.js': [ 'webpack' ]
    },

    webpack: {
      node: {
        fs: 'empty'
      },

      // Instrument code that isn't test or vendor code.
      module: {
        postLoaders: [{
          test: /\.js$/,
          exclude: /(test|node_modules)\//,
          loader: 'istanbul-instrumenter'
        }]
      }
    },

    webpackMiddleware: {
      noInfo: true
    },

    reporters: [
      'dots',
      'coverage'
    ],

    coverageReporter: {
      type: 'text',
      dir: 'coverage/'
    },

    port: 9876,
    colors: true,
    logLevel: config.LOG_INFO,
    autoWatch: true,
    browsers: ['Chrome'],
    singleRun: false
  })
};

Even with the coverage hiccup, the speed with which I was able to get Karma set up the way I wanted – and working with TravisCI – was nothing short of breathtaking. I’m late to the Karma party, but I had no idea it could be this easy. If you haven’t checked it out yet, you should.

The TTL Podcast


Over the past several months, I’ve been on a few different podcasts, plus I’ve been having a lot of fun doing office hours, and generally talking a lot with other people who do the kind of work that I do. I’ve particularly enjoyed talking about a subject that Alex Sexton dubbed Front-End Ops.

It has occurred to me that a) I’m really super fortunate to get to have conversations about this stuff with super-smart people; b) there aren’t yet a lot of great sources of information about front-end ops in the real world; and c) I used to be on a podcast and that sure was fun.

To that end, I threw a tweet out into the world to see who might be willing to talk to me and let me record the conversation. I got enough great responses that I decided to don my podcasting hat again for a little bit, and the result is the TTL Podcast.

ttlpodcast.com

If you’re a mid-level front-end dev looking to level up, I’d humbly suggest that this is very much a show for you – you’ll get to listen in on the thought process of some of the best front-end devs I know. That said, it’s not just a show for those aspiring to take the front-end world by storm; it’s also a chance for those who are already in the trenches, doing daily battle with WebDriver and trying to shave 10 more milliseconds off page load, to commiserate asynchronously. I know I personally have learned a ton – in some cases I’ve seen a new angle on a problem, and in other cases I’ve had some serious Developer Guilt assuaged.

I’ve released three episodes so far – conversations with Alex, Burak Yiğit Kaya (Disqus), and Daniel Espeset and Seth Walker (Etsy). More episodes are in the pipeline, including developers from Walmart Labs, Yammer, FT Labs, and The Guardian.

While the initial focus has been on front-end ops, I can see the scope widening over time to cover, generally, the tools and challenges of doing front-end dev at significant scale. If you or someone you know would be a good person to talk to about that sort of thing, I hope you’ll let me know.

While I’m here, I want to give huge and sincere thanks to SauceLabs and Travis CI for their support of the show; to Una Kravets for finding time in her busy life to make me a website; to my sister, who’s been kind enough to pitch in with the editing; and to Bazaarvoice for giving me the freedom to take on a project like this.

A Baseline for Front-End [JS] Developers: 2015


It’s been almost three years since I wrote A Baseline for Front-End Developers, probably my most popular post ever. Three years later, I still get Twitter mentions from people who are discovering it for the first time.

In some ways, my words have aged well: there is, shockingly, nothing from that 2012 post that has me hanging my head in shame. Still, though: three years is a long time, and a whole lot has changed. In 2012 I encouraged people to learn browser dev tools and get on the module bandwagon; CSS pre-processors and client-side templating were still worthy of mention as new-ish things that people might not be sold on; and JSHint was a welcome relief from the #getoffmylawn admonitions – accurate though they may have been – of JSLint.

It’s 2015. I want to write an update, but as I sit down to do just that, I realize a couple of things. One, it’s arguably not fair to call this stuff a “baseline” – if you thought that about the original post, you’ll find it doubly true for this one. One could argue we should consider the good-enough-to-get-a-job skills to be the “baseline.” But there are a whole lot of front-end jobs to choose from, and getting one doesn’t establish much of a baseline. For me, I don’t want to get a job; I want to get invited to great jobs. I don’t want to go to work; I want to go to work with talented people. And I don’t want to be satisfied with knowing enough to do the work that needed to be done yesterday; I want to know how to do the work that will need to get done tomorrow.

Two, my world has become entirely JavaScript-centric: knowledge of the ins and outs of CSS has become less and less relevant to my day-to-day work, except where performance is concerned. I know there are plenty of very smart front-end developers for whom this isn’t true, but I have also noticed a growing gulf between those who focus on CSS and those who focus on JavaScript. That’s probably a subject for another blog post, but I bring it up just to say: I am woefully unequipped to make recommendations about what you should know about CSS these days, so I’m not going to try.

In short: if this list of things doesn’t fit your vision of the front-end world, that’s OK! We’re both still good people. Promise.

JavaScript

Remember back in 2009 when you read that HTML5 would be ready to use in 2014, and that seemed like a day that would never come? If so, you’re well prepared for the slow-but-steady emergence of ES6 (which is now called ES2015, a name that is sure to catch on any day now), the next version of JavaScript. Getting my bearings with ES6 – er, ES2015 – is hands-down my biggest JavaScript to-do item at the moment; it is going to be somewhere between game-changing and life-altering, what with classes, real privacy, better functions and arguments, import-able modules, and so much more. Those who are competent and productive with the new syntax will have no trouble standing out in the JS community. Required reading:

  • Understanding ES6, a work-in-progress book being developed in the open by Nicholas Zakas.
  • BabelJS, a tool that lets you write ES6 today and “compile” it to ES5 that will run in current browsers. They also have a good learning section.
  • ES6 Rocks, with various posts that explore ES6 features, semantics, and gotchas.

Do you need to be an ES6/ES2015 expert? Probably not today, but you should know at least as much about it as your peers, and possibly more. It’s also worth at least entertaining the possibility of writing your next greenfield project using ES6; the future will be here before you know it.
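To make that a bit more concrete, here’s a small, entirely hypothetical sketch of a few of those features – a class, an arrow function, and template strings – written in ES6 and compilable down to ES5 with a tool like Babel:

```javascript
// A small ES6/ES2015 sketch: a class, an arrow function, and template strings.
// (Hypothetical example; run it through Babel to target ES5 browsers.)
class Greeter {
  constructor(name) {
    this.name = name;
  }

  greet() {
    return `Hello, ${this.name}!`; // template string
  }
}

const names = ['Alice', 'Bob'];
const greetings = names.map(name => new Greeter(name).greet()); // arrow function

console.log(greetings.join(' ')); // → Hello, Alice! Hello, Bob!
```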

New language features aside, you should be able to speak fluently about the asynchronicity of JavaScript, and using callbacks and promises to manage it. You should have well-formed opinions about strategies for loading applications in the browser and communicating between pieces of an application. You should maybe have a favorite application development framework, but not at the expense of having a general understanding of how other frameworks operate, and the tradeoffs you accept when you choose one.
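As a sketch of the callbacks-and-promises fluency I mean, here’s a hypothetical callback-style API wrapped in a promise, so that callers can chain work and centralize error handling in a single .catch():

```javascript
// A hypothetical async API in Node-style callback form.
function fetchUser(id, callback) {
  setTimeout(function () {
    if (id < 1) {
      return callback(new Error('invalid id'));
    }
    callback(null, { id: id, name: 'user' + id });
  }, 0);
}

// The same API wrapped in a promise; errors flow to a single .catch().
function fetchUserAsync(id) {
  return new Promise(function (resolve, reject) {
    fetchUser(id, function (err, user) {
      if (err) {
        reject(err);
      } else {
        resolve(user);
      }
    });
  });
}

fetchUserAsync(1)
  .then(function (user) { console.log('loaded ' + user.name); })
  .catch(function (err) { console.error(err.message); });
```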

Modules & Build Tools

There’s no debate that modules should be the building blocks of client-side web applications. Back in 2012, there was lots of debate about what kind of modules we should use for building apps destined for the browser – AMD or CommonJS. The somewhat-gross UMD wrapper arose to try to avoid answering the question while still allowing code reuse – because hey, what’s a few more bytes between friends?

I don’t feel like this debate is anywhere near resolved, but this is the area where I feel like we’ve seen the largest transformation since my 2012 article, though perhaps that’s a reflection of my personal change of heart. I’m not ready to say that I’m done with AMD, but let’s just say I’m floored by how practical it has become to develop and deploy web applications using CommonJS, including modules imported with npm.

With much love for all that RequireJS has contributed to the module conversation, I’m a bit enamored of webpack right now. Its features – such as easy-to-understand build flags – feel more accessible than RequireJS. Its hot-swap builds via its built-in dev server make for a fast and delightful development story. It doesn’t force an AMD vs. CommonJS decision, because it supports both. It also comes with a ton of loaders, making it fairly trivial to do lots of common tasks. Browserify is worth knowing about, but lags far behind Webpack in my opinion. Smart people I trust tell me that systemjs is also a serious contender in this space, but I haven’t used it yet, and its docs leave me wanting. Its companion package manager jspm is intriguing, allowing you to pull in modules from multiple sources including npm, but I’m a bit wary of combining those two concerns. Then again, I never thought I’d break up with AMD, yet here I seem to be, so we’ll see.

I still long for a day when we stop having module and build tool debates, and there is a single module system and sharing code between arbitrary projects becomes realistic and trivial without the overhead of UMD. Ideally, the arrival of ES6 modules will bring that day – and transpilers will fill in the gaps as the day draws closer – but I find it just as likely that we’ll keep finding ways to make it complicated.

In the meantime, front-end developers need to have an opinion about at least a couple of build tools and the associated module system, and that opinion should be backed up by experience. For better or worse, JavaScript is still in a state where the module decision you make will inform the rest of your project.

Testing

Testing of client-side code has become more commonplace, and a few new testing frameworks have arrived on the scene, including Karma and Intern. I find Intern’s promise-based approach to async testing to be particularly pleasing, though I confess that I still write most of my tests using Mocha – sometimes I’m just a creature of habit.

The main blocker to testing is the code that front-end devs tend to write. I gave a talk toward the end of 2012 about writing testable JavaScript, and followed up with an article on the topic a few months later.

The second biggest blocker to testing remains the tooling. Webdriver is still a huge pain to work with. Continuous automated testing of a complex UI across all supported browsers continues to be either impossible, or so practically expensive that it might as well be impossible – and never mind mobile. We’re still largely stuck doing lightweight automated functional tests on a small subset of supported browser/device/OS combinations, and leaning as hard as we can on lower-level tests that can run quickly and inexpensively. This is a bummer.

If you’re interested in improving the problem of untested – or untestable – code, the single most valuable book you can read is Working Effectively with Legacy Code. The author, Michael Feathers, defines “legacy code” as any code that does not have tests. On the topic of testing, the baseline is to accept the truth of that statement, even if other constraints are preventing you from addressing it.

Process Automation

You, hopefully, take for granted the existence of Grunt for task automation. Gulp and Broccoli provide a different approach to automating builds in particular. I haven’t used Broccoli, and I’ve only dabbled in Gulp, but I’ve definitely come to appreciate some of the limitations of Grunt when it comes to automating complex tasks that depend on other services – especially when that task needs to run thousands of times a day.

The arrival of Yeoman was a mere 45 days away when I wrote my 2012 post. I confess I didn’t use it when it first came out, but recently I’ve been a) starting projects from scratch using unfamiliar tech; and b) trying to figure out how to standardize our approach to developing third-party JS apps at Bazaarvoice. Yeoman really shines in both of these cases. A simple yo react-webpack from the command line creates a whole new project for you, with all the bells and whistles you could possibly want – tests, a dev server, a hello world app, and more. If React and Webpack aren’t your thing, there’s probably a generator to meet your needs, and it’s also easy to create your own.

Given that Yeoman is a tool that you generally use only at the start of a project, and given that new projects don’t get started all the time, it’s mostly just something worth knowing about. Unless, of course, you’re also trying to standardize practices across projects – then it might be a bit more valuable.

Broccoli has gotten its biggest adoption as the basis for ember-cli, and folks I trust suggest that pairing may get a makeover – and a new name – to form the basis of a Grunt/Yeoman replacement in the future. Development on both Grunt and Yeoman has certainly slowed down, so it will be interesting to see what the future brings there.

Code Quality

If you, like me, start to twitch when you see code that violates a project’s well-documented style guide, then tools like JSCS and ESLint are godsends, and neither of them existed for you to know about them back in 2012. They both provide a means to document your style guide rules, and then verify your code against those rules automatically, before it ever makes it into a pull request. Which brings me to …

Git

I don’t think a whole lot has changed in the world of Git workflows since 2012, and I’d like to point out GitHub still hasn’t made branch names linkable on the pull request page, for f@#$s sake.

You should obviously be comfortable working with feature branches, rebasing your work on the work of others, squashing commits using interactive rebase, and doing work in small units that are unlikely to cause conflicts whenever possible. Another Git tool to add to your toolbox if you haven’t already is the ability to run hooks – specifically, pre-push and pre-commit hooks to run your tests and execute any code quality checks. You can write them yourself, but tools like ghooks make it so trivial that there’s little excuse not to integrate them into your workflow.
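As a sketch, a minimal hand-rolled pre-push hook (saved as .git/hooks/pre-push and made executable) can simply chain the quality checks; this assumes lint and test scripts exist in your package.json:

```shell
#!/bin/sh
# .git/hooks/pre-push (sketch): run lint and tests before every push.
# A non-zero exit from either command aborts the push.
npm run lint && npm test
```

Tools like ghooks do the same thing, but version the hooks alongside the project so every contributor gets them automatically.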

Client-Side Templating

This may be the thing I got the most wrong in my original post, for some definition of “wrong.” Client-side templating is still highly valuable, of course – so valuable that template strings are built in to ES2015 – but there can be too much of a good thing. It’s been a hard-earned lesson for lots of teams that moving all rendering to the browser has high costs when it comes to performance, and thus the “generate all the HTML client-side” approach has rightfully fallen out of favor. Smart projects are now generating HTML server-side – maybe even pre-generating it, and storing it as static files that can be served quickly – and then “hydrating” that HTML client-side, updating it with client-side templates as events warrant.

The new expectation here – and I say this to myself as much as to anyone else – is that you are considering the performance implications of your decisions, and maybe not restricting yourself quite so thoroughly to the realm of the browser. Which, conveniently, leads to …

Node

You say you know JavaScript, so these days I expect that you can hop on over to the Node side of things and at least pitch in, if not get knee-deep. Yes, there are file systems and streams and servers – and some paradigms that are fundamentally different from front-end dev – but front-end developers who keep the back end at arm’s length are definitely limiting their potential.

Even if your actual production back-end doesn’t use Node, it’s an invaluable tool when it comes to keeping you from getting blocked by back-end development. At the very least, you should be familiar with how to initialize a Node project; how to set up an Express server and routes; and how to use the request module to proxy requests.

The End

Thanks to Paul, Alex, Adam, and Ralph for their thorough review of this post, and for generously pointing out places where I could do better. Thank them for the good parts, and blame any errors on me.

With that, good luck. See you again in another three years, perhaps.

Writing Conference Proposals


I’ve had several office hours sessions in the last couple of weeks, and one topic that comes up again and again is how to write a talk description.

If you think about it, conference organizers don’t have a whole lot to go on when they’re choosing talks, unless they already know who you are. Even if your name is well-known, though, organizers may still not know who you are – lots of conferences are taking a blind approach to selecting speakers. That means that no matter who you are, your talk description might be the only thing organizers have on which to base their decision. When you give your talk, you’ll need to engage your audience; the abstract is your chance to engage the organizer.

After answering the question several times, I’ve realized that I have a pretty explainable – some might call it formulaic – approach to writing abstracts for a certain common type of talk. It works well for talks about how you solved a problem, talks about how you came to learn a thing you didn’t know, and even “10 things you didn’t know about X” talks. I thought I’d try to explain it here.

Paragraph 1: The context

The first paragraph is where you set the scene, and make it clear to your reader that they have been in the situation you’re going to talk about. This is where you establish a connection, baiting a hook that you’ll set later.

You’ve got the hang of this whole JavaScript thing. Your code works on ancient browsers, and positively sings on new ones. AMD, SPA, MVC – you can do that stuff in your sleep.

Paragraph 2: Well, actually …

The second paragraph is where you break the bad news, which savvy readers may already know: the thing you laid out in the first paragraph is more complicated than it seems, or has downsides that people don’t realize, or generally is a bad approach … but only with the benefit of hindsight, which you just happen to have.

But now your users are trying to type in your Very Important Form, and nothing is showing up; that widget that’s supposed to end up in a certain div is showing up somewhere completely different; and, rarely but not never, your app just doesn’t load at all. You thought you had the hang of this whole JavaScript thing, but now you’re in the world of third-party JavaScript, where all you control is a single script tag and where it’s all but impossible to dream up every hostile environment in which your code will be expected to work. “It works on my machine” has never rung quite so hollow.

Paragraph 3: The promise

You’ve successfully induced a bit of anxiety in your reader – and a strong desire to know what they don’t know. The hook is set, so the last paragraph is the time to promise to relieve that anxiety – but only if your talk is chosen!

In this talk, we’ll take a look at some of the delightful bugs we’ve had to solve at Bazaarvoice while working on the third-party JavaScript app that collects and displays ratings and reviews for some of the world’s largest retailers. We’ll also look at some strategies for early detection – and at some scenarios where you are just plain SOL.

Next

It turns out that in the process of writing your abstract, you’ve also written the most basic outline for your talk: on stage, you’ll want to set the context, explain the complexity, then deliver on your promise. Pretty handy, if you ask me.

Office Hours for Aspiring Speakers


I’m expecting that my 2015 is going to include a bit less speaking than in years past, so I’m hoping I can use some of that newly available time to help new speakers find their way to the stage. To that end, I’m kicking off “office hours” this week: a few slots a week where aspiring and up-and-coming speakers can borrow my ear for a bit to talk about their ideas, their fears, their questions, and their ambitions.

This idea isn’t mine; I was inspired by a similar effort by Jen Myers, who has been offering mentoring sessions to aspiring speakers since 2013. I’m forever indebted to the folks who helped me get through my first talk, and I’ve been honored to give a gentle nudge to several other speakers in the years since.

If you’re interested, you can sign up here. There’s no script or agenda, and – at least to start with – I’m not going to try to suggest who should or shouldn’t sign up. If you think it would be useful to you, go for it! My only ask is that you be seriously interested in giving coherent, informative, engaging talks on technical topics.

Writing Unit Tests for Existing JavaScript


My team at Bazaarvoice has been spending a lot of time lately thinking about quality and how we can have greater confidence that our software is working as it should.

We’ve long had functional tests in place that attempt to ask questions like “When a user clicks a button, will The Widget do The Thing?” These tests tell us a fair amount about the state of our product, but we’ve found that they’re brittle – even after we abstracted away the CSS selectors that they rely on – and that they take approximately forever to run, especially if we want to run them in all of the browsers we support. The quality of the tests themselves is all over the map, too – some of them are in fact unit tests, not really testing anything functional at all.

A few months ago we welcomed a new QA lead to our team as part of our renewed focus on quality. Having a team member who is taking a thoughtful, systematic approach to quality is a game-changer – he’s not just making sure that new features work, but rather has scrutinized our entire approach to delivering quality software, to great effect.

One of the things he has repeatedly emphasized is the need to push our tests down the stack. Our functional tests should be black-box – writing them shouldn’t require detailed knowledge of how the software works under the hood. Our unit tests, on the other hand, should provide broad and detailed coverage of the actual code base. In an ideal world, functional tests can be few and slow-ish, because they serve as an infrequent smoke test of the application; unit tests should be thorough, but execute quickly enough that we run them all the time.

Until now, our unit tests have been entirely focused on utility and framework code – do we properly parse a URL, for example? – not on code that’s up close and personal with getting The Widget to do The Thing. I’d told myself that this was fine and right and good, but in reality I was pretty terrified of trying to bolt unit tests onto feature code of incredibly varying quality, months or even years after it was first written.

A week or so ago, thanks to some coaxing/chiding from fellow team members, I decided to bite the bullet and see just how bad it would be. A week later, I feel like I’ve taken the first ten steps in a marathon. Of course, taking those first steps involves making the decision to run, and doing enough training ahead of time that you don’t die, so in that regard I’ve come a long way already. Here’s what I’ve done and learned so far.

Step 0

I was lucky in that I wasn’t starting entirely from scratch, but if you don’t already have a unit testing framework in place, don’t fret – it’s pretty easy to set up. We use Grunt with Mocha as our test framework and expect.js as our assertion library, but if I were starting over today I’d take a pretty serious look at Intern.

Our unit tests are organized into suites. Each suite consists of a number of files, each of which tests a single AMD module. Most of the modules under test when I started down this path were pretty isolated – they generally didn’t have a ton of dependencies, and had very few runtime dependencies. They didn’t interact with other modules that much. Almost all of the existing unit test files loaded a module, executed its methods, and inspected the return value. No big deal.

Feature-related code – especially already-written feature-related code – is a different story. Views have templates. Models expect data. Models pass information to views, and views pass information to models. Some models need parents; others expect children. And pretty much everything depends on a global-ish message broker to pass information around.

Since the code was originally written without tests, it was guaranteed to be in varying states of testability, but a broad rewrite for testability is of course off the table. We’ll rewrite targeted pieces, but doing so comes with great risk. For the most part, our goal will be to write tests for what we have, then refactor cautiously once tests are in place.

We decided that the first place to start was with models, so I found the simplest model I could:

define([
  'framework/bmodel',
  'underscore'
], function (BModel, _) {
  return BModel.extend({
    options : {},
    name : 'mediaViewer',

    init : function (config, options) {
      _.extend(this.options, options);
    }
  });
});

Why do we have a model that does approximately nothing? I’m not going to attempt to answer that, though there are Reasons – but for the sake of this discussion, it certainly provides an easy place to start.

I created a new suite for model tests, and added a file to the suite to test the model. I could tell you that I naively plowed ahead thinking that I could just load the module and write some assertions, but that would be a lie.

Mocking: Squire.js

I knew from writing other tests, on this project and projects in the past, that I was going to need to “mock” some of my dependencies. For example, we have a module called ENV that is used for … well, way too much, though it’s better than it used to be. A large portion of ENV isn’t used by any given module, but ENV itself is required by essentially every model and view.

Squire.js is a really fantastic library for doing mocking in RequireJS-land. It lets you override how a certain dependency will be fulfilled; so, when a module under test asks for 'ENV', you can use Squire to say “use this object that I’ve hand-crafted for this specific test instead.”

I created an Injector module that does the work of loading Squire, plus mocking a couple of things that will be missing when the tests are executed in Node-land.

define([
  'squire',
  'jquery'
], function (Squire, $) {
  return function () {
    var injector;

    if (typeof window === 'undefined') {
      injector = new Squire('_BV');

      injector.mock('jquery', function () {
        return $;
      });

      injector.mock('window', function () {
        return {};
      });
    }
    else {
      injector = new Squire();
    }

    return injector;
  };
});

Next, I wired up the test to see how far I could get without mocking anything. Note that the main module doesn’t actually load the thing we’re going to test – first, it sets up the mocks by calling the injector function, and then it uses the created injector to require the module we want to test. Just like a normal require, the injector.require is async, so we have to let our test framework know to wait until it’s loaded before proceeding with our assertions.

define([
  'test/unit/injector'
], function (injector) {
  injector = injector();

  var MediaViewer;

  describe('MediaViewer Model', function () {
    before(function (done) {
      injector.require([
        'bv/c2013/model/mediaViewer'
      ], function (M) {
        MediaViewer = M;
        done();
      });
    });

    it('should be named', function () {
      var m = new MediaViewer({});
      expect(m.name).to.equal('mediaViewer');
    });

    it('should mix in provided options', function () {
      var m = new MediaViewer({}, { foo : 'bar' });
      expect(m.options.foo).to.equal('bar');
    });
  });
});

This, of course, still failed pretty spectacularly. In real life, a model gets instantiated with a component, and a model also expects to have access to an ENV that has knowledge of the component. Creating a “real” component and letting the “real” ENV know about it would be an exercise in inventing the universe, and this is exactly what mocks are for.

While the “real” ENV is a Backbone model that is instantiated using customer-specific configuration data, a much simpler ENV suffices for the sake of testing a model’s functionality:

define([
  'backbone'
], function (Backbone) {
  return function (injector, opts) {
    injector.mock('ENV', function () {
      var ENV = new Backbone.Model({
        componentManager : {
          find : function () {
            return opts.component;
          }
        }
      });

      return ENV;
    });

    return injector;
  };
});

Likewise, a “real” component is complicated and difficult to create, but the pieces of a component that this model needs to function are limited. Here’s what the component mock ended up looking like:

define([
  'underscore'
], function (_) {
  return function (settings) {
    settings = settings || {};

    settings.features = settings.features || [];

    return {
      trigger : function () {},
      hasFeature : function (refName, featureName) {
        return _.contains(settings.features, featureName);
      },
      getScope : function () {
        return 'scope';
      },
      contentType : settings.contentType,
      componentId : settings.id,
      views : {}
    };
  };
});

In the case of both mocks, we’ve taken some dramatic shortcuts: the real hasFeature method of a component is a lot more complicated, but in the component mock we create a hasFeature method whose return value can be easily known by the test that uses the mock. Likewise, the behavior of the componentManager’s find method is complex in real life, but in our mock, the method just returns the same thing all the time. Our mocks are designed to be configurable by – and predictable for – the tests that use them.

Knowing what to mock and when and how is a learned skill. It’s entirely possible to mock something in such a way that a unit test passes but the actual functionality is broken. We actually have pretty decent tests around our real component code, but not so much around our real ENV code. We should probably fix that, and then I can feel better about mocking ENV as needed.

So far, my approach has been: try to make a test pass without mocking anything, and then mock as little as possible after that. I’ve also made a point of trying to centralize our mocks in a single place, so we aren’t reinventing the wheel for every test.

Finally: when I first set up the injector module, I accidentally made it so that the same injector would be shared by any test that included the module. This is bad, because you end up sharing mocks across tests – violating the “only mock what you must” rule. The injector module shown above is correct in that it returns a function that can be used to create a new injector, rather than the injector itself.
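The difference is easy to demonstrate in plain JavaScript, no Squire required; the object shapes here are invented purely for illustration:

```javascript
// Bad: a module that exports one shared instance. Mocks registered
// by one test are still registered when the next test runs.
var sharedInjector = { mocks: {} };

// Good: a module that exports a factory, so every test file can
// create its own isolated injector.
function createInjector() {
  return { mocks: {} };
}

sharedInjector.mocks.ENV = 'test A mock';
// ...later, in a completely unrelated test, the stale mock remains:
console.log(sharedInjector.mocks.ENV); // 'test A mock'

var a = createInjector();
var b = createInjector();
a.mocks.ENV = 'test A mock';
console.log(b.mocks.ENV); // undefined -- b is unaffected by a's mocks
```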

Here’s what the final MediaViewer test ended up looking like:

define([
  // This properly sets up Squire and mocks window and jQuery
  // if necessary (for running tests from the command line).
  'test/unit/injector',

  // This is a function that mocks the ENV module.
  'test/unit/mocks/ENV',

  // This is a function that mocks a component.
  'test/unit/mocks/component'
], function (injector, ENVMock, component) {
  injector = injector();

  // This will become the constructor for the model under test.
  var MediaViewer;

  // Create an object that can serve as a model's component.
  var c = component();

  // We also need to mock the ENV module and make it aware of
  // the fake component we just created.
  ENVMock(injector, { component : c });

  describe('MediaViewer Model', function () {
    before(function (done) {
      injector.require([
        'bv/c2013/model/mediaViewer'
      ], function (M) {
        MediaViewer = M;
        done();
      });
    });

    it('should be named', function () {
      var m = new MediaViewer({
        component : c
      }, {});
      expect(m.name).to.equal('mediaViewer');
    });

    it('should mix in provided options', function () {
      var m = new MediaViewer({
        component : c
      }, { foo : 'bar' });

      expect(m.options.foo).to.equal('bar');
    });
  });
});

Spying: Sinon

After my stunning success with writing 49 lines of test code to test a 13-line model, I was feeling optimistic about testing views, too. I decided to tackle this fairly simple view first:

define([
  'framework/bview',
  'underscore',
  'hbs!contentAuthorProfileInline',
  'mf!bv/c2013/messages/avatar',
  'bv/util/productInfo',
  'framework/util/bvtracker',
  'util/specialKeys'
], function (BView, _, template, msgPack, ProductInfo, BVTracker, specialKeys) {
  return BView.extend({
    name : 'inlineProfile',

    templateName : 'contentAuthorProfileInline',

    events : {
      'click .bv-content-author-name .bv-fullprofile-popup-target' : 'launchProfile'
    },

    template : template,

    msgpacks : [msgPack],

    launchProfile : function (e) {
      // use r&r component outlet to trigger full profile popup component event
      this.getTopModel().trigger( 'showfullprofile', this.model.get('Author') );

      BVTracker.feature({
        type : 'Used',
        name : 'Click',
        detail1 : 'ViewProfileButton',
        detail2 : 'AuthorAvatar',
        bvProduct : ProductInfo.getType(this),
        productId : ProductInfo.getId(this)
      });
    }
  });
});

It turned out that I needed to do the same basic mocking for this as I did for the model, but this code presented a couple of interesting things to consider.

First, I wanted to test that this.getTopModel().trigger(...) triggered the proper event, but the getTopModel method was implemented in BView, not the code under test, and without a whole lot of gymnastics, it wasn’t going to return an object with a trigger method.

Second, I wanted to know that BVTracker.feature was getting called with the right values, so I needed a way to inspect the object that got passed to it, but without doing something terrible like exposing it globally.

Enter Sinon and its spies. Spies let you observe methods as they are called. You can either let the method still do its thing while watching how it is called, or simply replace the method with a spy.
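The core of a spy is small enough to sketch by hand. This stripped-down version is not Sinon’s implementation – just an illustration of the idea – but it shows what a spy fundamentally is: a function that records every call made to it so the test can inspect them afterward.

```javascript
// A minimal spy: calling it does nothing except record the call.
function makeSpy() {
  function spy() {
    var call = { args: Array.prototype.slice.call(arguments) };
    spy.calls.push(call);
    spy.lastCall = call;
  }
  spy.calls = [];
  spy.lastCall = null;
  return spy;
}

// Stand the spy in for a collaborator's method, exercise the code
// under test, then assert on how the method was called.
var trigger = makeSpy();
var fakeTopModel = { trigger: trigger };

fakeTopModel.trigger('showfullprofile', 'author');

console.log(trigger.calls.length);     // 1
console.log(trigger.lastCall.args[0]); // 'showfullprofile'
```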

I solved the first problem by defining my own getTopModel method on the view instance, and having it return an object. I gave that object a trigger method that was actually just a spy – for the sake of my test, I didn’t care what trigger did, only how it was called. Other tests [will eventually] ensure that triggering this event has the desired effect on the targeted model, but for the sake of this test, we don’t care.

Here’s what the test looks like:

describe('#launchProfile', function () {
  var spy;
  var v;

  before(function () {
    spy = sinon.spy();

    v = new InlineProfile({
      // model and component are defined elsewhere
      component : component,
      model : model
    });

    model.set('Author', 'author');

    v.getTopModel = function () {
      return {
        trigger : spy
      };
    };
  });

  it('should trigger showfullprofile event on top model', function () {
    v.launchProfile();

    expect(spy.lastCall.args[0]).to.equal('showfullprofile');
    expect(spy.lastCall.args[1]).to.equal('author');
  });
});

I solved the second problem – the need to see what’s getting passed to BVTracker.feature – by creating a BVTracker mock where every method is just a spy:

// This is a mock for BVTracker that can be used by unit tests.
define([
  'underscore'
], function (_) {
  return function (injector, opts) {
    var BVTracker = {};

    _([
      'error',
      'pageview',
      'feature'
    ]).each(function (event) {
      BVTracker[event] = sinon.spy();
    });

    injector.mock('framework/util/bvtracker', function () {
      return BVTracker;
    });

    return BVTracker;
  };
});

My test looked at the BVTracker.feature spy to see what it got when the view’s launchProfile method was called:

it('should send a feature analytics event', function () {
  v.launchProfile();

  var evt = BVTracker.feature.lastCall.args[0];

  expect(evt.type).to.equal('Used');
  expect(evt.name).to.equal('Click');
  expect(evt.detail1).to.equal('ViewProfileButton');
  expect(evt.detail2).to.equal('AuthorAvatar');
  expect(evt.bvProduct).to.equal('RatingsAndReviews');
  expect(evt.productId).to.equal('product1');
});

I’ve barely touched on what you can do with spies, or with Sinon in general. Besides providing simple spy functionality, Sinon delivers a host of functionality that makes tests easier to write – swaths of which I haven’t even begun to explore. One part I have explored is its ability to create fake XHRs and to fake whole servers, allowing you to test how your code behaves when things go wrong on the server. Do yourself a favor and spend some time reading through the excellent docs.
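Sinon’s fake server works by patching XMLHttpRequest itself, but the underlying idea can be sketched with a hand-rolled fake transport (every name below is invented for illustration): requests are queued instead of sent, and the test decides each outcome, including failures.

```javascript
// A fake "ajax" layer: it records requests rather than issuing
// them, and exposes methods that let the test play the server.
function makeFakeTransport() {
  var pending = [];

  function transport(url, callbacks) {
    pending.push({ url: url, callbacks: callbacks });
  }

  // The test calls these to resolve the oldest pending request.
  transport.respond = function (body) {
    pending.shift().callbacks.success(body);
  };
  transport.fail = function () {
    pending.shift().callbacks.error();
  };

  return transport;
}

// Code under test uses the transport; the test controls the outcome.
var ajax = makeFakeTransport();
var result;

ajax('/sync_status', {
  success: function (resp) { result = resp.status; },
  error: function () { result = 'failed'; }
});

ajax.respond({ status: 'done' });
console.log(result); // 'done'
```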

What to test … and not

I’ve written tests now for a tiny handful of models and views. Setting up the mocks was a bit of a hurdle – and there were plenty of other hurdles that are too specific to our project for me to talk about them in detail – but overall, the hardest part has been figuring out what, exactly, to test. I crafted the examples above to be pretty straightforward, but reality is a lot more complicated.

Writing tests for existing code requires first understanding the code that’s being tested and identifying interesting moments in that code. If there’s an operation that affects the “public” experience of the module – for example, if the value of a model attribute changes – then we need to write a test that covers that operation’s side effect(s). If there’s code that runs conditionally, we need to test the behavior of that code when that condition is true – and when it’s not. If there are six possible conditions, we need to test them all. If a model behaves completely differently when it has a parent – and this happens far too often in our code – then we need to simulate the parent case, and simulate the standalone case.

It can be tempting to try to test the implementation details of existing code – and difficult to realize that you’re doing it even when you don’t mean to. I try to stay focused on testing how other code might consume and interact with the module I’m testing. For example, if the module I’m testing triggers an event in a certain situation, I’m going to write a test that proves it, because some other code is probably expecting that event to get triggered. However, I’m not going to test that a method of a certain name gets called in a certain case – that’s an implementation detail that might change.
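A contrived example of the distinction, in plain JavaScript: the module below fires a change notification through a private helper. A test should pin down the notification, which other code depends on, not the helper, which is free to change or disappear.

```javascript
// A tiny module whose public contract is "increment fires onChange
// handlers with the new count." The `notify` helper is an
// implementation detail.
function makeCounter() {
  var handlers = [];
  var count = 0;

  function notify() { // implementation detail -- don't test this directly
    handlers.forEach(function (h) { h(count); });
  }

  return {
    onChange: function (h) { handlers.push(h); },
    increment: function () {
      count += 1;
      notify();
    }
  };
}

// Good test: assert the observable behavior (the handler ran with
// the right value), not that `notify` itself was invoked.
var counter = makeCounter();
var observed = null;
counter.onChange(function (value) { observed = value; });
counter.increment();
console.log(observed); // 1
```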

The exercise of writing unit tests against existing code proves to be a phenomenal incentive to write better code in the future. One comes to develop a great appreciation of methods that have return values, not side effects. One comes to loathe the person – often one’s past self – who authored complex, nested conditional logic. One comes to worship small methods that do exactly one thing.

So far, I haven’t rewritten any of the code I’ve been testing, even when I’ve spotted obvious flaws, and even when rewriting would make the tests themselves easier to write. I don’t know how long I’ll be able to stick to this; there are some specific views and models that I know will be nearly impossible to test without revisiting their innards. When that becomes necessary, I’m hoping I can do it incrementally, testing as I go – and that our functional tests will give me the cover I need to know I haven’t gone horribly wrong.

Spreading the love

Our team’s next step is to widen the effort to get better unit test coverage of our code. We have something like 100 modules that need testing, and their size and complexity are all over the map. Over the coming weeks, we’ll start to divide and conquer.

One thing I’ve done to try to make the effort easier is to create a scaffolding task using Grunt. Running grunt scaffold-test:model:modelName will generate a basic file that includes mocking that’s guaranteed to be needed, as well as the basic instantiation that will be required and a couple of simple tests.
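I won’t reproduce the real Grunt task here, but its heart is ordinary string templating. A rough sketch – the template contents and function name are hypothetical – might look like this, with grunt.registerTask and grunt.file.write wrapped around it in the actual Gruntfile:

```javascript
// Fill a test-file template with the model's name. The boilerplate
// is the part every model test shares: the injector, the standard
// mocks, and an empty describe block to fill in.
function scaffoldModelTest(modelName) {
  var template = [
    "define([",
    "  'test/unit/injector',",
    "  'test/unit/mocks/ENV',",
    "  'test/unit/mocks/component'",
    "], function (injector, ENVMock, component) {",
    "  injector = injector();",
    "",
    "  describe('{{name}} Model', function () {",
    "    // TODO: instantiate the model and write assertions",
    "  });",
    "});"
  ].join('\n');

  return template.replace(/\{\{name\}\}/g, modelName);
}

var generated = scaffoldModelTest('mediaViewer');
console.log(generated.indexOf("describe('mediaViewer Model'") !== -1); // true
```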

There’s another senior team member who has led an effort in the past to apply unit tests to an existing code base, and he’s already warned me to expect a bit of a bumpy road as the team struggles through the inevitable early challenges of trying to write unit tests for existing feature code. I expect there to be a pretty steep hill to climb at first, but at the very least, the work I’ve done so far has – hopefully – gotten us to the top of the vertical wall that had been standing in our way.

Further Reading

I’m not exactly the first person to write about this. You may find these items interesting:

Austin


In August 2002, it was a little more than a year since I’d left my job at my hometown newspaper. I had just sold my car and left my two jobs as a bartender. Between the tips in my pocket and the money I’d made from selling my car – a 1996 Neon with a probably cracked head gasket – I had about $2,000 to my name. I had a bicycle, camping gear, cooking gear, maps, a handheld GPS, a flip phone, two changes of bicycle clothing, and two changes of street clothes. I was in Camden, Maine, and my parents were taking my picture in front of a bicycle shop.

My destination was Austin. My plan was to ride to Savannah, GA – via Boston, New York, and the eastern shore of Maryland – and then turn right. I didn’t really have much of a plan beyond that, except that I hoped to crash with a friend of a friend when I got to Austin. I heard they had a good bus system. I figured I could sort out a job before my money ran out.

Three weeks and 1,000 miles later, I found myself outside of New Bern, NC, more tan and more fit than I’d ever been or would ever be again. I stopped at a grocery store and picked up food for the evening, tying a bag of apples off the side of my bike. I was planning to camp just south of town, but as I neared a park in the center of town, I found myself surrounded by cyclists setting up camp. They were there for a fund-raising ride, and no, no one would mind if I camped in the park with them instead of riding another 10 miles.

I pitched my tent. I followed them to the free dinner being served for them across the street.

I rode 150 miles – unencumbered by camping gear and all the rest – in the fund-raising ride for the next two days.

I made new friends. They invited me to come stay with them for a few days in Chapel Hill.

I lived with them for a month. I borrowed their 1990 Ford Festiva for a year.

I got a job painting a house. I got a job waitressing. I got a job doing desktop publishing. I got a job making web sites.

I got good at JavaScript. I traveled the world talking about it.

I met a girl. We bought a house. We adopted a baby.

I never made it to Austin, though life has taken me there a few days at a time more times than I can count. Finally, in 2013, I even got a job there. Since February, I’ve made the trek enough times that it’s truly become a home away from home. I’ve stopped using my phone to find my way around. Waitresses recognize me. People tell me about the secret back way to work, but I already know it. I have opinions about breakfast tacos.

It’s time to finish the story I started more than a decade ago, which brings me to the point: With much love for Durham, and for the irreplaceable people who have made our lives so full here, we’re moving to Austin this spring. At last.

Refactoring setInterval-based Polling


I came across some code that looked something like this the other day, give or take a few details.

App.Helpers.checkSyncStatus = function() {
  if (App.get('syncCheck')) { return; }

  var check = function() {
    $.ajax('/sync_status', {
      dataType: 'json',
      success: function(resp) {
        if (resp.status === 'done') {
          App.Helpers.reloadUser(function() {
            clearInterval(App.get('syncCheck'));
            App.set('syncCheck', null);
          });
        }
      }
    });
  };

  App.set('syncCheck', setInterval(check, 1000));
};

The code comes from an app whose server-side code queries a third-party service for new data every now and then. When the server is fetching that new data, certain actions on the front-end are forbidden. The code above was responsible for determining when the server-side sync is complete, and putting the app back in a state where those front-end interactions could be allowed again.

You might have heard that setInterval can be a dangerous thing when it comes to polling a server*, and, looking at the code above, it’s easy to see why. The polling happens every 1000 milliseconds, whether the request was successful or not. If the request fails, results in an error, or takes more than 1000 milliseconds, setInterval doesn’t care – it will blindly kick off another request. The interval only gets cleared when the request succeeds and the sync is done.

The first refactoring for this is easy: switch to using setTimeout, and only enqueue another request once we know what happened with the previous one.

App.Helpers.checkSyncStatus = function() {
  if (App.get('syncCheck')) { return; }

  var check = function() {
    $.ajax('/sync_status', {
      dataType: 'json',
      success: function(resp) {
        if (resp.status === 'done') {
          App.Helpers.reloadUser(function() {
            App.set('syncCheck', null);
          });
        } else {
          setTimeout(check, 1000);
        }
      }
    });
  };

  App.set('syncCheck', true);
  setTimeout(check, 1000);
};

Now, if the request fails, or takes more than 1000 milliseconds, at least we won’t be perpetrating a mini-DOS attack on our own server.

Our code still has some shortcomings, though. For one thing, we aren’t handling the failure case. Additionally, the rest of our application is stuck looking at the syncCheck property of our App object to figure out when the sync has completed.

We can use a promise to make our function a whole lot more powerful. We’ll return the promise from the function, and also store it as the value of our App object’s syncCheck property. This will let other pieces of code respond to the outcome of the request, whether it succeeds or fails. With a simple guard statement at the beginning of our function, we can also make it so that the checkSyncStatus function will return the promise immediately if a status check is already in progress.

App.Helpers.checkSyncStatus = function() {
  var syncCheck = App.get('syncCheck');
  if (syncCheck) { return syncCheck; }

  var dfd = $.Deferred();
  App.set('syncCheck', dfd.promise());

  var success = function(resp) {
    if (resp.status === 'done') {
      App.Helpers.reloadUser(function() {
        dfd.resolve();
        App.set('syncCheck', null);
      });
    } else {
      setTimeout(check, 1000);
    }
  };

  var fail = function() {
    dfd.reject();
    App.set('syncCheck', null);
  };

  var check = function() {
    var req = $.ajax('/sync_status', { dataType: 'json' });
    req.then( success, fail );
  };

  setTimeout(check, 1000);

  return dfd.promise();
};

Now, we can call our new function, and use the returned promise to react to the eventual outcome of the sync:

App.Helpers.checkSyncStatus().then(
  // this will run if the sync was successful,
  // once the user has been reloaded
  function() { console.log('it worked'); },

  // this will run if the sync failed
  function() { console.log('it failed'); }
);

With a few more lines of code, we’ve made our function safer – eliminating the possibility of an out-of-control setInterval – and also made it vastly more useful to other pieces of the application that care about the outcome of the sync.

While the example above used jQuery’s promises implementation, there are plenty of other implementations as well, including Sam Breed’s underscore.Deferred, which mimics the behavior of jQuery’s promises without the dependency on jQuery.

* Websockets are a great way to eliminate polling altogether, but in the case of this application, they weren’t an option.

Onward


My friend IM’d me a link the other day to a document he and a colleague wrote at the end of 2011, listing all the things they wanted to make happen in the world of web development in 2012.

“We did almost all of it,” he said.

“Well shit,” I said. “I should write up something like this for 2013.”

“Why do you think I showed it to you?”


A year ago I was working at Toura, a startup based in New York that was developing software to make it easy to create content-centric mobile applications. I started there as a consultant back in 2010, helping them write a saner version of the prototype they’d developed in the months before.

I got the gig because apparently I spoke with their director of development, Matt, at a meetup in Brooklyn, though I actually have no recollection of this. By this time last year, I’d been there for more than a year, and Matt and I had convinced the company a few months before that the technology we’d developed – a JavaScript framework called Mulberry that ran inside of a Phonegap wrapper – was worth open-sourcing.

I spent much of January and February speaking at meetups and events – Vancouver, Boston, Austin, Charlotte – telling people why Mulberry was something they might consider using to develop their own content-centric mobile apps. By March, though, it was clear that Toura was headed in a direction that was different from where I wanted to go. As it turned out, Matt and I gave our notice on the same day.


April was the first time in almost 10 years that I purposefully didn’t work for a solid month. I spent almost two weeks in Europe, first in Berlin and then in Warsaw for Front Trends. I sold my car – it mostly just sat in the driveway anyway – to make Melissa feel a bit better about the part where I wasn’t making any money. Tiffany was a marvelous host; we took the train together from Berlin to Warsaw for the conference, barely talking the whole way as we worked on our respective presentations. Warsaw was a two-day whirlwind of wonderful people – Melanie, Milos, Chris, Alex, Frances – memorable for my terrible laryngitis and capped by endless hours of post-conference celebration in the hotel lobby, which was magically spotless when we made our way, bleary-eyed, to the train station early the next morning.

I flew home two days later; two days after that, I started at Bocoup.


Taking a job at Bocoup was a strategic change of pace for me. For 18 months, I had been immersed in a single product and a single codebase, and I was the architect of it and the expert on it. As fun as that was, I was ready to broaden my horizons and face a steady stream of new challenges in the company of some extremely bright people.

As it turned out, I ended up focusing a lot more on the training and education side of things at Bocoup – I spent the summer developing an updated and more interactive version of jQuery Fundamentals, and worked through the summer and fall on developing and teaching various JavaScript trainings, including a really fun two-day course on writing testable JavaScript. I also worked on creating a coaching offering, kicked off a screencasts project, and had some great conversations as part of Bocoup on Air. Throughout it all, I kept up a steady schedule of speaking – TXJS, the jQuery Conference, Fronteers, Full Frontal, and more.

Though I was keeping busy and creating lots of new content, there was one thing I wasn’t doing nearly enough of: writing code.


I went to New York in November to speak at the New York Times Open Source Science Fair, and the next day I dropped in on Matt, my old boss from Toura, before heading to the airport. He’s working at another startup these days, and they’re using Ember for their front-end. Though I was lucky enough to get a guided tour of Ember from Tom Dale over the summer, I’d always felt like I wouldn’t really appreciate it until I saw it in use on a sufficiently complex project.

As it turned out, Matt was looking for some JavaScript help; I wasn’t really looking for extra work, but I figured it would be a good chance to dig in to a real Ember project. I told him I’d work for cheap if he’d tolerate me working on nights and weekends. He gave me a feature to work on and access to the repo.

The first few hours with Ember were brutal. The next few hours were manageable. The hours after that were magical. The most exciting part of all, despite all the brain hurting along the way, was that I was solving problems with code again. It felt good.


With much love to my friends and colleagues at Bocoup, I’ve realized it is time to move on. I’ll be taking a few weeks off before starting as a senior software engineer at Bazaarvoice, the company behind the ratings and reviews on the websites of companies such as WalMart, Lowe’s, Costco, Best Buy, and lots more.

If you’re in the JS world, Bazaarvoice might sound familiar because Alex Sexton, of yayQuery, TXJS, and redhead fame, works there. I’ll be joining the team he works on, helping to flesh out, document, test, and implement a JavaScript framework he’s been prototyping for the last several months.

I’ve gotten tiny peeks at the framework as Alex and the rest of the team have been working on it, starting way back in February of last year, when I flew out to Austin, signed an NDA, and spoke at BVJS, an internal conference the company organized to encourage appreciation for JS as a first-class citizen. Talking to Alex and his colleagues over the last few weeks about the work that’s ahead of them, and how I might be able to help, has quite literally given me goosebumps more than once. I can’t wait.


I look back on 2012 with a lot of mixed emotions. I traveled to the UK, to Amsterdam, to Warsaw, to Berlin two times. I broke a bone in a foreign country, achievement unlocked. I learned about hardware and made my first significant code contribution to an open-source project in the process. I met amazing people who inspired me and humbled me, and even made a few new friends.

What I lost sight of, though, was making sure that I was seeking out new challenges and facing them head on, making sure that I was seeking opportunities to learn new things, even when they were hard, even when I didn’t have to. I didn’t realize until my work with Ember just how thoroughly I’d let that slip, and how very much I need it in order to stay sane.

And so while my friend probably has his list of things he will change in the world of web development in 2013, and while maybe I’ll get around to making that list for myself too, the list I want to be sure to look back on, 12 months or so from now, is more personal, and contains one item:

Do work that requires learning new things all the time. Even if that’s a little scary sometimes. Especially if that’s a little scary sometimes. In the end you will be glad.