Adventures in JavaScript Development

Bocoup


It wasn’t so long ago that I was giving my first talk about JavaScript at the 2009 jQuery Conference, and it was there that Bocoup’s Boaz Sender and Rick Waldron created the (now-defunct) objectlateral.com, a celebration of an unfortunate typo in the conference program’s listing of my talk.

A bond was forged, and ever since I’ve watched as Bocoup has grown and prospered. I’ve watched them do mind-boggling work for the likes of Mozilla, The Guardian, Google, and others, all while staying true to their mission of embracing, contributing to, and evangelizing open-web technologies.

Today, I’m beyond excited – and also a wee bit humbled – to announce that I’m joining their consulting team. As part of that role, I look forward to spending even more time working on and talking about patterns and best practices for developing client-side JavaScript applications. I also hope to work on new training offerings aimed at helping people make great client-side applications with web technology.

New beginnings have a terrible tendency to be accompanied by endings, and while the Bocoup opportunity is one I couldn’t refuse, it’s with a heavy heart that I bid farewell to the team at Toura. I’m proud of what we’ve built together, and that we’ve shared so much of it with the developer community in the form of Mulberry. The beauty of open source means that I fully expect to continue working on and with Mulberry once I leave Toura, but I know it won’t be the same.

I’ll be spending the next few days tying up loose ends at Toura, and then I’m taking a break in April to hit JSConf, spend some time in Berlin, and head to Warsaw to speak at FrontTrends. I’ll make my way back home in time to start with Bocoup on May 1.

And so. To my teammates at Toura: I wish you nothing but the best, and look forward to hearing news of your continued success. To Bocoup: Thanks for welcoming me to the family. It’s been a long time coming, and I’m glad the day is finally here.

Girls and Computers


After a week that seemed just chock full of people being stupid about women in technology, I found myself thinking back on how it was that I ended up doing this whole computer thing in the first place. I recorded a video a while back for the High Visibility Project, but that really just told the story of how I ended up doing web development. The story of how I got into computers began when I was unequivocally a girl. It was 1982.

Back then, my dad made eyeglasses. My mom stayed at home with me and my year-old sister – which she’d continue to do until I was a teenager, when my brother finally entered kindergarten eight years later. Their mortgage payment was $79 – about $190 in today’s dollars – which is a good thing, because my dad made about $13,000 a year. We lived in Weedsport, New York, a small town just outside of Syracuse. We walked to the post office to get our mail. The farmers who lived just outside town were the rich people. In the winters the fire department filled a small depression behind the elementary school with water for a tiny skating rink. There were dish-to-pass suppers in the gym at church.

In 1982, Timex came out with the Timex Sinclair TS-1000, selling 500,000 of them in just six months. The computer, a few times thicker than the original iPad but with about the same footprint, cost $99.95 – more than that mortgage payment. When everyone else in town was getting cable, my parents decided that three channels were good enough for them – it’s possible they still had a black-and-white TV – and bought a computer instead.

Timex Sinclair TS-1000

I remember tiny snippets of that time – playing kickball in my best friend Beth’s yard, getting in trouble for tricking my mother into giving us milk that we used to make mud pies, throwing sand in the face of my friend Nathan because I didn’t yet appreciate that it really sucks to get sand thrown in your face – but I vividly remember sitting in the living room of our house on Horton Street with my father, playing with the computer.

A cassette player was our disk drive, and we had to set the volume just right in order to read anything off a tape – there was actually some semblance of a flight simulator program that we’d play, after listening to the tape player screech for minutes on end. Eventually we upgraded the computer with a fist-sized brick of RAM that plugged into the back, bumping our total capacity from 2K to 34K. I wrote programs in BASIC, though for the life of me I can’t remember what any of them did. The programs that were the most fun, though, were the ones whose assembly I painstakingly transcribed, with my five-year-old fingers, from the back of magazines – pages and pages of letters and numbers I didn’t understand on any level, and yet they made magic happen if I got every single one right.

A string of computers followed. My parents bought a Coleco Adam when we moved to Horseheads, New York – apparently the computer came with a certificate redeemable for $500 upon my graduation from high school, but Coleco folded long before they could cash it in. I made my first real money by typing a crazy lady’s crazy manuscript about crazy food into an Apple IIe that we had plugged into our TV, and my uncle and I spent almost the entirety of his visit from Oklahoma writing a game of Yahtzee! on that computer, again in BASIC.

Me at a computer fair at the mall with my sister, my mother, and my friend
Michael

Above: Me at a computer fair at the mall with my sister, my mother, and my friend Michael. “You were giving us all a tutorial, I can tell,” says my mom. Note the 5-1/4” external floppy drive.

In middle school, I started a school newspaper, and I think we used some prehistoric version of PageMaker to lay it out. When high school rolled around, I toiled through hand-crafting the perfect letters and lines and arrows in Technical Drawing so I could take CAD and CAM classes and make the computer draw letters and lines and arrows for me, and quickly proceeded to school just about every boy in the class. In my senior year of high school, I oversaw the school yearbook’s transition from laying out pages on paper to laying out pages with computers, this time the vaguely portable (it had a handle on the back!) Mac Classic. We used PageMaker again; the screen was black and white and 9”, diagonally.

Macintosh Classic

It was around then that a friend gave me a modem and – to his eventual chagrin, when he got the bill – access to his Delphi account, giving me my first taste of the whole Internet thing in the form of telnet, gopher, and IRC. When I went to college the following year, I took with me a computer with perhaps a 10MB hard drive, and no mouse.

Once again I found myself poring over magazines to discover URIs and, eventually, URLs that I could type to discover a whole new world of information. In 1995, I spent the summer making my college newspaper’s web site, previewing it in Lynx – it felt like there wasn’t much to learn when there was so little difference between the markup and what I saw on the screen. I would go to the computer lab to use NCSA’s Mosaic on the powerful RISC 6000 workstations, because they had a mouse. Yahoo! was about one year old. My friend Dave, who lived down the street, installed Windows 95 that summer and invited me over to show me. It was amazing. We were living in the future.

My early years with computers seem pretty tame – I wasn’t tearing them apart or building my own or doing anything particularly interesting with them, but I was using them, I was telling them what to do and they were mostly listening, and it never made me feel like I was weird. To the contrary, it made me feel powerful and empowered. I felt like a part of this ever-growing community of people who understood, eventually, that computers were going to change the world. It was the people who didn’t understand this who were weird and beneath us. It was the people who understood computers better than me of whom I stood in awe.

I can barely remember a time when computers weren’t a part of my life, and yet when they first entered my life, their presence was incredibly exceptional. These days, of course, computers are ubiquitous, but interaction with them at the copy-assembly-from-the-back-of-a-magazine level is almost nonexistent. Parents who can approach a computer with the same awe and wonder and determination as a child – as I must imagine that my dad did in 1982 – are likely equally rare.

In some ways, it is as though the very ubiquity of technology has led us back to a world where socially normative gender roles take hold all over again, and the effort we’re going to need to put into overcoming that feels overwhelming sometimes. Words can’t express my gratitude for the parents I have, for that $99.95 investment they made in me, and for the fact that I was lucky enough to be 5 and full of wonder in 1982.

Thoughts on a (Very) Small Project With Backbone and Backbone Boilerplate


I worked with Backbone and the Backbone Boilerplate for the first time last weekend, putting together a small demo app for a presentation I gave last week at BazaarVoice. I realize I’m about 18 months late to the Backbone party, here, but I wanted to write down my thoughts, mostly because I’m pretty sure they’ll change as I get a chance to work with both tools more.

Backbone

Backbone describes itself as a tool that “gives structure to web applications,” but, at the risk of sounding pedantic, I think it would be more accurate to say that it gives you tools that can help you structure your applications. There’s incredibly little prescription about how to use the tools that Backbone provides, and I have a feeling that the code I wrote to build my simple app looks a lot different than what someone else might come up with.

This lack of prescription feels good and bad – good, because I was able to use Backbone to pretty quickly set up an infrastructure that mirrored ones I’ve built in the past; bad, because it leaves open the possibility of lots of people inventing lots of wheels. To its credit, it packs a lot of power in a very small package – 5.3k in production – but a real app is going to require layering a lot more functionality on top of it. Ultimately, the best way to think of Backbone is as the client-side app boilerplate you’d otherwise have to write yourself.

My biggest complaint about Backbone is probably how unopinionated it is about the view layer. Its focus seems to be entirely on the data layer, but the view is still where we spend the vast majority of our time. Specifically, I think Backbone could take a page from Dojo, and embrace the concept of “templated widgets”, because that’s what people seem to be doing with Backbone views anyway: mixing data with a template to create a DOM fragment, placing that fragment on the page, listening for user interaction with the fragment, and updating it as required. Backbone provides for some of this, specifically the event stuff, but it leaves you to write your own functionality when it comes to templating, placing, and updating. I think this is a solvable problem without a whole lot of code, and want to spend some time trying to prove it, but I know I need to look into the Backbone Layout Manager before I get too carried away.
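To make that concrete, here is a minimal sketch of the sort of base view I have in mind. The TemplatedView name and its placeAt method are my own invention, not part of Backbone, and I’m assuming Underscore templates:

// A hypothetical "templated widget" base view: mix data with a
// template, render into the view's element, and place it on the page.
var TemplatedView = Backbone.View.extend({
  render : function() {
    var tpl = _.template(this.template || '');
    this.$el.html(tpl(this.model ? this.model.toJSON() : {}));
    return this;
  },

  // Place the rendered fragment into a container element.
  placeAt : function(container) {
    $(container).append(this.render().el);
    return this;
  }
});

A view extending this would only need to declare its template and its events, rather than re-implementing the render-and-place plumbing every time.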

Backbone Boilerplate

This project from Tim Branyen was a life-saver – it gave me an absolutely enormous head start when it came to incorporating RequireJS, setting up my application directories, and setting up a development server. It also included some great inline docs that helped me get my bearings with Backbone.

There are a couple of ways I think the boilerplate could be improved, and I’d be curious for others’ opinions:

  • The sample app includes the concept of “modules,” which seem to be a single file that includes the models, collections, views, and routes for a … module. I don’t love the idea of combining all of this into a single file, because it seems to discourage smart reuse and unit testing of each piece of functionality. In the app I created, I abandoned the concept of modules, and instead broke my app into “components”, “controllers”, and “services”. I explain this breakdown in a bit more depth in the presentation I gave at BazaarVoice. I’m not sure this is the right answer for all apps, but I think modules oversimplify things.
  • The boilerplate includes a namespace.js file. It defines a namespace object, and that object includes a fetchTemplate method. It seems this method should only be used by views, and so I’d rather see something along the lines of an enhanced View that provides this functionality. That’s what I did with the base component module in my sample app.
  • I’m super-glad to see Jasmine included in the test directory, but unfortunately the examples show how to write Jasmine tests, not Jasmine tests for a Backbone app. As a community, we definitely need to be showing more examples of how to test things, and this seems like a good opportunity to distribute that knowledge. (A sketch of what I mean follows this list.)
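For instance, here is roughly the kind of spec I’d love to see shipped with the boilerplate. The Task model is hypothetical, and this is only a sketch of the shape such tests might take:

// A Jasmine spec exercising a (hypothetical) Backbone model.
describe('Task', function() {
  var task;

  beforeEach(function() {
    task = new Task({ title : 'Write specs', done : false });
  });

  it('is not done by default', function() {
    expect(task.get('done')).toBe(false);
  });

  it('announces its completion', function() {
    var spy = jasmine.createSpy('onDone');
    task.on('change:done', spy);
    task.set('done', true);
    expect(spy).toHaveBeenCalled();
  });
});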

Overall

I feel a little silly that I’m just now getting around to spending any time with Backbone, and I know that I only scratched the surface, but I like what I saw. I think it’s important to take it for what it is: an uber-tiny library that gets you pointed in the right direction. What I really want to see are fuller-fledged frameworks that build on top of Backbone, because I think there’s a lot more that can be standardized beyond what Backbone offers. I’m hoping to have a bit more time in April to dig in, and hopefully I can flesh out some of these ideas into something useful.

Community Conferences


In 2010, I helped put on the first TXJS. We sold our first tickets for $29, and I think the most expensive tickets went for something like $129. We had about 200 people buy tickets, we had speakers like Douglas Crockford, Paul Irish, and John Resig, and we had sponsors like Facebook and Google. Our total budget was something like $30,000, and every out-of-town speaker had their travel and accommodations paid for.

In May, O’Reilly Media is holding another JavaScript conference in San Francisco, called FluentConf. I recently came to know that they are charging $100,000 for top-tier sponsorships, and that they are offering a 10-minute keynote as part of the package.

This turned my stomach, and not just because I believe it cheapens the experience of attendees, who will pay hundreds of dollars themselves. What really upset me was that a few weeks ago, I was approached to be on the speaker selection committee of FluentConf, and that conversation led me to discover that FluentConf would not be paying for speaker travel and accommodations. And so the other day, I tweeted:

conference #protip: save your money – and your speaking skills – for events that don’t sell their keynotes for $100k

Last night, I was at the Ginger Man in Austin, and I checked the Twitters, discovering that Peter Cooper, one of the chairs of FluentConf, had replied to a conversation that arose from that tweet:

@rmurphey @tomdale If you’re referring to Fluent, that is news to me.

I will accept the weird fact that the co-chair of a conference didn’t know its speaking slots were for sale – I gather that it is essentially a volunteer role, and the co-chairs aren’t necessarily in the driver’s seat when it comes to decisions like this. I let Peter know that, indeed, I had a PDF that outlined all the sponsorship options.

This is the part where, in some alternate reality, a mutual understanding of the offensiveness of this fact would have been achieved. What happened instead was a whole lot of name-calling, misquoting, and general weirdness.

Here’s the deal. Conferences can run their event however they want, and they can make money hand over fist. They can even claim they are giving JavaScript developers “an event of their own,” ignoring the existence of the actual community-run JavaScript events that have been around for years now. I probably won’t go to or speak at an event that makes money hand over fist, but I don’t have any problem with the existence of such events, or with people’s involvement with them. However, when a conference is making money hand over fist – my back-of-the-napkin calculations would suggest that FluentConf stands to have revenues of well over a million dollars – then that conference has no excuse not to pay the relatively paltry costs associated with speaker travel and accommodations.

A conference does not exist without its speakers. Those who speak at an event – the good ones, anyway – spend countless hours preparing and rehearsing, and they are away from home and work for days. While I do not discount the benefits that accrue to good speakers, the costs of being a speaker are non-trivial – and that’s before you get into the dollar costs of travel and accommodations.

When an event is unwilling to cover even those hard costs – nevermind the preparation time and time away from work and home – it materially affects the selection of speakers. It’s even worse when those same conferences claim to desire diversity; the people they claim to want so badly are the very people most likely to be discouraged when they find out they have to pay their own way to the stage.

In the conversation last night, I made this point:

when only the people who can afford to speak can speak, then only the people who can afford to speak will speak.

Amy Hoy responded with a criticism of community-run conferences:

and when only ppl who can order a ticket in 3 seconds can afford to come, only ppl who can order a ticket in 3 seconds can come

I know that getting tickets to the actual community-run events is hard, but that is because the community-run events flat-out ignore the economics of supply and demand, choosing instead to sell tickets at affordable prices even if it means they will sell out in a heartbeat, leaving a boatload of potential profit on the table. And yet those events – JSConf, TXJS, and the like – have still figured out how to cover speaker costs and provide attendees and sponsors with unforgettable experiences.

When an event with revenues exceeding a million dollars is unwilling to cover those costs, while simultaneously selling speaking slots, I do not hesitate for a moment to call that event out, and I do not hesitate to call on respected members of the community to sever their ties with the event. I’m not embarrassed about it, and you can call me all the names you want.

Mulberry: A Development Framework for Mobile Apps


I’ll be getting on stage in a bit at CapitolJS, another great event from Chris Williams, the creator of JSConf and all-around conference organizer extraordinaire. My schtick at conferences in the past has been to talk about the pain and pitfalls of large app development with JavaScript, but this time is a little different: I’ll be announcing that Toura Mobile has created a framework built on top of PhoneGap that aims to eliminate some of those pains and pitfalls for mobile developers. We’re calling it Mulberry, and you’ll be seeing it on GitHub in the next few weeks.

tl;dr: go here and watch this video.

While the lawyers are dotting the i’s and crossing the t’s as far as getting the code in your hands – we’re aiming for a permissive license similar to the licenses for PhoneGap and Dojo – I wanted to tell you a little bit about it.

Mulberry is two things. First, it’s a set of command-line tools (written in Ruby) that help you rapidly scaffold and configure an app, create content using simple Markdown and YAML, and test it in your browser, in a simulator, and on device. Second, and much more exciting to me as a JavaScript developer, it’s a framework built on top of the Dojo Toolkit for structuring an application and adding custom functionality in a sane way.

Mulberry lets you focus on the things that are unique to your application. It provides an underlying framework that includes a “router” for managing application state; built-in components and templates for displaying standard content types like text, audios, videos, feeds, and images; a simple API for defining custom functionality and integrating it with the system; and an HTML/CSS framework that uses SASS and HAML templates to make it easy to style your apps.

The basics of setting up an app are pretty well covered at the Mulberry site, but if you’re reading this, you’re probably a JavaScript developer, so I want to focus here on what Mulberry can do for you. First, though, let me back up and cover some terminology: Mulberry apps consist of a set of “nodes”; each node is assigned a template, and each template consists of components arranged in a layout. Nodes can have assets associated with them – text, audio, images, video, feeds, and data.

It’s the data asset that provides the most power to developers – you can create an arbitrary object, associate it with a node, and then any components that are in the template that’s being used to display the node will get access to that data.

A Twitter component offers a simple example. A node might have a data asset like this associated with it:

{ term : 'capitoljs', type : 'twitter' }

We could define a custom template for this page (mulberry create_template Twitter), and tell that template to include a Twitter component:

Twitter:
  screens:
    - name: index
      regions:
        -
          size: fixed
          scrollable: false
          components:
            - PageNav
        -
          size: flex
          scrollable: true
          components:
            - PageHeaderImage
            - custom:Twitter

Next, we’d define our Twitter component (mulberry create_component Twitter), which would create the skeleton of a component file:

dojo.provide('client.components.Twitter');

toura.component('Twitter', {
  componentTemplate : dojo.cache('client.components', 'Twitter/Twitter.haml'),

  prep : function() {

  },

  init : function() {

  }
});

One of the things the skeleton contains is a reference to the template for the component. The create_component command creates this file, which defines the DOM structure for the component. For the sake of this component, that template will just need to contain one line:

%ul.component.twitter

As I mentioned earlier, Mulberry components automatically get access to all of the assets that are attached to the node they’re displaying. This information is available as an object at this.node. Mulberry components also have two default methods that you can implement: the prep method and the init method.

The prep method is an opportunity to prepare your data before it’s rendered using the template; we won’t use it for the Twitter component, because the Twitter component will go out and fetch its data after the template is rendered. This is where the init method comes in – this is where you can tell your component what to do. Here’s what our Twitter component ends up looking like:

dojo.provide('client.components.Twitter');

mulberry.component('Twitter', {
  componentTemplate : dojo.cache('client.components', 'Twitter/Twitter.haml'),
  tweetTemplate : dojo.cache('client.components', 'Twitter/Tweet.haml'),

  init : function() {
    var data = dojo.filter(this.node.data, function(d) {
          return d.type === 'twitter'
        })[0].json;

    $.ajax('http://search.twitter.com/search.json?q=' + data.term, {
      dataType : 'jsonp',
      success : $.proxy(this, '_onLoad')
    });
  },

  _onLoad : function(data) {
    var tweets = data.results,
        tpl = mulberry.haml(this.tweetTemplate),
        html = $.map(tweets, function(tweet) {
          tweet.link = 'http://twitter.com/capitoljs/status/' + tweet.id_str;

          tweet.created_at = dojo.date.locale.format(
            new Date(tweet.created_at), {
              datePattern : 'EEE',
              timePattern : 'h:m a'
            }
          );

          tweet.text = tweet.text.replace(
            /@(\S+)/g,
            "<a href='http://twitter.com/#!/$1'>@$1</a>"
          );

          return tpl(tweet);
        }).join('');

    this.$domNode.html(html);
    this.region.refreshScroller();
  }
});

Note that when we define the data variable in the init method, we look at this.node.data, which is an array of all of the data objects associated with the node. We filter this array to find the first data object that is the right type – this means we can have lots of different data objects associated with a given node.

Note also that there’s a property this.$domNode that we’re calling jQuery methods on, and that we’re using jQuery’s $.ajax – Mulberry apps come with jQuery enabled by default, and if it’s enabled, helpers like this.$domNode become available to you. This means that very little knowledge of Dojo is required to start adding your own functionality to an app – if you need it, though, the full power of the Dojo Toolkit is available to you too.

Here’s what our component ends up looking like, with a little bit of custom CSS applied to our app:

screenshot

This is a pretty basic demo – Twitter is, indeed, the new hello world – but I hope it gives you a little bit of an idea about what you might be able to build with Mulberry. We’ve been using it in production to create content-rich mobile apps for our users for months now (connected to a web-based CMS instead of the filesystem, of course), and we’ve designed it specifically to be flexible enough to meet arbitrary client requests without the need to re-architect the underlying application.

If you know JavaScript, HTML, and CSS, Mulberry is a powerful tool to rapidly create a content-rich mobile application while taking advantage of an established infrastructure, rather than building it yourself. I’m excited to see what you’ll do with it!

Switching to Octopress


I’m taking a stab at starting a new blog at rmurphey.com, powered by Octopress, which is a set of tools, themes, and other goodness around a static site generator (SSG) called jekyll. A couple of people have noticed the new site and wondered what I’m doing, so I thought I’d take a couple of minutes to explain.

My old blog at blog.rebeccamurphey.com is managed using Posterous. It used to be a self-hosted WordPress site, but self-hosted WordPress sites are so 2009. One too many attacks by hackers made it way more trouble than it seemed to be worth. Posterous made switching from a WordPress install pretty easy, so I did that. All told, it took a few hours, and I was pretty happy.

For a few reasons, the old blog isn’t going anywhere:

  • I ran into some trouble importing the old content into jekyll. I was tired and I didn’t investigate the issues too much, so they’re probably solvable, but …
  • Some of the old content just isn’t that good, and since time is a finite resource, I don’t want to get too wrapped up in moving it over. Plus …
  • Frighteningly or otherwise, some of my posts have become reference material on the internet. If I move them, I’ve got to deal with redirections, and I have a feeling that’s not going to be an easy task with Posterous.

In hindsight, I should have switched directly from WordPress to an SSG. Despite my many complaints about Posterous – misformatted posts, lack of comment hyperlinks, a sign-in requirement for commenting, and lots more – in the end my decision to switch to a static site generator instead was more about having easy control over my content on my filesystem.

This article explains it well, but the bottom line, I think, is that static site generators are blogging tools for people who don’t need all the bullshit that’s been added to online tools in the interest of making them usable by people who don’t know wtf they’re doing. So, yes, to use an SSG, you have to know wtf you’re doing, and for me that’s a good thing: the tool gets out of my way and lets me focus on the writing.

As for Octopress, it seems pretty damn nifty – the default theme looks gorgeous on my desktop and on my phone, and it seems they’ve taken care to put common customization points in a single sass file. All that aside, though, one of my favorite parts about it is that my content is truly my content. If Octopress pisses me off – though I hope it won’t! – then I can simply take my markdown files and put them in some other SSG, upload the whole thing to my GitHub pages, and be done with it. Win all around.

Using Object Literals for Flow Control and Settings


I got an email the other day from someone reading through jQuery Fundamentals – they’d come across the section about patterns for performance and compression, which is based on a presentation Paul Irish gave back at the 2009 jQuery Conference in Boston.

In that section, there’s a bit about alternative patterns for flow control – that is, deciding what a program should do next. We’re all familiar with the standard if statement:

function isAnimal(thing) {
  if (thing === 'dog' || thing === 'cat') {
    console.log("yes!");
  } else {
    console.log("no");
  }
}

What stumped the person who emailed me, though, was when the same logic as we see above was written like this:

function isAnimal(thing) {
  if (({ cat : 1, dog : 1 })[ thing ]) {
    console.log("yes!");
  } else {
    console.log("no");
  }
}

What’s happening here is that we’re using a throwaway object literal to express the conditions under which we will say a thing is an animal. We could have stored the object in a variable first:

function isAnimal(thing) {
  var animals = {
    cat : 1,
    dog : 1
  };

  if (animals[ thing ]) {
    console.log("yes!");
  } else {
    console.log("no");
  }
}

However, that variable’s only purpose would be to provide this one lookup, so it can be argued that the version that doesn’t bother setting the variable is more economical. Reasonable people can probably disagree about whether this economy of bytes is a good tradeoff for readability – something like this is perfectly readable to a seasoned developer, but potentially puzzling otherwise – but it’s an interesting example of how we can use literals in JavaScript without bothering to store a value in a variable.

The pattern works with an array, too:

function animalByIndex(index) {
  return [ 'cat', 'dog' ][ index ];
}

It’s also useful for looking up values generally, which is how I find myself using it most often these days in my work with Toura, where we routinely branch our code depending on the form factor of the device we’re targeting:

function getBlingLevel(device) {
  return ({
    phone : 100,
    tablet : 200
  })[ device.type ];
}

As an added benefit, constructs that use this pattern will return the conveniently falsy undefined if you try to look up a value that doesn’t have a corresponding property in the object literal.
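For instance (getBlingLevel and the default value here are just for illustration):

// an unknown device type yields undefined, which we can guard with ||
var blingLevel = getBlingLevel(device) || 0;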

A great way to come across techniques like this is to read the source code of your favorite library (and other libraries too). Unfortunately, once discovered, these patterns can be difficult to decipher, even if you have pretty good Google fu. Just in case your neighborhood blogger isn’t available, IRC is alive and well in 2011, and it’s an excellent place to get access to smart folks eager to take the time to explain.

Lessons From a Rewrite


MVC and friends have been around for decades, but it’s only in the last couple of years that broad swaths of developers have started applying those patterns to JavaScript. As that awareness spreads, developers eager to use their newfound insight are presented with a target-rich environment, and the temptation to rewrite can be strong.

There’s a subtle reason that programmers always want to throw away the code and start over. The reason is that they think the old code is a mess. … The reason that they think the old code is a mess is because of a cardinal, fundamental law of programming: It’s harder to read code than to write it. - Joel Spolsky

When I started working with Toura Mobile late last year, they already had a product: a web-based CMS to create the structure of a mobile application and populate it with content, and a PhoneGap-based application to consume the output of the CMS inside a native application. Customers were paying, but the development team was finding that delivering new features was a struggle, and bug fixes seemed just as likely to break something else as not. They contacted me to see whether they should consider a rewrite.

With due deference to Spolsky, I don’t think it was a lack of readability driving their inclination to rewrite. In fact, the code wasn’t all that difficult to read or follow. The problem was that the PhoneGap side of things had been written to solve the problems of a single-purpose, one-off application, and it was becoming clear that it needed to be a flexible, extensible delivery system for all of the content combinations clients could dream up. It wasn’t an app — it was an app that made there be an app.

Where a new system concept or new technology is used, one has to build a system to throw away, for even the best planning is not so omniscient as to get it right the first time. Hence plan to throw one away; you will, anyhow. - Fred Brooks, The Mythical Man Month

By the time I’d reviewed the code and started writing up my findings, the decision had already been made: Toura was going to throw one away and start from scratch. For four grueling and exciting months, I helped them figure out how to do it better the second time around. In the end, I like to think we’ve come up with a solid architecture that’s going to adapt well to clients’ ever-changing needs. Here, then, are some of the lessons we learned along the way.

Understand what you’re rewriting

I had spent only a few days with the codebase when we decided that we were going to rewrite it. In some ways, this was good — I was a fresh set of eyes, someone who could think about the system in a new way — but in other ways, it was a major hindrance. We spent a lot of time at the beginning getting me up to speed on what, exactly, we were making; things that went without saying for existing team members did not, in fact, go without saying for me.

This constant need for explanation and clarification was frustrating at times, both for me and for the existing team, but it forced us to state the problem in plain terms. The value of this was incredible — as a team, we were far less likely to accept assumptions from the original implementation, even assumptions that seemed obvious.

One of the key features of Toura applications is the ability to update them “over the air” — it’s not necessary to put a new version in an app store in order to update an app’s content or even its structure. In the original app, this was accomplished via generated SQL diffs of the data. If the app was at version 3, and the data in the CMS was at version 10, then the app would request a patch file to upgrade version 3 to version 10. The CMS had to generate a diff for all possible combinations: version 3 to version 10, version 4 to version 10, etc. The diff consisted of queries to run against an SQLite database on the device. Opportunities for failures or errors were rampant, a situation exacerbated by the async nature of the SQLite interface.

In the new app, we replicated the feature with vastly less complexity — whenever there is an update, we just make the full data available at an app-specific URL as a JSON file, using the same format that we use to provide the initial data for the app on the device. The new data is stored on the device, but it’s also retained in memory while the application is running via Dojo’s Item File Read Store, which allows us to query it synchronously. The need for version-by-version diffs has been eliminated.
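Here’s a rough sketch of the pattern, assuming the JSON is shaped the way Item File Read Store expects (a detail I’m glossing over here):

dojo.require('dojo.data.ItemFileReadStore');

// appData is the parsed JSON we fetched from the app-specific URL
var store = new dojo.data.ItemFileReadStore({ data : appData });

// because the data is already in memory, this "asynchronous" fetch
// effectively completes synchronously
store.fetch({
  query : { type : 'node' },
  onComplete : function(items) {
    // work with the items for the matching nodes
  }
});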

Restating the problem led to a simpler, more elegant solution that greatly reduced the opportunities for errors and failure. As an added benefit, using JSON has allowed us to meet needs that we never anticipated — the flexibility it provides has become a valuable tool in our toolbox.

Identify pain points

If the point of a rewrite is to make development easier, then an important step is to figure out what, exactly, is making development hard. Again, this was a time to question assumptions — as it turned out, there were things that had come to be accepted burdens that were actually relatively easy to address.

One of the biggest examples of this was the time required to develop and test anything that might behave differently on one operating system versus another. For example, the Android OS has limited support for the audio and video tags, so a native workaround is required to play media on Android that is not required on iOS.

In the original code, this device-specific branching was handled in a way that undoubtedly made sense at the beginning but grew unwieldy over time. Developers would create Mustache templates, wrapping the template tags in /* */ so the templates were actually executable, and then compile those templates into plain JavaScript files for production. Here are a few lines from one of those templates:

/*  */
var mediaPath = "www/media/" + toura.pages.currentId + "/";
/*  */
/*  */
var mediaPath = [Toura.getTouraPath(), toura.pages.currentId].join("/");
/*  */
var imagesList = [], dimensionsList = [], namesList = [], thumbsList = [];
var pos = -1, count = 0;
/*  */
var pos = 0, count = 0;
/*  */

These templates were impossible to check with a code quality tool like JSHint, because it was standard to declare the same variable multiple times. Multiple declarations of the same variable meant that the order of those declarations was important, which made the templates tremendously fragile. The theoretical payoff was smaller code in production, but the cost of that byte shaving was high, and the benefit somewhat questionable — after all, we’d be delivering the code directly from the device, not over HTTP.

In the rewrite, we used a simple configuration object to specify information about the environment, and then we look at the values in that configuration object to determine how the app should behave. The configuration object is created as part of building a production-ready app, but in development we can alter configuration settings at will. Simple if statements replaced fragile template tags.
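A simplified illustration of the difference (the keys in this config object are invented for the example):

// created as part of the production build; editable at will in development
toura.app.Config = {
  os : 'android',
  supportsHtml5Video : false
};

var mediaPath = toura.app.Config.os === 'android' ?
  [ Toura.getTouraPath(), toura.pages.currentId ].join('/') :
  'www/media/' + toura.pages.currentId + '/';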

Since Dojo allows specifying code blocks for exclusion based on the settings you provide to the build process, we could mark code for exclusion if we really didn’t want it in production.
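Those pragmas look something like this; the label and the condition are whatever you configure in your build profile:

//>>excludeStart("production", kwArgs.production);
console.log('debug-only logging that never ships');
//>>excludeEnd("production");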

By using a configuration object instead of template tags for branching, we eliminated a major pain point in day-to-day development. While nothing matches the proving ground of the device itself, it’s now trivial to effectively simulate different device experiences from the comfort of the browser. We do the majority of our development there, with a high degree of confidence that things will work mostly as expected once we reach the device. If you’ve ever waited for an app to build and install to a device, then you know how much faster it is to just press Command-R in your browser instead.

Have a communication manifesto

Deciding that you’re going to embrace an MVC-ish approach to an application is a big step, but only a first step — there are a million more decisions you’re going to need to make, big and small. One of the widest-reaching decisions to make is how you’ll communicate among the various pieces of the application. There are all sorts of levels of communication, from application-wide state management — what page am I on? — to communication between UI components — when a user enters a search term, how do I get and display the results?

From the outset, I had a fairly clear idea of how this should work based on past experiences, but at first I took for granted that the other developers would see things the same way I did, and I wasn’t necessarily consistent myself. For a while we had several different patterns of communication, depending on who had written the code and when. Every time you went to use a component, it was pretty much a surprise which pattern it would use.

After one too many episodes of frustration, I realized that part of my job was going to be to lay down the law about this — it wasn’t that my way was more right than others, but rather that we needed to choose a way, or else reuse and maintenance was going to become a nightmare. Here’s what I came up with:

  • myComponent.set(key, value) to change state (with the help of setter methods from Dojo’s dijit._Widget mixin)
  • myComponent.on<Event>(componentEventData) to announce state changes and user interaction; Dojo lets us connect to the execution of arbitrary methods, so other pieces could listen for these methods to be executed.
  • dojo.publish(topic, [ data ]) to announce occurrences of app-wide interest, such as when the window is resized
  • myComponent.subscribe(topic) to allow individual components to react to published topics (a condensed sketch of these conventions in action follows this list)
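Here is a condensed, hypothetical component that exercises each of those conventions; the names are invented, but the shape is what we converged on:

dojo.require('dijit._Widget');

dojo.declare('toura.components.SearchInput', [ dijit._Widget ], {
  // 1. state changes happen via set(); Dojo runs this setter
  // automatically when someone calls myComponent.set('term', value)
  _setTermAttr : function(term) {
    this.term = term;
  },

  // 2. a method announcing user interaction; other pieces use
  // dojo.connect to listen for its execution
  onSearch : function(term) {},

  postCreate : function() {
    // 4. react to topics of app-wide interest
    this.subscribe('/window/resize', '_adjustLayout');
  },

  _adjustLayout : function() { /* respond to the new dimensions */ }
});

// 3. announce an occurrence of app-wide interest
dojo.publish('/window/resize', [ { width : 320, height : 480 } ]);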

Once we spelled out the patterns, the immediate benefit wasn’t maintainability or reuse; rather, we found that we didn’t have to make these decisions on a component-by-component basis anymore, and we could focus on the questions that were actually unique to a component. With conventions we could rely on, we were constantly discovering new ways to abstract and DRY our code, and the consistency across components meant it was easier to work with code someone else had written.

Sanify asynchronicity

One of the biggest challenges of JavaScript development — well, besides working with the DOM — is managing the asynchronicity of it all. In the old system, this was dealt with in various ways: sometimes a method would take a success callback and a failure callback; other times a function would return an object and check one of its properties on an interval.

images = toura.sqlite.getMedias(id, "image");

var onGetComplete = setInterval(function () {
  if (images.incomplete)
    return;

  clearInterval(onGetComplete);
  showImagesHelper(images.objs, choice);
}, 10);

The problem here, of course, is that if images.incomplete never gets set to false — that is, if the getMedias method fails — then the interval will never get cleared. Dojo and now jQuery (since version 1.5) offer a facility for handling this situation in an elegant and powerful way. In the new version of the app, the above functionality looks something like this:

toura.app.Data.get(id, 'image').then(showImages, showImagesFail);

The get method of toura.app.Data returns an immutable promise — the promise’s then method makes the resulting value of the asynchronous get method available to showImages, but does not allow showImages to alter the value. The promise returned by the get method can also be stored in a variable, so that additional callbacks can be attached to it.
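For example (logImages is a made-up additional consumer):

// the promise can be stored and handed around...
var imagesReady = toura.app.Data.get(id, 'image');

imagesReady.then(showImages, showImagesFail);

// ...and another interested party can attach its own callback later
imagesReady.then(logImages);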

Using promises vastly simplifies asynchronous code, which can be one of the biggest sources of complexity in a non-trivial application. By using promises, we got code that was easier to follow, components that were thoroughly decoupled, and new flexibility in how we responded to the outcome of an asynchronous operation.

Naming things is hard

Throughout the course of the rewrite we were constantly confronted with one of those pressing questions developers wrestle with: what should I name this variable/module/method/thing? Sometimes I would find myself feeling slightly absurd about the amount of time we’d spend naming a thing, but just recently I was reminded how much power those names have over our thinking.

Every application generated by the Toura CMS consists of a set of “nodes,” organized into a hierarchy. With the exception of pages that are standard across all apps, such as the search page, the base content type for a page inside an app is always a node — or rather, it was, until the other day. I was working on a new feature and struggling to figure out how I’d display a piece of content that was unique to the app but wasn’t really associated with a node at all. I pored over our existing code, seeing the word node on what felt like every other line. As an experiment, I changed that word node to baseObj in a few high-level files, and suddenly a whole world of solutions opened up to me — the name of a thing had been limiting my thinking.

The lesson here, for me, is that the time we spent (and spend) figuring out what to name a thing is not lost time; perhaps even more importantly, the goal should be to give a thing the most generic name that still conveys what the thing’s job — in the context in which you’ll use the thing — actually is.

Never write large apps

I touched on this earlier, but if there is one lesson I take from every large app I’ve worked on, it is this:

The secret to building large apps is never build large apps. Break up your applications into small pieces. Then, assemble those testable, bite-sized pieces into your big application. - Justin Meyer

The more tied components are to each other, the less reusable they will be, and the more difficult it becomes to make changes to one without accidentally affecting another. Much like we had a manifesto of sorts for communication among components, we strived for a clear delineation of responsibilities among our components. Each one should do one thing and do it well.

For example, simply rendering a page involves several small, single-purpose components:

function nodeRoute(route, nodeId, pageState) {
  pageState = pageState || {};

  var nodeModel = toura.app.Data.getModel(nodeId),
      page = toura.app.UI.getCurrentPage();

  if (!nodeModel) {
    toura.app.Router.home();
    return;
  }

  if (!page || !page.node || nodeId !== page.node.id) {
    page = toura.app.PageFactory.createPage('node', nodeModel);

    if (page.failure) {
      toura.app.Router.back();
      return;
    }

    toura.app.UI.showPage(page, nodeModel);
  }

  page.init(pageState);

  // record node pageview if it is node-only
  if (nodeId && !pageState.assetType) {
    dojo.publish('/node/view', [ route.hash ]);
  }

  return true;
}

The router observes a URL change, parses the parameters for the route from the URL, and passes those parameters to a function. The Data component gets the relevant data, and then hands it to the PageFactory component to generate the page. As the page is generated, the individual components for the page are also created and placed in the page. The PageFactory component returns the generated page, but at this point the page is not in the DOM. The UI component receives it, places it in the DOM, and handles the animation from the old page to the new one.

Every step is its own tiny app, making the whole process tremendously testable. The output of one step may become the input to another step, but when input and output are predictable, the questions our tests need to answer are trivial: “When I asked the Data component for the data for node123, did I get the data for node123?”
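As a sketch of what I mean (the spec style here is illustrative, not a verbatim test from our suite):

describe('toura.app.Data', function() {
  it('returns the model for the requested node', function() {
    var model = toura.app.Data.getModel('node123');
    expect(model.id).toBe('node123');
  });
});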

Individual UI components are their own tiny apps as well. On a page that displays a videos node, we have a video player component, a video list component, and a video caption component. Selecting a video in the list announces the selection via the list’s onSelect method. Dojo allows us to connect to the execution of object methods, so in the page controller, we have this:

this.connect(this.videoList, 'onSelect', function(assetId) {
  var video = this._videoById(assetId);
  this.videoCaption.set('content', video.caption || '');
  this.videoPlayer.play(assetId);
});

The page controller receives the message and passes it along to the other components that need to know about it — components don’t communicate directly with one another. This means the component that lists the videos can list anything, not just videos — its only job is to announce a selection, not to do anything as a result.

Keep rewriting

It takes confidence to throw work away … When people first start drawing, they’re often reluctant to redo parts that aren’t right … they convince themselves that the drawing is not that bad, really — in fact, maybe they meant it to look that way. - Paul Graham, “Taste for Makers”

The blank slate offered by a rewrite allows us to fix old mistakes, but inevitably we will make new ones in the process. As good stewards of our code, we must always be open to the possibility of a better way of doing a thing. “It works” should never be mistaken for “it’s done.”

A New Chapter


It was three years ago this summer that I got the call, bought the Yuengling, smoked the cigarettes, and began life as an independent consultant. It’s been (almost) three years of ups and downs, and, eventually, among the most rewarding experiences of my life. Day by day, I wrote my own job description, found my own clients, set my own schedule, and set my own agenda.

Starting tomorrow, it’s time for a new chapter in my working life: I’ll be joining Toura Mobile full-time as their lead JavaScript developer, continuing my work with them on creating a PhoneGap- and Dojo-based platform for the rapid creation of content-rich mobile applications.

I’ve been working with Toura for about six months now, starting shortly after I met Matt Rogish, their director of development, at a JavaScript event in New York. They brought me on as a consultant to review their existing application, and the eventual decision was to rewrite it from the ground up, using the lessons learned and knowledge gained from the first version to inform the second. It was a risky decision, but it has paid off: earlier this year, Toura started shipping apps built with the rewritten system, and the care we took to create modular, loosely coupled components from the get-go has served us well, meeting current needs while making it easier to develop new features. With the rewrite behind us, these days we’re using the solid foundation we built to allow users of the platform to create ever more customized experiences in their applications.

If you know me at all, you know that I’ve been pretty die-hard about being an independent consultant, so you might think this was a difficult decision. Oddly, it wasn’t — I’ve enjoyed these last several months immensely, the team I work with is fantastic, and I’ve never felt more proud of work I’ve done. Whenever I found myself wondering whether Toura might eventually tire of paying my consulting rates, I’d get downright mopey. Over the course of three years, I’ve worked hard for all of my clients, but this is the first time I’ve felt so invested in a project’s success or failure, like there was a real and direct correlation between my efforts and the outcome. It’s a heady feeling, and I hope and expect it to continue for a while.

By the way, I’ll be talking about the rewrite at both TXJS and GothamJS in the next few weeks.

Also: we’re hiring :)

Getting Better at JavaScript

I seem to be getting a lot of emails these days asking a deceptively simple question: “How do I get better at JavaScript?” What follows are some semi-random thoughts on the subject:

The thing that I’ve come to realize about these questions is that some things just take time. I wish I could write down “Ten Things You Need to Know to Make You Amazing at the JavaScript,” but it doesn’t work that way. Books are fantastic at exposing you to guiding principles and patterns, but if your brain isn’t ready to connect them with real-world problems, it won’t.

The number one thing that will make you better at writing JavaScript is writing JavaScript. It’s OK if you cringe at it six months from now. It’s OK if you know it could be better if you only understood X, Y, or Z a little bit better. Cultivate dissatisfaction, and fear the day when you aren’t disappointed with the code you wrote last month.

Encounters with new concepts are almost always eventually rewarding, but in the short term I’ve found they can be downright demoralizing if you’re not aware of the bigger picture. The first step to being better at a thing is realizing you could be better at that thing, and initially that realization tends to involve being overwhelmed with all you don’t know. The first JSConf, in 2009, was exactly this for me. I showed up eager to learn but feeling pretty cocky about my skills. I left brutally aware of the smallness of my knowledge, and it was a transformational experience: getting good at a thing involves seeking out opportunities to feel small.

One of the most helpful things in my learning has been having access to smart people who are willing to answer my questions and help me when I get stuck. Meeting these people and maintaining relationships with them is hard work, and it generally involves interacting with them in real life, not just on the internet, but the dividends of this investment are unfathomable.

To that end, attend conferences. Talk to the speakers and ask them questions. Write them emails afterwards saying that it was nice to meet them. Subscribe to their blogs. Pay attention to what they’re doing and evangelize their good work.

Remember, too, that local meetups can be good exposure to new ideas too, even if on a smaller scale. The added bonus of local meetups is that the people you’ll meet there are … local! It’s easy to maintain relationships with them and share in learning with them in real life.

(An aside: If your company won’t pay for you to attend any conferences, make clear how short-sighted your company’s decision is and start looking for a new job, because your company does not deserve you. Then, if you can, cough up the money and go anyway. As a self-employed consultant, I still managed to find something like $10,000 to spend on travel- and conference-related expenses last year, and I consider every penny of it to be money spent on being better at what I do. When I hear about big companies that won’t fork over even a fraction of that for an employee who is raising their hand and saying “help me be better at what I do!”, I rage.)

Make a point of following the bug tracker and repository for an active open-source project. Read the bug reports. Try the test cases. Understand the commits. I admit that I have never been able to make myself do this for extended periods of time, but I try to drop in on certain projects now and then because it exposes me to arbitrary code and concepts that I might not otherwise run into.

Read the source for your favorite library, and refer to it when you need to know how a method works. Consult the documentation when there’s some part of the source you don’t understand. When choosing tools and plugins, read the source, and see whether there are things you’d do differently.

Eavesdrop on communities, and participate when you have something helpful to add. Lurk on a mailing list or a forum or in an IRC channel, help other people solve problems. If you’re not a help vampire — if you give more than you take — the “elders” of a community will notice, and you will be rewarded with their willingness to help you when it matters.

Finally, books:

  • JavaScript: The Good Parts, by Douglas Crockford. It took me more than one try to get through this not-very-thick book, and it is not gospel. However, it is mandatory reading for any serious JavaScript developer.
  • Eloquent JavaScript, by Marijn Haverbeke (also in print). This is another book that I consider mandatory; you may not read straight through it, but you should have it close at hand. I like it so much that I actually bought the print version, and then was lucky enough to get a signed copy from Marijn at JSConf 2011.
  • JavaScript Patterns, by Stoyan Stefanov. This was the book that showed me there were names for so many patterns that I’d discovered purely through fumbling around with my own code. I read it on the flight to the 2010 Boston jQuery Conference, and it’s definitely the kind of book that I wouldn’t have gotten as much out of a year earlier, when I had a lot less experience with the kinds of problems it addresses.
  • Object-Oriented JavaScript, by Stoyan Stefanov. It’s been ages since I read this book, and so I confess that I don’t have a strong recollection of it, but it was probably the first book I read that got me thinking about structuring JavaScript code beyond the “get some elements, do something with them” paradigm of jQuery.

Good luck.