Adventures in JavaScript Development

Lessons From a Rewrite


MVC and friends have been around for decades, but it’s only in the last couple of years that broad swaths of developers have started applying those patterns to JavaScript. As that awareness spreads, developers eager to use their newfound insight are presented with a target-rich environment, and the temptation to rewrite can be strong.

There’s a subtle reason that programmers always want to throw away the code and start over. The reason is that they think the old code is a mess. … The reason that they think the old code is a mess is because of a cardinal, fundamental law of programming: It’s harder to read code than to write it. - Joel Spolsky

When I started working with Toura Mobile late last year, they already had a product: a web-based CMS to create the structure of a mobile application and populate it with content, and a PhoneGap-based application to consume the output of the CMS inside a native application. Customers were paying, but the development team was finding that delivering new features was a struggle, and bug fixes seemed just as likely to break something else as not. They contacted me to see whether they should consider a rewrite.

With due deference to Spolsky, I don’t think it was a lack of readability driving their inclination to rewrite. In fact, the code wasn’t all that difficult to read or follow. The problem was that the PhoneGap side of things had been written to solve the problems of a single-purpose, one-off application, and it was becoming clear that it needed to be a flexible, extensible delivery system for all of the content combinations clients could dream up. It wasn’t an app — it was an app that made there be an app.

Where a new system concept or new technology is used, one has to build a system to throw away, for even the best planning is not so omniscient as to get it right the first time. Hence plan to throw one away; you will, anyhow. - Fred Brooks, The Mythical Man-Month

By the time I’d reviewed the code and started writing up my findings, the decision had already been made: Toura was going to throw one away and start from scratch. For four grueling and exciting months, I helped them figure out how to do it better the second time around. In the end, I like to think we’ve come up with a solid architecture that’s going to adapt well to clients’ ever-changing needs. Here, then, are some of the lessons we learned along the way.

Understand what you’re rewriting

I had spent only a few days with the codebase when we decided that we were going to rewrite it. In some ways, this was good — I was a fresh set of eyes, someone who could think about the system in a new way — but in other ways, it was a major hindrance. We spent a lot of time at the beginning getting me up to speed on what, exactly, we were making; things that went without saying for existing team members did not, in fact, go without saying for me.

This constant need for explanation and clarification was frustrating at times, both for me and for the existing team, but it forced us to state the problem in plain terms. The value of this was incredible — as a team, we were far less likely to accept assumptions from the original implementation, even assumptions that seemed obvious.

One of the key features of Toura applications is the ability to update them “over the air” — it’s not necessary to put a new version in an app store in order to update an app’s content or even its structure. In the original app, this was accomplished via generated SQL diffs of the data. If the app was at version 3, and the data in the CMS was at version 10, then the app would request a patch file to upgrade version 3 to version 10. The CMS had to generate a diff for all possible combinations: version 3 to version 10, version 4 to version 10, etc. The diff consisted of queries to run against an SQLite database on the device. Opportunities for failures or errors were rampant, a situation exacerbated by the async nature of the SQLite interface.

In the new app, we replicated the feature with vastly less complexity — whenever there is an update, we just make the full data available at an app-specific URL as a JSON file, using the same format that we use to provide the initial data for the app on the device. The new data is stored on the device, but it’s also retained in memory while the application is running via Dojo’s Item File Read Store, which allows us to query it synchronously. The need for version-by-version diffs has been eliminated.
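
To make this concrete, here's a rough sketch of the approach (the data shape is hypothetical, not Toura's actual format): we hand the JSON to a dojo.data.ItemFileReadStore, and because the store keeps the parsed data in memory, fetches complete without waiting on I/O.

dojo.require("dojo.data.ItemFileReadStore");

// hypothetical data shape -- illustrative only
var store = new dojo.data.ItemFileReadStore({
  data : {
    identifier : "id",
    items : [
      { id : "node1", title : "Exhibits" },
      { id : "node2", title : "Visitor Info" }
    ]
  }
});

store.fetchItemByIdentity({
  identity : "node1",
  onItem : function(item) {
    // runs immediately for in-memory data
    console.log(store.getValue(item, "title")); // "Exhibits"
  }
});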

Restating the problem led to a simpler, more elegant solution that greatly reduced the opportunities for errors and failure. As an added benefit, using JSON has allowed us to meet needs that we never anticipated — the flexibility it provides has become a valuable tool in our toolbox.

Identify pain points

If the point of a rewrite is to make development easier, then an important step is to figure out what, exactly, is making development hard. Again, this was a time to question assumptions — as it turned out, there were things that had come to be accepted burdens that were actually relatively easy to address.

One of the biggest examples of this was the time required to develop and test anything that might behave differently on one operating system versus another. For example, the Android OS has limited support for the audio and video tags, so a native workaround is required to play media on Android that is not required on iOS.

In the original code, this device-specific branching was handled in a way that undoubtedly made sense at the beginning but grew unwieldy over time. Developers would create Mustache templates, wrapping the template tags in /* */ so the templates were actually executable, and then compile those templates into plain JavaScript files for production. Here are a few lines from one of those templates:

/* {{#ios}} */
var mediaPath = "www/media/" + toura.pages.currentId + "/";
/* {{/ios}} */
/* {{#android}} */
var mediaPath = [Toura.getTouraPath(), toura.pages.currentId].join("/");
/* {{/android}} */
var imagesList = [], dimensionsList = [], namesList = [], thumbsList = [];
var pos = -1, count = 0;
/* {{#android}} */
var pos = 0, count = 0;
/* {{/android}} */

These templates were impossible to check with a code quality tool like JSHint, because it was standard to declare the same variable multiple times. Multiple declarations of the same variable meant that the order of those declarations was important, which made the templates tremendously fragile. The theoretical payoff was smaller code in production, but the cost of that byte shaving was high, and the benefit somewhat questionable — after all, we’d be delivering the code directly from the device, not over HTTP.

In the rewrite, we used a simple configuration object to specify information about the environment, and then looked at the values in that configuration object to determine how the app should behave. The configuration object is created as part of building a production-ready app, but in development we can alter configuration settings at will. Simple if statements replaced fragile template tags.
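
As a sketch of the pattern (toura.app.Config is illustrative here, and I'm simplifying the real thing), the template example above boils down to:

var config = toura.app.Config; // generated at build time; editable in development

var mediaPath = config.isAndroid ?
      [ Toura.getTouraPath(), toura.pages.currentId ].join("/") :
      "www/media/" + toura.pages.currentId + "/";

var pos = config.isAndroid ? 0 : -1,
    count = 0;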

Since Dojo allows specifying code blocks for exclusion based on the settings you provide to the build process, we could mark code for exclusion if we really didn’t want it in production.
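
Those exclusions use the build system's comment pragmas, which look something like this (the "production" tag here is just an example):

//>>excludeStart("production", kwArgs.production);
console.log("expensive debugging output that should never ship");
//>>excludeEnd("production");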

By using a configuration object instead of template tags for branching, we eliminated a major pain point in day-to-day development. While nothing matches the proving ground of the device itself, it’s now trivial to effectively simulate different device experiences from the comfort of the browser. We do the majority of our development there, with a high degree of confidence that things will work mostly as expected once we reach the device. If you’ve ever waited for an app to build and install to a device, then you know how much faster it is to just press Command-R in your browser instead.

Have a communication manifesto

Deciding that you’re going to embrace an MVC-ish approach to an application is a big step, but only a first step — there are a million more decisions you’re going to need to make, big and small. One of the widest-reaching decisions to make is how you’ll communicate among the various pieces of the application. There are all sorts of levels of communication, from application-wide state management — what page am I on? — to communication between UI components — when a user enters a search term, how do I get and display the results?

From the outset, I had a fairly clear idea of how this should work based on past experiences, but at first I took for granted that the other developers would see things the same way I did, and I wasn’t necessarily consistent myself. For a while we had several different patterns of communication, depending on who had written the code and when. Every time you went to use a component, it was pretty much a surprise which pattern it would use.

After one too many episodes of frustration, I realized that part of my job was going to be to lay down the law about this — it wasn’t that my way was more right than others, but rather that we needed to choose a way, or else reuse and maintenance was going to become a nightmare. Here’s what I came up with:

  • myComponent.set(key, value) to change state (with the help of setter methods from Dojo’s dijit._Widget mixin)
  • myComponent.on<Event>(componentEventData) to announce state changes and user interaction; Dojo lets us connect to the execution of arbitrary methods, so other pieces could listen for these methods to be executed.
  • dojo.publish(topic, [ data ]) to announce occurrences of app-wide interest, such as when the window is resized
  • myComponent.subscribe(topic) to allow individual components to react to published topics

Once we spelled out the patterns, the immediate benefit wasn’t maintainability or reuse; rather, we found that we didn’t have to make these decisions on a component-by-component basis anymore, and we could focus on the questions that were actually unique to a component. With conventions we could rely on, we were constantly discovering new ways to abstract and DRY our code, and the consistency across components meant it was easier to work with code someone else had written.
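
For the curious, here is a bare-bones sketch of those conventions in action; the component and topic names are made up.

dojo.require("dijit._Widget");

dojo.declare("toura.components.Search", [ dijit._Widget ], {
  // myComponent.set("term", value) routes through this setter
  _setTermAttr : function(term) {
    this.term = term;
    this.onSearch(term); // announce the change; others can connect to this
  },

  // an intentionally empty hook for dojo.connect to latch onto
  onSearch : function(term) {},

  postCreate : function() {
    // react to app-wide occurrences published elsewhere
    this.subscribe("/window/resize", "_place");
  },

  _place : function(dimensions) { /* reposition as needed */ }
});

// elsewhere, announcing an occurrence of app-wide interest
dojo.publish("/window/resize", [ { w : 320, h : 480 } ]);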

Sanify asynchronicity

One of the biggest challenges of JavaScript development — well, besides working with the DOM — is managing the asynchronicity of it all. In the old system, this was dealt with in various ways: sometimes a method would take a success callback and a failure callback; other times a function would return an object and check one of its properties on an interval.

var images = toura.sqlite.getMedias(id, "image");

var onGetComplete = setInterval(function () {
  if (images.incomplete)
    return;

  clearInterval(onGetComplete);
  showImagesHelper(images.objs, choice);
}, 10);

The problem here, of course, is that if images.incomplete never gets set to false — that is, if the getMedias method fails — then the interval will never get cleared. Dojo and now jQuery (since version 1.5) offer a facility for handling this situation in an elegant and powerful way. In the new version of the app, the above functionality looks something like this:

toura.app.Data.get(id, 'image').then(showImages, showImagesFail);

The get method of toura.app.Data returns an immutable promise — the promise’s then method makes the resulting value of the asynchronous get method available to showImages, but does not allow showImages to alter the value. The promise returned by the get method can also be stored in a variable, so that additional callbacks can be attached to it.
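
That storability is worth a second look. Roughly, and with made-up variable names, it means we can do this:

var pending = toura.app.Data.get(id, "image");

pending.then(showImages, showImagesFail);

// later -- even if the data has already arrived -- we can
// still register an interest in the outcome
pending.then(function(images) {
  console.log("got " + images.length + " images");
});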

Using promises vastly simplifies asynchronous code, which can be one of the biggest sources of complexity in a non-trivial application. By using promises, we got code that was easier to follow, components that were thoroughly decoupled, and new flexibility in how we responded to the outcome of an asynchronous operation.

Naming things is hard

Throughout the course of the rewrite we were constantly confronted with one of those pressing questions developers wrestle with: what should I name this variable/module/method/thing? Sometimes I would find myself feeling slightly absurd about the amount of time we’d spend naming a thing, but just recently I was reminded how much power those names have over our thinking.

Every application generated by the Toura CMS consists of a set of “nodes,” organized into a hierarchy. With the exception of pages that are standard across all apps, such as the search page, the base content type for a page inside an app is always a node — or rather, it was, until the other day. I was working on a new feature and struggling to figure out how I’d display a piece of content that was unique to the app but wasn’t really associated with a node at all. I pored over our existing code, seeing the word node on what felt like every other line. As an experiment, I changed that word node to baseObj in a few high-level files, and suddenly a whole world of solutions opened up to me — the name of a thing had been limiting my thinking.

The lesson here, for me, is that the time we spent (and spend) figuring out what to name a thing is not lost time; perhaps even more importantly, the goal should be to give a thing the most generic name that still conveys what the thing’s job — in the context in which you’ll use the thing — actually is.

Never write large apps

I touched on this earlier, but if there is one lesson I take from every large app I’ve worked on, it is this:

The secret to building large apps is never build large apps. Break up your applications into small pieces. Then, assemble those testable, bite-sized pieces into your big application. - Justin Meyer

The more tied components are to each other, the less reusable they will be, and the more difficult it becomes to make changes to one without accidentally affecting another. Much like we had a manifesto of sorts for communication among components, we strived for a clear delineation of responsibilities among our components. Each one should do one thing and do it well.

For example, simply rendering a page involves several small, single-purpose components:

function nodeRoute(route, nodeId, pageState) {
  pageState = pageState || {};

  var nodeModel = toura.app.Data.getModel(nodeId),
      page = toura.app.UI.getCurrentPage();

  if (!nodeModel) {
    toura.app.Router.home();
    return;
  }

  if (!page || !page.node || nodeId !== page.node.id) {
    page = toura.app.PageFactory.createPage('node', nodeModel);

    if (page.failure) {
      toura.app.Router.back();
      return;
    }

    toura.app.UI.showPage(page, nodeModel);
  }

  page.init(pageState);

  // record node pageview if it is node-only
  if (nodeId && !pageState.assetType) {
    dojo.publish('/node/view', [ route.hash ]);
  }

  return true;
}

The router observes a URL change, parses the parameters for the route from the URL, and passes those parameters to a function. The Data component gets the relevant data, and then hands it to the PageFactory component to generate the page. As the page is generated, the individual components for the page are also created and placed in the page. The PageFactory component returns the generated page, but at this point the page is not in the DOM. The UI component receives it, places it in the DOM, and handles the animation from the old page to the new one.

Every step is its own tiny app, making the whole process tremendously testable. The output of one step may become the input to another step, but when input and output are predictable, the questions our tests need to answer are trivial: “When I asked the Data component for the data for node123, did I get the data for node123?”
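
Here's a sketch of what such a test might look like using Dojo's D.O.H. harness (the node ID is made up):

doh.register("toura.app.Data", [
  function getModel(t) {
    var model = toura.app.Data.getModel("node123");
    t.is("node123", model.id); // did we get the data for node123?
  }
]);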

Individual UI components are their own tiny apps as well. On a page that displays a videos node, we have a video player component, a video list component, and a video caption component. Selecting a video in the list announces the selection via the list’s onSelect method. Dojo allows us to connect to the execution of object methods, so in the page controller, we have this:

this.connect(this.videoList, 'onSelect', function(assetId) {
  var video = this._videoById(assetId);
  this.videoCaption.set('content', video.caption || '');
  this.videoPlayer.play(assetId);
});

The page controller receives the message and passes it along to the other components that need to know about it — components don’t communicate directly with one another. This means the component that lists the videos can list anything, not just videos — its only job is to announce a selection, not to do anything as a result.

Keep rewriting

It takes confidence to throw work away … When people first start drawing, they’re often reluctant to redo parts that aren’t right … they convince themselves that the drawing is not that bad, really — in fact, maybe they meant it to look that way. - Paul Graham, “Taste for Makers”

The blank slate offered by a rewrite allows us to fix old mistakes, but inevitably we will make new ones in the process. As good stewards of our code, we must always be open to the possibility of a better way of doing a thing. “It works” should never be mistaken for “it’s done.”

A New Chapter


It was three years ago this summer that I got the call, bought the Yuengling, smoked the cigarettes, and began life as an independent consultant. It’s been (almost) three years of ups and downs, and, eventually, one of the most rewarding experiences of my life. Day by day, I wrote my own job description, found my own clients, set my own schedule, and set my own agenda.

Starting tomorrow, it’s time for a new chapter in my working life: I’ll be joining Toura Mobile full-time as their lead JavaScript developer, continuing my work with them on creating a PhoneGap- and Dojo-based platform for the rapid creation of content-rich mobile applications.

I’ve been working with Toura for about six months now, starting shortly after I met Matt Rogish, their director of development, at a JavaScript event in New York. They brought me on as a consultant to review their existing application, and the eventual decision was to rewrite it from the ground up, using the lessons learned and knowledge gained from the first version to inform the second. It was a risky decision, but it’s paid off: earlier this year, Toura started shipping apps built with the rewritten system, and the care we took to create modular, loosely coupled components from the get-go has served us well, meeting current needs while making it easier to develop new features. With the rewrite behind us, these days we’re using the solid foundation we built to allow users of the platform to create ever more customized experiences in their applications.

If you know me at all, you know that I’ve been pretty die-hard about being an independent consultant, so you might think this was a difficult decision. Oddly, it wasn’t — I’ve enjoyed these last several months immensely, the team I work with is fantastic, and I’ve never felt more proud of work I’ve done. Whenever I found myself wondering whether Toura might eventually tire of paying my consulting rates, I’d get downright mopey. Over the course of three years, I’ve worked hard for all of my clients, but this is the first time I’ve felt so invested in a project’s success or failure, like there was a real and direct correlation between my efforts and the outcome. It’s a heady feeling, and I hope and expect it to continue for a while.

By the way, I’ll be talking about the rewrite at both TXJS and GothamJS in the next few weeks.

Also: we’re hiring :)

Getting Better at JavaScript

I seem to be getting a lot of emails these days asking a deceptively simple question: “How do I get better at JavaScript?” What follows are some semi-random thoughts on the subject:

The thing that I’ve come to realize about these questions is that some things just take time. I wish I could write down “Ten Things You Need to Know to Make You Amazing at the JavaScript,” but it doesn’t work that way. Books are fantastic at exposing you to guiding principles and patterns, but if your brain isn’t ready to connect them with real-world problems, they won’t stick.

The number one thing that will make you better at writing JavaScript is writing JavaScript. It’s OK if you cringe at it six months from now. It’s OK if you know it could be better if you only understood X, Y, or Z a little bit better. Cultivate dissatisfaction, and fear the day when you aren’t disappointed with the code you wrote last month.

Encounters with new concepts are almost always eventually rewarding, but in the short term I’ve found they can be downright demoralizing if you’re not aware of the bigger picture. The first step to being better at a thing is realizing you could be better at that thing, and initially that realization tends to involve being overwhelmed with all you don’t know. The first JSConf, in 2009, was exactly this for me. I showed up eager to learn but feeling pretty cocky about my skills. I left brutally aware of the smallness of my knowledge, and it was a transformational experience: getting good at a thing involves seeking out opportunities to feel small.

One of the most helpful things in my learning has been having access to smart people who are willing to answer my questions and help me when I get stuck. Meeting these people and maintaining relationships with them is hard work, and it generally involves interacting with them in real life, not just on the internet, but the dividends of this investment are unfathomable.

To that end, attend conferences. Talk to the speakers and ask them questions. Write them emails afterwards saying that it was nice to meet them. Subscribe to their blogs. Pay attention to what they’re doing and evangelize their good work.

Remember, too, that local meetups can be good exposure to new ideas, even if on a smaller scale. The added bonus of local meetups is that the people you’ll meet there are … local! It’s easy to maintain relationships with them and share in learning with them in real life.

(An aside: If your company won’t pay for you to attend any conferences, make clear how short-sighted your company’s decision is and start looking for a new job, because your company does not deserve you. Then, if you can, cough up the money and go anyway. As a self-employed consultant, I still managed to find something like $10,000 to spend on travel- and conference-related expenses last year, and I consider every penny of it to be money spent on being better at what I do. When I hear about big companies that won’t fork over even a fraction of that for an employee who is raising their hand and saying “help me be better at what I do!”, I rage.)

Make a point of following the bug tracker and repository for an active open-source project. Read the bug reports. Try the test cases. Understand the commits. I admit that I have never been able to make myself do this for extended periods of time, but I try to drop in on certain projects now and then because it exposes me to arbitrary code and concepts that I might not otherwise run into.

Read the source for your favorite library, and refer to it when you need to know how a method works. Consult the documentation when there’s some part of the source you don’t understand. When choosing tools and plugins, read the source, and see whether there are things you’d do differently.

Eavesdrop on communities, and participate when you have something helpful to add. Lurk on a mailing list or a forum or in an IRC channel, help other people solve problems. If you’re not a help vampire — if you give more than you take — the “elders” of a community will notice, and you will be rewarded with their willingness to help you when it matters.

Finally, books:

  • JavaScript: The Good Parts, by Douglas Crockford. It took me more than one try to get through this not-very-thick book, and it is not gospel. However, it is mandatory reading for any serious JavaScript developer.
  • Eloquent JavaScript, by Marijn Haverbeke (also in print). This is another book that I consider mandatory; you may not read straight through it, but you should have it close at hand. I like it so much that I actually bought the print version, and then was lucky enough to get a signed copy from Marijn at JSConf 2011.
  • JavaScript Patterns, by Stoyan Stefanov. This was the book that showed me there were names for so many patterns that I’d discovered purely through fumbling around with my own code. I read it on the flight to the 2010 Boston jQuery Conference, and it’s definitely the kind of book that I wouldn’t have gotten as much out of a year earlier, when I had a lot less experience with the kinds of problems it addresses.
  • Object-Oriented JavaScript, by Stoyan Stefanov. It’s been ages since I read this book, and so I confess that I don’t have a strong recollection of it, but it was probably the first book I read that got me thinking about structuring JavaScript code beyond the “get some elements, do something with them” paradigm of jQuery.

Good luck.

Objects as Arguments: Where Do You Draw the Line?

I was reviewing some code last week and came across a snippet that looked a lot like this:

var someObject = {
  // ...

  onSuccess : function(resp) {
    // ...
    this.someMethod(resp.token, resp.host, resp.key, resp.secret);
  },

  someMethod : function(token, host, key, secret) {
    // ...
  }
};

My immediate response was to suggest that it didn’t make sense to be passing four separate arguments to someMethod, especially when the arguments were being “unpacked” from an already-existing object. Certainly we could just pass the resp object directly to someMethod, and let someMethod unpack it as necessary – we’d save some bytes, and we’d also leave ourselves some room to grow. “I’m not a big fan of functions that take four arguments,” I said in my GitHub comment.

To the original author’s credit, “because I say so” wasn’t sufficient reason to rewrite code that was working just fine, thank you very much. If four arguments was too many, was two arguments too many? Why draw the line at four? Surely the four-argument signature helped indicate to future developers what was required in order for the function to … function. Right? My hackles momentarily raised, I parried by pointing out that if the arguments were actually required by the function, maybe the function ought to actually check for their presence before using them. Ha! While the original author was distracted by my disarming logic, I fretted over the fact that I use a function that takes four arguments every day: dojo.connect(node, 'click', contextObj, 'handlerMethod'). Ohnoes.
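
For what it's worth, that presence check could be as simple as this sketch (ensurePresent is a hypothetical helper, not something we actually shipped):

function ensurePresent(args, required) {
  // complain loudly if a required key is absent
  dojo.forEach(required, function(key) {
    if (!(key in args)) {
      throw new Error("Missing required argument: " + key);
    }
  });
}

// inside someMethod(resp):
// ensurePresent(resp, [ 'token', 'host', 'key', 'secret' ]);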

So where do you draw the line? Certainly you could write that dojo.connect call like so:

dojo.connect({
  node : node,
  event : 'click',
  context : contextObj,
  method : 'handlerMethod'
});

This, though, might make you poke your eyes out. It certainly isn’t as concise as the four-argument approach, and it makes a lot of things like partial application a lot harder. Clearly there’s more to this than “if there’s more than four arguments, put them in an object” … but what are the rules?

Optional Arguments

Probably the most compelling reason to use an object is when there are several optional arguments. For example, last fall I was reviewing some code from a potential training client, and I came across this:

addBling('#awesome', 'fuchsia', 'slow', null, null, 3, 'done!');

No one can argue that this is not terrible, and yet every experienced JavaScript developer knows how the developer(s) who wrote it arrived there. At first, the function needed three arguments, and all was good with the world. But then, it seemed like the same function could be used to do another thing by just passing two more arguments – no big deal, because if those two arguments weren’t present, then just the first three would suffice. Five arguments certainly isn’t that bad, right? After that, though, things went south: for whatever undoubtedly marketing-department-driven reason, suddenly the original three-argument case and the later five-argument case both needed to receive two more arguments, and these two new arguments were mandatory. Now both cases had seven-argument signatures, and in some cases, two of those seven arguments needed to be null so nothing would break.

This case demonstrates the most compelling reason to switch to using an object instead: optional arguments. When the developer discovered that the original, three-argument addBling could be used for the five-argument case as well, it was probably time to refactor:

// original
addBling('#awesome', 'fuchsia', 'slow');

// new hotness
addBling('#awesome', {
  color : 'fuchsia',
  speed : 'slow'
});

Then, the same function could be used while passing it more information about how to behave in the five-argument case:

addBling('#omgSoAwesome', {
  color : 'fuchsia',
  speed : 'slow',
  unicorns : 3,
  rainbows : 5
});

Then, when it came time to add yet more bling, the function signature wouldn’t need to change:

addBling('#awesome', {
  color : 'fuchsia',
  speed : 'slow',
  timesToBlink : 3,
  alertOnSuccess : 'done!'
});

addBling('#omgSoAwesome', {
  color : 'purple',
  speed : 'fast',
  unicorns : 3,
  rainbows : 5,
  timesToBlink : 9001,
  alertOnSuccess : 'woohoo!'
});

Extensibility and Future-Proofing

Another case for passing in an object is when you want the flexibility that an object provides, even if your code doesn’t require it for now:

var Person = function(args) {
  this.firstName = args.firstName;
  this.lastName = args.lastName;
  return this;
};

For now, you only want to be able to provide the first and last name of the person – it would work just fine to create a function signature for the Person constructor that took exactly those two arguments, because indeed they are required. On the other hand, though, this is incredibly short-sighted – while first and last name may be all that you care about now, there’s obviously more to a person than those two attributes, and eventually you may want to provide attributes such as age, occupation, etc. Doing this with individual arguments quickly becomes unsustainable. Besides that, though, it also makes assigning instance properties a pain in the ass. By passing an object, we can rewrite the above code as such:

var Person = function(args) {
  dojo.mixin(this, args);
  // jQuery: $.extend(this, args);
  return this;
};

Now – assuming this is what we want – we can mix in any settings we provide in the args argument. Dojo, for example, bakes this ability into anything that inherits from dijit._Widget:

var thinger = new my.Thinger({
  title : 'Favorite Animals',
  animals : [ 'Narwhal', 'Lemur', 'Honey Badger' ]
});

Use Objects for Related Data

An important qualifier here is that all of the properties of an object that we’ve talked about passing to our Person constructor are related – they all are saying something about the Person you’re creating. What if creating our Person was asynchronous, and we wanted to run a function once our Person was created? In a (contrived) case like that, I think it does make sense to pass in a separate argument:

new Person(configObj, fn);

In this particular example, we still only have two arguments – we haven’t wandered into that muddy realm of four or more. That said, I think this distinction is part of what makes dojo.connect(node, 'click', contextObj, 'handlerMethod') OK: the arguments are four distinctly different types of information. Taken together, they have an almost narrative quality: when this node receives a click, use the context object’s handlerMethod. A signature like new Person('Rebecca', 'Murphey', 34, 'web developer', 2 /*cats*/, 2 /*dogs*/) doesn’t feel the same as the dojo.connect example – it’s information that’s too related to be expressed as independent arguments.

Four or More, Time to Refactor?

I think the bottom line here is a) it’s complicated, and b) if your function signature has four or more arguments, you should almost certainly consider whether there’s a better way to do it. If the arguments are super-related, it may be they belong in an object, so you get the benefit of easy extensibility down the road. If there are optional arguments, you almost certainly want to wrap those in an object to avoid passing null over and over again.

Personally, my threshold is actually closer to two arguments – if I find myself wanting a third argument, I question whether my function is trying to do more than it should be doing – maybe I should do some pre-processing of the input so I can get away with just passing in two arguments. Every additional argument is an indication of additional complexity, which means an additional opportunity for things to go wrong.

Other Considerations

I posed this question to Twitter and got a ton of interesting feedback. Here are some of the highlights that I didn’t cover above:

  • @raydaly no new nouns is my principle. If unrelated data needs to be passed, diff args.
  • @dadaxl I would pass an obj if I’ve a dynamic bunch of args containing functions.
  • @sh1mmer omg! Objects for the love of god! No one likes immutable APIs. Just ask @ls_n
  • @MattRogish Rails tends to do required things are named args, optional things are a hash
  • @ryanflorence obfuscation often influences me, objects don’t compress as well as multiple args.
  • @getify if more than half of the args are optional…or if there are several boolean params which without names can be confusing
  • @jcoglan When further args are optional, or args.length>3. Need to spot when options merit a refactoring, though.
  • @digitalicarus A combo of sheer length, amount of reuse, if it’s an API, and/or if it’s designed to be called a variety of ways to a similar end.
  • @BryanForbes If I have to start swapping arguments and type checking, it’s time for one object or reworking my function.
  • @myfreeweb I use an object when I start forgetting the order of args … or there is no logical order like (key, value, callback) at all
  • @zetafleet When many of the arguments are optional or they’re all getting stored or copied directly over to the object.
  • @maravillas I usually don’t make an obj just for passing data; if arglist is too long, maybe the function does too much and needs refactoring.

Postscript

We ended up leaving the code that spurred this whole conversation exactly as it was.

Modern JavaScript

My presentation, the jQuery Divide (video here), has been making the rounds on the internet again, six months after I delivered it at JSConf.eu in Berlin, and this time around, a colleague on IRC shared a link with me that drew from it: Is JavaScript the New Perl?

Perl has a special place in my heart; it’s the first language I used to solve a real-world problem, and I still have the second edition Learning Perl that my good friend Marcus got for me at the time. These days I struggle for it not to look mostly like a lot of gibberish, but in the late 1990s it was funtimes.

Anyway. The post that linked to my presentation asked if JavaScript might be going through some of the same pains that Perl has gone through, and linked to an eerily relevant presentation about Modern Perl, a movement that “actively seeks both to teach how to write good code and change perceptions of Perl that still linger from the dot.com 90s.” It talks about the void that Perl sought to fill way back in 1987, and then steps through the highs and lows of the intervening 23 years.

One thing that struck me, reading the slides, is that Perl – like other open-source, server-side languages – has the distinct benefit of being community-driven. While, yes, JavaScript has a wonderful and vibrant community, the language itself is held hostage by browser vendors, some of whom have shown a strong inclination to not give a fuck about owning up to and fixing their egregious mistakes. Using new features of a language like Perl is, at the end of the day, a largely internal problem – given enough time and money, switching to a new version of the language that offers new features for code organization, testing, and abstraction is a thing a project can do. Indeed, Perl as a community can even make bold decisions like deciding that a new version simply won’t be back-compat with a version that came before, throwing away ideas that turned out to be duds; meanwhile, JavaScript web developers often must bend over backwards to ensure back-compat with decade-old technology, and the only way to transition away from that technology is to give up on a set of users entirely.

We’ve already seen what this means for JavaScript as a language: it was years after JavaScript’s debut before we really started seeing conversations about what a module should look like in JavaScript, and we’re still fighting over it today. Without a solid dependency management system – something you can take for granted in any 15-year-old community-driven language – dependency management often means sticking another script tag on the page, and even the most popular JavaScript library on the planet struggles with how to participate in a fledgling ecosystem. With no arbiter of common, tested, community-approved, community-vetted solutions – see Perl’s CPAN – it’s an environment that’s ripe for fragmentation, and shining examples of Not Invented Here (NIH) litter the JavaScript landscape. Lacking even an agreed-upon method of expressing dependencies, the findability of good solutions is low, and coalescence only occurs around tools with extremely low barriers to entry and extremely high near-term reward.

When Marcus was teaching me Perl, back in the dot com heyday of the late 1990s and before the world temporarily went to hell for a few years, there was great emphasis on TIMTOWTDI: there is more than one way to do it. That mantra made Perl beautiful and elegant and powerful. Too often, it also made it ridiculously hard for the next developer to build upon and maintain, especially as the problems developers were solving got more complicated than copying and pasting some code to support a contact form (sound familiar?). In the end, that mantra meant Perl’s reputation suffered, as the consequences of code written by developers with a whole lot of freedom and not so much skill became clear.

This, in a nutshell, is what I was talking about in Berlin: that the reputation of this language we love stands to suffer if we don’t get around to working together to solve these larger problems, and educating the wider world of JavaScript developers as we figure it out. Unlike with Perl, the language itself isn’t going to evolve in time to help us here – unless and until we’re willing to give up on huge swaths of users, we will, generously, be stuck with the browser technology of 2009 for a long time to come. Unlike the Modern Perl movement, the patterns and tools and practices that will form the foundation of Modern JavaScript are going to have to come from outside implementations of the language itself.

Realizing that, it becomes clear just how imperative it is that we, as a community, figure out dependency management, modularization, and intentional interoperability so that these patterns, tools, and practices can start to emerge organically. James Burke, the creator of RequireJS, is something of a hero to me, not for creating RequireJS, but for taking on the challenge of interacting calmly and level-headedly with all sorts of stakeholders to try to make AMD modules a viable reality. Tool and library developers need to stop debating whether this is a good idea and get to work on making it happen.

Tools and libraries also need to take seriously the need for modularization – though I confess I have many misgivings about the NIH aspect of Dustin Diaz’s Ender.js, and wish that the considerable effort involved had been directed toward an established project with similar features, I can’t help but hope it will pressure libraries like jQuery to make more efforts in the direction of modularization.

An equally important aspect of modularization is ensuring minimal duplication of effort. As a community, we need to agree on a core set of functionality that ought to be provided by the language but isn’t, and implement that itself as an interchangeable module. A page with both Underscore.js and jQuery on it has tremendous duplication of functionality, for example. Interchangeability will allow end users to roll exactly the tool they need, no more and no less. Eventually, standard toolkits could emerge that draw on the best of all worlds, rather than one-size-fits-all tools that exist in isolation.

While I agree with what Tom Dale wrote in his oddly controversial post – that “unless it is designed to work well together, it usually won’t” – the more I think about it, the more I realize that the problem lies in our current inability to reliably isolate functionality and express dependencies across tools. It’s not that large tools like Dojo are the One True Way – it’s that large tools like Dojo are incredibly powerful precisely because they take seriously the need for a lightweight core leveraged by components that deliver specific, isolated functionality. JavaScript as a whole will become more powerful by embracing the pattern.

The political problems here are obvious and several: such modularization will, by definition, lead to winners and losers; the identities of libraries as we know them stand to be diluted if it becomes trivial to use only parts of them. The emphasis will shift to curated toolkits that assemble best-of-breed solutions, and NIH efforts will compete on merit, not marketing. At the same time, though, trying new things will no longer involve learning a whole new set of tools, and developers won’t be as stuck with a solution that made sense once upon a time but not anymore.

A final and important piece of the puzzle is actually educating people about the patterns that are enabled when we embrace these tools and practices. The wider community of everyday devs who are just trying to get their job done has hopefully graduated from copying and pasting scripts, but there’s a long path ahead, and part of the work of Modern JavaScript needs to be clearing that path for them.

I said it in my Berlin talk, and I will say it again: sharing what we know is as important as making new things, even if it’s not always quite as fun. All the script loaders, build tools, inheritance systems, array utilities, templating frameworks, and data abstractions in the world are meaningless if we don’t help people understand how and why to use them.

A Dojo Boilerplate


When I first started playing with the Dojo Toolkit, it was easy enough to use the CDN-hosted dojo.js and get started, but before long I wanted to make use of one of the features that drew me to Dojo in the first place: the build system that parses your code’s dependencies as expressed by dojo.require() statements and creates production-ready files.
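
If you haven't seen it in action: the build works from statements like these (the module names here are placeholders), tracing the dependency graph and concatenating everything into a single optimized file.

dojo.provide("app.main");

dojo.require("dijit._Widget");
dojo.require("app.views.Home");

// the build parses the dojo.require() calls above and bundles each
// module, plus its own dependencies, into one production-ready file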

Coming from a world where this was entirely a DIY affair, the patterns I should follow for taking advantage of Dojo’s system were, shall we say, less than clear. There was a lot of frustration, a lot of swearing, and a lot of pleas for help in #dojo on Freenode.

These days, I’m talking about Dojo a lot, and I’ve gotten pretty comfortable with how to set up a project — I even wrote a post about scaffolding a Dojo app once I felt like I had the basics down — but for a long time I’ve wanted to release a ready-made starter project, rather than making people follow seven lengthy steps.

With the help of Colin Snover, I’m pleased to release the Dojo Boilerplate, a simple starter project if you’d like to get your feet wet with Dojo and the power of its dependency management and build system. It comes with a bare-bones do-nothing app, a shell script for downloading the Dojo SDK and getting it in the right place, and a shell script and profile file for actually creating a built version. For the brave, it also includes a work-in-progress router for single-page apps — one of the few features that I feel Dojo itself is missing. Everything you should need to know is documented in the README.

I’ve also created a small demo app that uses the boilerplate and shows some of the basic concepts of MVC development using Dojo, including separating your code into models, views, controllers, and third-party services. It includes an example of templated widgets, which are one of the biggest selling points of Dojo for me, as well as an uber-basic example of object stores, new in Dojo 1.6.

The goal of the boilerplate and the demo app is to eliminate some of that pain and WTF that I went through — while Dojo is ridiculously powerful, the barrier to entry can seem daunting. Over and over again, though, I am grateful that I took the time to overcome it.

Finally: as always, pull requests and issues are welcome. Enjoy.

Update: Colin is now the maintainer of the boilerplate; I’ve updated the links above accordingly.

When You’re Building a Non-trivial JS Application …


I sense another round of discussion of this is about to begin, and 140 characters isn’t quite enough to say what I want to say, so:

When you’re building a non-trivial JS application, you don’t want a jQuery developer, or a Dojo developer, or a YUI developer, or, frankly, any developer who chooses their tool before they evaluate the problem. For god’s sake, you want a JavaScript developer. Can you roll your own solution with jQuery as the base? Yes! Should you? I don’t think so, and I advise my clients against it for reasons I’ve written about at length, but I’m open to hearing compelling, articulate, fact-based arguments in favor of it!

But do me a favor, OK? Don’t base your arguments solely on the winner of a popularity contest. Don’t tell me how easy it is to find developers familiar with one library or another, because I’ll come right back and ask you just how good those developers will be at solving problems that aren’t addressed by said library. And please tell me you’ve at least explored some of the other options besides [insert the library you’re advocating here]. 

People read what I write about JavaScript libraries and they write me heartfelt tweets and e-mails saying OMG YOU HATE JQUERY NOW WHAT HAPPENEDDDDD? I don’t hate jQuery! It is a perfectly viable and valuable tool for so many things! But when people argue not just its viability but its absolute supremacy, when people get defensive and possibly even angry that I suggest there are solutions that are vastly better suited to a certain set of problems, when people contort themselves into pretzels to make their case and their case is “well, it’s not that bad” … well, that smacks of blind loyalty, not a thoughtful weighing of the tradeoffs and challenges we face as developers, and I question how those people would fare if actually confronted with the needs of a non-trivial application. 

So, please: Tell me what solutions you’ve looked at for non-trivial application development. Tell me where they work, tell me where they fall short. Tell me what you’re working on and how you chose the tools. Don’t tell me why I’m wrong – tell me why you’re right. Deal? Discuss.

The Future of jQuery Fundamentals (and a Confession)


About 9 months ago, I released jQuery Fundamentals, a free, online training curriculum for people interested in learning jQuery based on material I’d assembled while leading jQuery trainings.

The response was and has continued to be amazing: not only has the book seen hundreds of thousands of visits, but it has also received content contributions and bug reports from dozens of people. It has become something of a collaborative work, and one of the go-to resources for jQuery and beginning JavaScript learning. It has been used to teach classes internally at companies and at colleges and universities, and it’s been translated into multiple languages. It’s even made me a tad bit of money — I recently granted a license to Webucator to create derivative works for their jQuery class — and landed me near the top of Google’s search results for “jQuery training”.

And so here is where we get to the confession part: while I’ve stayed very much in touch with the evolution of jQuery these last couple of years, written gobs of sample code in efforts to make people better at using the library, and even played a bit of a role in some of the new features in jQuery 1.5, the last time I chose the library for a project was in the fall of 2008. The last time I used it on a project at all was in the summer of 2010, and in a matter of a few weeks I was gutting the fragile, bug-ridden, DOM-centric code and re-writing the single-page application with — wait for it! — Dojo. jQuery and I have gone from being in a committed relationship to seeing other people to pretty much just saying hi on Facebook now and again.

This has put me in a strange place with jQuery Fundamentals — I want to be investing my energy supporting projects that I use, and while I can still write jQuery just fine and stay in touch with what’s going on with it, I really don’t … use it. That’s made it increasingly difficult to continue maintaining jQuery Fundamentals as a resource for the jQuery community.

Burying the Lede

At the jQuery conference in Boston last fall, John Resig invited me to participate in a conversation about an effort by the project to create a learning resource for the community, and through the course of that and future conversations, jQuery Fundamentals has found its new home.

I’ve been working actively with jQuery team member (and yayQuery co-host) Adam J. Sontag and community member Dan Heberden to get the book into good shape as it transitions to being “owned” by the jQuery project. I’ve also donated a third of the proceeds of the Webucator licensing arrangement to the jQuery project, to recognize the contributions of the community and to give even a wee bit of financial support to the learning efforts.

Adam, Dan, and I will be working hard to address some of the open issues with the book in the coming weeks. If you’re interested in helping, drop me an email, hit me up on Twitter, or just submit a pull request (though you may want to talk to us first if the solution to an issue isn’t straightforward). From formatting fixes to writing new content to updating the book to reflect the changes in jQuery 1.5, there’s a lot to be done.

What’s Next?

These days I’m working with a fantastic client doing mobile application development with PhoneGap and Dojo. It’s pretty much the most challenging, engaging, rewarding project I’ve had an opportunity to work on in nearly three years of independent consulting. These days, when I get the very inquiries I hoped to get by releasing jQuery Fundamentals in the first place, I direct people to the excellent folks at Bocoup. Slowly, I’m recalibrating my efforts and attention toward the projects that make my day-to-day development life better. As soon as I feel like jQuery Fundamentals is in a good place where I don’t have to worry about its future, you can expect to see a lot more learning-related content coming from me again; just, this time, it probably won’t be about jQuery.

I hope you’ll stick around.

Deferreds Coming to jQuery 1.5?


I have updated this post to show the code using the API that was released in jQuery 1.5.0.

A few weeks ago, a major rewrite of jQuery’s Ajax functionality landed in the jQuery GitHub repo. Thanks to Julian Aubourg, jQuery looks like it will get a feature that I’ve desperately wished it had ever since I started spending time with Dojo:

function doAjax(debug) {
  var req = $.ajax({
    url : 'foo.php',
    dataType : 'json',
    success : function(resp) {
      console.log(resp);
    }
  });

  if (debug) {
    req.success(function(resp) {
      console.log("let's see that again!", resp);
    });
  }

  // return the request object so other
  // things can bind to it too!
  return req;
}

doAjax().success(function(resp){
  console.log("Once more, with feeling!", resp);
});

Starting with 1.5 (I’m guessing), users will be able to easily attach callbacks to XHRs … later! And pass around the return value of $.ajax as an actually useful object with a familiar API, rather than just getting back the native XHR! No longer will we have to bundle up callback functionality – some of which might be optional, or depend on other code – inside our success or error callbacks. So hott.

When I heard that these Ajax changes had landed, I got to thinking about how Dojo provides its ability to belatedly attach success and error handlers to its XHRs: underlying its XHR methods is dojo.Deferred. It allows users to assign callback functions for success and error conditions for a task that may not complete immediately. Dojo makes use of this for its XHR stuff, but it’s incredibly useful generically, too:

function doSomethingAsync() {
  var dfd = new dojo.Deferred();
  setTimeout(function() {
    dfd.resolve('hello world');
  }, 5000);
  return dfd.promise;
};

doSomethingAsync().then(function(resp) {
  console.log(resp); // logs 'hello world'
});

So, Dojo provided the late callback functionality via deferreds. jQuery now had late callback functionality. Was the deferred functionality hidden in the jQuery Ajax rewrite, waiting to be exposed? Julian and I and several others got to talking in the jQuery development IRC channel, and decided it seemed like an interesting and viable idea. A few days later, Julian’s first draft of jQuery.Deferred landed in a branch on GitHub.

It’s early days, but there have been a lot of good discussions already about the proposed API and how it should work. Through all of the conversations I’ve been part of, it’s become really clear that no one cares about deferreds until you show them what they actually mean: the ability to register an interest in the outcome of arbitrary asynchronous behavior, even if the outcome has already occurred. Even better, you can register your interest in the outcome of behavior that may or may not be asynchronous.

I assure you that once you have experienced this, you will wonder how you lived without it.

var cache = {};

function doSomethingMaybeAsync(val) {
  if (cache[val]) {
    return cache[val];
  }

  return $.ajax({
    url : 'foo.php',
    data : { value : val },
    dataType : 'json',
    success : function(resp) {
      cache[val] = resp;
    }
  });
}

$.when(doSomethingMaybeAsync('foo'))
  .then(function(resp){
    alert("The value for foo is", resp);
  });

It’ll also be possible to do something like you see below. I’m not sure what the exact API will be for creating a generic deferred instance, but I hope it will be something along these lines:

function doIt() {
  var dfd = new $.Deferred();

  setTimeout(function() {
    dfd.resolve('hello world');
  }, 5000);

  return dfd.promise();
}

doIt().then(function(resp) { console.log(resp); }, errorFn);

These changes are sitting in a branch in the jQuery GitHub repo as we speak, and I think it’s likely we’ll see them move to master sooner than later. It’s a nice story of collaboration and community participation that helped make something good – the Ajax rewrite – even better.

It’s exciting to see jQuery venture a bit more into the abstract. My experiences with Dojo core so far make me think there are probably more opportunities for these sorts of utilities that would be of high value for a substantial number of jQuery users. On the other hand, one of the constant themes of our conversations about deferreds has been the potential for confusion with the new methods. Will the API look familiar and jQuery-like, or will users be confused about the ability to chain methods on something other than a jQuery selection? Are there bigger-picture considerations when it comes to adding new constructors to the jQuery namespace? It’ll be interesting to see how these questions sort themselves out, especially if other similar features appear that don’t fall neatly under the well-established DOM/Ajax/Events/Effects umbrella.

The conversation’s happening on GitHub – I hope you’ll join in.

On Conferencing


I’m a few hours away from finalizing my slides for the Rich Web Experience in Ft. Lauderdale next month. I’ll be presenting on basic tips for refactoring your jQuery; I think it’s a decent presentation – the part of it I’ve finished, anyway – and I’ll be glad to have it in my collection.

By my count, RWX will be the 15th conference or non-trivial event I attend in 2010, including two that I’ve organized and eight that I’ve spoken at:

Organizer

  • TXJS (Austin, TX)
  • NCJS (Durham, NC)

Speaker

Attendee

  • SXSW (Austin, TX)
  • Bay Area jQuery Conference (Mountain View, CA)
  • JSConf US (Washington, DC)
  • CouchCamp (Walker Creek Ranch, CA)
  • Dojo Developer Days (Mountain View, CA)

(For what it’s worth, and because I was curious enough to count: I’ve also participated in 11 episodes of yayQuery, written a half-dozen in-depth blog posts, organized at least six local web women meetups, and published an open jQuery training curriculum.)

It was sometime in early 2009 that I decided to make a point of speaking at more events. I’d always enjoyed the challenge and the thrill of public speaking, but I also wanted to be part of the solution to the dearth of women speakers in JavaScript land. Something like 18 months later, it seems I might be sort of good at it, if my SpeakerRate is to be believed. This fall I found myself actually turning down requests to speak.

It turns out that they forgot to remind me in work-for-yourself school that at the end of the day or week or month, I’ve still gotta make some money now and then. Putting together presentations and traveling to conferences and meeting the expectations that come along with being a Vaguely Important Person … it’s fucking hard work, and when there’s no company to pay you for it, it’s also fucking expensive, even if it theoretically brings in work in the long run. While every non-local event I spoke at paid for my travel and accommodations, I was not otherwise compensated. The events that I organized ended up being pretty much a wash, financially.

I spent untold hundreds of hours – I’m not making that up – preparing presentations, traveling to conferences, and speaking this year. Looking back at my calendar since late August, when all of this got really and truly insane (most of my speaking has happened since then), the sad size of my business bank account makes stunning sense. We’ll leave aside the toll that working all the time and then being gone all the rest of the time takes on one’s home life, but suffice to say that the toll is, also, not trivial.

Over the last few weeks, it’s become pretty clear that I need to take a break. I’m mostly over pre-talk jitters, but these days I find myself thinking “for the love of all that’s good, do I really have to get on a plane again?” People ask me about 2011 events and I find myself on the verge of losing my shit, which really isn’t fair to anyone. People tell me how they want to be invited to speak at stuff like I am, and I let loose a heavy sigh. People come up to me at events or email me randomly because they think that I’m an “expert” on this thing or that, and can they just ask me a little question? – and I wonder what, exactly, I have wrought with all of this effort. Worst of all: potential clients ask me about taking on lucrative work, and I must tell them: “Not now, I can’t, I’m sorry. I’ve got this presentation to prepare …”

I might be a terrible person entirely too full of herself, or a drama queen, or whatever else you want to think. You might be certain that if I’d just think about it, there would be lots of efficiencies I could realize, and really I just make this harder than it is. That’s OK, and you might be right.

I want to be clear that I realize that two years ago I was nobody, and to the extent that I am anybody now, it is largely because I have been afforded so many opportunities to make a name for myself. I am grateful for them. It becomes clear, though, that I’ve let the pendulum swing way too far.

And so as I sit here, about to finish off the last slides of the year, I fantasize about deleting Keynote from my computer. But I also find myself thinking how I’m deciding not to do this thing that I’m kind of good at because I simply can’t afford to, and it makes me sad. It kills me to think about missing out on that warm fuzzy it-was-all-worth-it moment when everyone claps at the end. And more than anything, I feel like walking away, for however long, means yet one less woman on stage in a field that is desperate for them, and that makes me saddest of all.

There is no point to this post, really, except for me to get some of my thoughts out of my head and for you to know where I’ve gone if you don’t see or hear from me as much. It is, alas, time for me to actually do some of that work that all this effort has brought in. We’ll see when I emerge – maybe I will feel rejuvenated in the new year, who knows! – and whenever that is, I hope to see you there.