Adventures in JavaScript Development

Greenfielding

I’m officially one-third of the way through my self-imposed month of unemployment before I join Bocoup at the beginning of May, and I’ve been spending most of what would normally be my working hours on a small demo to support talks at conferences I will be speaking at this summer. It’s just a little app that searches various services, and displays the results – so simple that, when I showed it to Melissa, she helpfully asked why I wouldn’t just use Google.

It’s been about 18 months since I last got to start a project from scratch – in that case, the codebase that became Mulberry – but even then, I didn’t have control over the full stack of technologies, just the JavaScript side of things. Over the course of my time on that project, I came to be extremely familiar with Dojo, fairly competent with Jasmine, decently comfortable with Ruby and its go-to simple server Sinatra, and somewhat conversational in Sass.

I spent most of my time on that project working with technologies with which I was already pretty comfortable. Interactions with new technologies came in dribs and drabs (except for that one time I decided to test my Ruby skills by rewriting our entire build process), and all of my learning was backed up by a whole lot of institutional knowledge.

The consulting world, of course, is a wee bit different: you interact frequently with new technologies, and you never know what a client might ask you to do. Learning comes in bursts, and the ability to quickly come up to speed with a technology is imperative. On a six-week project, you can’t spend the first 40 hours getting your bearings.

Even though I spent three years as a consultant, returning to that world of constant learning was feeling a tad intimidating. And so for this project, I decided to make a point of leaving that comfort zone, and intentionally chose technologies – Node, Bootstrap, Backbone, Mocha, RequireJS – that I hadn’t really had a chance to work with in depth (or at all).

On Learning

Greenfield projects are few and far between, and it’s easy to get in a rut by sticking with the tools you already know. Some of my most exciting times at Toura weren’t when I was writing JavaScript, but rather when I was learning how to talk to the computer in a whole new language. Greenfielding a personal project is a special treat – it never really has to be “finished,” and no one’s going to be mad at you if it turns out you made a shitty choice, so you’re free to try things that are less of a sure thing than they would need to be if you were getting paid.

Speaking personally, it can also be a little intimidating to learn a new thing because learning often involves asking for help, and asking for help requires admitting that I don’t already know how to do the thing that people might expect I already know how to do.

Sometimes the thing that gets in the way of starting a new learning project is actually the fear that I will get stuck. What does it mean if the person who talks at conferences about principles of code organization can’t figure out how best to structure a particular app with Backbone? What does it mean if the person who’s been encouraging people to build their JavaScript can’t get RequireJS to generate a proper build? What will I say to Isaac, now that he is standing in front of me and introducing himself, when I have not in fact spent any quality time with Node prior to this past weekend?

Lucky for me, it turns out that all of this is mostly in my head. While I often preface my questions with a small dose of humility and embarrassment, it turns out that well articulated questions are usually greeted with thoughtful and helpful answers. If anything, I’m trying to be more communicative about the learning that I do, because I think it’s important that people feel comfortable acknowledging that they used to not know a certain thing, and now they do. I also try to gently remind people that just because they have known something for months or years doesn’t mean they should look down upon the person enthusiastically blogging about it today.

On that note … here’s what’s new to me these past couple of weeks :)

Twitter Bootstrap

I’ve written a lot of demo apps, and while my coding style has changed over the years, one thing has remained constant: they all look fairly terrible. In theory, we’re all smart enough to know that what a demo looks like doesn’t have any bearing on what it explains, but in reality a good-looking demo is simply more compelling, if only because the viewer isn’t distracted by the bad design.

With this in mind, I decided to give Twitter Bootstrap a try. When I first arrived at the site, I started looking for docs about how to set it up, but it turns out that I vastly overestimated Bootstrap’s complexity. Drop a stylesheet link into your page (and, optionally, another for the responsive CSS), look at the examples, and start writing your markup.

What I really loved is that there were patterns for everything I needed, and those patterns were easy to follow and implement. Within an hour or so I had a respectable-looking HTML page with markup that didn’t seem to suck – that is, it looked good and it was a decent starting point if I ever wanted to apply a custom design.

Node

If you’ve ever talked to me about Node, you know that I have pretty mixed feelings about it – some days I feel like the people writing JavaScript in the browser really would have benefited if the people who gravitated to Node had stuck around to invest their collective smarts in the world where 99% of JavaScript is still written. But that doesn’t really have anything to do with Node the technology, so much as Node the new shiny thing unburdened by browser differences.

I’ve actually visited Node a couple of times previously – if you haven’t at least installed it, you might be living under a rock – and I was flattered that Garann asked me to review her book Node for Front-End Developers, but past experiences had left me frustrated.

This time, something was different. I don’t rule out that it might be me, or even that learning some of the ins and outs of Ruby might have prepared me to understand Node – and packages and dependency management and writing for the server instead of the browser – better this time around. It could also be that the Node ecosystem has reached a point of maturity that it just hadn’t reached the last time I poked around.

Regardless, I found that everything made a whole lot more sense this time around, and my struggles were about forgetting to stringify an object before sending it as a response to a request, not about getting the server to start in the first place. I used the q module to give me my beloved promises for managing all the asynchronicity, and generally found it ridiculously pleasant to leave behind all the context switching I’d grown accustomed to while using JavaScript and Ruby side by side. I’ll probably still turn to Ruby for automating things on the command line (though I continue to be intrigued by grunt), but I’m ready to admit that it’s time for me to add Node to my toolbox.
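To make that concrete, here’s a minimal sketch of the sort of thing I mean – the server and the search function are made up for illustration, not code from the demo app, but it shows q-style promises wrangling the asynchronicity, plus the stringify step I kept forgetting:

var http = require('http'),
    q = require('q');

// stand-in for a real service call; resolves with fake results on the next tick
function search(service, term) {
  var deferred = q.defer();

  process.nextTick(function() {
    deferred.resolve({ service : service, term : term, results : [] });
  });

  return deferred.promise;
}

http.createServer(function(req, res) {
  // q.all turns an array of promises into a single promise for all the results
  q.all([ search('twitter', 'bocoup'), search('flickr', 'bocoup') ])
    .then(function(results) {
      res.writeHead(200, { 'Content-Type' : 'application/json' });

      // the easy mistake: res.end(results) -- the object has to be stringified
      res.end(JSON.stringify(results));
    });
}).listen(8080);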

Mocha

To be honest, I’d just planned on using Jasmine for writing tests for this project, mostly because I’d never set up Jasmine myself, and I was interested in maybe getting it working with grunt for headless testing. I ended up bailing on that plan when, in the course of some Googling for answers about Jasmine, I came across Mocha.

Mocha is a super-flexible testing framework that runs on Node and in the browser. You can choose your assertion library – that is, you can choose to write your tests like assert.equal(1, 1) or expect(1).to.be(1) depending on your preference. I decided to use the latter style, with the help of expect.js. You can also choose your reporting style, including the ability to generate docs from your tests.
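For example, a trivial spec showing both styles side by side might look something like this (a sketch, assuming Node’s built-in assert module alongside expect.js; the spec itself is made up):

var assert = require('assert'),
    expect = require('expect.js');

describe('search results', function() {
  it('starts out empty', function() {
    var results = [];

    assert.equal(results.length, 0);    // assert-style
    expect(results).to.have.length(0);  // expect.js-style
  });
});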

I had to do a bit of finagling to get the browser-based tests working with my RequireJS setup, and ultimately I ended up just using my app’s server, running in dev mode, to serve the tests in the browser. I’m still working out how best to run just one test at a time in the browser, but all in all, discovering Mocha has probably been the best part of working on this project.

RequireJS

RequireJS is another tool that I’ve dabbled with in the past, but for the last 18 months I’ve been spending most of my time with Dojo’s pre-AMD build system, so I had some catching up to do. I don’t have a ton to say about RequireJS except:

  • It’s gotten even easier to use since I last visited it.
  • The docs are great and also gorgeous.
  • While I haven’t had to bother him lately, James Burke, the author and maintainer of RequireJS, is a kind and incredibly helpful soul.
  • The text! plugin makes working with client-side templates incredibly simple, without cluttering up your HTML with templates in script tags or hard-coding your templates into your JavaScript (see the sketch after this list).
  • The use! plugin makes it painless to treat libraries that don’t include AMD support just like libraries that do. I hear it might become an official plugin soon; I hope it will.
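Here’s roughly what using the text! plugin looks like – a minimal sketch, assuming jQuery and Underscore are configured as modules, and with a made-up template path:

define([ 'jquery', 'underscore', 'text!templates/results.html' ],
function($, _, resultsHtml) {
  // resultsHtml arrives as a plain string -- the raw contents of the file
  var template = _.template(resultsHtml);

  return {
    render : function(results) {
      $('#results').html(template({ results : results }));
    }
  };
});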

Backbone

This part was a tough choice, and I actually set out to use a different framework but ended up getting cold feet – even though this was just a personal project, it did need to reach some semblance of done-ness in some reasonable period of time. After a little bit of poking around at other options, I decided that, barring completely copping out and using Dojo, Backbone was going to be the best tool for this particular job on this particular schedule.

I’m pretty torn about this, because I decided to use a framework that I know has shortcomings and limited magic, and I know that other options would serve me better in the long term. But I also know that the long term doesn’t exactly matter for this particular project. The thing that swayed me, really, was that with Backbone, I didn’t feel like I needed to grasp a whole slew of concepts before I could write my first line of code.

I looked up plenty of things along the way, and rewrote my fair share of code when I discovered that I’d been Doing It Wrong, but I was able to maintain a constant forward momentum. With the other options I considered, I felt like I was going to have to climb a ladder of unknown height before making any forward progress.

I feel like I made the right choice for this project, but it’s a choice I’d spend a lot more time on for a “real” project, and I’d be much more inclined to invest the initial energy in getting up to speed if the payoff was clearer. This, though, is a choice that people seem to be consistently terrible at, and so I feel like I should beat myself up about it just a little. It’s all too common to dig a ginormous hole for ourselves by choosing the technology that lets us start writing code the soonest; on the flip side, it’s all too common to choose a technology that’s complete overkill for the task at hand.

The End

The master branch of the repo for the project should be mostly stable (if incomplete) if you want to check it out. I’m going to close comments on this post in the hopes that you’ll write your own post about what you’ve been learning instead :)

JavaScript: It’s a Language, Not a Religion

I have six things to say:

  1. I am in a committed relationship with my partner Melissa. We will celebrate six years together on Sunday. We contribute frequently to political causes.

  2. I was deeply saddened yesterday to learn that Brendan Eich contributed money in support of a political initiative that sought to rescind the court-established right for same-sex couples to marry in the state of California. It has changed my view of him as a person, despite the fact that we have had a positive and professional relationship and he has been a great supporter of my JavaScript career. I think he is on the wrong side of history, and I hope that courts will continue to agree with me.

  3. I had a frank, private, and face-to-face conversation with Brendan about the issue during JSConf. I shared my disappointment, sadness, and disagreement.

  4. I have been dismayed to see this incident interpreted as a statement about the JavaScript community as a whole. This community is made up of so many people who believe so many different things, and yesterday I was reminded that they are all just people, and JavaScript is just a language, not a religion. I shudder to think of a world where there is a political litmus test for entry into the community. Indeed, I am extremely torn about introducing personal politics into my professional life*, as I fear it will encourage professional colleagues to opine about personal beliefs that are frankly none of their business. One of the great joys of working with computers is that they do not care who I am or what I believe; I realize that to ask the same of people is unreasonable, but inviting politics into the workplace is a treacherously slippery slope. Unless my personal belief system presents an imminent danger to my colleagues, I am loath to welcome discussion of it by people who otherwise have no substantial or personal relationship with me.

  5. I believe individual companies must determine how best to address these issues, as their attitude toward them can have a significant impact on their ability to hire and retain talented people. I support constructive pressure on companies to align themselves with or distance themselves from political causes, but I would not support a company that prohibited its employees from participating in the political process. I urge anyone who is hurt or offended by this incident to engage with Brendan and Mozilla personally and professionally. Brendan is wrong on this issue, but he is a thoughtful and intelligent person, and he is also a human being.

  6. Finally: If this incident has made you angry or sad or disappointed, the most effective thing you can do is follow in Brendan’s footsteps by putting your money where your mouth is. Money speaks volumes in the American political system, and there are campaigns in progress right now that will impact the rights of gays and lesbians. Your contribution of $50, $100, or $1,000 – or, in lieu of money, your time – will have far more impact than yet another angry tweet.

And now I shall turn off the internet for a bit. Comments are disabled. Shocker, I know.

* It bears mentioning that, in certain cases, people making political contributions are required to include information about their employer. The inclusion of this information does not indicate that the employer supports – or is even aware of – the contribution.

Bocoup


It wasn’t so long ago that I was giving my first talk about JavaScript at the 2009 jQuery Conference, and it was there that Bocoup’s Boaz Sender and Rick Waldron created the (now-defunct) objectlateral.com, a celebration of an unfortunate typo in the conference program’s listing of my talk.

A bond was forged, and ever since I’ve watched as Bocoup has grown and prospered. I’ve watched them do mind-boggling work for the likes of Mozilla, The Guardian, Google, and others, all while staying true to their mission of embracing, contributing to, and evangelizing open-web technologies.

Today, I’m beyond excited – and also a wee bit humbled – to announce that I’m joining their consulting team. As part of that role, I look forward to spending even more time working on and talking about patterns and best practices for developing client-side JavaScript applications. I also hope to work on new training offerings aimed at helping people make great client-side applications with web technology.

New beginnings have a terrible tendency to be accompanied by endings, and while the Bocoup opportunity is one I couldn’t refuse, it’s with a heavy heart that I bid farewell to the team at Toura. I’m proud of what we’ve built together, and that we’ve shared so much of it with the developer community in the form of Mulberry. The beauty of open source means that I fully expect to continue working on and with Mulberry once I leave Toura, but I know it won’t be the same.

I’ll be spending the next few days tying up loose ends at Toura, and then I’m taking a break in April to hit JSConf, spend some time in Berlin, and head to Warsaw to speak at FrontTrends. I’ll make my way back home in time to start with Bocoup on May 1.

And so. To my teammates at Toura: I wish you nothing but the best, and look forward to hearing news of your continued success. To Bocoup: Thanks for welcoming me to the family. It’s been a long time coming, and I’m glad the day is finally here.

Girls and Computers


After a week that seemed just chock-full of people being stupid about women in technology, I found myself thinking back on how it was that I ended up doing this whole computer thing in the first place. I recorded a video a while back for the High Visibility Project, but that really just told the story of how I ended up doing web development. The story of how I got into computers began when I was unequivocally a girl. It was 1982.

Back then, my dad made eyeglasses. My mom stayed at home with me and my year-old sister – which she’d continue to do til I was a teenager, when my brother finally entered kindergarten eight years later. Their mortgage was $79 – about $190 in today’s dollars – which is a good thing because my dad made about $13,000 a year. We lived in Weedsport, New York, a small town just outside of Syracuse. We walked to the post office to get our mail. The farmers who lived just outside town were the rich people. In the winters the fire department filled a small depression behind the elementary school with water for a tiny skating rink. There were dish-to-pass suppers in the gym at church.

In 1982, Timex came out with the Timex Sinclair TS-1000, selling 500,000 of them in just six months. The computer, a few times thicker than the original iPad but with about the same footprint, cost $99.95 – more than that mortgage payment. When everyone else in town was getting cable, my parents decided that three channels were good enough for them – it’s possible they still had a black-and-white TV – and bought a computer instead.

Timex Sinclair TS-1000

I remember tiny snippets of that time – playing kickball in my best friend Beth’s yard, getting in trouble for tricking my mother into giving us milk that we used to make mud pies, throwing sand in the face of my friend Nathan because I didn’t yet appreciate that it really sucks to get sand thrown in your face – but I vividly remember sitting in the living room of our house on Horton Street with my father, playing with the computer.

A cassette player was our disk drive, and we had to set the volume just right in order to read anything off a tape – there was actually some semblance of a flight simulator program that we’d play, after listening to the tape player screech for minutes on end. Eventually we upgraded the computer with a fist-sized brick of RAM that we plugged into the back of the computer, bumping our total capacity from 2K to 34K. I wrote programs in BASIC, though for the life of me I can’t remember what any of them did. The programs that were the most fun, though, were the ones whose assembly I painstakingly transcribed, with my five-year-old fingers, from the back of magazines – pages and pages of letters and numbers I didn’t understand on any level, and yet they made magic happen if I got every single one right.

A string of computers followed. My parents bought a Coleco Adam when we moved to Horseheads, New York – apparently the computer came with a certificate redeemable for $500 upon my graduation from high school, but Coleco folded long before they could cash it in. I made my first real money by typing a crazy lady’s crazy manuscript about crazy food into an Apple IIe that we had plugged into our TV, and my uncle and I spent almost the entirety of his visit from Oklahoma writing a game of Yahtzee! on that computer, again in BASIC.


Above: Me at a computer fair at the mall with my sister, my mother, and my friend Michael. “You were giving us all a tutorial, I can tell,” says my mom. Note the 5-1/4” external floppy drive.

In middle school, I started a school newspaper, and I think we used some prehistoric version of PageMaker to lay it out. When high school rolled around, I toiled through hand-crafting the perfect letters and lines and arrows in Technical Drawing so I could take CAD and CAM classes and make the computer draw letters and lines and arrows for me, and quickly proceeded to school just about every boy in the class. In my senior year of high school, I oversaw the school yearbook’s transition from laying out pages on paper to laying out pages with computers, this time the vaguely portable (it had a handle on the back!) Mac Classic. We used PageMaker again; the screen was black and white and 9”, diagonally.

Macintosh Classic

It was around then that a friend gave me a modem and – to his eventual chagrin, when he got the bill – access to his Delphi account, giving me my first taste of the whole Internet thing in the form of telnet, gopher, and IRC. When I went to college the following year, I took with me a computer with perhaps a 10MB hard drive, and no mouse.

Once again I found myself poring over magazines to discover URIs and, eventually, URLs that I could type to discover a whole new world of information. In 1995, I spent the summer making my college newspaper’s web site, previewing it in Lynx – it felt like there wasn’t much to learn when there was so little difference between the markup and what I saw on the screen. I would go to the computer lab to use NCSA’s Mosaic on the powerful RISC 6000 workstations, because they had a mouse. Yahoo! was about one year old. My friend Dave, who lived down the street, installed Windows 95 that summer and invited me over to show me. It was amazing. We were living in the future.

My early years with computers seem pretty tame – I wasn’t tearing them apart or building my own or doing anything particularly interesting with them, but I was using them, I was telling them what to do and they were mostly listening, and it never made me feel like I was weird. To the contrary, it made me feel powerful and empowered. I felt like a part of this ever-growing community of people who understood, eventually, that computers were going to change the world. It was the people who didn’t understand this who were weird and beneath us. It was the people who understood computers better than me of whom I stood in awe.

I can barely remember a time when computers weren’t a part of my life, and yet when they first entered my life, their presence was incredibly exceptional. These days, of course, computers are ubiquitous, but interaction with them at the copy-assembly-from-the-back-of-a-magazine level is almost nonexistent. Parents who can approach a computer with the same awe and wonder and determination as a child – as I must imagine that my dad did in 1982 – are likely equally rare.

In some ways, it is like the very ubiquity of technology has led us back to a world where socially normative gender roles take hold all over again, and the effort we’re going to need to put into overcoming that feels overwhelming sometimes. Words can’t express my gratitude for the parents I have, for that $99.95 investment they made in me, and for the fact that I was lucky enough to be 5 and full of wonder in 1982.

Thoughts on a (Very) Small Project With Backbone and Backbone Boilerplate


I worked with Backbone and the Backbone Boilerplate for the first time last weekend, putting together a small demo app for a presentation I gave last week at BazaarVoice. I realize I’m about 18 months late to the Backbone party, here, but I wanted to write down my thoughts, mostly because I’m pretty sure they’ll change as I get a chance to work with both tools more.

Backbone

Backbone describes itself as a tool that “gives structure to web applications,” but, at the risk of sounding pedantic, I think it would be more accurate to say that it gives you tools that can help you structure your applications. There’s incredibly little prescription about how to use the tools that Backbone provides, and I have a feeling that the code I wrote to build my simple app looks a lot different than what someone else might come up with.

This lack of prescription feels good and bad – good, because I was able to use Backbone to pretty quickly set up an infrastructure that mirrored ones I’ve built in the past; bad, because it leaves open the possibility of lots of people inventing lots of wheels. To its credit, it packs a lot of power in a very small package – 5.3k in production – but a real app is going to require layering a lot more functionality on top of it. Ultimately, the best way to think of Backbone is as the client-side app boilerplate you’d otherwise have to write yourself.

My biggest complaint about Backbone is probably how unopinionated it is about the view layer. Its focus seems to be entirely on the data layer, but the view is still where we spend the vast majority of our time. Specifically, I think Backbone could take a page from Dojo, and embrace the concept of “templated widgets”, because that’s what people seem to be doing with Backbone views anyway: mixing data with a template to create a DOM fragment, placing that fragment on the page, listening for user interaction with the fragment, and updating it as required. Backbone provides for some of this, specifically the event stuff, but it leaves you to write your own functionality when it comes to templating, placing, and updating. I think this is a solvable problem without a whole lot of code, and want to spend some time trying to prove it, but I know I need to look into the Backbone Layout Manager before I get too carried away.
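To sketch what I mean – this is my own gloss using stock Backbone and Underscore, not anything Backbone itself prescribes – a “templated widget” view would bundle the template, the rendering, and the event handling in one place:

var ResultView = Backbone.View.extend({
  tagName : 'li',

  template : _.template('<a href="<%= url %>"><%= title %></a>'),

  events : {
    'click a' : 'select'
  },

  // mix the data with the template to create the DOM fragment
  render : function() {
    this.$el.html(this.template(this.model.toJSON()));
    return this;
  },

  // announce the interaction; something else decides what it means
  select : function(e) {
    e.preventDefault();
    this.trigger('select', this.model);
  }
});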

Backbone Boilerplate

This project from Tim Branyen was a life-saver – it gave me an absolutely enormous head start when it came to incorporating RequireJS, setting up my application directories, and setting up a development server. It also included some great inline docs that helped me get my bearings with Backbone.

There are a couple of ways I think the boilerplate could be improved, and I’d be curious for others’ opinions:

  • The sample app includes the concept of “modules,” each of which seems to be a single file containing the models, collections, views, and routes for a … module. I don’t love the idea of combining all of this into a single file, because it seems to discourage smart reuse and unit testing of each piece of functionality. In the app I created, I abandoned the concept of modules, and instead broke my app into “components”, “controllers”, and “services”. I explain this breakdown in a bit more depth in the presentation I gave at BazaarVoice. I’m not sure this is the right answer for all apps, but I think modules oversimplify things.
  • The boilerplate includes a namespace.js file. It defines a namespace object, and that object includes a fetchTemplate method. It seems this method should only be used by views, and so I’d rather see something along the lines of an enhanced View that provides this functionality (see the sketch after this list). That’s what I did with the base component module in my sample app.
  • I’m super-glad to see Jasmine included in the test directory, but unfortunately the examples show how to write Jasmine tests, not Jasmine tests for a Backbone app. As a community, we definitely need to be showing more examples of how to test things, and this seems like a good opportunity to distribute that knowledge.
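For what it’s worth, here’s the shape I have in mind – hypothetical code, not the boilerplate’s: a base view owns the template concern, and everything else extends it.

// a minimal, synchronous template cache; the boilerplate's fetchTemplate
// does roughly this over XHR
var templates = {};

var BaseView = Backbone.View.extend({
  fetchTemplate : function(path) {
    templates[path] = templates[path] ||
      _.template($.ajax({ url : path, async : false }).responseText);

    return templates[path];
  }
});

// views that need a template just extend the base
var SearchView = BaseView.extend({
  render : function() {
    var tpl = this.fetchTemplate('app/templates/search.html');
    this.$el.html(tpl(this.model.toJSON()));
    return this;
  }
});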

Overall

I feel a little silly that I’m just now getting around to spending any time with Backbone, and I know that I only scratched the surface, but I like what I saw. I think it’s important to take it for what it is: an uber-tiny library that gets you pointed in the right direction. What I really want to see are fuller-fledged frameworks that build on top of Backbone, because I think there’s a lot more that can be standardized beyond what Backbone offers. I’m hoping to have a bit more time in April to dig in, and hopefully I can flesh out some of these ideas into something useful.

Community Conferences


In 2010, I helped put on the first TXJS. We sold our first tickets for $29, and I think the most expensive tickets went for something like $129. We had about 200 people buy tickets, we had speakers like Douglas Crockford, Paul Irish, and John Resig, and we had sponsors like Facebook and Google. Our total budget was something like $30,000, and every out-of-town speaker had their travel and accommodations paid for.

In May, O’Reilly Media is holding another JavaScript conference in San Francisco, called FluentConf. I recently came to know that they are charging $100,000 for top-tier sponsorships, and that they are offering a 10-minute keynote as part of the package.

This turned my stomach, and not just because I believe it cheapens the experience of attendees, who will pay hundreds of dollars themselves. What really upset me was that a few weeks ago, I was approached to be on the speaker selection committee of FluentConf, and that conversation led me to discover that FluentConf would not be paying for speaker travel and accommodations. And so the other day, I tweeted:

conference #protip: save your money – and your speaking skills – for events that don’t sell their keynotes for $100k

Last night, I was at the Ginger Man in Austin, and I checked the Twitters, discovering that Peter Cooper, one of the chairs of FluentConf, had replied to a conversation that arose from that tweet:

@rmurphey @tomdale If you’re referring to Fluent, that is news to me.

I will accept the weird fact that the co-chair of a conference didn’t know its speaking slots were for sale – I gather that it is essentially a volunteer role, and the co-chairs aren’t necessarily in the driver’s seat when it comes to decisions like this. I let Peter know that, indeed, I had a PDF that outlined all the sponsorship options.

This is the part where, in some alternate reality, a mutual understanding of the offensiveness of this fact would have been achieved. What happened instead was a whole lot of name-calling, misquoting, and general weirdness.

Here’s the deal. Conferences can run their event however they want, and they can make money hand over fist. They can even claim they are giving JavaScript developers “an event of their own,” ignoring the existence of the actual community-run JavaScript events that have been around for years now. I probably won’t go to or speak at an event that makes money hand over fist, but I don’t have any problem with the existence of such events, or with people’s involvement with them. However, when a conference is making money hand over fist – my back-of-the-napkin calculations would suggest that FluentConf stands to have revenues of well over a million dollars – then that conference has no excuse not to pay the relatively paltry costs associated with speaker travel and accommodations.

A conference does not exist without its speakers. Those who speak at an event – the good ones, anyway – spend countless hours preparing and rehearsing, and they are away from home and work for days. While I do not discount the benefits that accrue to good speakers, the costs of being a speaker are non-trivial – and that’s before you get into the dollar costs of travel and accommodations.

When an event is unwilling to cover even those hard costs – nevermind the preparation time and time away from work and home – it materially affects the selection of speakers. It’s even worse when those same conferences claim to desire diversity; the people they claim to want so badly are the very people most likely to be discouraged when they find out they have to pay their own way to the stage.

In the conversation last night, I made this point:

when only the people who can afford to speak can speak, then only the people who can afford to speak will speak.

Amy Hoy responded with a criticism of community-run conferences:

and when only ppl who can order a ticket in 3 seconds can afford to come, only ppl who can order a ticket in 3 seconds can come

I know that getting tickets to the actual community-run events is hard, but that is because the community-run events flat-out ignore the economics of supply and demand, choosing instead to sell tickets at affordable prices even if it means they will sell out in a heartbeat, leaving a boatload of potential profit on the table. And yet those events – JSConf, TXJS, and the like – have still figured out how to cover speaker costs and provide attendees and sponsors with unforgettable experiences.

When an event with revenues exceeding a million dollars is unwilling to cover those costs, while simultaneously selling speaking slots, I do not hesitate for a moment to call that event out, and I do not hesitate to call on respected members of the community to sever their ties with the event. I’m not embarrassed about it, and you can call me all the names you want.

Mulberry: A Development Framework for Mobile Apps

| Comments

I’ll be getting on stage in a bit at CapitolJS, another great event from Chris Williams, the creator of JSConf and all-around conference organizer extraordinaire. My schtick at conferences in the past has been to talk about the pain and pitfalls of large app development with JavaScript, but this time is a little different: I’ll be announcing that Toura Mobile has created a framework built on top of PhoneGap that aims to eliminate some of those pains and pitfalls for mobile developers. We’re calling it Mulberry, and you’ll be seeing it on GitHub in the next few weeks.

tl;dr: go here and watch this video.

While the lawyers are dotting the i’s and crossing the t’s as far as getting the code in your hands – we’re aiming for a permissive license similar to the licenses for PhoneGap and Dojo – I wanted to tell you a little bit about it.

Mulberry is two things. First, it’s command line tools (written in Ruby) that help you rapidly scaffold and configure an app, create content using simple Markdown and YAML, and test it in your browser, in a simulator, and on device. Second, and much more exciting to me as a JavaScript developer, it’s a framework built on top of the Dojo Toolkit for structuring an application and adding custom functionality in a sane way.

Mulberry lets you focus on the things that are unique to your application. It provides an underlying framework that includes a “router” for managing application state; built-in components and templates for displaying standard content types like text, audios, videos, feeds, and images; a simple API for defining custom functionality and integrating it with the system; and an HTML/CSS framework that uses SASS and HAML templates to make it easy to style your apps.

The basics of setting up an app are pretty well covered at the Mulberry site, but if you’re reading this, you’re probably a JavaScript developer, so I want to focus here on what Mulberry can do for you. First, though, let me back up and cover some terminology: Mulberry apps consist of a set of “nodes”; each node is assigned a template, and each template consists of components arranged in a layout. Nodes can have assets associated with them – text, audio, images, video, feeds, and data.

It’s the data asset that provides the most power to developers – you can create an arbitrary object, associate it with a node, and then any components that are in the template that’s being used to display the node will get access to that data.

A Twitter component offers a simple example. A node might have a data asset like this associated with it:

{ term : 'capitoljs', type : 'twitter' }

We could define a custom template for this page (mulberry create_template Twitter), and tell that template to include a Twitter component:

Twitter:
  screens:
    - name: index
      regions:
        -
          size: fixed
          scrollable: false
          components:
            - PageNav
        -
          size: flex
          scrollable: true
          components:
            - PageHeaderImage
            - custom:Twitter

Next, we’d define our Twitter component (mulberry create_component Twitter), which would create the skeleton of a component file:

dojo.provide('client.components.Twitter');

toura.component('Twitter', {
  componentTemplate : dojo.cache('client.components', 'Twitter/Twitter.haml'),

  prep : function() {

  },

  init : function() {

  }
});

One of the things the skeleton contains is a reference to the template for the component. The create_component command creates this file, which defines the DOM structure for the component. For the sake of this component, that template will just need to contain one line:

%ul.component.twitter

As I mentioned earlier, Mulberry components automatically get access to all of the assets that are attached to the node they’re displaying. This information is available as an object at this.node. Mulberry components also have two default methods that you can implement: the prep method and the init method.

The prep method is an opportunity to prepare your data before it’s rendered using the template; we won’t use it for the Twitter component, because the Twitter component will go out and fetch its data after the template is rendered. This is where the init method comes in – this is where you can tell your component what to do. Here’s what our Twitter component ends up looking like:

dojo.provide('client.components.Twitter');

mulberry.component('Twitter', {
  componentTemplate : dojo.cache('client.components', 'Twitter/Twitter.haml'),
  tweetTemplate : dojo.cache('client.components', 'Twitter/Tweet.haml'),

  init : function() {
    var data = dojo.filter(this.node.data, function(d) {
          return d.type === 'twitter';
        })[0].json;

    $.ajax('http://search.twitter.com/search.json?q=' + data.term, {
      dataType : 'jsonp',
      success : $.proxy(this, '_onLoad')
    });
  },

  _onLoad : function(data) {
    var tweets = data.results,
        tpl = mulberry.haml(this.tweetTemplate),
        html = $.map(tweets, function(tweet) {
          tweet.link = 'http://twitter.com/capitoljs/status/' + tweet.id_str;

          tweet.created_at = dojo.date.locale.format(
            new Date(tweet.created_at), {
              datePattern : 'EEE',
              timePattern : 'h:m a'
            }
          );

          tweet.text = tweet.text.replace(
            /@(\S+)/g,
            "<a href='http://twitter.com/#!/$1'>@$1</a>"
          );

          return tpl(tweet);
        }).join('');

    this.$domNode.html(html);
    this.region.refreshScroller();
  }
});

Note that when we define the data variable in the init method, we look at this.node.data, which is an array of all of the data objects associated with the node. We filter this array to find the first data object that is the right type – this means we can have lots of different data objects associated with a given node.

Note also that there’s a property this.$domNode that we’re calling jQuery methods on, and that we’re using jQuery’s $.ajax – Mulberry apps come with jQuery enabled by default, and if it’s enabled, helpers like this.$domNode become available to you. This means that very little knowledge of Dojo is required to start adding your own functionality to an app – if you need it, though, the full power of the Dojo Toolkit is available to you too.

Here’s what our component ends up looking like, with a little bit of custom CSS applied to our app:

screenshot

This is a pretty basic demo – Twitter is, indeed, the new hello world – but I hope it gives you a little bit of an idea about what you might be able to build with Mulberry. We’ve been using it in production to create content-rich mobile apps for our users for months now (connected to a web-based CMS instead of the filesystem, of course), and we’ve designed it specifically to be flexible enough to meet arbitrary client requests without the need to re-architect the underlying application.

If you know JavaScript, HTML, and CSS, Mulberry is a powerful tool to rapidly create a content-rich mobile application while taking advantage of an established infrastructure, rather than building it yourself. I’m excited to see what you’ll do with it!

Switching to Octopress


I’m taking a stab at starting a new blog at rmurphey.com, powered by Octopress, which is a set of tools, themes, and other goodness around a static site generator (SSG) called jekyll. A couple of people have noticed the new site and wondered what I’m doing, so I thought I’d take a couple of minutes to explain.

My old blog at blog.rebeccamurphey.com is managed using Posterous. It used to be a self-hosted WordPress site, but self-hosted WordPress sites are so 2009. One too many attacks by hackers made it way more trouble than it seemed to be worth. Posterous made switching from a WordPress install pretty easy, so, I did that. All told, it took a few hours, and I was pretty happy.

For a few reasons, the old blog isn’t going anywhere:

  • I ran into some trouble importing the old content into jekyll. I was tired and I didn’t investigate the issues too much, so they’re probably solvable, but …
  • Some of the old content just isn’t that good, and since time is a finite resource, I don’t want to get too wrapped up in moving it over. Plus …
  • Frighteningly or otherwise, some of my posts have become reference material on the internet. If I move them, I’ve got to deal with redirections, and I have a feeling that’s not going to be an easy task with Posterous.

In hindsight, I should have switched directly from WordPress to an SSG. Despite my many complaints about Posterous – misformatted posts, lack of comment hyperlinks, a sign-in requirement for commenting, and lots more – in the end my decision to switch to a static site generator instead was more about having easy control over my content on my filesystem.

This article explains it well, but the bottom line, I think, is that static site generators are blogging tools for people who don’t need all the bullshit that’s been added to online tools in the interest of making them usable by people who don’t know wtf they’re doing. So, yes, to use an SSG, you have to know wtf you’re doing, and for me that’s a good thing: the tool gets out of my way and lets me focus on the writing.

As for Octopress, it seems pretty damn nifty – the default theme looks gorgeous on my desktop and on my phone, and it seems they’ve taken care to put common customization points in a single sass file. All that aside, though, one of my favorite parts about it is that my content is truly my content. If Octopress pisses me off – though I hope it won’t! – then I can simply take my markdown files and put them in some other SSG, upload the whole thing to my GitHub pages, and be done with it. Win all around.

Using Object Literals for Flow Control and Settings


I got an email the other day from someone reading through jQuery Fundamentals – they’d come across the section about patterns for performance and compression, which is based on a presentation Paul Irish gave back at the 2009 jQuery Conference in Boston.

In that section, there’s a bit about alternative patterns for flow control – that is, deciding what a program should do next. We’re all familiar with the standard if statement:

function isAnimal(thing) {
  if (thing === 'dog' || thing === 'cat') {
    console.log("yes!");
  } else {
    console.log("no");
  }
}

What stumped the person who emailed me, though, was when the same logic as we see above was written like this:

function isAnimal(thing) {
  if (({ cat : 1, dog : 1 })[ thing ]) {
    console.log("yes!");
  } else {
    console.log("no");
  }
}

What’s happening here is that we’re using a throwaway object literal to express the conditions under which we will say a thing is an animal. We could have stored the object in a variable first:

function isAnimal(thing) {
  var animals = {
    cat : 1,
    dog : 1
  };

  if (animals[ thing ]) {
    console.log("yes!");
  } else {
    console.log("no");
  }
}

However, that variable’s only purpose would be to provide this one lookup, so it can be argued that the version that doesn’t bother setting the variable is more economical. Reasonable people can probably disagree about whether this economy of bytes is a good tradeoff for readability – something like this is perfectly readable to a seasoned developer, but potentially puzzling otherwise – but it’s an interesting example of how we can use literals in JavaScript without bothering to store a value in a variable.

The pattern works with an array, too:

function animalByIndex(index) {
  return [ 'cat', 'dog' ][ index ];
}

It’s also useful for looking up values generally, which is how I find myself using it most often these days in my work with Toura, where we routinely branch our code depending on the form factor of the device we’re targeting:

function getBlingLevel(device) {
  return ({
    phone : 100,
    tablet : 200
  })[ device.type ];
}

As an added benefit, constructs that use this pattern will return the conveniently falsy undefined if you try to look up a value that doesn’t have a corresponding property in the object literal.
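For example, using getBlingLevel from above, an unrecognized device type simply falls through:

var level = getBlingLevel({ type : 'watch' }); // undefined

if (!level) {
  console.log('no bling settings for this device');
}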

A great way to come across techniques like this is to read the source code of your favorite library (and other libraries too). Unfortunately, once discovered, these patterns can be difficult to decipher, even if you have pretty good Google fu. Just in case your neighborhood blogger isn’t available, IRC is alive and well in 2011, and it’s an excellent place to get access to smart folks eager to take the time to explain.

Lessons From a Rewrite


MVC and friends have been around for decades, but it’s only in the last couple of years that broad swaths of developers have started applying those patterns to JavaScript. As that awareness spreads, developers eager to use their newfound insight are presented with a target-rich environment, and the temptation to rewrite can be strong.

There’s a subtle reason that programmers always want to throw away the code and start over. The reason is that they think the old code is a mess. … The reason that they think the old code is a mess is because of a cardinal, fundamental law of programming: It’s harder to read code than to write it. - Joel Spolsky

When I started working with Toura Mobile late last year, they already had a product: a web-based CMS to create the structure of a mobile application and populate it with content, and a PhoneGap-based application to consume the output of the CMS inside a native application. Customers were paying, but the development team was finding that delivering new features was a struggle, and bug fixes seemed just as likely to break something else as not. They contacted me to see whether they should consider a rewrite.

With due deference to Spolsky, I don’t think it was a lack of readability driving their inclination to rewrite. In fact, the code wasn’t all that difficult to read or follow. The problem was that the PhoneGap side of things had been written to solve the problems of a single-purpose, one-off application, and it was becoming clear that it needed to be a flexible, extensible delivery system for all of the content combinations clients could dream up. It wasn’t an app — it was an app that made there be an app.

Where a new system concept or new technology is used, one has to build a system to throw away, for even the best planning is not so omniscient as to get it right the first time. Hence plan to throw one away; you will, anyhow. - Fred Brooks, The Mythical Man Month

By the time I’d reviewed the code and started writing up my findings, the decision had already been made: Toura was going to throw one away and start from scratch. For four grueling and exciting months, I helped them figure out how to do it better the second time around. In the end, I like to think we’ve come up with a solid architecture that’s going to adapt well to clients’ ever-changing needs. Here, then, are some of the lessons we learned along the way.

Understand what you’re rewriting

I had spent only a few days with the codebase when we decided that we were going to rewrite it. In some ways, this was good — I was a fresh set of eyes, someone who could think about the system in a new way — but in other ways, it was a major hindrance. We spent a lot of time at the beginning getting me up to speed on what, exactly, we were making; things that went without saying for existing team members did not, in fact, go without saying for me.

This constant need for explanation and clarification was frustrating at times, both for me and for the existing team, but it forced us to state the problem in plain terms. The value of this was incredible — as a team, we were far less likely to accept assumptions from the original implementation, even assumptions that seemed obvious.

One of the key features of Toura applications is the ability to update them “over the air” — it’s not necessary to put a new version in an app store in order to update an app’s content or even its structure. In the original app, this was accomplished via generated SQL diffs of the data. If the app was at version 3, and the data in the CMS was at version 10, then the app would request a patch file to upgrade version 3 to version 10. The CMS had to generate a diff for all possible combinations: version 3 to version 10, version 4 to version 10, etc. The diff consisted of queries to run against an SQLite database on the device. Opportunities for failures or errors were rampant, a situation exacerbated by the async nature of the SQLite interface.

In the new app, we replicated the feature with vastly less complexity — whenever there is an update, we just make the full data available at an app-specific URL as a JSON file, using the same format that we use to provide the initial data for the app on the device. The new data is stored on the device, but it’s also retained in memory while the application is running via Dojo’s Item File Read Store, which allows us to query it synchronously. The need for version-by-version diffs has been eliminated.
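A rough sketch of the idea – this is illustrative, not Toura’s actual code, and the shape of the JSON is made up:

dojo.require('dojo.data.ItemFileReadStore');

// the same JSON that seeds the app, loaded into an in-memory store
var store = new dojo.data.ItemFileReadStore({
  data : {
    identifier : 'id',
    items : appData.nodes // hypothetical: the nodes from the app's JSON file
  }
});

// for an in-memory store, onComplete fires synchronously
store.fetch({
  query : { id : 'node123' },
  onComplete : function(items) {
    console.log('found', items.length, 'matching node(s)');
  }
});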

Restating the problem led to a simpler, more elegant solution that greatly reduced the opportunities for errors and failure. As an added benefit, using JSON has allowed us to meet needs that we never anticipated — the flexibility it provides has become a valuable tool in our toolbox.

Identify pain points

If the point of a rewrite is to make development easier, then an important step is to figure out what, exactly, is making development hard. Again, this was a time to question assumptions — as it turned out, there were things that had come to be accepted burdens that were actually relatively easy to address.

One of the biggest examples of this was the time required to develop and test anything that might behave differently on one operating system versus another. For example, the Android OS has limited support for the audio and video tags, so a native workaround is required to play media on Android that is not required on iOS.

In the original code, this device-specific branching was handled in a way that undoubtedly made sense at the beginning but grew unwieldy over time. Developers would create Mustache templates, wrapping the template tags in /* */ so the templates were actually executable, and then compile those templates into plain JavaScript files for production. Here are a few lines from one of those templates:

/*  */
var mediaPath = "www/media/" + toura.pages.currentId + "/";
/*  */
/*  */
var mediaPath = [Toura.getTouraPath(), toura.pages.currentId].join("/");
/*  */
var imagesList = [], dimensionsList = [], namesList = [], thumbsList = [];
var pos = -1, count = 0;
/*  */
var pos = 0, count = 0;
/*  */

These templates were impossible to check with a code quality tool like JSHint, because it was standard to declare the same variable multiple times. Multiple declarations of the same variable meant that the order of those declarations was important, which made the templates tremendously fragile. The theoretical payoff was smaller code in production, but the cost of that byte shaving was high, and the benefit somewhat questionable — after all, we’d be delivering the code directly from the device, not over HTTP.

In the rewrite, we used a simple configuration object to specify information about the environment, and then we look at the values in that configuration object to determine how the app should behave. The configuration object is created as part of building a production-ready app, but in development we can alter configuration settings at will. Simple if statements replaced fragile template tags.
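In other words, the fragile template above collapses into something like this (a simplified sketch – the config object’s name and shape are illustrative, and I’m guessing at which branch belongs to which OS):

// generated as part of the production build; editable at will in development
var config = toura.app.Config;

var mediaPath = config.os === 'android' ?
      [ Toura.getTouraPath(), toura.pages.currentId ].join('/') :
      'www/media/' + toura.pages.currentId + '/';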

Since Dojo allows specifying code blocks for exclusion based on the settings you provide to the build process, we could mark code for exclusion if we really didn’t want it in production.

By using a configuration object instead of template tags for branching, we eliminated a major pain point in day-to-day development. While nothing matches the proving ground of the device itself, it’s now trivial to effectively simulate different device experiences from the comfort of the browser. We do the majority of our development there, with a high degree of confidence that things will work mostly as expected once we reach the device. If you’ve ever waited for an app to build and install to a device, then you know how much faster it is to just press Command-R in your browser instead.

Have a communication manifesto

Deciding that you’re going to embrace an MVC-ish approach to an application is a big step, but only a first step — there are a million more decisions you’re going to need to make, big and small. One of the widest-reaching decisions to make is how you’ll communicate among the various pieces of the application. There are all sorts of levels of communication, from application-wide state management — what page am I on? — to communication between UI components — when a user enters a search term, how do I get and display the results?

From the outset, I had a fairly clear idea of how this should work based on past experiences, but at first I took for granted that the other developers would see things the same way I did, and I wasn’t necessarily consistent myself. For a while we had several different patterns of communication, depending on who had written the code and when. Every time you went to use a component, it was pretty much a surprise which pattern it would use.

After one too many episodes of frustration, I realized that part of my job was going to be to lay down the law about this – it wasn’t that my way was more right than others, but rather that we needed to choose a way, or else reuse and maintenance was going to become a nightmare. Here’s what I came up with (there’s a short sketch after the list):

  • myComponent.set(key, value) to change state (with the help of setter methods from Dojo’s dijit._Widget mixin)
  • myComponent.on<Event>(componentEventData) to announce state changes and user interaction; Dojo lets us connect to the execution of arbitrary methods, so other pieces could listen for these methods to be executed.
  • dojo.publish(topic, [ data ]) to announce occurrences of app-wide interest, such as when the window is resized
  • myComponent.subscribe(topic) to allow individual components to react to published topics
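Put together, the four patterns look something like this (the component and topic names are made up for illustration):

// a hypothetical component
var searchInput = new toura.components.SearchInput();

// 1. setters to change state
searchInput.set('term', 'dojo');

// 2. connect to on<Event> methods to observe state changes and interaction
dojo.connect(searchInput, 'onSearch', function(term) {
  console.log('user searched for', term);
});

// 3. publish occurrences of app-wide interest
dojo.publish('/window/resize', [ { width : 320 } ]);

// 4. subscribe to published topics; inside a widget, dijit._Widget provides
// this.subscribe('/window/resize', '_onResize');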

Once we spelled out the patterns, the immediate benefit wasn’t maintainability or reuse; rather, we found that we didn’t have to make these decisions on a component-by-component basis anymore, and we could focus on the questions that were actually unique to a component. With conventions we could rely on, we were constantly discovering new ways to abstract and DRY our code, and the consistency across components meant it was easier to work with code someone else had written.

Sanify asynchronicity

One of the biggest challenges of JavaScript development — well, besides working with the DOM — is managing the asynchronicity of it all. In the old system, this was dealt with in various ways: sometimes a method would take a success callback and a failure callback; other times a function would return an object and check one of its properties on an interval.

images = toura.sqlite.getMedias(id, "image");

var onGetComplete = setInterval(function () {
  if (images.incomplete)
    return;

  clearInterval(onGetComplete);
  showImagesHelper(images.objs, choice)
},10);

The problem here, of course, is that if images.incomplete never gets set to false — that is, if the getMedias method fails — then the interval will never get cleared. Dojo and now jQuery (since version 1.5) offer a facility for handling this situation in an elegant and powerful way. In the new version of the app, the above functionality looks something like this:

toura.app.Data.get(id, image).then(showImages, showImagesFail);

The get method of toura.app.Data returns an immutable promise — the promise’s then method makes the resulting value of the asynchronous get method available to showImages, but does not allow showImages to alter the value. The promise returned by the get method can also be stored in a variable, so that additional callbacks can be attached to it.

Using promises vastly simplifies asynchronous code, which can be one of the biggest sources of complexity in a non-trivial application. By using promises, we got code that was easier to follow, components that were thoroughly decoupled, and new flexibility in how we responded to the outcome of an asynchronous operation.
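For completeness, here’s roughly how a method like get produces that promise, assuming Dojo 1.5+’s dojo.Deferred (illustrative, not Toura’s code – fetchFromStore is a made-up async data source):

function get(id) {
  var dfd = new dojo.Deferred();

  fetchFromStore(id, function(err, data) { // hypothetical async source
    if (err) {
      dfd.reject(err);
    } else {
      dfd.resolve(data);
    }
  });

  // dfd.promise is immutable: callers can attach callbacks with then(),
  // but they can't resolve or reject it themselves
  return dfd.promise;
}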

Naming things is hard

Throughout the course of the rewrite we were constantly confronted with one of those pressing questions developers wrestle with: what should I name this variable/module/method/thing? Sometimes I would find myself feeling slightly absurd about the amount of time we’d spend naming a thing, but just recently I was reminded how much power those names have over our thinking.

Every application generated by the Toura CMS consists of a set of “nodes,” organized into a hierarchy. With the exception of pages that are standard across all apps, such as the search page, the base content type for a page inside an app is always a node – or rather, it was, until the other day. I was working on a new feature and struggling to figure out how I’d display a piece of content that was unique to the app but wasn’t really associated with a node at all. I pored over our existing code, seeing the word node on what felt like every other line. As an experiment, I changed that word node to baseObj in a few high-level files, and suddenly a whole world of solutions opened up to me – the name of a thing had been limiting my thinking.

The lesson here, for me, is that the time we spent (and spend) figuring out what to name a thing is not lost time; perhaps even more importantly, the goal should be to give a thing the most generic name that still conveys what the thing’s job — in the context in which you’ll use the thing — actually is.

Never write large apps

I touched on this earlier, but if there is one lesson I take from every large app I’ve worked on, it is this:

The secret to building large apps is never build large apps. Break up your applications into small pieces. Then, assemble those testable, bite-sized pieces into your big application. - Justin Meyer

The more tied components are to each other, the less reusable they will be, and the more difficult it becomes to make changes to one without accidentally affecting another. Much like we had a manifesto of sorts for communication among components, we strived for a clear delineation of responsibilities among our components. Each one should do one thing and do it well.

For example, simply rendering a page involves several small, single-purpose components:

function nodeRoute(route, nodeId, pageState) {
  pageState = pageState || {};

  var nodeModel = toura.app.Data.getModel(nodeId),
      page = toura.app.UI.getCurrentPage();

  if (!nodeModel) {
    toura.app.Router.home();
    return;
  }

  if (!page || !page.node || nodeId !== page.node.id) {
    page = toura.app.PageFactory.createPage('node', nodeModel);

    if (page.failure) {
      toura.app.Router.back();
      return;
    }

    toura.app.UI.showPage(page, nodeModel);
  }

  page.init(pageState);

  // record node pageview if it is node-only
  if (nodeId && !pageState.assetType) {
    dojo.publish('/node/view', [ route.hash ]);
  }

  return true;
}

The router observes a URL change, parses the parameters for the route from the URL, and passes those parameters to a function. The Data component gets the relevant data, and then hands it to the PageFactory component to generate the page. As the page is generated, the individual components for the page are also created and placed in the page. The PageFactory component returns the generated page, but at this point the page is not in the DOM. The UI component receives it, places it in the DOM, and handles the animation from the old page to the new one.

Every step is its own tiny app, making the whole process tremendously testable. The output of one step may become the input to another step, but when input and output are predictable, the questions our tests need to answer are trivial: “When I asked the Data component for the data for node123, did I get the data for node123?”
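In Jasmine-ish terms (a sketch; the data shape is made up), such a test reads almost like the question itself:

describe('toura.app.Data', function() {
  it('returns the model for the requested node', function() {
    var model = toura.app.Data.getModel('node123');
    expect(model.id).toEqual('node123');
  });
});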

Individual UI components are their own tiny apps as well. On a page that displays a videos node, we have a video player component, a video list component, and a video caption component. Selecting a video in the list announces the selection via the list’s onSelect method. Dojo allows us to connect to the execution of object methods, so in the page controller, we have this:

this.connect(this.videoList, 'onSelect', function(assetId) {
  var video = this._videoById(assetId);
  this.videoCaption.set('content', video.caption || '');
  this.videoPlayer.play(assetId);
});

The page controller receives the message and passes it along to the other components that need to know about it — components don’t communicate directly with one another. This means the component that lists the videos can list anything, not just videos — its only job is to announce a selection, not to do anything as a result.

Keep rewriting

It takes confidence to throw work away … When people first start drawing, they’re often reluctant to redo parts that aren’t right … they convince themselves that the drawing is not that bad, really — in fact, maybe they meant it to look that way. - Paul Graham, “Taste for Makers”

The blank slate offered by a rewrite allows us to fix old mistakes, but inevitably we will make new ones in the process. As good stewards of our code, we must always be open to the possibility of a better way of doing a thing. “It works” should never be mistaken for “it’s done.”