Adventures in JavaScript Development

A Baseline for Front-End Developers


I wrote a README the other day for a project that I’m hoping other developers will look at and learn from, and as I was writing it, I realized that it was the sort of thing that might have intimidated the hell out of me a couple of years ago, what with its casual mentions of Node, npm, Homebrew, git, tests, and development and production builds.

Once upon a time, editing files, testing them locally (as best as we could, anyway), and then FTPing them to the server was the essential workflow of a front-end dev. We measured our mettle based on our ability to wrangle IE6 into submission or achieve pixel perfection across browsers. Many members of the community – myself included – lacked traditional programming experience. HTML, CSS, and JavaScript – usually in the form of jQuery – were self-taught skills.

Something has changed in the last couple of years. Maybe it’s the result of people starting to take front-end dev seriously, maybe it’s browser vendors mostly getting their shit together, or maybe it’s front-end devs – again, myself included – coming to see some well-established light about the process of software development.

Whatever it is, I think we’re seeing the emphasis shift from valuing trivia to valuing tools. There’s a new set of baseline skills required in order to be successful as a front-end developer, and developers who don’t meet this baseline are going to start feeling more and more left behind as those who are sharing their knowledge start to assume that certain things go without saying.

Here are a few things that I want to start expecting people to be familiar with, along with some resources you can use if you feel like you need to get up to speed. (Thanks to Paul Irish, Mike Taylor, Angus Croll, and Vlad Filippov for their contributions.)

JavaScript

This might go without saying, but simply knowing a JavaScript library isn’t sufficient any more. I’m not saying you need to know how to implement all the features of a library in plain JavaScript, but you should know when a library is actually required, and be capable of working with plain old JavaScript when it’s not.

That means that you’ve read JavaScript: The Good Parts – hopefully more than once. You understand data structures like objects and arrays; functions, including how and why you would call and apply them; working with prototypal inheritance; and managing the asynchronicity of it all.
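
If your memory of some of those concepts is fuzzy, here’s a tiny refresher sketch – call and apply set this explicitly, and Object.create wires up prototypal delegation. (The names are invented for illustration.)

function greet(greeting) {
  return greeting + ', ' + this.name;
}

var speaker = { name : 'Rebecca' };

greet.call(speaker, 'Hello');      // "Hello, Rebecca"
greet.apply(speaker, [ 'Howdy' ]); // "Howdy, Rebecca"

// prototypal inheritance: author delegates to person
var person = { sayName : function() { return 'I am ' + this.name; } };
var author = Object.create(person);
author.name = 'Rebecca';
author.sayName();                  // "I am Rebecca"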

If your plain JS fu is weak, here are some resources to help you out:

Git (and a Github account)

If you’re not on Github, you’re essentially unable to participate in the rich open-source community that has arisen around front-end development technologies. Cloning a repo to try it out should be second-nature to you, and you should understand how to use branches on collaborative projects.

Need to boost your git skills?

Modularity, dependency management, and production builds

The days of managing dependencies by throwing one more script or style tag on the page are long gone. Even if you haven’t been able to incorporate great tools like RequireJS into your workflow at work, you should find time to investigate them in a personal project or in a project like Backbone Boilerplate, because the benefits they convey are huge. RequireJS in particular lets you develop with small, modular JS and CSS files, and then concatenates and minifies them via its optimization tool for production use.
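
If you haven’t seen AMD modules before, here’s a minimal sketch of the RequireJS flavor – one module declaring its dependencies, and an entry point requiring it. (The file names and endpoint are made up for the example.)

// js/search.js – a module that declares its dependencies
define([ 'jquery' ], function( $ ) {
  return {
    fetch : function( term ) {
      return $.getJSON( '/search', { q : term } );
    }
  };
});

// js/main.js – an entry point that loads the module
require([ 'search' ], function( search ) {
  search.fetch( 'javascript' ).done(function( results ) {
    console.log( results );
  });
});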

Skeptical of AMD? That’s no excuse to be doing nothing. At the very least, you should be aware of tools like UglifyJS or Closure Compiler that will intelligently minify your code, and then concatenate those minified files prior to production.

If you’re writing plain CSS – that is, if you’re not using a preprocessor like Sass or Stylus – RequireJS can help you keep your CSS files modular, too. Use @import statements in a base file to load dependencies for development, and then run the RequireJS optimizer on the base file to create a file built for production.
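
For example – with invented file names – a base file for development might look like this:

/* base.css – the development version pulls in each module */
@import url("reset.css");
@import url("grid.css");
@import url("nav.css");

Running the optimizer against it (node r.js -o cssIn=base.css out=base-built.css) inlines the imports into a single production file.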

In-Browser Developer Tools

Browser-based development tools have improved tremendously over the last couple of years, and they can dramatically improve your development experience if you know how to use them. (Hint: if you’re still using alert to debug your code, you’re wasting a lot of time.)

You should probably find one browser whose developer tools you primarily use – I’m partial to Google Chrome’s Developer Tools these days – but don’t dismiss the tools in other browsers out of hand, because they are constantly adding useful features based on developer feedback. Opera’s Dragonfly in particular has some features that make its developer tools stand out, such as an (experimental) CSS profiler, customizable keyboard shortcuts, remote debugging without requiring a USB connection, and the ability to save and use custom color palettes.

If your understanding of browser dev tools is limited, the presentation Fixing these jQuery is a great (and not particularly jQuery-centric) overview of debugging, including how to do step debugging – a life-altering thing to learn if you don’t already know it.

The command line

Speaking of the command line, being comfortable with it is no longer optional – you’re missing out on way too much if you’re not ready to head over to a terminal window and get your hands dirty. I’m not saying you have to do everything in the terminal – I won’t take your git GUI away from you even though I think you’ll be better off without it eventually – but you should absolutely have a terminal window open for whatever project you’re working on. There are a few command line tasks you should be able to do without thinking:

  • ssh to log in to another machine or server
  • scp to copy files to another machine or server
  • ack or grep to find files in a project that contain a string or pattern
  • find to locate files whose names match a given pattern
  • git to do at least basic things like add, commit, status, and pull
  • brew to use Homebrew to install packages
  • npm to install Node packages
  • gem to install Ruby packages

If there are commands you use frequently, edit your .bashrc or .profile or .zshrc or whatever, and create an alias so you don’t have to type as much. You can also add aliases to your ~/.gitconfig file. Gianni Chiappetta’s dotfiles are an excellent inspiration for what’s possible.
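
For instance, a couple of typical (if invented) entries:

# in ~/.bashrc or ~/.zshrc
alias gs='git status'
alias serve='python -m SimpleHTTPServer' # quick static server in the cwd

# in ~/.gitconfig
[alias]
  co = checkout
  st = status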

Note: If you’re on Windows, I don’t begin to know how to help you, aside from suggesting Cygwin. Right or wrong, participating in the open-source front-end developer community is materially more difficult on a Windows machine. On the bright side, MacBook Airs are cheap, powerful, and ridiculously portable, and there’s always Ubuntu or another *nix.

Client-side templating

It wasn’t so long ago that it was entirely typical for servers to respond to XHRs with a snippet of HTML, but sometime in the last 12 to 18 months, the front-end dev community saw the light and started demanding pure data from the server instead. Turning that data into HTML ready to be inserted in the DOM can be a messy and unmaintainable process if it’s done directly in your code. That’s where client-side templating libraries come in: they let you maintain templates that, when mixed with some data, turn into a string of HTML. Need help picking a templating tool? Garann Means’ template chooser can point you in the right direction.
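
The core idea is tiny; here’s a toy version of a templating function, just to illustrate – real libraries add caching, escaping, logic, and more.

function render( template, data ) {
  // replace each {{key}} token with the corresponding data value
  return template.replace( /\{\{(\w+)\}\}/g, function( match, key ) {
    return data[ key ];
  });
}

var tpl = '<li><a href="{{url}}">{{name}}</a></li>';
render( tpl, { name : 'rmurphey', url : 'http://rmurphey.com' } );
// '<li><a href="http://rmurphey.com">rmurphey</a></li>'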

CSS preprocessors

Paul Irish noted the other day that we’re starting to see front-end devs write code that’s very different from what ends up in production, and code written with CSS preprocessors is a shining example of this. There’s still a vocal crowd that feels that pure CSS is the only way to go, but they’re starting to come around. These tools give you features that arguably should be in CSS proper by now – variables, math, logic, mixins – and they can also help smooth over the CSS property prefix mess.
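
A few invented lines of Sass (SCSS syntax) show the appeal – a variable, a bit of math, and a mixin that smooths over the prefix mess:

$brand-color: #b9090b;

@mixin rounded($radius: 4px) {
  -webkit-border-radius: $radius;
     -moz-border-radius: $radius;
          border-radius: $radius;
}

.callout {
  background: $brand-color;
  width: 960px / 3; // math: compiles to 320px
  @include rounded(8px);
}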

Testing

One of the joys of writing modular, loosely coupled code is that your code becomes vastly easier to test, and with tools like Grunt, setting up a project to include tests has never been easier. Grunt comes with QUnit integration, but there are a host of testing frameworks that you can choose from – Jasmine and Mocha are a couple of my current favorites – depending on your preferred style and the makeup of the rest of your stack.

While testing is a joy when your code is modular and loosely coupled, testing code that’s not well organized can be somewhere between difficult and impossible. On the other hand, forcing yourself to write tests – perhaps before you even write the code – will help you organize your thinking and your code. It will also let you refactor your code with greater confidence down the line.
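
To make that concrete, here’s what a minimal spec might look like in Jasmine, testing a hypothetical add function:

describe('add', function() {
  it('sums two numbers', function() {
    expect( add(1, 2) ).toEqual( 3 );
  });

  it('handles a single argument', function() {
    expect( add(1) ).toEqual( 1 );
  });
});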

  • A short screencast I recorded about testing your jQuery with Jasmine.
  • An example of unit tests on the jquery-bbq plugin.

Process automation (rake/make/grunt/etc.)

Grunt’s ability to set up a project with built-in support for unit tests is one example of process automation. The reality of front-end development is that there’s a whole lot of repetitive stuff we have to do, but as a friend once told me, a good developer is a lazy developer: as a rule of thumb, if you find yourself doing the same thing three times, it’s time to automate it.

Tools like make have been around for a long time to help us with this, but there’s also rake, grunt, and others. Learning a language other than JavaScript can be extremely helpful if you want to automate tasks that deal with the filesystem, as Node’s async nature can become a real burden when you’re just manipulating files. There are lots of task-specific automation tools, too – tools for deployment, build generation, code quality assurance, and more.
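
As a sketch of what this can look like – following the style of a Grunt configuration with the grunt-contrib plugins; the details vary across Grunt versions:

// Gruntfile sketch: concatenate source files, then minify the result
module.exports = function( grunt ) {
  grunt.initConfig({
    concat : {
      dist : { src : [ 'src/*.js' ], dest : 'dist/app.js' }
    },
    uglify : {
      dist : { src : 'dist/app.js', dest : 'dist/app.min.js' }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.registerTask('default', [ 'concat', 'uglify' ]);
};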

Code quality

If you’ve ever been bitten by a missing semicolon or an extra comma, you know how much time can be lost to subtle flaws in your code. That’s why you’re running your code through a tool like JSHint, right? It’s configurable and has lots of ways to integrate it into your editor or build process.
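
A sample .jshintrc shows the flavor – curly requires braces around blocks, eqeqeq requires strict equality, undef flags undeclared variables, and browser predefines globals like document:

{
  "curly"   : true,
  "eqeqeq"  : true,
  "undef"   : true,
  "browser" : true
}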

The fine manual

Alas, there is no manual for front-end development, but MDN comes pretty close. Good front-end devs know to prefix any search engine query with mdn – for example, mdn javascript arrays – in order to avoid the for-profit plague that is w3schools.

The End

As with anything, reading about these things won’t make you an expert, or even moderately skilled – the only surefire way to get better at a thing is to do that thing. Good luck.

Greenfielding

I’m officially one-third of the way through my self-imposed month of unemployment before I join Bocoup at the beginning of May, and I’ve been spending most of what would normally be my working hours on a small demo to support talks at conferences I will be speaking at this summer. It’s just a little app that searches various services, and displays the results – so simple that, when I showed it to Melissa, she helpfully asked why I wouldn’t just use Google.

It’s been about 18 months since I last got to start a project from scratch – in that case, the codebase that became Mulberry – but even then, I didn’t have control over the full stack of technologies, just the JavaScript side of things. Over the course of my time on that project, I came to be extremely familiar with Dojo, fairly competent with Jasmine, decently comfortable with Ruby and its go-to simple server Sinatra, and somewhat conversational in Sass.

I spent most of my time on that project working with technologies with which I was already pretty comfortable. Interactions with new technologies came in dribs and drabs (except for that one time I decided to test my Ruby skills by rewriting our entire build process), and all of my learning was backed up by a whole lot of institutional knowledge.

The consulting world, of course, is a wee bit different: you interact frequently with new technologies, and you never know what a client might ask you to do. Learning comes in bursts, and the ability to quickly come up to speed with a technology is imperative. On a six-week project, you can’t spend the first 40 hours getting your bearings.

Even though I spent three years as a consultant, returning to that world of constant learning was feeling a tad intimidating. And so for this project, I decided to make a point of leaving that comfort zone, and intentionally chose technologies – Node, Bootstrap, Backbone, Mocha, RequireJS – that I hadn’t really had a chance to work with in depth (or at all).

On Learning

Greenfield projects are few and far between, and it’s easy to get in a rut by sticking with the tools you already know. Some of my most exciting times at Toura weren’t when I was writing JavaScript, but rather when I was learning how to talk to the computer in a whole new language. Greenfielding a personal project is a special treat – it never really has to be “finished,” and no one’s going to be mad at you if it turns out you made a shitty choice, so you’re free to try things that are less of a sure thing than they would need to be if you were getting paid.

Speaking personally, it can also be a little intimidating to learn a new thing because learning often involves asking for help, and asking for help requires admitting that I don’t already know how to do the thing that people might expect I already know how to do.

Sometimes the thing that gets in the way of starting a new learning project is actually the fear that I will get stuck. What does it mean if the person who talks at conferences about principles of code organization can’t figure out how best to structure a particular app with Backbone? What does it mean if the person who’s been encouraging people to build their JavaScript can’t get RequireJS to generate a proper build? What will I say to Isaac, now that he is standing in front of me and introducing himself, when I have not in fact spent any quality time with Node prior to this past weekend?

Lucky for me, it turns out that all of this is mostly in my head. While I often preface my questions with a small dose of humility and embarrassment, it turns out that well articulated questions are usually greeted with thoughtful and helpful answers. If anything, I’m trying to be more communicative about the learning that I do, because I think it’s important that people feel comfortable acknowledging that they used to not know a certain thing, and now they do. I also try to gently remind people that just because they have known something for months or years doesn’t mean they should look down upon the person enthusiastically blogging about it today.

On that note … here’s what’s new to me these past couple of weeks :)

Twitter Bootstrap

I’ve written a lot of demo apps, and while my coding style has changed over the years, one thing has remained constant: they all look fairly terrible. In theory, we’re all smart enough to know that what a demo looks like doesn’t have any bearing on what it explains, but in reality a good-looking demo is simply more compelling, if only because the viewer isn’t distracted by the bad design.

With this in mind, I decided to give Twitter Bootstrap a try. When I first arrived at the site, I started looking for docs about how to set it up, but it turns out that I vastly overestimated Bootstrap’s complexity. Drop a link to its stylesheet into your page (and, optionally, another for the responsive CSS), look at the examples, and start writing your markup.

What I really loved is that there were patterns for everything I needed, and those patterns were easy to follow and implement. Within an hour or so I had a respectable-looking HTML page with markup that didn’t seem to suck – that is, it looked good and it was a decent starting point if I ever wanted to apply a custom design.

Node

If you’ve ever talked to me about Node, you know that I have pretty mixed feelings about it – some days I feel like the people writing JavaScript in the browser really would have benefited if the people who have gravitated to Node had stuck around to invest their collective smarts in the world where 99% of JavaScript is still written. But that doesn’t really have anything to do with Node the technology, so much as Node the new shiny thing unburdened by browser differences.

I’ve actually visited Node a couple of times previously – if you haven’t at least installed it, you might be living under a rock – and I was flattered that Garann asked me to review her book Node for Front-End Developers, but past experiences had left me frustrated.

This time, something was different. I don’t rule out that it might be me, or even that learning some of the ins and outs of Ruby might have prepared me to understand Node – and packages and dependency management and writing for the server instead of the browser – better this time around. It could also be that the Node ecosystem has reached a point of maturity that it just hadn’t reached the last time I poked around.

Regardless, I found that everything made a whole lot more sense this time, and my struggles this time were about forgetting to stringify an object before sending it as a response to a request, not about getting the server to start in the first place. I used the q module to give me my beloved promises for managing all the asynchronicity, and generally found it ridiculously pleasant to leave behind all the context switching I’d grown accustomed to while using JavaScript and Ruby side by side. I’ll probably still turn to Ruby for automating things on the command line (though I continue to be intrigued by grunt), but I’m ready to admit that it’s time for me to add Node to my toolbox.
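
That stringify lesson in miniature – Node’s http server wants a string or a Buffer, not an object:

var http = require('http');

http.createServer(function( req, res ) {
  var payload = { results : [ 'one', 'two' ] };

  res.writeHead( 200, { 'Content-Type' : 'application/json' } );

  // res.end( payload ) would fail – serialize it first
  res.end( JSON.stringify( payload ) );
}).listen( 3000 );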

Mocha

To be honest, I’d just planned on using Jasmine for writing tests for this project, mostly because I’d never set up Jasmine myself, and I was interested in maybe getting it working with grunt for headless testing. I ended up bailing on that plan when, in the course of some Googling for answers about Jasmine, I came across Mocha.

Mocha is a super-flexible testing framework that runs on Node and in the browser. You can choose your assertion library – that is, you can choose to write your tests like assert.equal(1, 1) or expect(1).to.be(1) depending on your preference. I decided to use the latter style, with the help of expect.js. You can also choose your reporting style, including the ability to generate docs from your tests.
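
For a taste of the style I settled on, here’s a minimal spec using Mocha’s BDD interface with expect.js assertions (the capitalize function is hypothetical):

describe('capitalize', function() {
  it('uppercases the first letter', function() {
    expect( capitalize('mocha') ).to.be( 'Mocha' );
  });

  it('leaves the rest of the string alone', function() {
    expect( capitalize('mocha') ).to.not.be( 'MOCHA' );
  });
});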

I had to do a bit of finagling to get the browser-based tests working with my RequireJS setup, and ultimately I ended up just using my app’s server, running in dev mode, to serve the tests in the browser. I’m still working out how best to run just one test at a time in the browser, but all in all, discovering Mocha has probably been the best part of working on this project.

RequireJS

RequireJS is another tool that I’ve dabbled with in the past, but for the last 18 months I’ve been spending most of my time with Dojo’s pre-AMD build system, so I had some catching up to do. I don’t have a ton to say about RequireJS except:

  • It’s gotten even easier to use since I last visited it.
  • The docs are great and also gorgeous.
  • While I haven’t had to bother him lately, James Burke, the author and maintainer of RequireJS, is a kind and incredibly helpful soul.
  • The text! plugin makes working with client-side templates incredibly simple, without cluttering up your HTML with templates in script tags or hard-coding your templates into your JavaScript. (There’s a quick sketch of it after this list.)
  • The use! plugin makes it painless to treat libraries that don’t include AMD support just like libraries that do. I hear it might become an official plugin soon; I hope it will.
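
Here’s the quick sketch of the text! plugin promised above – a module that receives a template file’s contents as a plain string (the path is made up):

define([ 'text!templates/tweet.html' ], function( tweetTemplate ) {
  // tweetTemplate is the raw contents of templates/tweet.html,
  // ready to hand to whatever templating function you prefer
  return {
    template : tweetTemplate
  };
});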

Backbone

This part was a tough choice, and I actually set out to use a different framework but ended up getting cold feet – even though this was just a personal project, it did need to reach some semblance of done-ness in some reasonable period of time. After a little bit of poking around at other options, I decided that, barring completely copping out and using Dojo, Backbone was going to be the best tool for this particular job on this particular schedule.

I’m pretty torn about this, because I decided to use a framework that I know has shortcomings and limited magic, and I know that other options would serve me better in the long term. But I also know that the long term doesn’t exactly matter for this particular project. The thing that swayed me, really, was that with Backbone, I didn’t feel like I needed to grasp a whole slew of concepts before I could write my first line of code.

I looked up plenty of things along the way, and rewrote my fair share of code when I discovered that I’d been Doing It Wrong, but I was able to maintain a constant forward momentum. With the other options I considered, I felt like I was going to have to climb a ladder of unknown height before making any forward progress.

I feel like I made the right choice for this project, but it’s a choice I’d spend a lot more time on for a “real” project, and I’d be much more inclined to invest the initial energy in getting up to speed if the payoff was clearer. This, though, is a choice that people seem to be consistently terrible at, and so I feel like I should beat myself up about it just a little. It’s all too common to dig a ginormous hole for ourselves by choosing the technology that lets us start writing code the soonest; on the flip side, it’s all too common to choose a technology that’s complete overkill for the task at hand.

The End

The master branch of the repo for the project should be mostly stable (if incomplete) if you want to check it out. I’m going to close comments on this post in the hopes that you’ll write your own post about what you’ve been learning instead :)

JavaScript: It’s a Language, Not a Religion

I have six things to say:

  1. I am in a committed relationship with my partner Melissa. We will celebrate six years together on Sunday. We contribute frequently to political causes.

  2. I was deeply saddened yesterday to learn that Brendan Eich contributed money in support of a political initiative that sought to rescind the court-established right for same-sex couples to marry in the state of California. It has changed my view of him as a person, despite the fact that we have had a positive and professional relationship and he has been a great supporter of my JavaScript career. I think he is on the wrong side of history, and I hope that courts will continue to agree with me.

  3. I had a frank, private, and face-to-face conversation with Brendan about the issue during JSConf. I shared my disappointment, sadness, and disagreement.

  4. I have been dismayed to see this incident interpreted as a statement about the JavaScript community as a whole. This community is made up of so many people who believe so many different things, and yesterday I was reminded that they are all just people, and JavaScript is just a language, not a religion. I shudder to think of a world where there is a political litmus test for entry into the community. Indeed, I am extremely torn about introducing personal politics into my professional life*, as I fear it will encourage professional colleagues to opine about personal beliefs that are frankly none of their business. One of the great joys of working with computers is that they do not care who I am or what I believe; I realize that to ask the same of people is unreasonable, but inviting politics into the workplace is a treacherously slippery slope. Unless my personal belief system presents an imminent danger to my colleagues, I am loath to welcome discussion of it by people who otherwise have no substantial or personal relationship with me.

  5. I believe individual companies must determine how best to address these issues, as their attitude toward them can have a significant impact on their ability to hire and retain talented people. I support constructive pressure on companies to align themselves with or distance themselves from political causes, but I would not support a company that prohibited its employees from participating in the political process. I urge anyone who is hurt or offended by this incident to engage with Brendan and Mozilla personally and professionally. Brendan is wrong on this issue, but he is a thoughtful and intelligent person, and he is also a human being.

  6. Finally: If this incident has made you angry or sad or disappointed, the most effective thing you can do is follow in Brendan’s footsteps by putting your money where your mouth is. Money speaks volumes in the American political system, and there are campaigns in progress right now that will impact the rights of gays and lesbians. Your contribution of $50, $100, or $1,000 – or, in lieu of money, your time – will have far more impact than yet another angry tweet.

And now I shall turn off the internet for a bit. Comments are disabled. Shocker, I know.

* It bears mentioning that, in certain cases, people making political contributions are required to include information about their employer. The inclusion of this information does not indicate that the employer supports – or is even aware of – the contribution.

Bocoup



It wasn’t so long ago that I was giving my first talk about JavaScript at the 2009 jQuery Conference, and it was there that Bocoup’s Boaz Sender and Rick Waldron created the (now-defunct) objectlateral.com, a celebration of an unfortunate typo in the conference program’s listing of my talk.

A bond was forged, and ever since I’ve watched as Bocoup has grown and prospered. I’ve watched them do mind-boggling work for the likes of Mozilla, The Guardian, Google, and others, all while staying true to their mission of embracing, contributing to, and evangelizing open-web technologies.

Today, I’m beyond excited – and also a wee bit humbled – to announce that I’m joining their consulting team. As part of that role, I look forward to spending even more time working on and talking about patterns and best practices for developing client-side JavaScript applications. I also hope to work on new training offerings aimed at helping people make great client-side applications with web technology.

New beginnings have a terrible tendency to be accompanied by endings, and while the Bocoup opportunity is one I couldn’t refuse, it’s with a heavy heart that I bid farewell to the team at Toura. I’m proud of what we’ve built together, and that we’ve shared so much of it with the developer community in the form of Mulberry. The beauty of open source means that I fully expect to continue working on and with Mulberry once I leave Toura, but I know it won’t be the same.

I’ll be spending the next few days tying up loose ends at Toura, and then I’m taking a break in April to hit JSConf, spend some time in Berlin, and head to Warsaw to speak at FrontTrends. I’ll make my way back home in time to start with Bocoup on May 1.

And so. To my teammates at Toura: I wish you nothing but the best, and look forward to hearing news of your continued success. To Bocoup: Thanks for welcoming me to the family. It’s been a long time coming, and I’m glad the day is finally here.

Girls and Computers


After a week that seemed just chock full of people being stupid about women in technology, I just found myself thinking back on how it was that I ended up doing this whole computer thing in the first place. I recorded a video a while back for the High Visibility Project, but that really just told the story of how I ended up doing web development. The story of how I got into computers began when I was unequivocally a girl. It was 1982.

Back then, my dad made eyeglasses. My mom stayed at home with me and my year-old sister – which she’d continue to do until I was a teenager, when my brother finally entered kindergarten eight years later. Their mortgage was $79 – about $190 in today’s dollars – which is a good thing because my dad made about $13,000 a year. We lived in Weedsport, New York, a small town just outside of Syracuse. We walked to the post office to get our mail. The farmers who lived just outside town were the rich people. In the winters the fire department filled a small depression behind the elementary school with water for a tiny skating rink. There were dish-to-pass suppers in the gym at church.

In 1982, Timex came out with the Timex Sinclair TS-1000, selling 500,000 of them in just six months. The computer, a few times thicker than the original iPad but with about the same footprint, cost $99.95 – more than that mortgage payment. When everyone else in town was getting cable, my parents decided that three channels were good enough for them – it’s possible they still had a black-and-white TV – and bought a computer instead.

Timex Sinclair TS-1000

I remember tiny snippets of that time – playing kickball in my best friend Beth’s yard, getting in trouble for tricking my mother into giving us milk that we used to make mud pies, throwing sand in the face of my friend Nathan because I didn’t yet appreciate that it really sucks to get sand thrown in your face – but I vividly remember sitting in the living room of our house on Horton Street with my father, playing with the computer.

A cassette player was our disk drive, and we had to set the volume just right in order to read anything off a tape – there was actually some semblance of a flight simulator program that we’d play, after listening to the tape player screech for minutes on end. Eventually we upgraded the computer with a fist-sized brick of RAM that we plugged into the back of the computer, bumping our total capacity from 2K to 34K. I wrote programs in BASIC, though for the life of me I can’t remember what any of them did. The programs that were the most fun, though, were the ones whose assembly I painstakingly transcribed, with my five-year-old fingers, from the back of magazines – pages and pages of letters and numbers I didn’t understand on any level, and yet they made magic happen if I got every single one right.

A string of computers followed. My parents bought a Coleco Adam when we moved to Horseheads, New York – apparently the computer came with a certificate redeemable for $500 upon my graduation from high school, but Coleco folded long before they could cash it in. I made my first real money by typing a crazy lady’s crazy manuscript about crazy food into an Apple IIe that we had plugged into our TV, and my uncle and I spent almost the entirety of his visit from Oklahoma writing a game of Yahtzee! on that computer, again in BASIC.


Above: Me at a computer fair at the mall with my sister, my mother, and my friend Michael. “You were giving us all a tutorial, I can tell,” says my mom. Note the 5-1/4” external floppy drive.

In middle school, I started a school newspaper, and I think we used some prehistoric version of PageMaker to lay it out. When high school rolled around, I toiled through hand-crafting the perfect letters and lines and arrows in Technical Drawing so I could take CAD and CAM classes and make the computer draw letters and lines and arrows for me, and quickly proceeded to school just about every boy in the class. In my senior year of high school, I oversaw the school yearbook’s transition from laying out pages on paper to laying out pages with computers, this time the vaguely portable (it had a handle on the back!) Mac Classic. We used PageMaker again; the screen was black and white and 9”, diagonally.

Macintosh Classic

It was around then that a friend gave me a modem and – to his eventual chagrin, when he got the bill – access to his Delphi account, giving me my first taste of the whole Internet thing in the form of telnet, gopher, and IRC. When I went to college the following year, I took with me a computer with perhaps a 10MB hard drive, and no mouse.

Once again I found myself poring over magazines to discover URIs and, eventually, URLs that I could type to discover a whole new world of information. In 1995, I spent the summer making my college newspaper’s web site, previewing it in Lynx – it felt like there wasn’t much to learn when there was so little difference between the markup and what I saw on the screen. I would go to the computer lab to use NCSA’s Mosaic on the powerful RISC 6000 workstations, because they had a mouse. Yahoo! was about one year old. My friend Dave, who lived down the street, installed Windows 95 that summer and invited me over to show me. It was amazing. We were living in the future.

My early years with computers seem pretty tame – I wasn’t tearing them apart or building my own or doing anything particularly interesting with them, but I was using them, I was telling them what to do and they were mostly listening, and it never made me feel like I was weird. To the contrary, it made me feel powerful and empowered. I felt like a part of this ever-growing community of people who understood, eventually, that computers were going to change the world. It was the people who didn’t understand this who were weird and beneath us. It was the people who understood computers better than me of whom I stood in awe.

I can barely remember a time when computers weren’t a part of my life, and yet when they first entered my life, their presence was incredibly exceptional. These days, of course, computers are ubiquitous, but interaction with them at the copy-assembly-from-the-back-of-a-magazine level is almost nonexistent. Parents who can approach a computer with the same awe and wonder and determination as a child – as I must imagine that my dad did in 1982 – are likely equally rare.

In some ways, it is like the very ubiquity of technology has led us back to a world where socially normative gender roles take hold all over again, and the effort we’re going to need to put into overcoming that feels overwhelming sometimes. Words can’t express my gratitude for the parents I have, for that $99.95 investment they made in me, and for the fact that I was lucky enough to be 5 and full of wonder in 1982.

Thoughts on a (Very) Small Project With Backbone and Backbone Boilerplate


I worked with Backbone and the Backbone Boilerplate for the first time last weekend, putting together a small demo app for a presentation I gave last week at BazaarVoice. I realize I’m about 18 months late to the Backbone party, here, but I wanted to write down my thoughts, mostly because I’m pretty sure they’ll change as I get a chance to work with both tools more.

Backbone

Backbone describes itself as a tool that “gives structure to web applications,” but, at the risk of sounding pedantic, I think it would be more accurate to say that it gives you tools that can help you structure your applications. There’s incredibly little prescription about how to use the tools that Backbone provides, and I have a feeling that the code I wrote to build my simple app looks a lot different than what someone else might come up with.

This lack of prescription feels good and bad – good, because I was able to use Backbone to pretty quickly set up an infrastructure that mirrored ones I’ve built in the past; bad, because it leaves open the possibility of lots of people inventing lots of wheels. To its credit, it packs a lot of power in a very small package – 5.3k in production – but a real app is going to require layering a lot more functionality on top of it. Ultimately, the best way to think of Backbone is as the client-side app boilerplate you’d otherwise have to write yourself.

My biggest complaint about Backbone is probably how unopinionated it is about the view layer. Its focus seems to be entirely on the data layer, but the view is still where we spend the vast majority of our time. Specifically, I think Backbone could take a page from Dojo, and embrace the concept of “templated widgets”, because that’s what people seem to be doing with Backbone views anyway: mixing data with a template to create a DOM fragment, placing that fragment on the page, listening for user interaction with the fragment, and updating it as required. Backbone provides for some of this, specifically the event stuff, but it leaves you to write your own functionality when it comes to templating, placing, and updating. I think this is a solvable problem without a whole lot of code, and want to spend some time trying to prove it, but I know I need to look into the Backbone Layout Manager before I get too carried away.
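
To sketch what I mean by a templated widget – this isn’t Backbone’s API prescribing anything, just one shape the pattern can take, with the template function assumed to come from your templating library of choice:

var TweetView = Backbone.View.extend({
  tagName : 'li',

  events : {
    'click .retweet' : 'onRetweet'
  },

  render : function() {
    // mix data with a template, place the result in the view's element
    this.$el.html( template( this.model.toJSON() ) );
    return this;
  },

  onRetweet : function() {
    // respond to user interaction with the rendered fragment
  }
});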

Backbone Boilerplate

This project from Tim Branyen was a life-saver – it gave me an absolutely enormous head start when it came to incorporating RequireJS, setting up my application directories, and setting up a development server. It also included some great inline docs that helped me get my bearings with Backbone.

There are a couple of ways I think the boilerplate could be improved, and I’d be curious for others’ opinions:

  • The sample app includes the concept of “modules,” which seem to be single files that include the models, collections, views, and routes for a … module. I don’t love the idea of combining all of this into a single file, because it seems to discourage smart reuse and unit testing of each piece of functionality. In the app I created, I abandoned the concept of modules, and instead broke my app into “components”, “controllers”, and “services”. I explain this breakdown in a bit more depth in the presentation I gave at BazaarVoice. I’m not sure this is the right answer for all apps, but I think modules oversimplify things.
  • The boilerplate includes a namespace.js file. It defines a namespace object, and that object includes a fetchTemplate method. It seems this method should only be used by views, and so I’d rather see something along the lines of an enhanced View that provides this functionality. That’s what I did with the base component module in my sample app.
  • I’m super-glad to see Jasmine included in the test directory, but unfortunately the examples show how to write Jasmine tests, not Jasmine tests for a Backbone app. As a community, we definitely need to be showing more examples of how to test things, and this seems like a good opportunity to distribute that knowledge.

Overall

I feel a little silly that I’m just now getting around to spending any time with Backbone, and I know that I only scratched the surface, but I like what I saw. I think it’s important to take it for what it is: an uber-tiny library that gets you pointed in the right direction. What I really want to see are fuller-fledged frameworks that build on top of Backbone, because I think there’s a lot more that can be standardized beyond what Backbone offers. I’m hoping to have a bit more time in April to dig in, and hopefully I can flesh out some of these ideas into something useful.

Community Conferences


In 2010, I helped put on the first TXJS. We sold our first tickets for $29, and I think the most expensive tickets went for something like $129. We had about 200 people buy tickets, we had speakers like Douglas Crockford, Paul Irish, and John Resig, and we had sponsors like Facebook and Google. Our total budget was something like $30,000, and every out-of-town speaker had their travel and accommodations paid for.

In May, O’Reilly Media is holding another JavaScript conference in San Francisco, called FluentConf. I recently came to know that they are charging $100,000 for top-tier sponsorships, and that they are offering a 10-minute keynote as part of the package.

This turned my stomach, and not just because I believe it cheapens the experience of attendees, who will pay hundreds of dollars themselves. What really upset me was that a few weeks ago, I was approached to be on the speaker selection committee of FluentConf, and that conversation led me to discover that FluentConf would not be paying for speaker travel and accommodations. And so the other day, I tweeted:

conference #protip: save your money – and your speaking skills – for events that don’t sell their keynotes for $100k

Last night, I was at the Ginger Man in Austin, and I checked the Twitters, discovering that Peter Cooper, one of the chairs of FluentConf, had replied to a conversation that arose from that tweet:

@rmurphey @tomdale If you’re referring to Fluent, that is news to me.

I will accept the weird fact that the co-chair of a conference didn’t know its speaking slots were for sale – I gather that it is essentially a volunteer role, and the co-chairs aren’t necessarily in the driver’s seat when it comes to decisions like this. I let Peter know that, indeed, I had a PDF that outlined all the sponsorship options.

This is the part where, in some alternate reality, a mutual understanding of the offensiveness of this fact would have been achieved. What happened instead was a whole lot of name-calling, misquoting, and general weirdness.

Here’s the deal. Conferences can run their event however they want, and they can make money hand over fist. They can even claim they are giving JavaScript developers “an event of their own,” ignoring the existence of the actual community-run JavaScript events that have been around for years now. I probably won’t go to or speak at an event that makes money hand over fist, but I don’t have any problem with the existence of such events, or with people’s involvement with them. However, when a conference is making money hand over fist – my back of the napkin calculations would suggest that FluentConf stands to have revenues of well over a million dollars – then that conference has no excuse not to pay the relatively paltry costs associated with speaker travel and accommodations.

A conference does not exist without its speakers. Those who speak at an event – the good ones, anyway – spend countless hours preparing and rehearsing, and they are away from home and work for days. While I do not discount the benefits that accrue to good speakers, the costs of being a speaker are non-trivial – and that’s before you get into the dollar costs of travel and accommodations.

When an event is unwilling to cover even those hard costs – nevermind the preparation time and time away from work and home – it materially affects the selection of speakers. It’s even worse when those same conferences claim to desire diversity; the people they claim to want so badly are the very people most likely to be discouraged when they find out they have to pay their own way to the stage.

In the conversation last night, I made this point:

when only the people who can afford to speak can speak, then only the people who can afford to speak will speak.

Amy Hoy responded with a criticism of community-run conferences:

and when only ppl who can order a ticket in 3 seconds can afford to come, only ppl who can order a ticket in 3 seconds can come

I know that getting tickets to the actual community-run events is hard, but that is because the community-run events flat-out ignore the economics of supply and demand, choosing instead to sell tickets at affordable prices even if it means they will sell out in a heartbeat, leaving a boatload of potential profit on the table. And yet those events – JSConf, TXJS, and the like – have still figured out how to cover speaker costs and provide attendees and sponsors with unforgettable experiences.

When an event with revenues exceeding a million dollars is unwilling to cover those costs, while simultaneously selling speaking slots, I do not hesitate for a moment to call that event out, and I do not hesitate to call on respected members of the community to sever their ties with the event. I’m not embarrassed about it, and you can call me all the names you want.

Mulberry: A Development Framework for Mobile Apps


I’ll be getting on stage in a bit at CapitolJS, another great event from Chris Williams, the creator of JSConf and all-around conference organizer extraordinaire. My schtick at conferences in the past has been to talk about the pain and pitfalls of large app development with JavaScript, but this time is a little different: I’ll be announcing that Toura Mobile has created a framework built on top of PhoneGap that aims to eliminate some of those pains and pitfalls for mobile developers. We’re calling it Mulberry, and you’ll be seeing it on GitHub in the next few weeks.

tl;dr: go here and watch this video.

While the lawyers are dotting the i’s and crossing the t’s as far as getting the code in your hands – we’re aiming for a permissive license similar to the licenses for PhoneGap and Dojo – I wanted to tell you a little bit about it.

Mulberry is two things. First, it’s a set of command line tools (written in Ruby) that help you rapidly scaffold and configure an app, create content using simple Markdown and YAML, and test it in your browser, in a simulator, and on device. Second, and much more exciting to me as a JavaScript developer, it’s a framework built on top of the Dojo Toolkit for structuring an application and adding custom functionality in a sane way.

Mulberry lets you focus on the things that are unique to your application. It provides an underlying framework that includes a “router” for managing application state; built-in components and templates for displaying standard content types like text, audios, videos, feeds, and images; a simple API for defining custom functionality and integrating it with the system; and an HTML/CSS framework that uses SASS and HAML templates to make it easy to style your apps.

The basics of setting up an app are pretty well covered at the Mulberry site, but if you’re reading this, you’re probably a JavaScript developer, so I want to focus here on what Mulberry can do for you. First, though, let me back up and cover some terminology: Mulberry apps consist of a set of “nodes”; each node is assigned a template, and each template consists of components arranged in a layout. Nodes can have assets associated with them – text, audio, images, video, feeds, and data.

It’s the data asset that provides the most power to developers – you can create an arbitrary object, associate it with a node, and then any components that are in the template that’s being used to display the node will get access to that data.

A Twitter component offers a simple example. A node might have a data asset like this associated with it:

{ term : 'capitoljs', type : 'twitter' }

We could define a custom template for this page (mulberry create_template Twitter), and tell that template to include a Twitter component:

Twitter:
  screens:
    - name: index
      regions:
        -
          size: fixed
          scrollable: false
          components:
            - PageNav
        -
          size: flex
          scrollable: true
          components:
            - PageHeaderImage
            - custom:Twitter

Next, we’d define our Twitter component (mulberry create_component Twitter), which would create the skeleton of a component file:

dojo.provide('client.components.Twitter');

toura.component('Twitter', {
  componentTemplate : dojo.cache('client.components', 'Twitter/Twitter.haml'),

  prep : function() {

  },

  init : function() {

  }
});

One of the things the skeleton contains is a reference to the template for the component. The create_component command creates this file, which defines the DOM structure for the component. For the sake of this component, that template will just need to contain one line:

%ul.component.twitter

As I mentioned earlier, Mulberry components automatically get access to all of the assets that are attached to the node they’re displaying. This information is available as an object at this.node. Mulberry components also have two default methods that you can implement: the prep method and the init method.

The prep method is an opportunity to prepare your data before it’s rendered using the template; we won’t use it for the Twitter component, because the Twitter component will go out and fetch its data after the template is rendered. This is where the init method comes in – this is where you can tell your component what to do. Here’s what our Twitter component ends up looking like:

dojo.provide('client.components.Twitter');

mulberry.component('Twitter', {
  componentTemplate : dojo.cache('client.components', 'Twitter/Twitter.haml'),
  tweetTemplate : dojo.cache('client.components', 'Twitter/Tweet.haml'),

  init : function() {
    var data = dojo.filter(this.node.data, function(d) {
          return d.type === 'twitter'
        })[0].json;

    $.ajax('http://search.twitter.com/search.json?q=' + data.term, {
      dataType : 'jsonp',
      success : $.proxy(this, '_onLoad')
    });
  },

  _onLoad : function(data) {
    var tweets = data.results,
        tpl = mulberry.haml(this.tweetTemplate),
        html = $.map(tweets, function(tweet) {
          tweet.link = 'http://twitter.com/capitoljs/status/' + tweet.id_str;

          tweet.created_at = dojo.date.locale.format(
            new Date(tweet.created_at), {
              datePattern : 'EEE',
              timePattern : 'h:m a'
            }
          );

          tweet.text = tweet.text.replace(
            /@(\S+)/g,
            "<a href='http://twitter.com/#!/$1'>@$1</a>"
          );

          return tpl(tweet);
        }).join('');

    this.$domNode.html(html);
    this.region.refreshScroller();
  }
});

Note that when we define the data variable in the init method, we look at this.node.data, which is an array of all of the data objects associated with the node. We filter this array to find the first data object that is the right type – this means we can have lots of different data objects associated with a given node.

Note also that there’s a property this.$domNode that we’re calling jQuery methods on, and that we’re using jQuery’s $.ajax – Mulberry apps come with jQuery enabled by default, and if it’s enabled, helpers like this.$domNode become available to you. This means that very little knowledge of Dojo is required to start adding your own functionality to an app – if you need it, though, the full power of the Dojo Toolkit is available to you too.

Here’s what our component ends up looking like, with a little bit of custom CSS applied to our app:

screenshot

This is a pretty basic demo – Twitter is, indeed, the new hello world – but I hope it gives you a little bit of an idea about what you might be able to build with Mulberry. We’ve been using it in production to create content-rich mobile apps for our users for months now (connected to a web-based CMS instead of the filesystem, of course), and we’ve designed it specifically to be flexible enough to meet arbitrary client requests without the need to re-architect the underlying application.

If you know JavaScript, HTML, and CSS, Mulberry is a powerful tool to rapidly create a content-rich mobile application while taking advantage of an established infrastructure, rather than building it yourself. I’m excited to see what you’ll do with it!

Switching to Octopress


I’m taking a stab at starting a new blog at rmurphey.com, powered by Octopress, which is a set of tools, themes, and other goodness around a static site generator (SSG) called jekyll. A couple of people have noticed the new site and wondered what I’m doing, so I thought I’d take a couple of minutes to explain.

My old blog at blog.rebeccamurphey.com is managed using Posterous. It used to be a self-hosted WordPress site, but self-hosted WordPress sites are so 2009. One too many attacks by hackers made it way more trouble than it seemed to be worth. Posterous made switching from a WordPress install pretty easy, so I did that. All told, it took a few hours, and I was pretty happy.

For a few reasons, the old blog isn’t going anywhere:

  • I ran into some trouble importing the old content into jekyll. I was tired and I didn’t investigate the issues too much, so they’re probably solvable, but …
  • Some of the old content just isn’t that good, and since time is a finite resource, I don’t want to get too wrapped up in moving it over. Plus …
  • Frighteningly or otherwise, some of my posts have become reference material on the internet. If I move them, I’ve got to deal with redirections, and I have a feeling that’s not going to be an easy task with Posterous.

In hindsight, I should have switched directly from WordPress to an SSG. Despite my many complaints about Posterous – misformatted posts, lack of comment hyperlinks, a sign-in requirement for commenting, and lots more – in the end my decision to switch to a static site generator instead was more about having easy control over my content on my filesystem.

This article explains it well, but the bottom line, I think, is that static site generators are blogging tools for people who don’t need all the bullshit that’s been added to online tools in the interest of making them usable by people who don’t know wtf they’re doing. So, yes, to use an SSG, you have to know wtf you’re doing, and for me that’s a good thing: the tool gets out of my way and lets me focus on the writing.

As for Octopress, it seems pretty damn nifty – the default theme looks gorgeous on my desktop and on my phone, and it seems they’ve taken care to put common customization points in a single sass file. All that aside, though, one of my favorite parts about it is that my content is truly my content. If Octopress pisses me off – though I hope it won’t! – then I can simply take my markdown files and put them in some other SSG, upload the whole thing to my GitHub pages, and be done with it. Win all around.

Using Object Literals for Flow Control and Settings


I got an email the other day from someone reading through jQuery Fundamentals – they’d come across the section about patterns for performance and compression, which is based on a presentation Paul Irish gave back at the 2009 jQuery Conference in Boston.

In that section, there’s a bit about alternative patterns for flow control – that is, deciding what a program should do next. We’re all familiar with the standard if statement:

function isAnimal(thing) {
  if (thing === 'dog' || thing === 'cat') {
    console.log("yes!");
  } else {
    console.log("no");
  }
}

What stumped the person who emailed me, though, was when the same logic as we see above was written like this:

function isAnimal(thing) {
  if (({ cat : 1, dog : 1 })[ thing ]) {
    console.log("yes!");
  } else {
    console.log("no");
  }
}

What’s happening here is that we’re using a throwaway object literal to express the conditions under which we will say a thing is an animal. We could have stored the object in a variable first:

function isAnimal(thing) {
  var animals = {
    cat : 1,
    dog : 1
  };

  if (animals[ thing ]) {
    console.log("yes!");
  } else {
    console.log("no");
  }
}

However, that variable’s only purpose would be to provide this one lookup, so it can be argued that the version that doesn’t bother setting the variable is more economical. Reasonable people can probably disagree about whether this economy of bytes is a good tradeoff for readability – something like this is perfectly readable to a seasoned developer, but potentially puzzling otherwise – but it’s an interesting example of how we can use literals in JavaScript without bothering to store a value in a variable.

The pattern works with an array, too:

function animalByIndex(index) {
  return [ 'cat', 'dog' ][ index ];
}

It’s also useful for looking up values generally, which is how I find myself using it most often these days in my work with Toura, where we routinely branch our code depending on the form factor of the device we’re targeting:

function getBlingLevel(device) {
  return ({
    phone : 100,
    tablet : 200
  })[ device.type ];
}

As an added benefit, constructs that use this pattern will return the conveniently falsy undefined if you try to look up a value that doesn’t have a corresponding property in the object literal.
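
Paired with a default, that falsy undefined makes for compact fallback logic – building on the example above:

function getBlingLevel(device) {
  return ({
    phone : 100,
    tablet : 200
  })[ device.type ] || 0; // unknown device types fall back to 0
}

getBlingLevel({ type : 'desktop' }); // 0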

A great way to come across techniques like this is to read the source code of your favorite library (and other libraries too). Unfortunately, once discovered, these patterns can be difficult to decipher, even if you have pretty good Google fu. Just in case your neighborhood blogger isn’t available, IRC is alive and well in 2011, and it’s an excellent place to get access to smart folks eager to take the time to explain.