Using RequireJS to load UglifyJS’s parser in the browser

UglifyJS has a great parser (parse-js) which is written as a CommonJS module. This works great in Node but not so great in the browser. The suggested route for using it browser-side is to manually wrap it in an AMD define or pull out the exports yourself. That’s easy enough to do but likely a pain to maintain going forward.

RequireJS has support for custom plugins – effectively code that can process the content of a file before it is passed into RequireJS’s AMD module system. Ben Hockey has put together a simple CommonJS module loader plugin (cjs) which automatically wraps the content of a CommonJS module with an AMD define.

This makes a great example of why RequireJS is so powerful. For this example I’ve just git clone‘d the content of the UglifyJS repo into my project and used the following RequireJS configuration to tell it where the CommonJS module is based.

	var require = {
		baseUrl: '/js/requirejs',
		packages: [{
			name: 'uglify-js',
			main: 'uglify-js',
			location: '../UglifyJS'
		}]
	};

<script src="/js/requirejs/require.js"></script>

Once that is in place it’s just a matter of asking for the module and prefixing the module id with the cjs plugin.

require(['cjs!uglify-js/lib/parse-js'], function(parser) {
	var ast = parser.parse('function saySomething() { alert("Hello!"); }');
	// ...
});

These simple steps are enough to give us the following parse tree.
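parse-js represents the AST as plain nested arrays, where the first element of each array names the node type. The following is a hedged sketch of the shape of the tree for our snippet, illustrative rather than verbatim parser output:

```javascript
// Illustrative sketch of parse-js's array-based AST for the snippet above.
// Node tags and nesting are approximate, not verbatim parser output.
var ast = ["toplevel", [
    ["defun", "saySomething", [], [               // function saySomething() {
        ["stat", ["call", ["name", "alert"], [    //     alert("Hello!");
            ["string", "Hello!"]
        ]]]
    ]]
]];

console.log(ast[0]);       // "toplevel"
console.log(ast[1][0][0]); // "defun" – the function declaration node
```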


JavaScript Modules

One of the first challenges developers new to JavaScript face when building large applications is how to organize their code. Most start by embedding hundreds of lines of code between a <script> tag, which works but quickly turns into a mess. The difficulty is that JavaScript doesn’t offer any obvious help with organizing our code. Where C# has using and Java has import, JavaScript has nothing. This has forced JavaScript authors to experiment with different conventions and to use the language we do have to create practical ways of organizing large JavaScript applications.

The patterns and tools and practices that will form the foundation of Modern JavaScript are going to have to come from outside implementations of the language itself

Rebecca Murphey

The Module Pattern

One of the most widely used approaches to solve this problem is known as the Module Pattern. I’ve attempted to explain a basic example below and talk about some of its properties. For a much better description and a fantastic run down of different approaches take a look at Ben Cherry’s post – JavaScript Module Pattern: In-Depth.

(function(lab49) {

	function privateAdder(n1, n2) {
		return n1 + n2;
	}

	lab49.add = function(n1, n2) {
		return privateAdder(n1, n2);
	};

})(window.lab49 = window.lab49 || {});

In the above example we’ve used a number of basic features from the language to create constructs like what we see in languages like C# and Java.


You’ll notice that the code is wrapped inside a function which is invoked immediately (check the last line). By default in the browser, JavaScript files are evaluated in the global scope, so anything we declared inside our file would be available everywhere. Imagine if in lib1.js we had a var name = '...' statement and then in lib2.js we had another var name = '...' statement. The second var statement would replace the value of the first – not good. However, as JavaScript has function scoping, in the above example everything is declared in its own scope away from the global. This means anything in this function will be isolated from whatever else is going on in the system.


In the last line you’ll notice that we’re assigning window.lab49 to either itself or to an empty object literal. It looks a bit odd but let’s walk through an imaginary system where we have a number of js files all using the above function wrapper.

The first file to be included will evaluate that OR expression and find that the left-hand side is undefined. This is a falsy value, so the OR will go ahead and evaluate the right-hand side, in this case an empty object literal. The OR is an expression that returns its result, which then gets assigned to the global window.lab49.

Now the next file to use this pattern will get to the OR statement and find that window.lab49 is now an instance of an object – a truthy value. The OR statement will short circuit and return this value that is immediately assigned to itself – effectively doing nothing.

The result of this is that the first file in will create our lab49 namespace (just a JavaScript object) and every subsequent file using this construct will just reuse the existing instance.
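Here is a small sketch of that behavior, using a plain object in place of window so it can run anywhere:

```javascript
// Sketch: two "files" using the wrapper; the second reuses the namespace
// created by the first instead of replacing it. `win` stands in for window.
var win = {};

// lib1.js
(function (lab49) {
    lab49.fromLib1 = true;
})(win.lab49 = win.lab49 || {});

var created = win.lab49;

// lib2.js – win.lab49 is now truthy, so the OR short-circuits
(function (lab49) {
    lab49.fromLib2 = true;
})(win.lab49 = win.lab49 || {});

console.log(win.lab49 === created);                  // true – same object
console.log(win.lab49.fromLib1, win.lab49.fromLib2); // true true
```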

Private State

As we just discussed, because everything is declared inside a function it lives in the scope of that function, not the global scope. This is great for isolating our code, but it also means that nothing outside could call it. Pretty useless.

As we also just discussed, we’re creating a window.lab49 object to effectively namespace our content. This lab49 variable is available globally as it’s attached to the window object. To expose things outside of our module – publicly, you might say – all we need to do is attach values to that global variable, much like we’re doing with our add function in the above example. Now, outside of our module, our add function can be called with lab49.add(2, 2).

As another result of declaring our values inside this function, if a value isn’t explicitly exposed by attaching it to our global namespace or something else outside the module, there is no way for external code to reach it. In practice, we’ve just created private values.
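A quick check of that privacy, again with a plain object standing in for window:

```javascript
// privateAdder is trapped in the function scope; only add is exposed.
var win = {}; // stand-in for the browser's window object

(function (lab49) {
    function privateAdder(n1, n2) {
        return n1 + n2;
    }
    lab49.add = function (n1, n2) {
        return privateAdder(n1, n2);
    };
})(win.lab49 = win.lab49 || {});

console.log(win.lab49.add(2, 2));           // 4
console.log(typeof win.lab49.privateAdder); // "undefined" – unreachable
```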

CommonJS Modules

CommonJS is a group primarily made up of authors of server-side JavaScript runtimes who have attempted to standardize exposing and accessing modules. It’s worth noting, however, that their proposed module system is not a standard from the group that creates the JavaScript standard; it’s more of an informal convention between the authors of server-side JavaScript runtimes.

I generally support the CommonJS idea, but let’s be clear: it’s hardly a specification handed down by the gods (like ES5); it’s just some people discussing ideas on a mailing list. Most of these ideas are without actual implementations.

Ryan Dahl, creator of node.js

The core of the Modules specification is relatively straight forward. Modules are evaluated in their own context and have a global exports variable made available to them. This exports variable is just a plain old JavaScript object which you can attach things to, similar to the namespace object we demonstrated above. To access a module you call a global require function, giving an identifier for the package you are requesting. This evaluates the module and returns whatever was attached to its exports. The module is then cached for subsequent require calls.

// calculator.js
exports.add = function(n1, n2) {
	return n1 + n2;
};

// app.js
var calculator = require('./calculator');

calculator.add(2, 2);

If you’ve ever played with Node.js you’ll probably find the above familiar. The way Node implements CommonJS modules is surprisingly simple: looking at a module inside node-inspector (a Node debugger) will show its content wrapped inside a function that is passed values for exports and require – very similar to the hand-rolled modules we showed above.
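That wrapping idea can be sketched with a toy loader (names and details here are illustrative, not Node’s actual internals):

```javascript
// Toy CommonJS loader: each module's source is wrapped in a function that
// receives exports and require, and results are cached by id.
var sources = {
    calculator: "exports.add = function(n1, n2) { return n1 + n2; };"
};
var cache = {};

function loadModule(id) {
    if (cache[id]) return cache[id];     // subsequent requires hit the cache
    var exports = {};
    var wrapper = new Function('exports', 'require', sources[id]);
    wrapper(exports, loadModule);        // require inside a module is just loadModule
    return (cache[id] = exports);
}

var calculator = loadModule('calculator');
console.log(calculator.add(2, 2));                    // 4
console.log(loadModule('calculator') === calculator); // true – cached
```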

There’s a couple of node projects (Stitch and Browserify) which bring CommonJS Modules to the browser. A server-side component will bundle these individual module js files into a single js file with a generated module wrapper around them.

CommonJS was mainly designed for server-side JavaScript runtimes, and as a result it has a couple of properties which can make it difficult to use for organizing client-side code in the browser.

  • require must return immediately – this works great when you already have all the content but makes it difficult to use a script loader to download the script asynchronously.
  • One module per file – to combine CommonJS modules they need to be wrapped in a function and then organized in some fashion. This makes them difficult to use without some server component like the ones mentioned above and in many environments (ASP.NET, Java) these don’t yet exist.

Asynchronous Module Definition

The Asynchronous Module Definition (commonly known as AMD) has been designed as a module format suitable for the browser. It started life as a proposal from the CommonJS group but has since moved onto GitHub and is now accompanied by a suite of tests to verify compliance to the AMD API for module system authors.

The core of AMD is the define function. The most common way to call define accepts three parameters – the name of the module (meaning that it’s no longer tied to the name of the file), an array of module identifiers that this module depends on, and a factory function which will return the definition of the module. (There are other ways to call define – check out the AMD wiki for full details).

define('calculator', ['adder'], function(adder) {
	return {
		add: function(n1, n2) {
			return adder.add(n1, n2);
		}
	};
});
Because the module definition is wrapped in the define call, you can happily have multiple modules inside a single js file. Also, as the module loader has control over when the define factory function is invoked, it can resolve the dependencies in its own time – handy if those modules have to first be downloaded asynchronously.
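The contract can be sketched with a toy resolver (named req here to avoid clashing with Node’s own require; real loaders also handle asynchronous script loading):

```javascript
// Toy AMD resolver: define records factories; req invokes a factory only
// after resolving its dependencies, caching the result.
var registry = {}, cache = {};

function define(name, deps, factory) {
    registry[name] = { deps: deps, factory: factory };
}

function req(name) {
    if (cache[name]) return cache[name];
    var mod = registry[name];
    var args = mod.deps.map(req);        // resolve dependencies first
    return (cache[name] = mod.factory.apply(null, args));
}

define('adder', [], function () {
    return { add: function (a, b) { return a + b; } };
});

define('calculator', ['adder'], function (adder) {
    return { add: function (n1, n2) { return adder.add(n1, n2); } };
});

console.log(req('calculator').add(2, 3)); // 5
```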

A significant effort has been made to remain compatible with the original CommonJS module proposal. There is special behavior for using require and exports within a module factory function meaning that traditional CommonJS modules can be dropped right in.

AMD looks to be becoming a very popular way to organize client-side JavaScript applications. Whether it be through module resource loaders like RequireJS or curl.js, or JavaScript applications that have recently embraced AMD like Dojo.

Does this mean JavaScript sucks?

The lack of any language-level constructs for organizing code into modules can be quite jarring for developers coming from other languages. However, as this deficiency forced JavaScript developers to come up with their own patterns for structuring modules, we’ve been able to iterate and improve as JavaScript applications have evolved. Follow the Tagneto blog for some insight into this.

Imagine if this type of functionality had been included in the language 10 years ago. It’s unlikely they would have imagined the requirements for running large JavaScript applications on the server, loading resources asynchronously in the browser, or including resources like text templates that loaders like RequireJS are able to do.

Modules are being considered as a language level feature for Harmony/ECMAScript 6. Thanks to the thought and hard work of authors of module systems over the past few years, it’s much more likely that what we end up getting will be suitable for how modern JavaScript applications are built.

Introducing StitchIt – The CommonJS Module packager for ASP.NET MVC


One of the biggest challenges writing large client-side JavaScript single page applications is how you actually manage a large amount of JavaScript. How do you structure the content of the files? Where do you include all the script tags? What order do the script tags have to appear in? It’s all a bit of a headache.

The CommonJS Modules specification proposes a method of structuring JavaScript into self-contained modules which specify what they require to run and what they expose externally. At its most basic – a global require function loads a module by an identifier (which typically looks like a file path), and there’s a global exports object that the module can attach its API to, which is returned by the require call. If you’ve ever tried out Node.js it’ll be familiar to you.

// calc.js
exports.add = function(n1, n2) {
	return n1 + n2;
};

// app.js
var calculator = require('calc');

var result = calculator.add(2, 3); // 5


StitchIt is based on a great library for Node called Stitch which provides a CommonJS Module API in the browser and will automatically package your JavaScript into modules.

Disclaimer: StitchIt is not yet ready for use. It’s the result of only a few hours work on a Sunday afternoon to serve as a prototype for how CommonJS modules could really be the way to go for structuring large JavaScript applications. The code is probably awful, there’s absolutely no caching so it will rebuild everything on every request and there’s no minification. I hope to make it production ready in the coming weeks but for the time being it’s just something to look at.

That said, let’s dig into a demonstration of how it works. You start by placing all the JavaScript you want packaged into a directory – in this case I’ve used ~/Scripts/app. We initialize StitchIt in the application’s RegisterRoutes method and expose its packaged content on a path.

public static void RegisterRoutes(RouteCollection routes)
{
	// ... route registration, including StitchIt's packaged-content route ...
}

Inside the Scripts/app directory we’ll make a couple of our JavaScript files which will form a basic application. For this example I’m using the sample code provided in the CommonJS Module specification itself.


// math.js
exports.add = function() {
    var sum = 0, i = 0, args = arguments, l = args.length;
    while (i < l) {
        sum += args[i++];
    }
    return sum;
};


// increment.js
var add = require('math').add;

exports.increment = function(val) {
    return add(val, 1);
};

var inc = require('increment').increment;
var a = 1;
var b = inc(a);

console.log(b); // 2

With these in place we can pull down the file generated by StitchIt in a script tag and then use a require function attached to the global stitchIt object to execute our program.


<script src="/app.js"></script>

Wrapping third party JavaScript libraries into Modules

Very few client-side JavaScript libraries are built as CommonJS Modules, so how do we use them from our Modularized code? Let’s take jQuery as it’s probably the first JavaScript library people will want to use.

We’ll ensure that jQuery is loaded separately via a normal script tag before loading the StitchIt content. Then we’ll create a wrapper which instead of attaching an API to exports, will completely replace the module exports with the jQuery object itself grabbed from the window.


// jquery.js
module.exports = window.jQuery;

Using this wrapper module we can just require it like a normal library.


var $ = require('jquery');

$(function () {
    $('body').text('Hi from jQuery');
});

Beyond JavaScript – Managing related client-side templates

If you’ve done much with KnockOut or jQuery templating you’ve probably found yourself dumping templates into script blocks in the main page body. Although this works for simple scenarios, I quickly found this practice horribly difficult to manage for large single page apps. Following RequireJS’s and Stitch’s example, I added support for adding *.html files to your JavaScript application directory. StitchIt will wrap these into a CommonJS JavaScript module so they can be required like any other JS dependency.
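The wrapping itself can be sketched like this (the generated source below is a hypothetical illustration, not StitchIt’s actual output):

```javascript
// Hypothetical sketch: turn an .html file's markup into the source of a
// CommonJS module whose exports is the template string.
function wrapTemplate(id, html) {
    return '// generated from ' + id + '.html\n' +
           'module.exports = ' + JSON.stringify(html) + ';\n';
}

var moduleSource = wrapTemplate('views/personViewTmpl', "<span>Hi, I'm ${name}</span>");
console.log(moduleSource);

// Evaluating the generated source yields the template string as the export.
var tmplModule = {};
new Function('module', moduleSource)(tmplModule);
console.log(tmplModule.exports); // "<span>Hi, I'm ${name}</span>"
```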


<!-- views/personViewTmpl.html -->
<span>Hi, I'm ${name}</span>


// views/personView.js
var $ = require('jquery'),
    template = require('./personViewTmpl'); // Just another module

function PersonView(el, name) {
    $.tmpl(template, { name: name }).appendTo(el);
}

exports.PersonView = PersonView;


var $ = require('jquery'),
    PersonView = require('views/personView').PersonView;

$(function () {
    var davidView = new PersonView($('body'), 'David');
});

Also notice the use of global and relative identifiers for the require call – this allows us to nicely organize our JavaScript into sub-directories that can be as deep as we need. Relative module identifiers are evaluated relative to where that module resides.
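That resolution rule can be sketched with a hypothetical resolver, assuming './' and '../' segments are resolved against the requiring module’s directory:

```javascript
// Hypothetical sketch of relative module-id resolution.
function resolveId(fromId, id) {
    if (id.charAt(0) !== '.') return id; // global identifiers pass through
    var parts = fromId.split('/').slice(0, -1).concat(id.split('/'));
    var out = [];
    parts.forEach(function (p) {
        if (p === '..') out.pop();       // step up a directory
        else if (p !== '.') out.push(p); // '.' means "current directory"
    });
    return out.join('/');
}

console.log(resolveId('views/personView', './personViewTmpl')); // "views/personViewTmpl"
console.log(resolveId('views/personView', '../math'));          // "math"
console.log(resolveId('views/personView', 'jquery'));           // "jquery" – global id
```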


I know what you’re thinking – JavaScript is so 2010. Well, thanks to Jurassic, a wonderful JavaScript runtime for .NET, you can simply drop .coffee files into your app directory and they’ll automatically be compiled to JavaScript when the StitchIt package is built.

class Person
	constructor: (@name) ->

	sayHi: () ->
		"Hi, I'm #{@name}"

exports.Person = Person


var Person = require('person').Person;

var david = new Person('David');

console.log( david.sayHi() ); // Hi, I'm David

What needs to be done before it’s ready?

So there’s enough here to show off what I believe are very important concepts, but it’s still a fair bit off being usable. Currently all of the modules get packaged into a single .js file. Although some may disagree, I prefer wrapping all the code up into a single file which can be downloaded once and then cached forever on the client. The biggest issue right now is that this .js file gets completely regenerated on every request – this may work okay for development, but ideally I’ll want to compare timestamps or something similar so the package is only regenerated when its source files have changed.

I’ll also want to integrate some form of JS minification into StitchIt. Ideally I’d find a way of using SquishIt (a great tool which inspired the name). If that’s not possible I’ll probably integrate Google’s Closure Compiler directly into StitchIt.

There’s also currently only a fraction of the CommonJS Module specification implemented. The major requirement will be support for require.paths so we can control where modules are loaded from – although I’m not sure how much sense this makes in a browser at this point.

Grand Ideas

Looking further into the future I’ve got some fairly grand plans. If I support exposing modules from resources in .NET assemblies I can imagine providing very cool NuGet integration support.

Install-Package knockout
var knockout = require('knockout');

I also wonder how difficult it would be to evaluate these CommonJS modules in Visual Studio itself to go towards providing some kind of AutoComplete/Intellisense support.

Probably very difficult.

Show me the code

All the code is up at GitHub – StitchIt. Please go browse and fork. I’d love to hear your thoughts.

Using Isotope with Knockout.js

Knockout.js is a JavaScript library for writing MVVM style HTML applications. Isotope is a super cool jQuery plugin for fluid list animation – go play around with it here, it’s really impressive.

A question from a colleague prompted me to look at the Knockout.js documentation for the first time in a while, and I noticed that there’s now an ‘afterAdd’ option available for the foreach binding. This allows us to hook in some code to manipulate an element once it’s been added to the list, intended for animation. I wondered if it was possible to insert Isotope into this process and it turns out it’s really easy – take a look at it working together here.

The code to do it was also really simple and demonstrates quite how handy Knockout is. I’m sure there’s some debate to have about whether the function for manipulating the element in the view really belongs on the ViewModel, but I’ll leave that for another day.

var $wordList = $('#word-list'),
    wordsViewModel = {
        words: ko.observableArray([]),
        newWord: ko.observable(''),
        add: function() {
            this.words.push( this.newWord() );
        },
        wordAdded: function(el) {
            $wordList.isotope( 'appended', $(el) );
        }
    };

ko.applyBindings(wordsViewModel);

$wordList.isotope({
    layoutMode: 'fitRows',
    itemSelector: '.word'
});
<form data-bind="submit: add">
    <input placeholder="New Word" data-bind="value: newWord" autofocus />
</form>

<ul id="word-list" data-bind="template: { name: 'word-item-tmpl', foreach: words, afterAdd: wordAdded }">
</ul>

<script id="word-item-tmpl" type="text/x-jquery-tmpl">
    <li class="word">${ $data }</li>
</script>

My NYC CodeCamp Talks – Javascript and Node.js

Yesterday I did a couple of talks at the NYC CodeCamp, which Lab49 was sponsoring. It was a great event and I really enjoyed meeting many of the 400 developers who attended. My one regret was that because I was presenting myself I missed great talks from my fellow Lab49ers Scott Weinstein and Doug Finke.

My first talk was an introduction to JavaScript where I explained that despite JavaScript looking an awful lot like C# and Java, it’s in fact not much like them at all. But don’t panic – it’s an extremely simple language, and once you understand its basics (prototype-based inheritance, hoisting, functions, etc.) you’ll be able to understand most JavaScript code out there.

My second talk was an introduction to Node.js and how running JavaScript on the server is in fact far from the worst idea. The intention of the talk was not to try to convert a room full of ASP.NET developers to Node, but to explain where Node is innovating in the web platform space and how we’ll probably see a lot more of these techniques in the future on every platform (code sharing between client and server, simple APIs for real-time web communication). The demos I did during the talk can be found on GitHub.

I’d be really happy to give these talks again so please get in touch with me if you’d be interested in having me speak at any development groups.

Put decent JavaScript documentation in your address bar

As we all know, the highest-ranked (by Google) site for JavaScript documentation is W3Schools. The problem is, it kind of sucks. Over the past few months I’ve found the highest quality JavaScript documentation by far can be found at the Mozilla Developer Network – just look at Date for instance.

I use Chrome pretty much everywhere now, and using its custom search engine feature we can put MDN right into the address/search/whatever bar. Just right-click on the text box itself and hit “Edit Search Engines”. From there, add a new engine with mdn as the keyword and MDN’s search URL. Now just type mdn and what you’re looking for.

It’s the little things.

Making Macros in CoffeeScript


JavaScript dependency management is a hot topic at the moment (see RequireJS, Dojo and StealJS). This got me thinking: why do we just treat JavaScript as dumb files to be served up to the client? Now that we have web servers that literally speak the same language, aren’t there great possibilities yet to be discovered? Can we write code that seamlessly merges the divide between client and server?

Well, honestly? I’ve got no idea. I got a little stuck on the first problem that came to mind – how do we get the server to understand what the code is intending to do on the client? Sure, using Node we could happily execute our JavaScript. But if we wanted to have some smarts about how we deal with it – say, analyze a piece of JavaScript to determine what it depends on – we’d actually have to parse the code. Now I’m sure this is possible; clearly web browsers and Node parse JavaScript quite happily. But the thought of trying to deal with that myself didn’t quite make me giddy with excitement. If only there was a language like JavaScript that had easily usable parsers on hand to let us mess with the language…


Enter CoffeeScript. In its own words…

CoffeeScript is a little language that compiles into JavaScript. Underneath all of those embarrassing braces and semicolons, JavaScript has always had a gorgeous object model at its heart. CoffeeScript is an attempt to expose the good parts of JavaScript in a simple way.

A nice language that will compile into JavaScript, which also exposes the good parts of JavaScript’s (gorgeous) object model in a simple way? Well doesn’t that just sound perfect. Sure enough, it’s pretty easy to compile a CoffeeScript program from JavaScript.

var coffee = require('coffee-script');

var nodes = coffee.nodes(coffee.tokens("a = 2 + 2"));

console.log( nodes.compile() ); // var a = 2 + 2;

The important bit for this post, is that we can get both the token stream and the AST nodes themselves before they’re finally compiled into JavaScript. These nodes are just simple JavaScript objects thrown together in a graph. To make messing with them even easier, the Node structures are extremely well documented on the CoffeeScript site (No really, go look at it – it’s the most attractive documentation I’ve seen in a while). For the statement “a = 2 + 2”, the node graph looks much like below:

Assign
  Value "a"
  Op +
    Value "2"
    Value "2"

As a first experiment, I wrote a Visitor object which would visit each node in the graph and if certain conditions were met, replace the node with a new one. In this case, I’m looking for a method call like “ADD x, y”, then I’d replace the node with another in the form “x + y”.

var addReplacementVisitor = {
	onCall: function(n, replaceCallback) {
		if (n.variable.base.value === "ADD") {
			var addOp = new nodes.Op('+', n.args[0], n.args[1]);
			replaceCallback(addOp);
		}
	}
};
So I can imagine what you’re thinking: “well done Dave, you’ve managed to replace an ADD call with an add operation. Yeah, super useful…”. My original idea was to build on this and have more advanced Visitors which would transform the node graph in grander ways. However, these things are kind of difficult to write, and doing anything even slightly more than trivial took an awful lot of code. Fortunately, I happened to show this to our product designer Eric Wright, who took one look at it and remarked – “ah, like Macros in Lisp” (yeah, a talented designer who’s also familiar with Lisp, way to make me feel inferior). Lisp allows you to define “macros”: things that look a lot like functions but actually act more as a find-and-replace on the language/AST itself. (There are a lot of places online which will give you a proper rundown.)

(defmacro swap (a b)
    `(let ((temp ,a))
       (setf ,a ,b)
       (setf ,b temp)))

CoffeeScript Macros

This got me thinking – by far the easiest way of representing a graph of CoffeeScript nodes is CoffeeScript code itself. So in my first prototype, I have a CoffeeScript file designated to define macros, and then a CoffeeScript source file which the defined Macros are applied to. So to do something like the above we’d define a swap Macro like…

SWAP = (x,y) ->
	$tmp = x
	x = y
	y = $tmp

Quite straightforward stuff – we’re just creating a function named SWAP which takes two variables (x and y) and swaps them around. Hopefully you’re wondering about the significance of $tmp and why I’ve named it a bit oddly; we’ll get to that in a moment. Imagine it working on the following CoffeeScript source:

a = 1
b = 2
c = 3
d = 4
SWAP a, b
SWAP c, d

The Macro really is just a find and replace, so when it’s found the SWAP method in the above, it’ll replace it with the body of the Macro. If we were to do this twice in the same scope, like the above, we’d expect to see two $tmp variables declared, which wouldn’t be good. To prevent this, any variable in the Macro scope beginning with $ will be renamed to something unique. So in my quick prototype, compiling the above would result in the following JavaScript:

var a, b, c, d, __tmp0, __tmp1;
a = 1;
b = 2;
c = 3;
d = 4;
__tmp0 = a;
a = b;
b = __tmp0;
__tmp1 = c;
c = d;
d = __tmp1;
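The renaming itself can be sketched as a tiny source-to-source pass (a hypothetical counter-based scheme for illustration, not the prototype’s actual code):

```javascript
// Rename every $-prefixed identifier in a macro body to a unique name,
// keeping repeated uses of the same identifier consistent within the body.
var gensymCounter = 0;

function hygienify(macroBody) {
    var renames = {};
    return macroBody.replace(/\$[A-Za-z_]\w*/g, function (name) {
        if (!renames[name]) renames[name] = '__tmp' + gensymCounter++;
        return renames[name];
    });
}

var expanded = hygienify('$tmp = x\nx = y\ny = $tmp');
console.log(expanded);
// __tmp0 = x
// x = y
// y = __tmp0
```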

Using your body

Usually the expressions passed to a Macro are just copied directly into the Macro’s body. But what if we wanted to wrap a Macro around a whole block of code? Well, there’s a special $body argument convention that will help us cope with that. Where most arguments are just directly copied, the $body variable is first “unwrapped” and just the expressions in its body will be copied. This allows us to pass a function (typically as a Macro’s last argument) and treat it as the body of the Macro’s code. The example below hopefully demonstrates this better. Imagine we want to wrap a try…catch around all of our code so that all exceptions are swallowed.

ERRZLESS = ($body) ->
	try
		$body
	catch e
		console.log "IGNORED ERROR: #{e}"

Note how we use this macro by supplying a function as the body.

ERRZLESS ->
	throw "ARGGGGSSS"

This will produce the following JavaScript.

try {
  throw "ARGGGGSSS";
} catch (e) {
  console.log("IGNORED ERROR: " + e);
}

So what’s the point?

Hurrah, Macros in CoffeeScript! Well, not really. This was only a little experiment of mine to see how easy it is to mess with CoffeeScript before it’s compiled. Fortunately, as it turns out, it’s pretty easy.

So where is this useful? The techniques I’ve discussed are super useful for building internal DSLs and for meta-programming. It also looks like there’s some serious work going on to bring static metaprogramming to CoffeeScript, which would give us proper Macros and an awful lot more.

Show Me The Code

All of the code for this little experiment can be found up on GitHub. If you pull it down it can be executed using the command: