My NYC CodeCamp Talks – JavaScript and Node.js

Yesterday I gave a couple of talks at the NYC CodeCamp, which Lab49 was sponsoring. It was a great event and I really enjoyed meeting many of the 400 developers who attended. My one regret was that, because I was presenting myself, I missed great talks from my fellow Lab49ers Scott Weinstein and Doug Finke.

My first talk was an introduction to JavaScript where I explained that despite JavaScript looking an awful lot like C# and Java, it's in fact not much like them at all. But don't panic – it's an extremely simple language and once you understand its basics (prototype-based inheritance, hoisting, functions, etc.), you'll be able to understand most JavaScript code out there.
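To give a flavour of a couple of those basics, here's a small sketch in plain JavaScript (my own illustration, not a snippet from the talk):

```javascript
// Hoisting: `var` declarations are moved to the top of their function,
// so the variable exists (as undefined) before the line that assigns it.
function hoistingDemo() {
  var before = typeof x;  // "undefined", not a ReferenceError
  var x = 10;
  return before;
}

// Prototype-based inheritance: objects delegate straight to other
// objects rather than to classes.
var animal = {
  speak: function() { return this.name + " makes a sound"; }
};
var dog = Object.create(animal);  // dog's prototype is animal
dog.name = "Rex";

console.log(hoistingDemo());  // "undefined"
console.log(dog.speak());     // "Rex makes a sound"
```

Neither of these behaviours has a direct equivalent in C# or Java, which is exactly why JavaScript's familiar syntax can be misleading.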

My second talk was an introduction to Node.js and how running JavaScript on the server is in fact far from the worst idea. The intention of the talk was not to try to convert a room full of ASP.NET developers to Node, but to explain where Node is innovating in the web platform space and how we'll probably see a lot more of these techniques in the future on every platform (code sharing between client and server, simple APIs for real-time web communication). The demos I did during the talk can be found on GitHub.

I’d be really happy to give these talks again so please get in touch with me if you’d be interested in having me speak at any development groups.


WCF WebSockets: First Glance

I finally got around to playing with the first drop of WebSockets support for WCF. I'm pretty familiar with WebSockets as I've been using Node.js to play around with them for quite a while now. The server API in Node.js is wonderful – it couldn't be simpler. To demonstrate, take a look at how we'd create a basic echo server.

var ws = require("websocket-server");

var server = ws.createServer();

server.addListener("connection", function(connection){
  connection.addListener("message", function(msg){
    // Echo whatever the client sent straight back
    connection.send(msg);
  });
});

server.listen(8080);

When I heard that Microsoft was planning on integrating WebSockets into WCF my first thoughts weren't all that positive. Now I know that's been less the case since .NET 4, but WCF has had a reputation for rather large, complex APIs and heaps of XML. I was dreading what a WCF take on the above would look like. It turns out perhaps I should have been a little more optimistic.

class Program {
    static void Main(string[] args) {
        var host = new WebSocketsHost<EchoService>(new Uri("ws://localhost:4502/echo"));
        host.Open();
        Console.ReadLine();
    }
}

public class EchoService : WebSocketsService {
    public override void OnMessage(JsonValue jsonValue) {
        Send(jsonValue); // echo the JSON straight back to the client
    }
}

That's exactly the kind of simplicity I was hoping to see. Now the interesting question is how it's going to look when dealing with multiple clients. Node has a bit of an advantage here as it's entirely single-threaded (yeah, I said advantage), but with .NET's recent concurrent collections and a rather nice API for dealing with events (Rx), I'm feeling pretty hopeful.
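To show what I mean by that single-threaded advantage, here's a sketch of how I'd track multiple clients in the Node echo server above. The helper names (`connections`, `addConnection`, `broadcast`) are my own, not part of the websocket-server API:

```javascript
// Because Node is single-threaded, a plain object is enough to track
// connected clients; no locks or concurrent collections required.
var connections = {};
var nextId = 0;

// Register a newly connected client and hand back its id
function addConnection(connection) {
  var id = nextId++;
  connections[id] = connection;
  return id;
}

// Forget a client when it disconnects
function removeConnection(id) {
  delete connections[id];
}

// Send a message to every connected client
function broadcast(msg) {
  Object.keys(connections).forEach(function(id) {
    connections[id].send(msg);
  });
}
```

Wiring this up is then just a matter of calling addConnection and removeConnection from the server's connection and close events. In WCF, sharing that `connections` map between service instances is exactly where the concurrent collections would come in.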

Making Macros in CoffeeScript


JavaScript dependency management is a hot topic at the moment (see RequireJS, Dojo and StealJS). This got me thinking: why do we treat JavaScript as dumb files to be served up to the client? Now that we have web servers that literally speak the same language, aren't there great possibilities yet to be discovered? Can we write code that seamlessly bridges the divide between client and server?

Well, honestly? I've got no idea. I got stuck on the first problem that came to mind – how do we get the server to understand what the code intends to do on the client? Sure, using Node we could happily execute our JavaScript. But if we wanted some smarts about how we deal with it, say analyzing a piece of JavaScript to determine what it depends on, we'd actually have to parse the code. I'm sure this is possible; clearly web browsers and Node parse JavaScript quite happily. But the thought of dealing with that myself didn't quite make me giddy with excitement. If only there were a language like JavaScript that had easily usable parsers on hand to let us mess with the language…


Enter CoffeeScript. In its own words…

CoffeeScript is a little language that compiles into JavaScript. Underneath all of those embarrassing braces and semicolons, JavaScript has always had a gorgeous object model at its heart. CoffeeScript is an attempt to expose the good parts of JavaScript in a simple way.

A nice language that will compile into JavaScript, which also exposes the good parts of JavaScript’s (gorgeous) object model in a simple way? Well doesn’t that just sound perfect. Sure enough, it’s pretty easy to compile a CoffeeScript program from JavaScript.

var coffee = require('coffee-script');

var nodes = coffee.nodes(coffee.tokens("a = 2 + 2"));

console.log( nodes.compile() ); // var a = 2 + 2;

The important bit for this post is that we can get both the token stream and the AST nodes themselves before they're finally compiled into JavaScript. These nodes are just simple JavaScript objects thrown together in a graph. To make messing with them even easier, the node structures are extremely well documented on the CoffeeScript site (no really, go look at it – it's the most attractive documentation I've seen in a while). For the statement "a = 2 + 2", the node graph looks much like this:

    Assign
      Value "a"
      Op +
        Value "2"
        Value "2"
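Before diving into the real nodes, here's the general shape of such a replacement pass over plain JavaScript objects – a simplified stand-in I've written for illustration, not the actual CoffeeScript node classes (which come with their own traversal helpers):

```javascript
// Walk a graph of plain objects, giving `replacer` the chance to swap
// any node for a new one. Returning a falsy value leaves a node alone.
function visit(node, replacer) {
  var replaced = replacer(node);
  if (replaced) node = replaced;
  Object.keys(node).forEach(function(key) {
    var child = node[key];
    if (child && typeof child === 'object') {
      node[key] = visit(child, replacer);
    }
  });
  return node;
}

// Replace `ADD x, y` calls with `x + y` operations
// (hypothetical node shape, simpler than CoffeeScript's)
function addReplacer(n) {
  if (n.type === 'Call' && n.name === 'ADD') {
    return { type: 'Op', op: '+', left: n.args[0], right: n.args[1] };
  }
}
```

The CoffeeScript visitor below does the same job against the real node graph.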

As a first experiment, I wrote a Visitor object which visits each node in the graph and, if certain conditions are met, replaces the node with a new one. In this case, I look for a method call like "ADD x, y" and replace the node with another in the form "x + y".

var addReplacementVisitor = {
	onCall: function(n, replaceCallback) {
		if (n.variable.base.value === "ADD") {
			var addOp = new nodes.Op('+', n.args[0], n.args[1]);
			replaceCallback(addOp);
		}
	}
};

So I can imagine what you're thinking: "well done Dave, you've managed to replace an ADD call with an add operation. Yeah, super useful…". My original idea was to build on this and have more advanced Visitors which would transform the node graph in grander ways. However, these things are kind of difficult to write, and doing anything even slightly more than trivial took an awful lot of code. Fortunately, I just happened to show this to our product designer Eric Wright, who took one look at it and remarked – "ah, like macros in Lisp" (yeah, a talented designer who's also familiar with Lisp, way to make me feel inferior). Lisp allows you to define "macros", essentially things that look a lot like functions but actually act more as a find and replace on the language/AST itself. (There are a lot of places online which will give you a proper rundown.)

(defmacro swap (a b)
    `(let ((temp ,a))
       (setf ,a ,b)
       (setf ,b temp)))

CoffeeScript Macros

This got me thinking – by far the easiest way of representing a graph of CoffeeScript nodes is CoffeeScript code itself. So in my first prototype, I have a CoffeeScript file designated to define macros, and then a CoffeeScript source file which the defined macros are applied to. To do something like the above we'd define a swap macro like…

SWAP = (x,y) ->
	$tmp = x
	x = y
	y = $tmp

Quite straightforward stuff – we're just creating a function named SWAP which takes two variables (x and y) and swaps them around. Hopefully you're wondering about the significance of $tmp and why I've named it a bit oddly; we'll get to that in a moment. Imagine it working on the following CoffeeScript source:

a = 1
b = 2
c = 3
d = 4
SWAP a, b
SWAP c, d

The macro really is just a find and replace, so when it finds the SWAP call in the above, it replaces it with the body of the macro. If we were to do this twice in the same scope, like the above, we'd expect to see two $tmp variables declared, which wouldn't be good. To prevent this, any variable in the macro scope beginning with $ is renamed to something unique. So in my quick prototype, compiling the above results in the following JavaScript:

var a, b, c, d, __tmp0, __tmp1;
a = 1;
b = 2;
c = 3;
d = 4;
__tmp0 = a;
a = b;
b = __tmp0;
__tmp1 = c;
c = d;
d = __tmp1;
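My guess at roughly how that renaming can work is a counter-based "gensym", sketched here in plain JavaScript (freshName and renameMacroLocals are my own hypothetical names, not taken from the prototype):

```javascript
// Each macro expansion renames $-prefixed identifiers to a fresh,
// unique name so repeated expansions in one scope can't collide.
var uniqueCount = 0;

function freshName(name) {
  // "$tmp" becomes "__tmp0", then "__tmp1" on the next expansion, ...
  return '__' + name.slice(1) + (uniqueCount++);
}

// Build a rename map for one macro expansion
function renameMacroLocals(identifiers) {
  var renames = {};
  identifiers.forEach(function(id) {
    if (id.charAt(0) === '$' && !renames[id]) {
      renames[id] = freshName(id);
    }
  });
  return renames;
}
```

Expanding SWAP twice would produce two rename maps, mapping $tmp to __tmp0 and __tmp1 respectively, which matches the compiled output above.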

Using your body

Usually the expressions passed to a macro are just copied directly into the macro's body. But what if we wanted to wrap a macro around a whole block of code? There's a special $body argument convention to cope with that. Where most arguments are copied directly, the $body variable is first "unwrapped" and just the expressions in its body are copied. This allows us to pass a function (typically as a macro's last argument) and treat it as the body of the macro's code. The example below hopefully demonstrates this better. Imagine we want to wrap a try…catch around all of our code so that all exceptions are swallowed.

ERRZLESS = ($body) ->
	try
		$body
	catch e
		console.log "IGNORED ERROR: #{e}"

Note how we use this macro by supplying a function as the body.

ERRZLESS ->
	throw "ARGGGGSSS"

This will produce the following JavaScript.

try {
  throw "ARGGGGSSS";
} catch (e) {
  console.log("IGNORED ERROR: " + e);
}

So what’s the point?

Hurrah, macros in CoffeeScript! Well, not really. This was only a little experiment of mine to see how easy it is to mess with CoffeeScript before it's compiled. Fortunately, as it turns out, it's pretty easy.

So where is this useful? The kinds of techniques I've discussed are super useful for building internal DSLs and for meta-programming. It also looks like there's some serious work going on to provide static metaprogramming in CoffeeScript, which would give us proper macros and an awful lot more.

Show Me The Code

All of the code for this little experiment can be found up on GitHub. If you pull it down, it can be executed using the command:


Using NodeJs to render JavaScript charts on the server

Ideally, you want both rich interactive charts in your front-end ("just like Google Finance" is a common request) and the ability to render the same charts on the server for exporting, emailing, or supporting less capable clients (BlackBerry, I'm looking at you). The perfect scenario would be to do this using the same library, so the charts look identical and you don't have to maintain two separate code bases. Unfortunately, due to limitations in libraries, platforms and environments, it's difficult to make this a reality.

Recently, I've been developing an HTML5 application in which we used the Highcharts charting library to provide basic line and bar charts. Highcharts is a JavaScript library that renders either SVG or VML depending on the hosting browser (if you're interested, take a look at some of their demos). To support exports, the library creates an SVG in the browser and posts it up to the web server. In the box (figuratively), they supply a PHP script which renders this content into an image or a PDF and returns it over HTTP. This works great where we have a full web browser; sadly, on the server we have to resort to a second charting library that can produce static images. It's challenging to get the charts looking identical, and maintaining two code bases is a pain.

Unless you've had something better to do than constantly follow blogs and Twitter (oh, just me then…), you've probably heard about Node.js. It's a server-side JavaScript implementation running on top of Google's excellent V8 engine (the one under Chrome). Using the jsdom library, we can provide an environment similar to a browser into which we can load client-side scripts. With very little work, I was able to load the Highcharts library in Node and then, using code identical to what I'd write for a browser, render the chart and grab the SVG content. Compare a chart rendering in a browser to the following from my Node sample:

var $ = window.jQuery,
    Highcharts = window.Highcharts,
    document = window.document,
    $container = $('<div id="container" />'),
    chart,
    svg;

chart = new Highcharts.Chart({
  chart: {
    defaultSeriesType: chartType,
    renderTo: $container[0],
    renderer: 'SVG',
    width: width,
    height: height
  },
  series: [{
    animation: false,
    data: data
  }]
});

svg = $container.children().html();

Once we've grabbed this SVG content, we can render it to an image using the ImageMagick command-line tool convert. As it's super easy in Node to create a basic web server (literally just a single function), I created a basic server which accepts requests like /bar?data=1,2,3,4 and returns the chart rendered as an image using Highcharts.

this.server = http.createServer(function(request, response) {
	var url = parse(request.url, true),
		chartTypeMatch = /^\/(\w+)$/.exec(url.pathname),
		chartType = chartTypeMatch ? chartTypeMatch[1] : null,
		convert, svg;

	/* Some code omitted */
	createHighchartsWindow(function(window) {
		/* chart generation from above */

		svg = $container.children().html();

		// Start convert, reading in an SVG and outputting a PNG
		convert = spawn('convert', ['svg:-', 'png:-']);

		// We're writing an image, hopefully...
		response.writeHeader(200, {'Content-Type': 'image/png'});

		// Pump in the SVG content
		convert.stdin.write(svg);
		convert.stdin.end();

		// Write the output of convert straight to the response
		convert.stdout.on('data', function(data) {
			response.write(data);
		});

		// When convert exits, we're done
		convert.on('exit', function(code) {
			response.end();
		});
	});
}).listen(2308); // Start HTTP server listening on port 2308

So far I've only prototyped these ideas, but it's working pretty well (see image below). Although I've used Highcharts for this example, it should be possible to use any SVG-based JavaScript charting package. I feel this technique has the potential to support both rich client-side charts and static server-generated images using the same libraries and, most importantly, sharing the same code. My prototype is available up on GitHub. What do you think?

Highcharts chart rendered on Node