Dust.js as a React Delivery Vehicle


Even people in love with Single Page Apps and pooh-poohing server side rendering know that SPAs cannot just materialize in the browser. Something has to deliver them there. Until recently, AngularJS was the most popular client side JS framework, and I have seen all kinds of ADVs (Angular Delivery Vehicles) – from Node.js (N is for Node.js in the MEAN stack), to wonderfully ironic JSP files serving Angular. Particularly exuberant developers go all the way to using static HTML to launch their apps into orbit, extolling the virtues of CDNs.

Of course, I would like to see their faces the first time they realize the need to compute something dynamically in those static files. But that is beside the point. The important takeaway is that AngularJS has a very nice property of being embeddable. You don’t need to surrender your entire page to Angular – you can sequester it to a DIV (in fact, you can have multiple Angular apps running in the same page). This is a very nice feature, but also a necessary one – again, AngularJS is a pure client side framework.

With great power…

React changes this equation because it can be rendered on the server. Similarly to AngularJS, a React root component can be mounted on any node inside your page, but unlike with Angular, that page can also be rendered on the server by React.

This opens the door for a whole new class of isomorphic or universal apps, and I have written about it already. What you need to decide is whether you want to render the entire page, or a subset of it. If you choose the latter, then the subsequent question becomes – how do you start the rendering process before React takes over?

My first inclination was to go full React. Using the awesome react-engine module, it is easy to configure React as the view engine in the Express framework, and render the entire page. However, we soon hit some snags:

  1. As we render our UIs using micro services that use UI composition to glue the pages together, we hit a problem: React is grumpy about having to render an HTML snippet arriving from outside of its control – something much easier to do with a templating library such as Dust.js.
  2. JSX turned out to be fairly quirky and finicky when rendering the HTML HEAD element, as well as when inlining JavaScript in script tags. In fact, the latter became so error-prone that we completely gave up on it, electing to put all our scripts in files. I didn’t like this particular restriction, and this remains an area where I am sour on JSX and the fact that it most decidedly isn’t HTML.

Of course, the problems I listed above did not show up in the small example I published in the initial article. They reared their ugly heads once we started getting serious. In my original example, the header was a React component. This works great if you control the entire app, but the whole point of UI composition is to integrate with apps not necessarily written using React, and you can only do that using the least common denominator (HTML, CSS and vanilla JS).

Remind me again about UI composition

In a nutshell, UI composition is the approach where common areas on the page are served as HTML snippets from a REST API. The API service can also serve the CSS and JavaScript files that accompany the snippet (if you don’t serve them from a CDN). The idea is that a dedicated microservice can provide these common areas this way, allowing other microservices to pull the content and inline it in their pages regardless of their stack.
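As a minimal sketch (the route, query parameter and markup here are illustrative assumptions, not our production service), such a composition endpoint can be as simple as:

var express = require('express');
var app = express();

// Hypothetical composition endpoint: serves the shared header as an
// HTML snippet that other microservices fetch and inline into their pages.
app.get('/header', function(req, res) {
  var selection = req.query.selection || '';
  res.type('html');
  res.send('<section class="header">... header markup with the ' +
    selection + ' link highlighted ...</section>');
});

app.listen(3000);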

UI composition inverts the flow you would normally use with iframes. Instead of rendering the common areas, then placing an iframe for the content and loading the content from an external URL, we load the common area as HTML and inline it into our page. In the olden days this approach was often accomplished using ‘edge side includes’ or ESIs. This approach has many benefits, including no need for the dreaded iframes, full control over the entire page, and the ability to integrate microservices implemented using different stacks. For example, the teams in my current project use:

  1. Node.js microservices using React for isomorphic rendering
  2. Node.js microservices using Dust.js and jQuery
  3. Java microservices using AngularJS

All of these microservices render the exact same header even though they use different stacks and libraries. Powerful stuff.

OK, so React?

The problem with React is that when we use it to render the entire page server-side, it starts to squirm in discomfort. Inlining HTML is a problem for React because, while it can be used on the server, it was designed to shine on the client. Inlining raw HTML, presumably obtained as user input, into the DOM is a major security exposure without sanitization, and React will tell you so in no uncertain terms. Here is the line that inlines the shared header, taken straight out of our production code:

<div dangerouslySetInnerHTML={{__html: decodeURIComponent(this.props.header)}} />

This is React telling us very clearly: “I really, really, really don’t like what you are asking me to do”. And this is even before we bring in react-engine, a PayPal module that enables isomorphism when combined with Node.js/express and react-router. We can render on the server, then pack the properties as part of the HTML response, hydrate the React components on the client and continue where we left off.

Only we have a little problem: notice that in the snippet above we have inlined the header by passing it down to the React component as a prop. This means that this prop (‘header’) will be sent to the client alongside the other properties. Unfortunately, this property contains the HTML snippet for the entire header, and because raw HTML can really mess up react-engine, it is also URL-encoded, making it even bulkier.

This is an unhappy situation: first we inline the header HTML (which is fine), then we send that same inlined HTML, this time encoded, as a React prop to the client. This unnecessarily bloats the generated HTML.


This is what we are going to do to solve this problem: we are going to call back our trusty Dust.js, which has been sitting sadly on the sidelines ever since we got smitten with React. We can press it into service to do what it does best – outline the page, prepping it up to the point where React can take over.

It’s a good thing that the Express framework will happily handle more than one view engine. Of course, only one of them can be registered as the default, but this just means that we will need to spell out the extension of the dust files (because React is the default). Big deal.

var path = require('path')
, express = require('express')
, renderer = require('react-engine')
, dust = require('dustjs-linkedin')
, helpers = require('dustjs-helpers')
, cons = require('consolidate');

var app = express();


// create the view engine with `react-engine`
var engine = renderer.server.create({
  routes: require(path.normalize(__dirname + '/public/routes.jsx')), 
  docType: ""

// set the engines
app.engine('.jsx', engine);
app.engine('.dust', cons.dust);

// set the view directory
app.set('views', __dirname + '/public/views');

// set jsx as the view engine
app.set('view engine', 'jsx');

// finally, set the custom view
app.set('view', renderer.expressView);

Once we have both engines set up, we can arrange our delivery vehicle like this. We will first set up a reusable layout template using Dust.js that will deliver the outline of each page:

<!DOCTYPE html>
<html>
  <head>
    <meta charset='utf-8'>
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>{title}</title>
    <link rel="stylesheet" href="/css/styles.css">
  </head>
  <body>
    {header|s}
    <div class="main-content">
      {+content/}
    </div>
  </body>
</html>

Notice how inlining of the header now happens in Dust.js, relieving React from doing it. We can now use this template to render our page like this:

<div id="react-mount-node">{spa|s}</div>
<script src="/bundle.js"></script>

We have created a DIV in the template above where we are mounting React, and we will inline the React server-side content using the ‘spa’ variable.

Our Express controller that will handle both Dust.js and React can look something like this:

var request = require("request");

module.exports.get = function(req, res) {
  var settings = {
    title: 'SPA - Demo',
    name: 'React SPA',
    selection: 'header-spa'
  };

  var headerUrl = "http://"+req.headers.host+"/header?selection=header-spa";

  request.get(headerUrl, function (err, response, body) {
    if (!err)
      settings.header = body;
    res.render(req.url, { name: settings.name }, function(err, html) {
      if (err)
        settings.spa = err.message;
      else
        settings.spa = html;
      res.render("spa.dust", settings);
    });
  });
};

The code above is what I call ‘React Delivery Vehicle’. It has three steps:

  1. First we fetch the header from the URL that provides us with the header HTML snippet, capturing it into the ‘header’ property of the settings object. In production, we will heavily cache this step (see the caching sketch after this list).
  2. Then we render the React root component as usual. This will employ react-engine and react-router to render all the views necessary for the provided request URL. However, we don’t send the rendering to the response. Instead, we capture it in the variable ‘spa’.
  3. Finally, we invoke Dust.js view engine to render the full page, passing in the ‘settings’ object. This object will contain both the header and the content rendered by React. Dust will simply inline both while rendering the page outline. The result will be sent directly to the Express response.
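As promised, here is a minimal sketch of the caching in step 1 (the in-memory store and the five-minute TTL are my illustrative assumptions, not our production code):

var request = require("request");

var headerCache = {};            // illustrative in-memory cache
var HEADER_TTL = 5 * 60 * 1000;  // assumed TTL of five minutes

function fetchHeader(url, callback) {
  var entry = headerCache[url];
  if (entry && Date.now() - entry.time < HEADER_TTL)
    return callback(null, entry.body);  // cache hit - no HTTP round trip

  request.get(url, function (err, response, body) {
    if (!err)
      headerCache[url] = { body: body, time: Date.now() };
    callback(err, body);
  });
}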


The solution described above plays to the strengths of both Dust.js as a server side template engine and React as a component library. Dust.js is great at rendering the HTML outline the way you would expect, with no need to worry about JSX quirks around HEAD meta tags or JavaScript inlining. React takes over to render isomorphic components into the mount node. This means we don’t need to send any data to the client that is not needed there, which fixes the HTML bloat problem I mentioned before.

As for the negatives, setting up two rendering engines on the server carries a slight overhead, and switching mentally from Dust.js to JSX adds a context-switching tax. Luckily, you can set up reusable Dust.js templates and not really worry about them too much – most of the action will be in JSX anyway.

We like this approach and are currently switching to it across the board. If you have alternative ideas or comments, drop me a line. Meanwhile, the source code for the entire example is available on GitHub as usual, and the sample app employing this mechanism runs on Bluemix.

© Dejan Glozic, 2015

PayPal, You Got Me At ‘Isomorphic ReactJS’


I love PayPal’s engineering department. There, I’ve said it. I have followed what Jeff Harrell and the team have been doing ever since I started reading about their wholesale jump into Node.js/Dust.js waters. In fact, their blogs and presentations finally convinced me to push for Node.js in my IBM team as well. I had the pleasure of talking to them at conferences multiple times and I continue to like their overall approach.

PayPal is at its core a no-nonsense enterprise that moved from Java to Node.js. Everything I have seen coming from them had the same pragmatic approach, with concerns you can expect from running Node.js in production – security, i18n, converting a large contingent of Java engineers to Node.js.

More recently, I kept tabs on PayPal’s apparent move from Dust.js to ReactJS. Of course, this time around we learned faster and were already playing with React ourselves (using Dust.js for simpler, content-heavy pages and reserving ReactJS for more dynamic use cases). However, we hadn’t really started pushing on ReactJS because I was still looking at how to take advantage of React’s ability to render on the server.

Well, the wait is over. PayPal has won my heart again by releasing a React engine that connects the dots in a way so compatible with what we needed that it made me jump with joy. Unlike the version I used for my previous blog post, this one allows server side components to be client-mountable, offering true isomorphic goodness. Finally, a fat-free yogurt that does not taste like paper glue.

Curate and connect

The key importance of PayPal’s engine is in what it brings together. One of the reasons React has attracted so much attention lately is its ability to render into a string on the server, then render to the real DOM on the client using the same code. This is made possible by using NodeJS, which is by now our standard stack (I haven’t written a line of Java code for more than a year, on my honour).

But that is not enough – in order to carry over into the client with the same code, you need ‘soft’ page switching – showing boxes inside other boxes and updating the browser history as these boxes are swapped. This is now brought to us by another great library – react-router. This module, inspired by Ember’s amazing router, is quickly becoming ‘the’ router for React applications.

What PayPal did in their engine was connect all these great libraries, then write important glue code to put it all together. It is now possible to mix normal server side templates with pages that start their life on the server, then continue on the client with the state preserved and ready to go as soon as JavaScript is loaded.

Needless to say, this was the solution we were looking for. As far as we are concerned, this will put an end to needless ‘server vs client’ wars, and allow us to have our cake and eat it too. Mmm, cake.

Show us some sample code

OK, let’s get our hands dirty. What many people need when writing applications is a hybrid between a site and an app – the ability to put together a web site that has one or more single-page apps embedded in it. We will build an example site that has two plain ReactJS pages rendered on the server, while the third page is really an SPA taking advantage of react-engine and the ability to go full isomorphic.

We will start by creating a shared layout JSX component to be consumed by all other pages:

var React = require('react');
var Header = require('./header.jsx');

module.exports = React.createClass({

  render: function render() {
    var bundle;

    if (this.props.addBundle)
      bundle = <script src='/bundle.js'/>;

    return (
      <html>
        <head>
          <meta charSet='utf-8' />
          <title>{this.props.title}</title>
          <link rel="stylesheet" href="/css/styles.css"/>
        </head>
        <body>
          <Header {...this.props}></Header>
          <div className="main-content">
            {this.props.children}
          </div>
          {bundle}
        </body>
      </html>
    );
  }
});

We extracted the common header into a separate component that we require and inline:

var React = require('react');

module.exports = React.createClass({

  displayName: 'header',

  render: function render() {
    var linkClass = 'header-link';
    var linkClassSelected = 'header-link header-selected';

    return (
      <section className='header' id='header'>
        <div className='header-title'>{this.props.title}</div>
        <nav className='header-links'>
          <ul>
            <li className={this.props.selection=='header-home'?linkClassSelected:linkClass} id='header-home'>
              <a href='/'>Home</a>
            </li>
            <li className={this.props.selection=='header-page2'?linkClassSelected:linkClass} id='header-page2'>
              <a href='/page2'>Page 2</a>
            </li>
            <li className={this.props.selection=='header-spa'?linkClassSelected:linkClass} id='header-spa'>
              <a href='/spa/section1'>React SPA</a>
            </li>
          </ul>
        </nav>
      </section>
    );
  }
});

The header shows three main pages – ‘Home’, ‘Page2’ and ‘React SPA’. The first two are plain server side pages that are rendered by express and sent to the client as HTML:

var Layout = require('./layout.jsx');
var React = require('react');

module.exports = React.createClass({

  render: function render() {

    return (
      <Layout {...this.props}>
        <p>An example of a plain server-side ReactJS page.</p>
      </Layout>
    );
  }
});

On to the main course

The third page (‘React SPA’) is where all the fun is. Here, we want to create a single-page app so that when we navigate to it by clicking on its link in the header, all subsequent navigations inside it are client-side. However, true to our isomorphic requirement, we want the initial content of ‘React SPA’ page to be rendered on the server, after which react-router and React component will take over.

To show the potential of this approach, we will build a very useful layout – a page with a left nav containing three links (Section 1, 2 and 3), each showing different content in the content area of the page. If you have seen such a page once, you have seen it a million times – this layout is the internet’s bread and butter.

We start building our SPA top-down. Our top level ReactJS component will reuse Layout component:

var Layout = require('./layout.jsx');
var React = require('react');
var Nav = require('./nav.jsx');
var Router = require('react-router');

module.exports = React.createClass({

  render: function render() {

    return (
      <Layout {...this.props} addBundle='true'>
        <Nav {...this.props}/>
        <Router.RouteHandler {...this.props}/>
      </Layout>
    );
  }
});

We have loaded the left nav as a Nav component:

var React = require('react');
var Link = require('react-router').Link;

module.exports = React.createClass({

  displayName: 'nav',

  render: function render() {
    var activeClass = 'left-nav-selected';

    return (
      <section className='left-nav' id='left-nav'>
        <div className='left-nav-title'>{this.props.name}</div>
        <nav className='left-nav-links'>
          <ul>
            <li className='left-nav-link' id='nav-section1'>
              <Link to='section1' activeClassName={activeClass}>Section 1</Link>
            </li>
            <li className='left-nav-link' id='nav-section2'>
              <Link to='section2' activeClassName={activeClass}>Section 2</Link>
            </li>
            <li className='left-nav-link' id='nav-section3'>
              <Link to='section3' activeClassName={activeClass}>Section 3</Link>
            </li>
          </ul>
        </nav>
      </section>
    );
  }
});

This looks fairly simple, except for one crucial difference: instead of adding plain ‘a’ tags for links, we used Link components coming from the react-router module. They are the key to the magic here – on the server, they will render normal links, but with ‘breadcrumbs’ allowing the React router to mount click listeners on them and cancel the normal navigation behaviour. Instead, clicks will cause React components registered as handlers for these links to be shown. In addition, browser history will be maintained so that the back button and address bar work as expected for these ‘soft’ navigations.

The RouteHandler component is responsible for rendering the handler specified in our route definition:

var React = require('react');
var Router = require('react-router');
var Route = Router.Route;

var SPA = require('./views/spa.jsx');
var Section1 = require('./views/section1.jsx');
var Section2 = require('./views/section2.jsx');
var Section3 = require('./views/section3.jsx');

var routes = module.exports = (
  <Route path='/spa' handler={SPA}>
    <Route name='section1' handler={Section1} />
    <Route name='section2' handler={Section2} />
    <Route name='section3' handler={Section3} />
    <Router.DefaultRoute handler={Section1} />
  </Route>
);

As you can infer, we are not declaring all the routes for our site, just the section for the single-page app (under the ‘/spa’ path). There we have built three subpaths and designated React components as handlers for these routes. When a Link component whose ‘to’ property is equal to the route name is activated, the component designated as handler will be shown.

Server needs to cooperate

In order to get our HTML5 push state enabled router to work, we need server side cooperation. In the olden days when SPAs were using hashes to ensure client side navigation is not causing page reloading, we didn’t need to care about the server because hashes stayed on the client. Those days are over and we want true deep URLs on the client, and we can have them using HTML5 push state support.

However, once we start using true links for everything, we need to tell the server to not try to render pages that belong to the client. We can do this in express like this:

app.get('/', function(req, res) {
  res.render('home', {
    title: 'React Engine Demo',
    name: 'Home',
    selection: 'header-home'
  });
});

app.get('/page2', function(req, res) {
  res.render('page2', {
    title: 'React Engine Demo',
    name: 'Page 2',
    selection: 'header-page2'
  });
});

app.get('/spa*', function(req, res) {
  res.render(req.url, {
    title: 'SPA - React Engine Demo',
    name: 'React SPA',
    selection: 'header-spa'
  });
});
Notice that we have defined controllers for two routes using normal ‘res.render’ approach, but the third one is special. First off, we have instructed express to not try to render any pages under /spa by sending them all to the React router. Notice also that instead of sending normal view names in res.render, we are passing entire URL coming from the request. This particular detail is what makes ‘react-engine’ ingenious – the ability to mix react-router and normal views by looking for the presence of the leading ‘/’ sign.

A bit more boilerplate

Now that we have all these pieces, what else do we need to get this to work? First off, we need the JS file to configure the react router on the client, and start the client side mounting:

var Routes = require('./routes.jsx');
var Client = require('react-engine/lib/client');

// Include all view files. Browserify doesn't do
// this automatically as it can only operate on
// static require statements.
require('./views/**/*.jsx', {glob: true});

// boot options
var options = {
  routes: Routes,

  // supply a function that can be called
  // to resolve the file that was rendered.
  viewResolver: function(viewName) {
    return require('./views/' + viewName);
  }
};

document.addEventListener('DOMContentLoaded', function onLoad() {
  Client.boot(options);
});

And to round it all off, we need to deliver all this JavaScript, and the JSX templates, to the client somehow. There are several ways to approach JavaScript modularization on the client, but since we are using Node.js and singing the isomorphic song, what can be more apt than using Browserify to carry CommonJS over into the client? The following command line will gather the entire dependency tree for index.js into one tidy bundle:

browserify -t reactify -t require-globify public/index.js -o public/bundle.js

If you circle back all the way to Layout.jsx, you will notice that we are including a sole script tag for /bundle.js.

The complete source code for this app is available on GitHub. We run ‘npm install’ to bring in all the dependencies, then ‘npm start’ to run browserify and start express.


When we click on header links, they cause a full page reload, rendered by the express server. However, clicks on the left nav links cause the content of the SPA page to change without a page reload. Meanwhile, the address bar and browser history are dutifully updated, and deep links are available for sharing.


You can probably tell that I am very excited about this approach because it finally brings together fast initial rendering and SEO-friendly server-side pages, with full dynamic ability of client side apps. All excitement aside, we need to remember that this is just views – we would need to write more code to add action dispatcher and data stores in order to implement full Flux architecture.

Performance-wise, the combined app renders very quickly, but one element sticks out. Bundle.js in its full form is about 800KB of JavaScript, which is a lot. When minified, it is trimmed down to 279KB, and when compression is enabled in express, it goes further down to 62.8KB sent down the wire. We should bear in mind that this is ALL the JavaScript we need – ReactJS, as well as our own components. It should also be noted that this JavaScript is loaded asynchronously and that we are already sending content from the server – we will not see a white page while the script is being downloaded and parsed.
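For reference, the minification mentioned above can be done by piping the Browserify output through uglify-js (a sketch – the exact flags are a matter of taste):

browserify -t reactify -t require-globify public/index.js | uglifyjs -c -m -o public/bundle.js

Compression in express is similarly a one-liner with the compression middleware.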

In a more complex application, we would probably want to segregate JavaScript into more than one bundle so that we can load code in chunks as needed. Luckily, react-router already addressed this.

The app is deployed on Bluemix – you can access it at http://react-engine-demo.mybluemix.net. Try it out, play with the source code and let me know what you think.

Great job, PayPal! As for me, I am completely sold on ReactJS for most real world applications. We will be using this approach in our current project from now on.

© Dejan Glozic, 2015

Isomorphic Apps Part 2: Node, React.js, and Socket.io

Two Heads, 1930, Wikimedia Commons

When I was a kid, I went to the movies to watch Mel Brooks’ “History of The World, Part I”. I had a great time and could not wait for the sequel (that featured, among other things, Hitler on ice, a Viking funeral and laser-shooting rabbis in ‘Jews in Space’ teaser). Alas, ‘Part II’ never came. Determined to not subject my faithful readers to such a disappointment, here comes the promised part II of my ‘Isomorphic Apps’ trilogy.

In the first part of this story, we created an isomorphic app by taking advantage of the fact that we can use Dust.js as an Express view engine, and then compile partials into JavaScript and re-use them on the client as needed. In order to compare approaches with only one variable changed, we will switch to React.js for the view.

What’s the deal with React.js

React.js is attracting a lot of attention these days due to the novel approach it has taken to building dynamic Web apps. At the heart of the approach is the notion of a virtual DOM. React.js components manipulate an abstraction of the DOM that is then transformed into the physical DOM in a highly optimized fashion. Even more ingeniously, the browser’s DOM is only one of the possible transformations: the virtual DOM can also be serialized into plain HTML, which makes it possible to use it on the server. Even more recently, it can be serialized into native code to address mobile (and even desktop) UI components.
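To make this concrete, here is a tiny sketch using the React API of the day (the Greeting component is made up for illustration):

var React = require('react');

// A made-up component, written without JSX for brevity.
var Greeting = React.createClass({
  render: function() {
    return React.createElement('div', null, 'Hello, ' + this.props.name);
  }
});

// On the server, the virtual DOM is serialized into plain HTML...
var html = React.renderToString(React.createElement(Greeting, {name: 'world'}));

// ...while on the client, the same component renders into the real DOM:
// React.render(React.createElement(Greeting, {name: 'world'}), mountNode);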

I am old enough to remember Java’s “Write once, run anywhere” slogan, and this looks like the new generation’s attempt to make a run for this chimera. But even putting React Native aside for a moment, the fact that you can render on the server makes React supremely suitable for isomorphic apps, something Angular.js is lacking.

React.js is also refreshingly simple to figure out. Angular.js has this famous adoption roller coaster, and sometimes when you don’t get an Angular peculiarity, you feel the fault is with you, not Angular. React.js took the approach that life is short, and we can do better things with our time than figure out the maddening quirks of a complex framework. There is no two-way binding (because it has been shown to be a double-edged sword – see what I did here). When the model changes, you just naively rebuild the view (sometimes referred to as ‘write pages like it’s the 90s’). This seems massively suboptimal, but remember that you are only rebuilding the virtual DOM – React.js figures out the actual delta and only applies the delta against the real DOM. And since most of the performance (or lack thereof) lies in the physical DOM, React.js promises fast apps without writing a lot of code for smart and surgical updating on model changes.

Configuring React.js as an Express view engine

Alright, I hope this whetted your appetite for some coding. We will start by cloning the page from part I and adding another view engine in app.js (because I am cheap/lazy and don’t want to run another app for this). For this we need to install React on the server, as well as the express view adapter.

We will start by installing ‘react’ and ‘express-react-views’ and configuring the jsx view engine:

var react = require('express-react-views');


app.engine('jsx', react.createEngine());
app.set('view engine', 'jsx');

The last line above should only be set if you will use JSX as the only view engine for Express. In my case, I had to omit that line because I am already serving some Dust pages, and you can only set one default engine. The only thing I lost this way was the ability to find JSX templates without the extension – they can still be rendered when extension is included.

The controller for our React.js page is almost identical to the one we wrote for Dust.js:

var model = require('../models/todos');

module.exports.get = function(req, res) {
   model.list(req.user, function(err, todos) {
      res.render('rtodos', // view name assumed for illustration
         { title: 'React - Isomorphic', user: req.user, todos: todos });
   });
};

Most of the fun happens in the view, as expected. React.js requires some getting used to. For starters, JSX syntax is actually XML (and not even XHTML), so all elements require termination. Many attribute names require camel case, which is very annoying (I always hated Jade for this mental transformation, and now JSX is doing the same for me). At least the JSX transformer is yelling at you in the console about possible errors you made, so fixing up your JSX is not too hard:

var React = require('react');
var DefaultLayout = require('./rlayout');
var RTodo = require('./rtodo');

var Todos = React.createClass({
  render: function() {
    return (
      <DefaultLayout {...this.props} selection="react">
        <h1>Using React.js for View</h1>
        <div className="new">
          <textarea id="new-todo-text" placeholder="New todo"/>
        </div>
        <div className="delete">
          <button type="button" id="delete-all"
            className="btn btn-primary">Delete All</button>
        </div>
        <div id="todos" className="todos">
          {this.props.todos.map(function(todo) {
            return <RTodo key={todo.id} {...todo} />;
          })}
        </div>
        <script src="/js/prettyDate.js"></script>
        <script src="/js/rtodo.js"></script>
        <script src="/js/rtodos.js"></script>
      </DefaultLayout>
    );
  }
});

module.exports = Todos;

The code above requires some explanation. Unlike with Dust.js, both inclusion into a common layout template and instantiation of partials is done through React.js component model. Notice that we imported DefaultLayout component that is our standard page boilerplate. The payload of the page is simply specified as content of the instantiated component in the ‘render’ method above.

Another important point is that unlike Dust.js, properties are not automatically passed down the component hierarchy – we need to do it explicitly (notice the strange {...this.props} expression in the DefaultLayout declaration – what I am saying is ‘pass all the properties down to the child component’). We can also define new properties, which I am doing by passing ‘selection’ that will be used by the header component (to highlight the ‘React’ link).

Another important section of the template is where I am instantiating RTodo component (a single Todo card). Flow control can be tricky in JSX because the entire template is one giant return statement, so everything needs to evaluate to an expression. Notice the trick with using the array map to iterate over the list of todos and render each child todo component.

This code will produce a page very similar to the one with Dust.js, with identical results. In fact, it is possible to go back and forth because both pages are using the same REST service for the model.

JSX compiler

So far we took care of the server side. As with Dust.js, we can compile components we need on the client side, this time using jsx compiler that comes by installing ‘react-tools’:

node_modules/react-tools/bin/jsx --extension jsx views/ public/js/ rtodo

We can compile any number of components and place them into the JS directory under /public folder so that Express can serve them to the browser.

The client side script is very similar to the one used by the Dust.js page. The only difference is in the ‘Add’ action handler:

var socket = io.connect('/');
socket.on('todos', function (message) {
  if (message.type=='add') {
    var newTodo = document.createElement('div');
    React.render(React.createElement(RTodo, message.state), newTodo);
    var todos = document.getElementById('todos');
    todos.insertBefore(newTodo, todos.firstChild);
  }
});

The code is remarkably similar – instead of calling ‘dust.render’ to render the partial using the element we received via the Socket.io message, we ask React to render the compiled element into a new DOM element we created on the fly. We then prepend this element into the parent DIV.

Commentary and comparisons

First off, I would say that this second attempt at writing an isomorphic app was a success because I was able to replicate Dust.js example from part I with identical behaviour. However, it is not as good a fit for React.js. A better example would see us modifying a model and asking React.js to re-render an existing DOM branch. Now that I feel reasonably comfortable around React.js, I think I will create something more dynamic for it in the near future. A true React-y way of doing the list of todos would be to simply re-render the entire list on each Socket.io message. We would let React.js figure out that all it needs to do is insert a new Todo DIV into the parent node. This way we would not need to create DOM elements ourselves, as in the code above.
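In sketch form, assuming the Socket.io message carried the entire list of todos and that we compiled a list-level RTodos component for the client (both are my assumptions), it would look like this:

var socket = io.connect('/');
var mountNode = document.getElementById('todos');

socket.on('todos', function (message) {
  // naively re-render the whole list (hypothetical RTodos component);
  // React diffs the virtual DOM and only inserts the new card
  React.render(React.createElement(RTodos, { todos: message.todos }), mountNode);
});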

After spending a year working with Dust.js, JSX took some getting used to. As I said, I hated Jade because it created an additional layer of abstraction between me and HTML, and I never quite knew what final HTML it will produce. JSX evokes the same feelings in me, but the error/correction loop has shortened as I learned it more. In addition, the value I get in return is much higher with JSX than with Jade.

Nevertheless, certain things will not get better with time. JSX is awkward to work with when it comes to logic. Remember, the entire template is an expression, so statements are really hard to fit in. Ternary conditionals work, and as you saw, it is possible to use tricks with maps to iterate over children in a collection. I still prefer Dust.js for straightforward pages, but I can see how React.js can work with components in a very dynamic app.

I like the React.js component model, as well as the fact that code and markup are close to each other – for a developer this is very useful. I also like the fact that, JSX quirks aside, there is much less magic compared to Angular.js. Of course, React.js is really just the View of MVC, so it is not a fair comparison. On the other hand, I am now dying to hook it up into Backbone as a view – it feels like a great combination (and of course, there are already articles on exploiting this exact combination). The more I think and read about it, the more it seems that Backbone models/collections/router and React.js views may just end up being my favorite stack for writing highly dynamic apps with a server side bonus for SEO and initial experience.

A word of caution

If your system has elements of both a site and an app, use micro-services and implement site portions with a more straightforward templating solution (as I already covered in the previous blog post). This is going to make content authoring easier and increase the number of content providers with cursory knowledge of HTML that will feel confident authoring and/or modifying something like Dust.js templates. Leave React.js for 10x developers working on the highly dynamic ‘app’ portions of the system. By the way, this is an area where micro-services shine – this kind of partitioning is one of their key selling points. You can easily have micro-services using Dust.js and micro-services using React.js (and as I have already shown, even mixed in the same Node app).

One of the downsides of a mixed system (one using both Dust.js and React.js) is that sometimes content pages have dynamic components sprinkled in them. The challenge is invoking those components without making your casual developers afraid to touch such pages. Invoking a React.js component in a Dust.js page would require inserting JavaScript tags, which is less than ideal. This is where Web Components are much easier to reason about, and there are already attempts to bridge the two worlds – invoking React.js components as custom Web Components.

And that’s a wrap

As before, you can browse the source code as IBM DevOps Services project, and the latest version of the app is running in Bluemix. In the final instalment of this trilogy, I will make our example a bit more dynamic (to let React.js show its true potential), and add some structure using Backbone.js. Until then, React away!

© Dejan Glozic, 2015

SoundCloud is Reading My Mind

Marvelous feats in mind reading, The U.S. Printing Co., Russell-Morgan Print, Cincinnati & New York, 1900

“Bad artists copy. Good artists steal.”

– Pablo Picasso

It was bound to happen. In the ultra-connected world, things are bound to feed off of each other, eventually erasing differences, equalizing any differential in electric potentials between any two points. No wonder the weirdest animals can be found on islands (I am looking at you, Australia). On the internet, there are no islands, just a constant primordial soup bubbling with ideas.

The refactoring of monolithic applications into distributed systems based on micro-services is slowly becoming ‘a tale as old as time’. They all follow a certain path which kind of makes sense when you think about it. We are all impatient, reading the first few Google search and Stack Overflow results ‘above the fold’, and it is no coincidence that the results start resembling majority rule, with more popular choices edging out further and further ahead with every new case of reuse.

Luke Wroblewski of Mobile First fame once said that ‘two apps do the same thing and suddenly it’s a pattern’. I tend to believe that people researching the jump into micro-services read more than two search results, but once you see certain choices appearing in, say, three or four stories ‘from the trenches’, you become reasonably convinced to at least try them yourself.

If you were so kind as to read my past blog posts, you know some of the key points of my journey:

  1. Break down a large monolithic application (Java or RoR) into a number of small and nimble micro-services
  2. Use REST API as the only way these micro-services talk to each other
  3. Use a message broker (namely, RabbitMQ) to apply the event collaboration pattern and avoid annoying inter-service polling for state changes
  4. Link MQ events and REST into what I call REST/MQTT mirroring to notify about resource changes

Then a blog post from SoundCloud came along, describing their own journey from a monolithic application to micro-services.

As I was reading the blog post, it got me giddy at the realization we are all converging on the emerging model for universal micro-service architecture. Solving their own unique SoundCloud problems (good problems to have, if I may say – coping with millions of users falls into such a category), SoundCloud developers came to very similar realizations as many of us taking a similar journey. I will let you read the post for yourself, and then try to extract some common points.

Stop the monolith growth

Large monolithic systems cannot be refactored at once. This simple realization about technical debt actually has two sub-aspects: the size of the system at the moment it is considered for a rewrite, and the new debt being added because ‘we need these new features yesterday’. As with real world (financial) debt, the first order of business is to ‘stop the bleeding’ – you want to stop new debt from accruing before attempting to make it smaller.

At the beginning of this journey you need to ‘draw the line’ and stop adding new features to the monolith. This rule is simple:

Rule 1: Every new feature added to the system will from now on be written as a micro-service.

This ensures that precious resources of the team are not spent on making the monolith bigger and the finish line farther and farther on the horizon.

Of course, a lot of the team’s activity involves reworking the existing features based on validated learning. Hence, a new rule is needed to limit this drain on resources to critical fixes only:

Rule 2: Every existing feature that requires significant rework will be removed and rewritten as a micro-service.

This rule is somewhat less clear-cut because it leaves some room for the interpretation of ‘significant rework’. In practice, it is fairly easy to convince yourself to rewrite features this way because micro-service stacks tend to be more fun, require fewer files and fewer lines of code, and are more suitable for today’s Web apps. For example, we don’t need too much persuasion to rewrite a servlet/JSP service in the old application as a Node.js/Dust.js micro-service whenever we can. If anything, we need to practice restraint and not fabricate excuses to rewrite features that only need touch-ups.

Micro-services as BBQ. Mmmmm, BBQ…

An important corollary of this rule is to have a plan of action ahead of time. Before doing any work, have a ‘cut of beef’ map of the monolith with areas that naturally lend themselves to be rewritten as micro-services. When the time comes for a significant rework in one of them, you can just act along that map.

As is the norm these days, ‘there’s a pattern for that’, and as SoundCloud guys noticed, the cuts are along what is known as bounded context.

Center around APIs

As you can read at length on the API evangelist’s blog, we are transforming into an API economy, and APIs are becoming a central part of your system, rather than something you tack on after the fact. If you could get by with internal monolith services in the early days, micro-services will force you to accept APIs as the only way you communicate both inside your system and with the outside world. As SoundCloud developers realized, the days of integration around databases are over – APIs are the only contact points that tie the system together.

Rule 3: APIs should be the only way micro-services talk to each other and the outside world.

With monolithic systems, APIs are normally not used internally, so the first APIs to be created are outward facing – for third party developers and partners. A micro-service based system normally starts with inter-service APIs. These APIs are normally more powerful since they assume a level of trust that comes from sitting behind a firewall. They can use proprietary authentication protocols, have no rate limiting and expose the entire functionality of the system. An important rule is that they should in no way be second-class compared to what you would expose to the external users:

Rule 4: Internal APIs should be documented and otherwise written as if they will be exposed to the open Internet at any point.

Once you have the internal APIs designed this way, deciding which subset to expose as the public API stops being a technical decision. Your external APIs look like internal ones, with the exception of stricter visibility rules (who can see what), rate limiting (with the possibility of a rate-unlimited paid tier), and an authentication mechanism that may differ from what is used internally.

Rule 5: Public APIs are a subset of internal APIs with stricter visibility rules, rate limiting and separate authentication.

SoundCloud developers went the other way (public API first) and realized that they could not build their entire system with the limitations in place for the public APIs, and had to resort to more powerful internal APIs. The delicate balance between making public APIs useful without giving out the farm is a decision every business needs to make in the API economy. Micro-services simply encourage you to start from internal and work towards public.


If there was a section in the SoundCloud blog post that made me jump with joy, it was the one where they discussed how they arrived at using RabbitMQ for messaging between micro-services, considering how I have written about that in every second post for the last three months. In their own words:

Soon enough, we realized that there was a big problem with this model; as our microservices needed to react to user activity. The push-notifications system, for example, needed to know whenever a track had received a new comment so that it could inform the artist about it. At our scale, polling was not an option. We needed to create a better model.


We were already using AMQP in general and RabbitMQ in specific — In a Rails application you often need a way to dispatch slow jobs to a worker process to avoid hogging the concurrency-weak Ruby interpreter. Sebastian Ohm and Tomás Senart presented the details of how we use AMQP, but over several iterations we developed a model called Semantic Events, where changes in the domain objects result in a message being dispatched to a broker and consumed by whichever microservice finds the message interesting.

I don’t need to say much about this – read my REST/MQTT mirroring post that describes the details of what SoundCloud guys call ‘changes in the domain objects result in a message’. I would like to indulge in a feeling that ‘great minds think alike’, but more modestly (and realistically), it is just common sense and RabbitMQ is a nice, fully featured and reliable open source polyglot broker. No shocking coincidence – it is seen in many installations of this kind. Let’s make a rule about it:

Rule 6: Use a message broker to stay in sync with changes in domain models managed by micro-services and avoid polling.
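To make rule 6 concrete, here is a minimal sketch using the amqplib module (the exchange name and event format are made up for illustration):

var amqp = require('amqplib');

// Publisher: a micro-service announces a domain model change.
amqp.connect('amqp://localhost').then(function(conn) {
  return conn.createChannel().then(function(ch) {
    return ch.assertExchange('events', 'topic', {durable: false}).then(function() {
      var event = {type: 'track.comment.added', trackId: 42};
      ch.publish('events', event.type, new Buffer(JSON.stringify(event)));
    });
  });
});

// Consumer: another micro-service reacts to the change instead of polling.
amqp.connect('amqp://localhost').then(function(conn) {
  return conn.createChannel().then(function(ch) {
    return ch.assertExchange('events', 'topic', {durable: false}).then(function() {
      return ch.assertQueue('', {exclusive: true}).then(function(q) {
        ch.bindQueue(q.queue, 'events', 'track.comment.*');
        ch.consume(q.queue, function(msg) {
          var event = JSON.parse(msg.content.toString());
          // e.g. notify the artist about the new comment on event.trackId
        }, {noAck: true});
      });
    });
  });
});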

All together now

Let’s pull all the rules together. As we speak, teams around the world are suffering under the weight of large unwieldy monolithic applications that are ill-fit for the cloud deployment. They are intrigued by micro-services but afraid to take the plunge. These rules will make the process more manageable and allow you to arrive at a better system that is easier to grow, deploy many times a day, and more reactive to events, load, failure and users:

  1. Every new feature added to the system will from now on be written as a micro-service.
  2. Every existing feature that requires significant rework will be removed and rewritten as a micro-service.
  3. APIs should be the only way micro-services talk to each other and the outside world.
  4. Internal APIs should be documented and otherwise written as if they will be exposed to the open Internet at any point.
  5. Public APIs are a subset of internal APIs with stricter visibility rules, rate limiting and separate authentication.
  6. Use a message broker to stay in sync with changes in domain models managed by micro-services and avoid polling.

This is a great time to build micro-service based systems, and collective wisdom on the best practices is converging as more systems are coming online. I will address the topic of APIs in more detail in one of the future posts. Stay tuned, and keep reading my mind!

© Dejan Glozic, 2014

Meanwhile, in Nodeland…

Hieronymus Bosch, “The Garden of Earthly Delights”, between 1480 and 1505.

In India, we don’t call it “Indian food”. We just call it “food”.

Rajesh Koothrappali, “The Big Bang Theory”

It was always hard for me to make choices. In the archetypical classification of ‘satisficers’ and ‘maximizers’, I definitely fall into the latter camp. I research ad nauseam, read reviews, measure carefully, take everything into account, and then finally make a move, mentally exhausted. Then I burst into a cold sweat of buyer’s remorse, or read a less than glowing review of the product I just purchased, and my happiness is diminished. It sucks to be me.

It appears it can be passed on as well. My 14-year-old daughter recently went to the mall to buy clothes (alone, a rite of passage of sorts). She returned empty-handed – ‘there was nothing nice’. I think we confirmed paternity right then and there.

Of course, this affliction carries over into my professional life. Readers who have followed me for a while can attest to my hemming and hawing about Node.js – should I stay or should I go, what if it is not ready, what if it turns out to be just a fad, what if I read a bad review after we port everything.

At least in my professional choices I appear to be more decisive than when looking at Banana Republic sweaters. After we jumped into Node.js waters in January this year, there was no turning back. It is almost April now and my mid-term report is due.

One of the beneficial things of ending the agony and giving Node a chance is that you stop worrying about HOW things are done and focus on GETTING them done. And once you do it, like Rajesh above, Node.js recedes into the background and the actual problems you are trying to solve become your focus (you are not solving problems in Node, you are just, you know, solving problems). Xenophobic people who get to know ‘the other’ are often surprised that under the surface of difference there is a sea of sameness – people around the world have similar worries, hopes, fears and joys. In Node, you still have to worry about security, i18n, performance, code structure, user experience, and Node is not the only player involved in this – just one tool in your toolbox.

One of the most important roles in converting an organization of any size to Node.js is that of influencers. People like myself who push for it, look for opportunities to get ‘camel’s nose under the tent’ (to borrow from Bill Scott), find a crevice where Node can squeeze and then expand like ice in the road potholes (I should know, I keep avoiding them this cold March in Toronto all the time). It is a startup of sorts, an inverted pyramid that can easily fall sideways if the founder flinches or loses hope and vision.

On the other hand, it can remind you of the locked potential of the people. There is a dark lake of passion in all of us, often trapped under a layer of dealing with day to day stress and ‘things that need to be done’. It is heart-warming to see that when you succeed in winning people over to the Node.js side, the passion bursts out and they jump in with enthusiasm and vigor that is great to experience.

Example: a member of the JazzHub team heard my pitch for Node.js and started experimenting with tutorials written in markdown. Like everything else, these tutorials are currently served using JSPs and client-side JavaScript. He wrote a small Node app that takes the tutorials, converts them to HTML using a nice little module and renders them using Dust.js. He did it in one afternoon and proudly demoed it the next time we met – it is one page of JavaScript – you don’t even need to scroll to see the entire code. Needless to say, he was hooked.

One of the most important jobs of an influencer is to prevent failure on stupid technicalities. People ready to yell ‘I told you Node was not ready’ are everywhere. Node is awesome, but we all know that a single uncaught exception can crash the process. A JEE container can throw exceptions like a leaky barge, filling up the logs, but it will stay up. In Node, it is your responsibility to prop Node up, a Weekend at Bernie’s of sorts. Judging by the frequency of questions about this behaviour, it is hard to stomach for people moving over from other platforms.

JEE is in fact not that different. When an uncaught exception is thrown in a simple Java program with ‘main’, it exits. When the same happens in a JEE container, the runnable running in a thread exits. The thread is reset, returned to the thread pool, and the next request is passed on to another thread. In a production system with Node.js, it is your responsibility to replicate this behaviour.

It is somewhat easier to end up with an uncaught exception in Node.js than it is in Java. Due to the asynchronous nature of Node, trying to surround a piece of code in a try/catch block is often futile. Exceptions can and will escape. This is such a tricky problem that it warranted the high priests of Node to huddle recently and release a detailed and super useful document that anybody trying to write production quality Node.js system should read and internalize.

It goes without saying then that you will never run ‘node myApp.js’ except during development. In production, you want to spread out to cover all the cores of the machine or VM you are running on. You want to restart the process when it exits (and it will, repeatedly). And finally, it would help to monitor CPU, memory and throughput of each running process. We are currently looking at PM2 for this, but as always with Node, there are other ways of addressing this need.
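For illustration, here is a minimal sketch of that responsibility using nothing but Node’s built-in cluster module (process managers such as PM2 do this, plus monitoring, for you):

var cluster = require('cluster');
var os = require('os');

if (cluster.isMaster) {
  // spread out to cover all the cores
  os.cpus().forEach(function() {
    cluster.fork();
  });
  // restart a worker whenever it exits (and it will, repeatedly)
  cluster.on('exit', function(worker) {
    console.log('worker ' + worker.process.pid + ' exited, restarting');
    cluster.fork();
  });
} else {
  require('./myApp.js'); // the actual express app
}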

There are other things we discovered while getting serious with Node. For example:

  1. WebSockets – one of the first things you will try to do after getting the CRUD down is server push – after all, Data Intensive Real Time (DIRT) is what initially catapulted Node into our consciousness. After you get the code working on your machine, inspect the entire path to the browser on your system – proxies are surprisingly cranky when it comes to support for Web Sockets. NGINX didn’t have it until version 1.3. If you are using Apache, note that it will not let the Web Socket connections through unless you install and configure mod_proxy_wstunnel. Of course, this is not Node-specific, but chances are you are most likely to hit it with Node because of its affinity to server push.
  2. DevOps – make them your friend. If you thought Continuous Integration and Continuous Deployment were a neat idea, you will find them absolutely indispensable once you start getting real with Node. A well established pattern here is a system composed of micro-services, and it is not uncommon to juggle dozens upon dozens of separate Node apps performing a role in the system. You will go mad trying to manage micro-services by hand. Of course, once you get your DevOps going, the ability to deliver new features into a micro-service with surgical precision will bring no end of joy, particularly if you are replacing a JEE pig that takes minutes to restart. Zero downtime is a reachable goal in a micro-service based system comprised of Node processes. In IBM Rational, we use Urban Code, which is natural to us since we bought them recently. Do what you need to do to go automatic.
  3. Storage – Node will make you re-evaluate the rest of your system. When I started this blog, I was characteristically wishy-washy about databases. With Node.js, we noticed that we naturally lean towards JSON-based NoSQL databases. Many people use MongoDB with success due to the smooth transition from SQL, but if you don’t like its scalability characteristics or your lawyers don’t like its AGPL license, CouchDB is an easy choice. IBM recently bought Cloudant (a hosted CouchDB fork with Apache Lucene tossed in) so it is hard for us to say ‘no’ to a free, Cloud-based and professionally managed NoSQL JSON DB. Again, YMMV – use what works for you, but give JSON-based NoSQL databases a try – going JavaScript all the way down is liberating.
  4. Authentication and Single Sign-On – if there is anything that will remind you of the Ford Model T (you could order it in any colour as long as it was black), it will be authentication and SSO in your system. Nothing will uncover unnecessary complexity of the authentication schemes used in the system like trying to move it to something that is not JEE-based. The complexity is normally hidden in the JEE filters, and is uncovered as soon as you try to replicate it in Node.js. It is technology agnostic as long as you use Java. The good thing is that with a sympathetic team on the other end, this serves as a catalyst to finally simplify and clean up the protocol or switch to something saner.
  5. Legal – if you are in an enterprise of any size, you will not be allowed to just use whatever you find on the Internet (even startups find software promiscuity to have side-effects as soon as they grow enough to attract legal attention). Our legal department probably hates us – the amount of Node.js modules they need to vet is staggering, but the mountain of requests is tapering down. For example, the Markdown app I mentioned earlier in the post needed to vet only the markdown module – all the other modules were already encountered by the apps before it. At this rate, the flow will slow down to a trickle soon.

So there you go – three months after our foray into Nodeland, we are lining up several new Node micro-services, the sky didn’t fall, we are learning new things every day and worry more about what we are building than how we are building it. Every once in a while we are forced to solve a problem for the first time. We agree on the best practice together, write it down for others coming after us, and move on. Where we can, we rely on the efforts of people before us, allowing suites such as Kraken.js to let us focus on what is really unique about our code, rather than the boilerplate. Our lawyers hate us, but if ‘tried and true’ was the only criterion when choosing our tools and languages, we would still be writing everything in Cobol.

If only I was this decisive choosing my last Home Theatre receiver.

© Dejan Glozic, 2014

Release the Kraken.js – Part I

Image credit: Yarnington 2011

Of course when you’re a kid, you can be friends with anybody. […] You like Cherry Soda? I like Cherry Soda! We’ll be best friends!

Jerry Seinfeld

We learned from Renée Zellweger that sometimes you can have people at “Hello”. In the case of myself and the project Kraken.js, it had me at “Node.js/express.js/Dust.js/jQuery/Require.js/Bootstrap”. OK, not fit for a memorable movie quote, but fairly accurate – that list of libraries and platforms was what we at JazzHub eventually arrived at as our new stack to write cloud development tools for BlueMix. Imagine my delight when all these checkboxes were ticked in a presentation describing PayPal’s experience in moving from Java to Node for their Lean UX needs, and announcing project Kraken. You can say I had my ‘Cherry Soda’ moment.

Of course, blessed with ADD as I am, I made a Flash-like scan of the website, never spending time to actually, you know, study it. Part of it is our ingrained fear and loathing of frameworks in general, and also our desire to start clean and very slowly add building blocks as we need them. This approach is also advocated by Christian Heilmann in his article ‘The Vanilla Web Diet’ in the Smashing Magazine Book 4.

Two things changed my mind. First, as part of NodeDay I had the pleasure of meeting and talking to Bill Scott, Senior Director of UX Engineering, Jeff Harrell, Director of Engineering Architecture, and Richard Ragan, Principal Engineer and ‘the Dust.js guy’, all from PayPal, and now feel retroactively bad for not giving the fruit of their labor more attention. Second, I had a chat with a friend from another IBM group also working on a Node.js project, and he said “If we were to start again today, I would definitely give Kraken.js a go – I wish we had it back then, because we had to essentially re-implement most of it”. Strike two – a blog post is in order.

So now with a combination of guilt, intra-company encouragement and natural affinity of our technologies, I started making inventory of what Kraken has to offer. At this point, hilarity ensued: the Getting Started page requires installation of Yeoman, and then the Kraken generator. Problem: there is no straightforward way to install Yeoman on Windows. I have noticed during a recent NodeDay that when it comes to Node, Macs outnumber Windows machines 99 to 1, but considering that Kraken is targeting enterprise developers switching to Node, chances are many of them are running Windows on their work machines. A Yeoman for Windows would definitely help spread the good word.

Update: it turns out I was working from outdated information – after posting the article, I tried again and managed to install Yeoman on a Windows 7 machine.

Using Yeoman is of course convenient to bootstrap your apps – you create a directory, type ‘yo kraken’, answer a couple of questions, and the scaffolding of your app is set up for you. At this point our team lead Curtis said ‘I would be fine with a zip file and no need to install Yeoman’. He would have been right if using Yeoman was a one-time affair, but it turns out that the Kraken Yeoman generator is a gift that keeps on giving. You can return to it to incrementally build up your app – you can add new pages, models, controllers, locales etc. It is a major boost to productivity – I can see why they went through the trouble.

Anyway, having switched to my MacBook Pro, I installed Yeoman, then the Kraken generator, typed ‘yo kraken’ and my app was all there. Once you start analyzing the app structure, you see the Kraken overhead – there are more directories, more files, and Kraken has decided on how things should be structured and tries to stick to it. On the surface of it, it is less clear for the beginner, but I am already past the pure express ‘Hello, World’ apps. I am interested in how to write real world production apps for the enterprise, and Kraken delivers. It addresses the following enterprise-y needs:

  1. Structure – app is clearly structured – models, controllers, templates, client and server side – it is all well labelled and easy to figure out
  2. i18n – Kraken is serious about enterprise needs, so internationalization is baked in right from the start
  3. Errors – it provides translated files for common HTTP errors (404, 500, 503) as a pattern
  4. Templates – Kraken uses Dust, which fits our needs like a glove, and the templates are located in the public folder so that they can be served to both Node.js and the browser, allowing template sharing. Of course, templates are pre-compiled into JS files for performance reasons (see the sketch below).
  5. Builds – Kraken uses Grunt to automate all the boring tasks of preparing your app for deployment, and also incorporates Mocha tests as a standard part. Having a consistent approach to testing (and the ability to incrementally add tests for new pages) will improve quality of your apps in the long run.
Kraken.js HelloWorld app directory structure
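To illustrate the template sharing from item 4 above, here is a minimal sketch of compiling and rendering a Dust template with the dustjs-linkedin module – in Kraken the compile step happens during the build, and the template source here is invented:

var dust = require('dustjs-linkedin');

// Compile the template source into JavaScript and register it under a name
// (Kraken does this at build time and writes the result to a *.js file)
var compiled = dust.compile('Hello {name}!', 'greeting');
dust.loadSource(compiled);

// The same registered template can now render on the server, or on the
// client if the compiled JS file is served to the browser
dust.render('greeting', { name: 'World' }, function (err, html) {
    if (err) { throw err; }
    console.log(html); // 'Hello World!'
});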

One thing that betrays that PayPal deals with financial transactions is how serious they are about security. Thanks to its Lusca module, Kraken bakes protection against the most prevalent types of attacks right into your app. All enterprise developers know this must be done before going into production, so why tack it on in a panicky scramble at the last minute when you can bake it into the code from the start?
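For a flavour of what Lusca brings, here is a hedged sketch of wiring it into a plain express app – Kraken itself enables these protections through configuration, so the option values below are illustrative rather than the exact defaults:

var express = require('express');
var lusca = require('lusca');

var app = express();
// Note: lusca's CSRF support assumes session middleware is already registered

app.use(lusca({
    csrf: true,                  // token-based protection against cross-site request forgery
    xframe: 'SAMEORIGIN',        // X-Frame-Options header against clickjacking
    hsts: { maxAge: 31536000 },  // HTTP Strict Transport Security, in seconds
    xssProtection: true          // X-XSS-Protection header for older browsers
}));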

Of course, the same goes for i18n – the Makara module handles translations. At this point I paused, looking at a familiar file format from the Java world – *.properties files. My gut reaction was to make all the supporting files JSON (and to Kraken authors’ credit, they did do so for all the configuration files). However, after thinking about the normal translation process, I realized that NLS files are passed to a translation team that is not necessarily very technical. Properties files are brain-dead simple – key/value pairs with comments. This makes them easy to author – you don’t want to receive JSON NLS files with missing quotes, commas or curly braces.

Makara is designed to work with Dust.js templates, and Kraken provides a Dust helper to inline locale-based substitutions into the template directly (for static cases), as well as an alternative for more dynamic situations.

As for the main file itself, it did give me pause because the familiar express app was nowhere to be found (I absent-mindedly typed ‘node app.js’, not realizing the app is really in index.js):

'use strict';

var kraken = require('kraken-js'),
    app = {};

app.configure = function configure(nconf, next) {
    // Async method run on startup.
    next(null);
};

app.requestStart = function requestStart(server) {
    // Run before most express middleware has been registered.
};

app.requestBeforeRoute = function requestBeforeRoute(server) {
    // Run before any routes have been added.
};

app.requestAfterRoute = function requestAfterRoute(server) {
    // Run after all routes have been added.
};

if (require.main === module) {
    kraken.create(app).listen(function (err) {
        if (err) {
            console.error(err.stack);
        }
    });
}

module.exports = app;

This is where the ‘framework vs library’ debate can be legitimate – in a true library, I would require express, dust etc. in addition to kraken, start the express server and somehow hook kraken up to it. I am sure the express server is somewhere under the hood, but I didn’t spend enough time to find it (Lenny Markus is adamant this is possible). This makes me wonder how I would approach using cluster with kraken – forking workers to use all the machine cores. Another use case – how would I use Socket.io here, given that Socket.io piggy-backs on the express server and reuses the port? I will report my findings in the second instalment.
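For the cluster question, the standard Node pattern would look something like the sketch below – whether kraken.create() behaves well inside a worker is exactly what I have not verified yet, so treat this as an assumption rather than a recipe:

var cluster = require('cluster');
var os = require('os');

if (cluster.isMaster) {
    // Fork one worker per core; each worker runs the kraken app
    os.cpus().forEach(function () {
        cluster.fork();
    });
    cluster.on('exit', function (worker) {
        console.log('worker ' + worker.process.pid + ' died, restarting');
        cluster.fork();
    });
} else {
    var kraken = require('kraken-js');
    var app = require('./index'); // the generated app shown above
    kraken.create(app).listen(function (err) {
        if (err) {
            console.error(err.stack);
        }
    });
}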

I would be remiss not to include Kappa here – it proxies the NPM repository, allowing control over what is pulled in via NPM, and provides for intra-enterprise sharing of internal modules. Every enterprise will want to reuse code but not necessarily immediately make it Open Source on GitHub. Kappa fits the bill, if only for providing an option – I have not tested it and don’t know if it would work in our particular case. If anything, it does not suffer from NIH – it essentially wraps the well-regarded npm-delegate module.

Let’s not forget builds: as part of scaffolding, Yeoman creates a Gruntfile.js configured to build your app and do an amazing amount of work you will get for free (a sketch of such a Gruntfile follows the list):

  1. Run JSHint to verify your JS files
  2. Build Require.js modules
  3. Run the LESS compiler on the CSS files
  4. Build locale-specific versions of your Dust templates based on i18n properties
  5. Compile Dust templates into *.js files for efficient use on the client
  6. Run Mocha tests
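A rough sketch of what such a Gruntfile amounts to is below – the plugins are common Grunt modules and the file paths are invented, so do not take this as the exact generated output:

module.exports = function (grunt) {
    grunt.initConfig({
        // Lint all server-side JavaScript
        jshint: { files: ['index.js', 'controllers/**/*.js', 'models/**/*.js'] },
        // Compile LESS into CSS
        less: { app: { files: { 'public/css/app.css': 'public/less/app.less' } } },
        // Pre-compile Dust templates into JS for the client
        dustjs: { compile: { files: { '.build/templates.js': ['public/templates/**/*.dust'] } } },
        // Run the Mocha test suite
        mochacli: { all: ['test/**/*.js'] }
    });

    grunt.loadNpmTasks('grunt-contrib-jshint');
    grunt.loadNpmTasks('grunt-contrib-less');
    grunt.loadNpmTasks('grunt-dustjs');
    grunt.loadNpmTasks('grunt-mocha-cli');

    grunt.registerTask('build', ['jshint', 'less', 'dustjs']);
    grunt.registerTask('test', ['jshint', 'mochacli']);
};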

I cannot stress this enough – after the initial enthusiasm with Node.js, you are back to doing all these boring but necessary tasks, and having them all hooked up in the new app allows teams to scale while retaining quality standards and consistency. Remember, as a resident Node.js influencer you will deal with many a well-intentioned but inexperienced Java, PHP or .NET developer – having some structural reinforcement of good practices is good insurance against everything unravelling into chaos. There is a tension here – you want to build a modern system based on Node.js micro-services, but you don’t want each micro-service to be written in a different way.

Kraken is well documented, and there are plenty of examples to help you understand how to do most of the usual real-world tasks. For an enterprise developer ready to jump into Node.js, the decision tree would look like this:

  1. If you prefer frameworks and like opinionated software for the sake of consistency and repeatability, use Kraken in its entirety
  2. If you are starting with Node.js, learn to write express.js apps first – you want to understand how things work before adding abstractions on top of them. Then, if you are OK with losing some control to Kraken, go to (1). Otherwise, consider continuing with your express app but cherry-picking Kraken modules as you go. To be fair, if Kraken is a framework at all (a very lightweight one as frameworks go), it falls into the class of harvested frameworks – something useful extracted out of the process of building applications. This is the good kind.
  3. If you ‘almost’ like Kraken but some details bother you or don’t quite work in your particular situation, consider participating in the Open Source project – helping Kraken improve is much cheaper than writing your own framework from scratch.

A friend of mine once told me (seeing me dragging my guitar for a band rehearsal) that she “must sit down one afternoon and learn how to play the guitar already”. I wisely opted out of such a mix of arrogance and cluelessness here and called this ‘Part I’ of the Kraken.js article. I am sure I will be able to get a better idea of its relative strengths and weaknesses after a prolonged use. For now, I wanted to share my first impressions before I lose the innocence and lack of bias.

Meanwhile, in the words of “the most interesting man in the world”: I don’t always use frameworks for Node.js development, but when I do, I prefer Kraken.js.

© Dejan Glozic, 2014

NodeDay 2014

‘Unicorns farting rainbows’ from Eran Hammer’s presentation ‘Node Inc.’, artwork by @ChrisMCarrasco

This post is the continuation of my trip report that started with part 1 on Node.js on the Road. NodeDay was organized and hosted by PayPal in their building in San Jose, CA, on Feb 28th 2014. The focus of the one-day conference was Node.js in the enterprise, which was a great fit for what we are now doing with Node.js in IBM Rational. Add in the opportunity to talk with Bill Scott, Richard Ragan and Jeff Harrell about their use of Dust.js in production, and it all added up to a promising day.


I drove down 101 South from San Francisco to my hotel in San Jose in a calm evening. Last thing I remember before I fell asleep was muted howling of unusually strong wind outside my hotel window. Then I woke up into a morning of storm reports, flash floods, mud slides and falling rocks throughout the region. I must have been very tired.

Luckily, this was largely over by the time I finished breakfast, and PayPal was only 2 miles away.

We got badges and bracelets at the entrance. Badges doubled as the agenda – simple and minimalistic, as you would expect from a Node.js event.


The event was kicked off by Bill Scott, Senior Director of UI Engineering, PayPal and our host for the day. The first keynote was by Eran Hammer, Senior Architect of Walmart Labs, who was singlehandedly responsible for pushing me off the fence and into Node.js development. The watershed event was Black Friday 2013, when 55% of Walmart traffic was from mobile devices, and handled by Node.js servers. The uneventful day for the pager-carrying developers cemented their trust that Node.js has ‘arrived’, resulting in Walmart doubling down on Node and spreading it to the rest of the enterprise.

Eran’s focus was on saving the typical enterprise Java developer from living a double life – passionately using Node at home, and wrestling with legacy projects behind the firewall at work (illustrated with a contrast between unicorns and rainbows at the top of the post, and the Intranet wall of dead puppies – see the rest of the presentation). A common myth is that Open Source is all done by little guys, individuals and startups – just technology and no politics. In reality, Open Source is sustained and flourishing in the enterprise – nobody is using Open Source more than the enterprise (and Walmart is not exactly a startup either). Why? Because people see the leverage that working for large companies can give them. Hence, we need more ‘Inc.’ in our Node – the community should be one, instead of enterprise people sitting off to the side.

We (the enterprise) should not feel like we are on the fringes. For example, Joyent has been the best supporter of the project, and one of the biggest consumers of Node. The business of paying for support for Open Source projects is still alive and valid. With NPM in the news recently, it is clear that profitability is not a dirty word; it is the only way to sustain the community. Issues such as private NPM repositories, so important for enterprises, were initially hard to take seriously, but they are obviously understood now, judging by the half dozen solutions already available. It shows that in many cases, building on what is already available and perhaps sponsoring/adopting core contributors is a cost-effective way to get things done and ensure critical support. Doing everything in-house is an anti-pattern.

Eran Hammer, Senior Architect – Walmart Labs

TJ Fontaine, Node.js Project Lead from Joyent mostly repeated his talk from ‘Node.js on the Road’, which was a good thing to do since there was no overlap in the crowd between the two events (not counting myself, of course :-). You can read my report in the previous post with no loss of data.

TJ Fontaine, Node.js Core Lead – Joyent and Bill Scott, Senior Director, UI Engineering – PayPal

The funniest part of TJ’s keynote is the extent he went to in order to ensure that he is not mixed up with the express.js author TJ Holowaychuk:

Adam Baldwin, Chief Security Officer – &yet, covered a touchy subject – Node.js security. With all the enthusiasm for the new way of working Node provides, the enterprise is very touchy about security, and it is a big topic when considering Node adoption. Adam suggests protecting what makes your money, and considering availability an aspect of security (if the app is down, who cares if it is secure). The process of how you handle a vulnerability is the key (but you will screw it up anyway).

When it comes to security, the nodejs-sec mailing list is important because all security-related announcements go there (very few so far, knock on wood). In reality, most of the issues will be in Node modules, not the core – security is built into the core, it is not a bolt-on module. The enterprise is responsible for what you require() – there has to be a process in place for vetting modules, and private NPM repositories. Linting is recommended (using npm install precommit-hook).

Security-related test cases are important (some orgs only write test cases for the happy path). One useful library that can help – retire.js (scans your web app or Node app for use of vulnerable JS libraries).

With all said and done, the greatest vulnerability in the enterprise?

Adam Baldwin, Chief Security Officer – &yet

Joe McCann, COO of The Node Firm, addressed the business case for Node. There are many signs of Node.js maturity – exponential growth of NPM, the number of books published, enterprises going into production with Node. He posited that a mission statement for Node should read: “An easy way to build fast and scalable applications”. As to ‘Why Now’, he suggested that legacy stacks are impeding innovation and making companies less competitive.

Joe suggested five business tenets when making the case for Node: innovation, productivity, developer joy, hiring/retaining talent and cost savings. He focused on a few success stories:

PayPal had velocity as the key driving point. Node provided a boost to the workflow, allowing faster iteration and innovation. After a long history of Java/C++, developers are energized by their jobs again. A frequently re-tweeted quote:

Groupon moved from a monolithic application that had become a burden and a blocker to a service-oriented architecture. Node.js now handles the same amount of traffic using less hardware (direct cost savings – music to every CFO’s ears). At the same time, page load times decreased by 50%. The use of a common layout service allowed them to retain the ability to make site-wide design changes quickly.

Walmart achieved Black Friday CPU utilization hovering around 1% (you want to be bored when on pager duty). In addition, the quality of candidates increased after the switch to Node.js – Node is a magnet for talent. This is important – Node could be a differentiator in hiring and retaining.

At Yahoo!, Node.js services handle 2M requests/minute, 200 developers write Node, and there are 500 internal and 800 external Node modules. Node brought speed and ease of development.

Back to the business tenets: innovation is helped by building things faster and more efficiently. Productivity is helped by terse syntax and the absence of thick layers of bureaucracy that slow developers down. “A productive developer is a happy developer”, which feeds developer joy and helps stability and retention. As for hiring/retaining talent, there is a magnetism to working on Node that is unique. Finally, cost savings are real and palpable: fewer developers writing apps in less time, using less hardware. Using Node in multiple projects multiplies the savings.

This topic generated a lively discussion, and a member of the audience claimed that similar enthusiasm was heard 10 years ago around Ruby on Rails. So what is different with Node? The consensus seems to be that since JavaScript is unavoidable on the client, using JavaScript ‘all the way down’ provides benefits of simplification and a ‘single stack’ that cannot be duplicated in other stacks.

The reality of enterprise architecture is that mixed stacks (Node.js + Java/PHP) are everywhere. Right now, front end in Node.js is a safe bet, but there is growth in systems around Node.

When asked to cover some failures (to balance the success stories), Sam from Amazon.com offered a counter-argument as to why some shops cannot get there just yet – some libraries simply do not exist yet (but time can help plug this gap). In the same vein, LinkedIn is still a huge Scala/Java shop, and in some use cases the Play framework is an easier way to move their developers forward than a massive switch to Node (LinkedIn still uses Node for their mobile services, and they are very fast).

The takeaway is that while Node.js is obviously at an inflection point and it is not hard to find success stories, it will never be a magic bullet that solves all problems. It is imperative to ensure that the problem to be solved is a good fit for Node.

Joe McCann, COO – The Node Firm

After lunch and an opportunity to socialize with actual people in the 3D world and full HD (what a concept!), Erik Toth, Principal Software Engineer at PayPal took us through the fuzzy and complicated world of moving a large developer workforce to Node.js. He provided a set of hard-won tips on how to handle resistance and problems along the road. First off, accepting that your rationale might be non-technical is healthy – Node is as much a way of life as a technology. There will be critics: they will parrot questions, represent opinions they’ve read as their own, use outdated information and micro-benchmarks, assume you have ulterior motives, and underestimate you (‘real engineers use something else’). They will assume that because they haven’t heard of something, it doesn’t exist – all of this without even looking at the technology.

The secret to overcoming this? Influencers. You need to have an advocate who absolutely believes in it, and things will start happening. Setting limits is key – focus, and avoid inventing problems just to have something to solve, solving other people’s problems, or solving already-solved problems. Only fix what is broken. For PayPal, it was primarily velocity and productivity, so there was a need to scale people, product and technology. This requires an ecosystem: GitHub, npm, tooling.

Erik pointed at silos as a serious impediment (enterprises can resemble Game of Thrones). They breed distrust, create shallow alliances based on selfish goals, and black-box teams and code. He suggested eliminating the backlog as a concept (the place where features go to die).

A key takeaway for individual developers?

Individuals rarely solve real problems – benefit from the work of others. This is not revolutionary, just harder in the enterprise. Finally, the quality of code should not depend on the access rights (“The only thing that should differentiate public and private modules is access rights”). Be proud of your code, expect copypasta, learn semver and teach others.

Oh, and the key takeaway from the Q/A: “We treated Node as something that will go away in 5 years, and we didn’t want to impose custom solutions to those that will come after us”.

Erik Toth, Principal Software Engineer – PayPal

Ian Livingstone, VP of Engineering of GoInstant, told a war story of using Node in two apps, and how the second time was informed by all the mistakes made in the first app. The first app was written at a time when Node was unstable. More importantly, a growing code base is a maintenance nightmare, which greatly underscores the importance of writing small modules instead of large monolithic apps (a common theme throughout NodeDay).

Ian offered many tips collected the hard way. For example, state must live in a class instance, not in modules. Furthermore, when a class has state, it must have both ‘initialize’ and ‘destruct’ methods. Another trick that helped was the use of JSON schema, which makes it easy to exhaustively test code. This includes schemas for requests and responses (validated fully only in development, and in a limited fashion in production for performance reasons).
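A minimal sketch of the ‘state lives in a class instance’ tip might look like this – the class name and the resource it manages are invented for illustration:

function SessionStore(options) {
    this.options = options;
    this.connection = null; // state lives on the instance, not at module level
}

SessionStore.prototype.initialize = function (cb) {
    // Acquire resources (connections, timers) here, not at require() time
    this.connection = { connected: true }; // stand-in for a real connection
    cb(null);
};

SessionStore.prototype.destruct = function (cb) {
    // Release everything so tests and restarts stay deterministic
    this.connection = null;
    cb(null);
};

module.exports = SessionStore;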

On the topic of UI testing, Ian pointed out that multi-user integration tests are required to exercise user interfaces. Selenium was an obvious first choice, but the tests ended up verbose and very fragile, and took a long time to run. NPM to the rescue: a tool called ‘Barista’ (an extension to Mocha) proved to be a better solution. Lesson: the power at your fingertips through the Node ecosystem is incredible.

The need to avoid one giant code base cannot be stressed enough: much like Java’s “God object”, responsibilities leaking between modules turned the code base into a giant ‘house of cards’ that was fragile and very hard to test, greatly diminishing trust in the code. Solution: small, simple and independent modules.

The second time around, applying all these hard-won lessons had a hugely positive effect – it changed the conversation. A small downside – longer install times via NPM (solved by using a proxy backed by S3). Node works better when it has one thing to do, favoring an architecture with many small and simple services. As a convenience, each service has a client library that wraps the transport concerns, making it easy to consume and test. Using upstart and supervisord provided for process lifecycle management.
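The ‘client library per service’ idea could be as simple as the sketch below – the internal host name and route are made up, and a real wrapper would also handle timeouts and non-200 responses:

var http = require('http');

// Callers get a plain function; the transport stays an implementation detail
exports.getProfile = function (userId, cb) {
    http.get('http://profile-service.internal/users/' + userId, function (res) {
        var body = '';
        res.on('data', function (chunk) { body += chunk; });
        res.on('end', function () {
            cb(null, JSON.parse(body));
        });
    }).on('error', cb);
};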

Takeaway – Node.js is power, and many problems have already been solved by the community.

Ian Livingstone, VP of Engineering – GoInstant

Trevor Norris, NodeJS Core Maintainer from Mozilla, took us on a journey much closer to the metal, under the layers of abstraction we are so used to in our daily work. The aim of the presentation was not to prove that all abstractions are bad, just to make their cost clear. He offered a number of performance tips tied to intimate knowledge of the V8 engine that powers Node (for example: functions longer than 600 characters, including comments, cannot be inlined; inlined functions must be declared close to the call site, etc.). This level of detail is not very practical for everyday Node developers writing modules higher up the stack, but it is a reality for Node Core developers, who must worry about the performance envelope of the bedrock on which everything else resides.

In that spirit, Trevor went on to give advice on the most obvious ‘sins of our abstractions’ – the fact that some operations like DNS and FS are in fact pseudo-async (they tie up a thread in the background) – and again sternly warned against the new execSync function, a very odd animal in the async world of Node.

Key takeaways: write code that you can maintain, and write code that doesn’t crash – the slowest code is the code that doesn’t run. And finally, code for your case: very few developers actually code with Node Core performance requirements in mind.

Trevor Norris, NodeJS Core Maintainer – Mozilla

Richard Rodger, Founder of NearForm, really drove home the point of keeping Node processes small (a motif of the entire conference). This ensures that services are easy to restart, which makes for a fault-tolerant system. With a number of services running in the system, a communication mechanism is required (messages). Processes should be small (on the order of 100 lines of code, a few screens of scrolling) so that they can fit inside your head. Frameworks are discouraged – use modules and composition.

A good message broker that supports pub/sub is very handy in this architecture – it makes the system loosely coupled and extendable. Micro-services also make rewrites cheap, avoiding project management ceremony. They localize bad coding choices, making them less of an issue because they are confined to a single micro-service.
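As a toy illustration of this message-oriented style, the sketch below uses an in-process EventEmitter as a stand-in for a real broker – the ‘salestax’ service and its message names are invented:

var EventEmitter = require('events').EventEmitter;
var broker = new EventEmitter(); // stand-in for a real pub/sub message broker

// A micro-service small enough to fit inside your head: it only knows about
// the messages it consumes and emits, not about its callers
broker.on('salestax:calculate', function (msg) {
    broker.emit('salestax:result', { total: msg.net * 1.13 }); // invented tax rate
});

// Some other service subscribes to the result
broker.on('salestax:result', function (msg) {
    console.log('Total with tax:', msg.total);
});

broker.emit('salestax:calculate', { net: 100 });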

Not all services are created equal – some are critical and require a lot of attention, some do not. By grouping services by quality, focus can be applied where it counts. As always, fixing a bug later in the cycle carries a higher cost, but with micro-services, the curve is less steep.

Richard provided some real-world examples of approaching a problem using micro-service architecture. The approach flows from user stories, to capabilities, services and messages. After that, decisions need to be made on how to run in production: what performance, scale, behavior and message transport are needed. Of course, production rollout need not be extensive – a good start can be to run all the micro-services on one machine and cluster only the ones that actually need it.

As a final comment, Richard agreed that micro-services can be built in any language – Node.js is simply a very suitable option, but not the only one.

Richard Rodger, Founder – NearForm
NodeDay audience – full house
Mr. JavaScript himself – Douglas Crockford, unaware of my paparazzi tendencies.

All in all, it was a well-organized event and an amazing opportunity to talk to a number of key people in the Node.js community in a single day. In effect, it was like its topic – I was able to talk to more people and learn more in a short period of time, with virtually no downtime – like a true Node app. Kudos to Bill Scott for being a gracious host, and to Lenny Markus for ensuring it all went without a hitch.

© Dejan Glozic, 2014