Node Summit 2015

NodeSummit panel with Foundation members.

What a difference a year makes. Astute readers of this blog may remember my sitting on the Node.js fence in the fall of 2013. I eventually jumped off the fence thanks to the wave of presentations at the last year’s NodeSummit indicating that Node.js was ready. During the ensuing year I have become a vocal supporter of Node.js-based micro-services for fun and profit. It is therefore with great anticipation that I flew to San Francisco earlier this week to attend this year’s NodeSummit.

One of the immediate delights of my visit was the fact that unlike last year, IBM’s presence was huge (‘platinum sponsor’ big enough for you?). There have been projects in IBM dabbling in Node.js for a while, and the IBM PaaS Bluemix has first-class support for running Node apps. More recently though, Node.js has become the go-to technology for every new IBM project I have seen, particularly the cloud-hosted ones (and most new projects fall into that category these days).

Last year’s theme was the claim that ‘Node.js is ready for the enterprise’ (once the Walmart memory leak had been fixed). The mood was one of hope and excitement, with a tinge of whistling in the dark. During the ensuing year, the pilot projects wrapped up, risk-averse enterprises convinced themselves that Node.js is just fine for production, and there was no need to convince anybody any more. This year, we moved on to the details and the logistics of doing it at a massive scale.

Of course, the highlight of the conference was the announcement of the formation of the Node.js Foundation, taking over stewardship of Node from its sole corporate sponsor Joyent (Joyent continues to be a member of the Foundation). It remains to be seen whether this move will heal the rift in the community caused by the fork of Node into io.js, but it does point at the maturation of the platform and addresses some of the key complaints regarding the control over its evolution.

Another piece of long-awaited news was the shipping of Node.js 0.12. Exactly a year ago, when I attended Node Day organized by PayPal, the then and still current Node.js team lead TJ announced that the shipping of 0.12 was ‘imminent’. Well, this was the longest ‘imminent’ anything in recent history, if that word is to retain any meaning. On the positive side, the shipping of 0.12 followed the passage of all the test suites on all the supported platforms, turning this into the new quality benchmark to satisfy for all future releases (and the coveted 1.0). It also underlines the difficult position the Node.js platform now finds itself in: it is built into the core of more and more production deployments, and under a lot of pressure to evolve while maintaining quality.

One of my favourite talks was by Fred Schott from Box about the realities of running Node.js in a real-world situation. It was heart-warming to follow performance tweaks all the way from mediocre to fast. It is a cautionary tale that, while Node.js is a wonderful platform with a potential for great performance, careless out-of-the-box apps may disappoint. Like any other platform, it requires tuning, profiling and (what a concept) a certain level of mastery. You cannot just copy a ‘Hello, World’ app from Stack Overflow and expect server-melting speeds.

Another personal highlight was that the old ‘server vs client’ rendering debate is alive and well. It was amusing for me to hear two young PayPal team members talk about ditching Node/Dust combo (a normal PayPal staple) for client side rendering using Angular. In my chat with them they revealed that they managed to wrestle decent performance from Angular on mobile only after a lot of work (see my comment about Box’s experience above). Funnily enough, in a later talk Peter Marton shared tricks on isomorphic Node.js apps, where the same template is reused on both sides of the divide (my preferred technique as well). It just shows that there is no right or wrong approach – whatever works in your particular situation.

If I had one complaint about the two days I spent at the conference, it was that it was somewhat light on hard-core technical content. The focus was on high-level panels, ‘this is how we migrated from the monolith to the Node micro-services’ talks and ‘look at all the boxes in our architecture’. Now that we have all congratulated ourselves for the foresight to support Node.js and turn it into such a world-class platform to write modern systems, we should probably take off our celebratory hats and dig into the details of hard core, production Node.

All in all, I enjoyed the summit but left with mixed emotions. Last year I was fully aware of doing familiar things in a new platform, and the most mundane tasks (‘look ma, I am setting a cookie using Node.js!’) felt new and exciting. I think we are leaving that ‘new car smell’ age and entering the period where Node.js will just dissolve into the background – become like air or water, and we will focus on ‘what’ we are doing, rather than the fact we are doing it in Node.js. It may become like (gasp) Java at some point in the future.

For some developers (see my post on another TJ) there is no fun in that (look, ma, I am setting a cookie using Go!), and restless Node.js committers eager to move fast and break things are already busy in the io.js repository. I on the other hand just want to build awesome apps. Node.js is already a great platform for me and as long as it remains stable, reasonably bug free and supported, I am happy. I am sure I share this with many enterprise Node.js enthusiasts.

© Dejan Glozic, 2015

Isomorphic Apps Part 1: Node, Dust, and Socket.io

Two-headed turquoise serpent. Mixtec-Aztec, 1400-1521. Held at British Museum. Credits: Wikimedia Commons.

On the heels of last week’s post on the futility of the server vs client side debate, here comes an example. I have wanted to do this for a long time, and now the progress of the Bluemix project I am working on has made it important to figure out isomorphic apps in earnest.

For those who have not read the often cited blog post by the Airbnb team, isomorphic apps blur the line between the client and the server by sharing code and templates. For this to work, it helps that both sides of the divide are similar in nature (read: can run JavaScript). Not that it is impossible to do in other stacks, but using Node.js on the server is the most direct path to isomorphism. JavaScript libraries are now written with the implicit assumption that people will want to run them in a Node app as well as in the browser.

Do I need isomorphic?

Why would we want to render on the server to begin with?

  1. We want to send HTML to the browser on first request, showing some content to the user immediately. This will help the perceived performance, since browsers are amazing at quickly rendering raw HTML, and we don’t have to wait for all the client side JavaScript to load before we can see anything.
  2. This HTML will also give something to the search engine crawlers to chew on and give you a decent SEO without a lot of effort.
  3. Your app will fit nicely into the Web as it was designed (a collection of linked pages).

Why would you want to do something on the client then?

  1. You want to provide nice interactive experience to the user – static documents (even those dynamically rendered on the server) are not a lot of fun beyond actual content.
  2. You want your page to respond to changes on the server (other users making changes that affect the content of your page) using Web Sockets.
  3. You want to provide features that involve a number of panels that need to flow like a native app, and don’t want to reload full pages for that.

How to skin this particular cat

Today we are not hurting for choices when it comes to libraries and frameworks for Web development. As a result, I decided to write a multi-part article covering some of those options, and allow you to choose what works in your particular situation.

We will start with the simplest way to go isomorphic – by simply exploiting the fact that many JavaScript templating libraries run on both sides of the network divide. In our current projects in IBM, dustjs-linkedin is our trusted choice – a solid library used by many companies and a pleasure to work with. It can be used for rendering views of Node/Express applications, but if you compile the template down to JavaScript, you can load it and render partials on the client as well.

The app

For this exercise, we will write a rudimentary Todo app, which is really just a collection of records we want to keep. There is already a proverbial TodoMVC app designed to test all the client side MVC frameworks known to man, but we want our app to store data on the server, and render the initial Todo list using Node.js, Express and Dust.js. Once the list arrives at the client, we want to be able to react to changes on the server, and to add new Todos by entering them on the client. In both cases, we want to render the new entries on the client using the same templates we used to render the initial list.

Since we will want to use the REST API (folded into the same app for simplicity) as the single source of truth, we will use Socket.io library to build a MVC-CV app (full MVC on the server, only the controller and the view on the client). The lack of the client model means that when we make changes to the server model through the REST API, we will rely on Socket.io to communicate with the client side controller and update the client view. With a full client side MV*, client side model would be updated immediately, followed by the asynchronous reconciliation with the server. This approach provides for immediacy and makes the application feel snappy, at the expense of the possibility that a seemingly successful operation eventually fails on the server. Mobile app developers prefer this tradeoff.

In order to make Todos a bit more fun, we will toss in Facebook-based authentication so that we can have a user profile and store Todos for each user separately. We will use the Passport module for this. For now, we will use jQuery and Bootstrap to round out the app. In future instalments, we will get progressively fancier with the choices.
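
For reference, wiring up Facebook login with Passport typically looks something like the sketch below. This is an illustration only – the environment variable names, callback URL and route wiring are placeholders, and ‘app’ is the Express application:

var passport = require('passport');
var FacebookStrategy = require('passport-facebook').Strategy;

passport.use(new FacebookStrategy({
    clientID: process.env.FB_CLIENT_ID,        // hypothetical configuration values
    clientSecret: process.env.FB_CLIENT_SECRET,
    callbackURL: '/auth/facebook/callback'
  },
  function (accessToken, refreshToken, profile, done) {
    // keep it simple: the Facebook profile doubles as our user object
    done(null, profile);
  }
));

// routes that start and complete the OAuth dance
app.get('/auth/facebook', passport.authenticate('facebook'));
app.get('/auth/facebook/callback',
  passport.authenticate('facebook', { successRedirect: '/', failureRedirect: '/login' }));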

Less talking, more coding

We will start by creating a Dust.js partial to render a single Todo card:

<div class="todo">
   <div class="todo-image">
      <img src="{imageUrl}">
   </div>
   <div class="todo-content">
      <div class="todo-first-row">
         <span class="todo-user">{userName}</span>
         <span class="todo-when" data-when="{when}">{whenText}</span>
      </div>
      <div class="todo-second-row">
         <span class="todo-text">{text}</span>
      </div>
   </div>
</div>

As long as the variables it needs are passed in as a dictionary, the Dust core library can render this template in Node.js or the browser. In order to be able to load it, we need to compile it down to JavaScript and place it in the ‘public/js’ directory:

#!/bin/bash          
dustc -name=todo views/todo.dust public/js/todo.js

We can now create a page where todos are rendered on the server as a list, with a text area to enter a new todo, and a ‘Delete’ button to delete them all:

<h1>Using Dust.js for View</h1>
      
<h2>Todos</h2>
<div class="new">
  <textarea id="new-todo-text" placeholder="New todo">
  </textarea>
</div>
<div class="delete">
  <button type="button" id="delete-all" class="btn btn-primary">Delete All</button> 
</div>
<div class="todos">
  {#todos}
    {>todo todo=./}
  {/todos}
</div>

You will notice in the snippet above that we are now inlining the partial we have defined before (todo). The collection ‘todos’ is passed to the view by the server side controller, which obtained it from the server side model.

The key for interactivity of this code lies in the JavaScript for this page:

<!-- the Socket.io client is served automatically by the Socket.io server -->
<script src="/socket.io/socket.io.js"></script>
<script src="http://cdn.jsdelivr.net/dustjs/2.4.0/dust-core.js"></script>
<script src="/js/todo.js"></script>

<script>
  var socket = io.connect('/');
  socket.on('todos', function (message) {
    if (message.type=='add') {
      dust.render("todo", message.state, function(err, out) {
        $(".todos").prepend(out);
      });
    }
    else if (message.type=='delete') {
      $('.todos').empty();
    }
  });
      
  $('#delete-all').on('click', function(e) {
    $.ajax({url: "/todos", type: "DELETE"});
  });
 
  $('#new-todo-text').keyup(function (e) {
    var code = (e.keyCode ? e.keyCode : e.which);
    if (code == 13) {
      e.preventDefault();
      $.post("/todos", { text: $('#new-todo-text').val() });
      $('#new-todo-text').val('');
    }
  });
</script> 

New todos are created by capturing the Enter key in the text area and posting the todo using the POST /todos endpoint. Similarly, deleting all todos is done by executing a DELETE /todos Ajax call.

Notice how we don’t do anything else here. We let the REST endpoint execute the operation on the server and send an event using Web Sockets. When we receive the message on the client, we update the view. This is the CV part of MVC-CV architecture that we just executed. The message sent via Web Sockets contains the state of the todo object that is passed to the Dust renderer. The output of the todo card rendering process is simply prepended to the todo list in the DOM.

REST endpoint and model

On the server, our REST endpoint is responsible for handling requests from the client. Since we are using Passport for authentication, the requests arrive at the endpoint with the user object attached, allowing us to execute the endpoints on behalf of the user (in fact, we will return a 401 if there is no user info).
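
The 401 part can be handled by a tiny Express middleware guard placed in front of the endpoints. Here is a minimal sketch (the route wiring and names are illustrative, not the project’s actual code):

function ensureAuthenticated(req, res, next) {
  // Passport adds req.isAuthenticated() once a session has been established
  if (req.isAuthenticated()) {
    return next();
  }
  res.sendStatus(401);
}

var todos = require('./routes/todos');
app.get('/todos', ensureAuthenticated, todos.get);
app.post('/todos', ensureAuthenticated, todos.post);
app.delete('/todos', ensureAuthenticated, todos.delete);

With the guard in place, the endpoint module itself is straightforward: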

var model = require('../models/todos');

module.exports.get = function(req, res) {
  model.list(req.user, function (err, todos) {
    // send the list as JSON; status(200).json() also ends the response
    res.status(200).json(todos);
  });
};

module.exports.post = function(req, res) {
  var body = req.body;
   
  model.add(req.user, body.text, function (err, todo) {
    res.sendStatus(201);
    res.end();
    _pushEvent("add", req.user, todo);
  });
};

module.exports.delete = function(req, res) {
  model.deleteAll(req.user, function(err) {
    res.sendStatus(204);
    res.end();
    _pushEvent("delete", req.user, {});
  });
};

function _pushEvent(type, user, object) {
  var restrictedUser = {
    id: user.id,
    name: user.displayName
  };
  var message = {
    type: type,
    state: object,
    user: restrictedUser
  };
  exports.io.sockets.emit("todos", message);
}
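
The ‘exports.io’ reference above implies that the Socket.io instance is injected into the routes module by the main application. One way this could be wired up (a sketch, not the exact code from the project):

var express = require('express');
var http = require('http');

var app = express();
var server = http.createServer(app);
var io = require('socket.io')(server);   // Socket.io 1.x attaches to the HTTP server

var todos = require('./routes/todos');
todos.io = io;                           // makes 'exports.io' available inside the module

server.listen(process.env.PORT || 3000);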

We are more or less delegating the operations to the model object, and firing events for verbs that change the data (POST and DELETE). The model is very simple – it uses lru-cache to store data (configured to handle 50,000 users, with a TTL of 1 hour before entries are evicted). This is good enough for a test – in the real world you would hook up a database here.

var LRU = require("lru-cache")
, options = { max: 50000
            , length: function (n) { return 1 }
            , maxAge: 1000 * 60 * 60 }
, cache = LRU(options)
;

module.exports.add = function (user, text, callback) {
  var todo = {
    text: text,
    imageUrl: "https://graph.facebook.com/"+user.id+"/picture?type=square",
    userName: user.displayName,
    when: Date.now()
  };
  var model = cache.get(user.id);
  if (!model) {
    model = { todos: [todo] };
  }
  else {
    model = JSON.parse(model);
    model.todos.splice(0, 0, todo);
  }
  cache.set(user.id, JSON.stringify(model));
  callback(null, todo);
};

module.exports.list = function(user, callback) {
  var model = cache.get(user.id);
  if (model)
    model = JSON.parse(model);
  var todos = model?model.todos:[];
  callback(null, todos);
};

module.exports.deleteAll = function(user, callback) {
  cache.del(user.id);
  callback(null);
};

The entire example is available as a public project on IBM DevOps Services. You can clone the Git repository and play on your machine, or just click on Code and inspect it in the Web IDE directly.

The app is currently running on Bluemix – log in using your Facebook account and give it a spin.

Commentary and next steps

This was the simplest way to achieve isomorphism. It has its downsides, among them the lack of immediacy caused by the missing client side model, but it is blessed by complete freedom from client side frameworks (jQuery and Bootstrap notwithstanding). In part 2 of this post, I will insert Backbone on the client. Since it has support for models, collections and views, it is a particularly good choice for gradually evolving our application (AngularJS would require a complete rewrite, whereas Backbone can reuse our Dust.js template for the View). Also, as frameworks go, it is tiny (~9K minified and gzipped).

Finally, in part 3, we will swap Dust.js for React.js in the Backbone View implementation, just to see what all the fuss is about. Now you realize why I need to do this in three parts – so many frameworks, so little time.

© Dejan Glozic, 2015

Should I Build a Site or an App? Yes!

Minnesota State Capitol Woodworkers Toolbox, circa 1900, Wikimedia Commons.

Yes, I know. I stopped blogging to take a desperately needed break. Then I returned only to be hit with a mountain of fresh, ‘hit the ground running’, honest to God January work that knocked the air out of my lungs and pinned me down for a while. Then an IBM colleague tried to ask me a Dust.js question, my doors were closed due to a meeting, and he found his answer in one of my blog posts.

So my blog is actually semi-useful, but it will stop being so without new content, so here is the first 2015 instalment. It is about one of my favorite hobbies – being annoyed with people being Wrong on the Internet. Judging by various discussion threads, developers are mostly preoccupied by these topics:

  1. All the reasons why AngularJS is awesome/sucks and will be the next jQuery/die in agony when 2.0 ships (if it ever ships/it will be awesome/cannot wait).
  2. Picking the right client side MVC framework (lots of people out there frozen into inaction while looking at the subtle differences of TODO app implementations in 16 different incarnations)
  3. Declaring client side single-page apps ‘the cool way’ and server side rendering ‘the old way’ of Web development

These topics are all connected, because if you subscribe to the point of view in (3), you either pray at the church of AngularJS (1) or you didn’t drink the Kool-Aid and subsequently need to pick an alternative framework (2).

Dear fellow full-stack developers and architects, that’s pure nonsense. I didn’t put an image of a toolbox at the top because @rands thinks it will nicely fit Restoration Hardware catalog. It is a metaphor of all the things we learn along the way and stash in our proverbial tool box.

Sites and apps

The boring and misleading ‘server or client side apps’ discussion has its origin in the evolution of Web development. The Web started as a collection of linked documents with a strong emphasis on indexing, search and content. Meanwhile, desktop applications were all about programming – actions, events, widgets, panes. Managing content in desktop apps was not as easy as on the Web. On the flip side, having application-like behaviour on the Web was hard to achieve at first.

When Ajax burst onto the scene, this seemed possible at last, but many Ajax apps were horrible – they broke the Back button, didn’t respect the Web, were slow to load due to tons of JavaScript (the dreaded blank page), and the less I say about hashes and hash bangs in URLs, the better.

It is 2015 now and the situation is much better (and at least one IBM Fellow concurs). Modern Ajax apps are created with more predictable structure thanks to the client side MV* frameworks such as BackboneJS, AngularJS, EmberJS etc. HTML5 pushState allows us to go back to deep linking. That still does not mean that you should use a hammer to drill a hole in the wall. Right tool for the right job.

And please don’t look at native mobile apps in envy (they talk to the server using JSON APIs only, I should do that too). They are physically installed on the devices, while your imposter SPA needs to be sent over mobile networks before anything can be seen on the screen (every bit of your overbuilt, 1MB+ worth of JavaScript fatness). Yes, I know about caching. No, your 1MB+ worth of JavaScript still needs to be parsed every time with the underpowered JavaScript engine of the mobile browser.

But I digress.

So, when do you take out site tools instead of Web app tools? There are a few easy questions to ask:

  1. Can people reach pages of your app without authenticating?
  2. Do you care about search engine optimization of those pages? (I am curious to find people who answer ‘No’ to this question)
  3. Are your pages mostly linked content with a little bit of interactivity?

If this describes your project, you would be better off writing a server-side Web app (say, using Node.js, Express and a rendering engine like Handlebars or Dust.js), with a bit of jQuery and Bootstrap with a custom theme to round things out.
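
As an illustration, a minimal app of that kind could look like the sketch below. It assumes the adaro Dust.js view engine for Express; the view name and port are made up:

var express = require('express');
var adaro = require('adaro');            // one of the available Dust.js view engines for Express

var app = express();
app.engine('dust', adaro.dust());
app.set('view engine', 'dust');
app.set('views', __dirname + '/views');

// a plain content page, rendered entirely on the server
app.get('/', function (req, res) {
  res.render('home', { title: 'Welcome' });
});

app.listen(process.env.PORT || 3000);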

Conversely, these may be the questions to ask if you think you need a single-page app:

  1. Do people need to log in in order to use my site?
  2. Do I need a lot of complex interactive behaviour with smooth transition similar to native apps?
  3. Do I expect users to spend a lot of time in my app doing something creative and/or collaborative?

What if I need both?

Most people actually need both. Your site must have a landing page, some marketing content, documentation, support – all mostly content based, open to search engine crawlers and must be quick to download (i.e. no large JS libraries please).

Then there is the walled up section where you need to log in, and then interact with stuff you created. This part is an app.

The thing is, people tend to think they need to pick an approach first, then do everything using that single approach. When site people and app people argue on the Internet, they sound to me like Abbott and Costello’s ‘Who’s on First?’ routine. Site people want the home page to be fast, and don’t want to wait for AngularJS to download. They also don’t want content people to learn Angular to produce new pages. App people shudder at the thought of implementing all the complex interactions by constantly redrawing the entire page (sooner or later Web 1.0 is mentioned).

The thing is, they are both right and wrong at the same time. It may appear they want to have their cake and eat it too, but that is fairly easy to do. All you need to do is apply some care in how your site is structured, and give up on the ideological prejudice. Once you view server and client side techniques as mere tools in the toolbox, all kinds of opportunities open up.

Mixing and matching

The key in mixing sites and apps is your navigational structure. Where SPA people typically lose it is when they assume EVERYTHING in their app must be written in their framework of choice. This is not necessary, and most frameworks are embeddable. If you construct your site navigation using normal deep links, you can construct your navigational areas (for example, your site header) on the server and just use these links as per usual. Your ‘glue’ navigational areas should not be locked in the client side MV* component model because they will not work on the server for the content pages.

What this means is that you should not write your header as an Angular directive or a jQuery plug-in. Send it as plain HTML from the server, with some vanilla JavaScript for dynamic effects. Keep your options wide open.

For this to work well, the single page apps that are folded into this structure need to enable HTML5 mode in their routers so that you can transparently mix and match server and client side content.
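
In Angular 1.x terms, that boils down to a router configuration along these lines (a sketch – the module name, routes and templates are illustrative):

angular.module('seedApp', ['ngRoute'])
  .config(['$locationProvider', '$routeProvider',
    function ($locationProvider, $routeProvider) {
      // real URLs instead of hashes; needs a <base href="/angular-seed/"> tag in the page
      $locationProvider.html5Mode(true);
      $routeProvider
        .when('/view1', { templateUrl: 'partials/view1.html' })
        .when('/view2', { templateUrl: 'partials/view2.html' })
        .otherwise({ redirectTo: '/view1' });
  }]);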

Now add micro-services and stir for 10 minutes

To make things even more fun, these links can be proxied to different apps altogether if your site is constructed using micro-services. In fact, you can create a complex site that mixes server-side content with several SPAs (handled by separate micro-services). This is the ultimate in flexibility, and if you are careful, you can still maintain a single site experience for the user.

To illustrate the point, take a look at the demo I have created for the Full Stack Toronto conference last year. It is still running on Bluemix, and the source code is on GitHub. If you look at the header, it has several sections listed. They are powered by multiple micro-services (Node apps with Nginx proxy in front). It uses the UI composition technique described in one of the previous posts. The site looks like this when you click on ‘AngularJS’ link:

fsto-angular

The thing is, this page is really a single-page app folded in, and a NodeJS micro-service sends AngularJS content to the browser, where it takes over. In the page, there are two Angular ‘pages’ that are selectable with two tabs. Clicking on the tabs activates Angular router with HTML5 mode enabled. As a result, these ‘pages’ have normal URLs (‘/angular-seed/view1’ and ‘/angular-seed/view2’).

Of course, when clicking on the links in the browser, Angular router will handle them transparently, but if you bookmark the deep URL and paste in the browser address bar, the browser will now hit the server first. The NodeJS service is designed to handle all links under /angular-seed/* and will simply serve the app, allowing Angular router to take over when loaded.
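
On the server, ‘handle all links under /angular-seed/*’ is just a catch-all Express route that returns the same SPA shell for any deep link, letting the Angular router resolve the actual view once loaded (a sketch with a made-up view name):

app.get('/angular-seed/*', function (req, res) {
  // same page for every deep link; the client side router takes it from here
  res.render('angular-seed', { title: 'AngularJS' });
});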

The really nice thing is that Angular SPA links can sit next to links such as ‘About’ that are a plain server-side page rendered using express and Dust.js. Why wrestle with Angular when a straightforward HTML page will do?

Floor wax and dessert topping

There you go – move along, nothing to see here. There is no point in wasting time on Reddit food fights. A modern Web project needs elements of server and client side approaches because most projects have heterogeneous needs. Once you accept that, real fun begins when you realize you can share between the server and the client using a technique called ‘isomorphic apps’. We will explore these techniques in one of the future posts.

© Dejan Glozic, 2015

The Genius of Bootstrap (OK, and Foundation)

Credit: Carlos Paes, 2005, Wikimedia Commons

This week we spent a lot of time sifting through the available options for the client side Web component model. We were doing it in the context of figuring out what to use for the next generation of Bluemix, so we were really trying to think hard and strategically. It is a strange time to do this. Web Components are so close you can touch them (on Chrome at least), but the days when you can code against the entire standard and not bat an eyelash are still further into the future than we would have liked (the same can be said for ES6 – the future is going to be great, just wait a little longer).

You must believe

At its core, the Web is based on linked documents. That didn’t change over all these years, no matter how much exciting interactive stuff we managed to cram on top. In fact, when people fond of the founding principles cry ‘don’t break the Web’, they mostly rail against approaches that create black holes in the Web universe – domains where rules of the Web such as the ability to crawl the DOM, follow the links and use browser history stop applying.

By and large, a Web document is consumed as a whole by the browser. There is no native HTML component model, at least not in the way there is for CSS and JavaScript. It is possible to include any number of modular CSS files, and any number of individual JavaScript libraries (not that it is particularly healthy for your performance). Not so for your markup – in fact browsers are positively hostile to content coming from other places (I don’t blame them, because security).

In that climate, any component model so far was mounted on top of a library or framework. Before you can use jQuery widgets, you need jQuery to provide the plug-in component model. All the solutions to date were necessarily two-part: first you buy into a particular buffet table aka proprietary component model, then you can fill up your plate from the said buffet. This is nerve-racking – you must pick the particular model that you think will work for you and stay with your project long enough (and be maintained in the future). Rolling a complete set of building blocks on your own is very expensive, but so is being locked into a wrong library or framework.

Client side only

Another problem that most of the usual offerings share is that they are unapologetically client side. What this means is that a typical component will provide some dummy content such as ‘Please wait…’ if it shows content, or nothing if it is a ‘building block’ widget of some kind. Only after JavaScript loads will it spring to life, which is to say, show anything useful. Widgets that are shown on user input (the calendar picker being the quintessential example) suffer no ill consequences from this approach, but if you put client-side-only widgets on the main page, your SEO, performance and user experience will suffer.

Whether this is of importance to you depends on where you stand on the ‘server vs client side’ religious war. Twitter made it very clear that loading JavaScript, making an XHR request back to the mother ship for data, and then rendering the data on the client is not working for them in their seminal 2012 blog post. I am against it as well, as we were bitten hard with bad initial performance of large JavaScript SPAs. YMMV.

Hidden DOM

Web Components as a standard bring another thing to the table: hidden DOM. When you add a custom component to your page, the buck stops at the component boundary – parent styles will not leak into the component, and DOM queries will not include elements inside the custom component. This yields vital encapsulation currently possible only using iframes, with all the nastiness they bring to the table. However, it also makes it hard to style and provide initial state of the components while rendering the page on the server.

In theory, Node.js may allow us to run JavaScript on the server and construct the initial content (again, a theory, I am not sure it is actually possible without ugly hacks). Even if possible, it would not work for other server stacks. Essentially Web Components want you to just drop the component in your markup, set a few properties and let it do its stuff, which in most cases means ‘nothing’ until JavaScript for the component loads.

Model transfiguration

One of the perennial problems of starting your rendering on the server and resuming on the client is model transfer. You had to do some work on the server to curate data required to render the component’s initial state. It would be a waste to discard this data and let the JavaScript for the component go through the same process again when loaded. There are two different approaches to this:

  1. Model embedding – during server side rendering, nuggets of data are embedded in the markup using HTML5 data-* properties. Client side JavaScript uses these nuggets to reconstruct the model without the need to make network requests (a minimal sketch of this approach follows the list).
  2. Model bootstrapping – this approach is used by some MV* frameworks (e.g. BackboneJS). You can construct your component’s model, use it to render on the server, then inline the model as text in HTML to be eval-ed on the client. The result is the same – model is ready and does not need to be synced with the server, necessitating a network request.
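
Here is what model embedding might look like in practice (the markup and property names are illustrative):

// server-rendered markup carries the data nuggets:
//   <div class="todo" data-id="42" data-when="1422921600000">...</div>
// client side: rebuild the model from the embedded attributes, no extra network request needed
var todos = $('.todo').map(function () {
  return {
    id: $(this).data('id'),
    when: $(this).data('when')
  };
}).get();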

Enter Bootstrap

Our experience with proprietary web components was mostly with Dojo/Dijit, since IBM made a sizeable investment in this open source library and a ton of products were written using it. It has all the characteristics of a walled garden – before you sample from its buffet (Dijit), you need to buy into the widget component model that Dojo Core provides. Once you do it, you cannot mix and match with Prototype, YUI, or jQuery UI. This is not an exclusive fault of Dojo – all JavaScript component models are like this.

Remember when I told you how Twitter wanted to be able to send something from the server ready for the browser to consume? When we first discovered Bootstrap, we were smitten by its approach. We went looking for the proprietary widget system to which we would have to sell our souls, and failed to find one (in fact, the Bootstrap creator Mark Otto has expressed open distaste for components that require extensive JavaScript).

Consider:

  1. There is no hidden DOM. There is just plain HTML that is styled by Bootstrap CSS.
  2. This HTML can arrive from the server, or can be dynamically created by JavaScript – no distinction.
  3. Behaviour is added via jQuery plug-ins.
  4. Plug-ins look for Bootstrap components in the DOM and attach event listeners, and start the dynamic behaviour (e.g. Carousel).
  5. The data needed by JavaScript is extracted from ‘data-*’ properties in HTML, and can be programmatically modified once JavaScript loads (model embedding, remember?).

Considering Twitter’s blog post on server side rendering, it is no wonder Bootstrap is incredibly easy to put to use in such a context. You don’t pass a list of entries to the ‘menu’ component, only to be turned into a menu when JavaScript loads. Your menu is simply an ‘ul’ element, with menu items being ‘li’ elements that are just styled to look like a menu. Thanks to CSS3, a lot of animation and special effects are provided natively by the browser, without the need for custom JavaScript to slow down your page. As a result, Bootstrap is really mostly CSS with a sprinkling of JavaScript for behavior (no surprise because it grew out of Twitter’s style guide document).

<div class="dropdown">
   <button class="btn btn-default dropdown-toggle" type="button" id="dropdownMenu1" data-toggle="dropdown" aria-expanded="true">
      Dropdown
      <span class="caret"></span>
   </button>
   <ul class="dropdown-menu" role="menu" aria-labelledby="dropdownMenu1">
      <li role="presentation"><a role="menuitem" tabindex="-1" href="#">Action</a></li>
      <li role="presentation"><a role="menuitem" tabindex="-1" href="#">Another action</a></li>
      <li role="presentation"><a role="menuitem" tabindex="-1" href="#">Something else here</a></li>
      <li role="presentation"><a role="menuitem" tabindex="-1" href="#">Separated link</a></li>
   </ul>
</div>
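
For completeness, here is the jQuery plug-in side of the same example. Bootstrap’s data-api attaches the behaviour automatically via the data-toggle attribute, but it can also be invoked programmatically (a small sketch):

// activate the dropdown plug-in by hand instead of relying on the data-api
$('#dropdownMenu1').dropdown();

// the embedded data-* properties are available to JavaScript as well
console.log($('#dropdownMenu1').data('toggle'));   // "dropdown"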

How important this is for your use case depends on the components. Building block components such as menus, nav bars, tabs, containers, carousels etc. really benefit from server-side construction because they can be immediately rendered by the browser, making your page feel very snappy and immediately useful. The rest of the page can be progressively enhanced as JavaScript arrives and client-side-only components are added to the mix.

If server side is not important to you, Web Components custom element approach seems more elegant:

<fancy-dropdown></fancy-dropdown>

The rest of the markup visible in Bootstrap example is all in the hidden DOM. Neat, except if you want something rendered on the server as well.

Truth be told, it seems to be possible to create Web Components that act similarly to Bootstrap components. In fact, there is a demo showing a selection of Bootstrap components re-imagined as custom elements. I don’t know how real or ‘correct’ this is, just adding it to the mix for completeness. What is not clear is whether this is merely possible or actually encouraged for all custom element creators.

Haters gonna hate

Bootstrap is currently in its third major version and has been immensely popular, but for somewhat different reasons than I listed here. It comes with a very usable, fresh and modern looking theme that many developers use as-is, never bothering to customize. As a result, there are many cookie-cutter web sites out there, particularly if put together by individuals rather than brand-sensitive corporations and startups.

This has created a massive wave of hate from designers. In the pre-Bootstrap days, developers normally could not design if their life depended on it, putting designers on the critical path for every single UI. Now, most internal, prototype and throwaway apps and sites can look ‘good enough’, freeing up designers to focus on big, long running projects and clients looking to impart their own ‘design language’ on their properties.

I would claim that while Bootstrap as-is may not be suitable for a real professional product, the Bootstrap approach is something that should not be thrown out with the bathwater. I know that ‘theming Bootstrap’ sounds like ‘Cuba Libre without the rum’ (note for teetotalers – it’s just Coke). If a toolkit is mostly CSS, and you replace it, what is left? Well, what is left are class names, documentation, jQuery plug-ins and the general approach. A small team of designers and developers can create a unique product or company theme, and the army of developers can continue to use all of the Bootstrap documentation without any change.

I know many a company designer is tempted to ‘start fresh’ and build a custom system, but it is a much bigger job than it looks, and is not much different from just theming Bootstrap, with the difference being that you are now on the hook to provide JavaScript for behaviour and extensively document it. You can create themes that transform Bootstrap beyond recognition, as demonstrated in the Bootstrap Expo. And it is a massive challenge to match the open source network effect (599 contributors, 10,495 commits).

Devil’s Advocate

In the past, there were complaints that Bootstrap is bloated (which can be addressed to a degree by cherry-picking Less/Sass files and building a custom CSS), not accessible (this is getting better over time), and has too many accessor rules (no change here). Another complaint is that when a component doesn’t quite do what is desired, modifications eventually cost more than if the component had been written from scratch.

I have no problem buying any and all of these complaints, but still claim that the approach is more important than the actual design system. In fact, I put Zurb’s Foundation in the title to indicate a competitor that uses an identical approach (styling HTML with jQuery for behaviour). I could use either (in fact, I have a growing appreciation for Foundation’s clean and understated look that is less immediately recognizable compared to Bootstrap). And the community numbers are nothing to sneeze at (603 contributors, 7,919 commits).

So your point is…

My point is that before thinking about reusable Web components for your project, settle on a design system, be it customized Bootstrap, Foundation or your own. This will ensure a design language fit for your product, and will leave a lot of options open for the actual implementation of user interfaces. Only then should you think of client-side-only components, and you should only use them for building blocks that you can afford to load lazily.

© Dejan Glozic, 2014

Full Stack Toronto Conference 2014


We at IBM are not strangers to large, well capitalized conferences. As things go in the conference-industrial complex, it is a big deal when one of your keynote speakers is Kevin Spacey, or Imagine Dragons entertain you after hours. So to say that the first Full Stack Toronto Conference was on the opposite side of the spectrum would be an understatement.

How about ‘no food’, ‘no drinks except coffee in the morning’, and ‘no entertainment’ of any kind except finding parking around Ryerson University building? OK, not fair, there was a get together in the nearby Irish pub the first night that I didn’t go to because I was tired.

This conference was really just a meetup making the next step. And on a weekend. Starting at 8:45 am. Who does that? Our keynote speaker Anila Arthanari, Director of Software Development at Infusionsoft, wore a t-shirt saying ‘I am not a morning person’, expressing the mood of most of the audience. By all accounts, this was supposed to be a flop.

And yet. When you peel the layers of conference pageantry, what remains is the kernel of it all – good talks. When talks are good, people will not mind walking out to a nearby panini store to buy lunch, or walk a block up the Church Street to get a Starbucks hit. Nothing matters if talks are good.

And they were. We had multiple tracks, and I could not go to all the talks, but those I attended were very informative, thought-provoking and eminently applicable. After every talk I had tons of things written down to try out afterwards, or catch up on. So what were my key takeaways from the conference?

  1. Lots of people use Angular.js. If you need a client side MVC, you can do worse, with a caveat that version 2.0 is on the slowly approaching horizon, and to say that migration will be interesting would be an understatement.
  2. More and more people use Browserify over RequireJS. Put off by RequireJS’s weird configuration syntax, and feeling the need for easier code reuse with Node.js, people seem to prefer to just ‘require’ their modules. It makes it easier to go back and forth. I am definitely going to try using it soon. This blog might help as well.
  3. Micro-services are everywhere. In the tongue-in-cheek ‘Show Us Your Stack’ track, multiple presenters described their journey from monoliths to micro-service systems. I like how people are now past the hype and deep in the gory details on standing up such systems in practice. Many were openly asking the audience for their feedback and red flags if they see any.
  4. Not everybody uses Angular.js. I know this is a contradiction, but people who value control and being able to grow into a client side MVC still value Backbone.js and its modular approach. If anything, Angular.js 2.0 promises to be more modular and less arrogant, for the lack of a better word. Here is hoping that in the future, considering Angular.JS will not be such an ‘all or nothing’ dilemma.
  5. Isomorphic and ‘federation of single-page apps’ is a thing. I thought I would be the only one pushing for rendering stuff on both the server and the client using the same templates, but Matthew Conlen from New York Data Company talked about exactly such an approach. Personally, I find it funny that people are happy to partition the API space into micro-services, but don’t feel the need to do the same with the Web apps. As the system grows, a single one-page app providing all the UI is going to be the bottleneck of the system. Which is ironic, because user interfaces are the most transient and need to evolve at a rapid pace. In essence, we are creating a system where API services can move at a rapid speed, but the UI is one big ball of MVC mud.
  6. React can help with isomorphic. Once you decide rendering on both sides of the fence is important to you, React is an attractive proposition because it can do exactly that, and plays nice with Node.js.
  7. Internet Of Things is still in its infancy. It seems like we all feel this weird excitement over turning the lights on and off with Node.js apps, sending messages to robots and receiving MQTT messages that it is now 24C in the room. It is apparent that all this stuff will matter one day and great things will come, but I don’t see what to do with it today other than marvel at the possibilities. I guess once somebody does something really awesome with IoT, we will all slap our collective foreheads and say ‘but of course, so elegant’.

By the way, yours truly presented as well. You can see my slides up on Slideshare, and the source code of my demo on GitHub. You may find it interesting – I got Angular.js to fit into the micro-service driven Web UI, use normal URLs (no horrible hashes or hash bangs), and share a common header with other pages. I also demonstrated SSO using Facebook as the identity provider, lively UI using Web Sockets and isomorphic approach using Dust.js rendered on both client and server. The best part was when an audience member posted a todo into the demo running live on Bluemix, and his entry popped up on screen as I was demoing it. Audience participation, live demo, unexpected proof that the code actually works as designed – priceless!

So there you have it – you can make the attendees feed themselves, only give them coffee in the morning (what is life even), and dispense with most of the usual conference perks, and they will still come if the talks are good. I would say that the first FullStack TO conference focused on the most important thing, and succeeded. Good talks first, creature comforts to follow – good priorities in my book. Looking forward to the next year!

© Dejan Glozic, 2014

Angular.js 2.0, Index Investing and Micro-Services

Beuckelaer: Girl with a basket of eggs, Wikimedia Commons.

Now here is somebody with all her eggs in one basket, literally. I used it to illustrate what index investing tries to avoid. I thought of index investing while reading the bitter and often hilarious reactions to the announced changes in Angular.js 2.0 on Reddit. I also thought about my experience with Google services. Watch me tie all these things together in one magic feat.

First off, let me be the first to acknowledge that righteous indignation about changes to a free product or service is always a bit rich. “I used this for my benefit for months and now they changed it – I demand they fix it and continue to invest real money so that I can continue using it for free”. Right.

With that off the table, here is my amusing experience with Google services. A while ago I amassed a number of feeds I wanted to keep up with every morning over breakfast. I created a nice multi-page dashboard using iGoogle. It worked, and it was even responsive – it loaded fast and worked well on my iPhone.

Then one day Google pulled the plug on it. After venting my, you guessed it, righteous indignation for a while, I looked for a replacement and found that Google Reader could be used for that. So I moved all my feeds to it. You can guess what happened next – they pulled the plug on it too, and I had to move my feeds to Feedly, where they remain to this day.

Apart from feeling like the first of the three little piggies forced to change its address often due to the certain wolf, it taught me how Google feels about its free services. While it has a number of exciting and often groundbreaking products in the air at any point, you better steer clear if long term stability is important to you. Google engineers are not sentimental about their software and change their minds at an alarming frequency, which moves things forward but also leaves a lot of victims in their wake.

The moment I learned about Angular.js and how it was bestowed on the world and maintained by Google, my first thought was ‘uh, oh’. It had Google fingerprints all over it:

  1. A vertically integrated opinionated framework that attempts to solve all your needs.
  2. It does not play well with other well loved and popular libraries
  3. A lot of the approaches need getting used to and don’t look like anything you have seen before
  4. As a result, the learning curve is steep, and once you climbed it, you feel personally invested to a degree that is not healthy

And now with the announcement of Angular.js 2.0, we have the final shoe to drop – Google’s famous impatience with continuity and careful evolution. There were many people on Reddit who evangelized for Angular 1.x in their companies, and now feel betrayed. Others are contemplating switching to Knockout, Backbone, or leaving Web development altogether.

Index Investing

To change the pace a bit, let’s look at index investing. It is an investment technique that openly gives up on picking stock market winners and losers. Through bitter experience, some people discovered that their stock picks are worse than if they let a blindfolded monkey choose their investments by throwing darts. Instead, they decided to invest in a basket of investments in each category, using low-fee instruments such as ETFs (Exchange Traded Funds). All empirical evidence suggests most people can’t pick winners if their life depended on it, and even those who can cannot sustain that track record over any length of time. Full disclosure – I moved all my investments to index funds and my results are way better than in my dart-throwing years.

Index investing is all about diversification, asset allocation and risk containment. It applies to Web development more than you think.

The trouble with frameworks

I was picking on Angular.js, but I didn’t need to go that far outside my own company. We at IBM have written a hot mess of Web UIs using the Dojo framework. Truth be told, a few years ago it was a pretty decent option and had some solutions that mattered to enterprises (i18n, important widgets such as sorting tables and trees, and support for all the god-awful IE browsers known to man). The problem is that once you write all that code on top of it, you are stuck – you are forever bound to it. Dojo was a basket we put all our eggs in, like that girl above.

In the investment analogy, we bought one stock with all our money. Any investor will tell you that such a strategy is crazy – way too much alpha for a good night’s sleep. At the first opportunity to reflect on our strategy and devise something saner, we decided to use stable, standards-based protocols for integration, and confine stack choices to individual services. While we can change our minds on the implementation of one service, the other services can continue to work because the integration protocols are stable.

We also learned the hard way that frameworks tend to generate additional work – the source of accidental complexity. If you find yourself spending nontrivial amount of time ‘feeding the framework’ i.e. writing code not because it makes sense for your project but because the framework needs it done a certain special way, you are the victim of accidental complexity.

As a result, we also developed a strong preference for toolkits over frameworks. It allows us to maintain control and have a better chance of avoiding nasty surprises such as Angular.js 2.0.

Micro-services and risk minimization

Our current love affair with micro-services has several reasons, many of which I have written about in previous posts. However, one of the reasons is closely related to the subject of this post: risk control. Like in investing, our ability to pick the right framework has a dismal track record. Therefore, with our switch to micro-services, we focused first on the way they communicate with each other. We invested in stable REST APIs, and message brokers that pass messages around using open protocols such as MQTT and AMQP 1.0. Due partially to the glacial pace of protocol standardization, the danger of them changing overnight is much lower.

Our approach to individual micro-service implementation is then to confine risk to the service boundary. If the service is small enough, picking the wrong framework (if you even need a framework) will doom only that one service. A small service can be re-written if need be. The entire system cannot without a world of pain.

Essentially our approach is to officially declare that we will not assemble a committee to choose one framework to rule them all. We will apply the following mitigation rules instead:

  1. Use the simplest approach you can get away with
  2. If you can get away with server-side generated content, do it
  3. If you can get away with server-side content + jQuery + Bootstrap, do it
  4. If you need a bit of MV* magic, try Backbone combined with isomorphic templates (e.g. Dust.js partials that are reused on both server and client) – see the sketch after this list
  5. If you must use Angular.js 1.3, do it, but you are on the hook to keep up with Google, and have a contingency rewriting plan
  6. We will NOT base any of the integration code on any of the frameworks de jour. Instead, we will use REST, AMQP/MQTT, JSON, HTML5, CSS3 and vanilla JS.
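
To make rule 4 concrete, here is a minimal sketch of a Backbone view that reuses a compiled Dust partial (the ‘todo’ template name and model fields are illustrative):

// assumes dust-core and the compiled 'todo' template are already loaded on the page
var TodoView = Backbone.View.extend({
  render: function () {
    var self = this;
    dust.render('todo', this.model.toJSON(), function (err, out) {
      if (!err) {
        self.$el.html(out);
      }
    });
    return this;
  }
});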

Be pessimistic

Someone once said that we are all writing legacy code every day, so we should strive to make it the best legacy code we can muster. Angular 1.3 turned from shiny to legacy in a flash (even if in ultra slow motion, since 2.0 will only arrive in early 2016). Our approach may be pessimistic, but it will help us sleep better in the years to come, and will make those who come after us curse us a bit less. Micro-services help in this regard because they confine the risk, the way oil tankers break up the cargo space into compartments. If you ensure that you can change your mind about the implementation of each service, the risk and importance of choosing the right framework diminishes.

The right question should not be "should I use Backbone.js, Angular.js, Ember.js or something else". The question should be: will I be able to recover when the ADD-suffering maintainers of my framework of choice inevitably lose interest?

Right now, a starry-eyed college dropout is writing the next shiny framework to take the world by storm. With the micro-service approach, you will be able to give it a shot without betting the farm on it. You are welcome.

© Dejan Glozic, 2014

Micro-Services and Page Composition Problem


Despite many desirable properties, micro-services carry two serious penalties to be contended with: authentication (which we covered in the previous post) and Web page composition, which I intend to address now.

Imagine you are writing a Node.js app and use Dust.js for the V of the MVC, as we are doing. Imagine also that several pages have shared content you want to inject. It is really easy to do using partials, and practically every templating library has a variation of that (and not just for Node.js).

However, if you build a micro-service system and your logical site is spread out between several micro-services, you have a complicated problem on your hands. Now partial inclusion needs to happen across the network, and another service needs to serve the shared content. Welcome to the wonderful world of distributed composition.

This topic came into sharp focus during Nodeconf.eu 2014. Clifton Cunningham presented the work of his team in this particular area, and the resulting project Compoxure they have open-sourced and shared with us. Clifton has written about it in his blog and it is a very interesting read.

Why bother?

At this point I would like to step back and look at the general problem of document component model. For all their sophistication and fantastic feature set, browsers are stubbornly single document-oriented. They fight with us all the time when it comes to where the actual content on the page comes from. It is trivially easy to link to a number of stylesheets and JavaScript files in the HEAD section of the document, but you cannot point at a page fragment and later use it in your document (until Web Components become a reality, that is – including page fragments that contain custom element templates and associated styles and scripts is the whole point of this standard).

Large monolithic server-side applications were mostly spared from this problem because it was fairly easy to include shared partials within the same application. More recently, single page apps (SPAs) have dealt with this problem using client side composition. If everything is a widget/plug-in/addon, your shared area can be similarly included into your page from the client. Some people are fine with this, but I see several flaws in this approach:

  1. Since there is no framework-agnostic client side component model, you end up stuck with the framework you picked (e.g. Angular.js headers, footers or navigation areas cannot be consumed in Backbone micro-services)
  2. The pause until the page is assembled in SPAs due to JavaScript downloading and parsing can range from a short blip to a seriously annoying blank page stare. I understand that very dynamic content may need some time to be assembled but shared areas such as headers, footers, sidebars etc. should arrive quickly, and so should the initial content (yeah, I don’t like large SPAs, why do you ask?)

The approach we have taken can be called ‘isomorphic’ – we like to initially render on the server for SEO and fast first content, and later progressively enhance using JavaScript ‘on the fly’, and dynamically load with Require.js. If you use Node.js and JavaScript templating engine such as Dust.js, the same partials can be reused on the client (something Airbnb has demonstrated as a viable option). The problem is – we need to render a complete initial page on the server, and we would like the shared areas such as headers, sidebars and footers to arrive as part of that first page. With a micro-service system, we need a solution for distributed document model on the server.

Alternatives

Clifton and I talked about options at length, and he has a nice breakdown of alternatives on the Compoxure GitHub home page. For your convenience, I will briefly call out some of these alternatives:

  1. Ajax – this is the client-side MVC approach. I already mentioned why I don’t like it – it is bad for SEO, and you need to stare at a blank page while JavaScript is being downloaded and/or parsed. We prefer to use JavaScript after the initial hit.
  2. iFrames – you can fake a document component model by using seamless iframes. Bad for SEO again, there is no opportunity for caching (therefore, performance problems due to latency), content in iFrames is clipped at the edges, and there are problems with cross-frame communication (although there are window.postMessage workarounds). They do, however, solve the single-domain restriction browsers impose on Ajax. Nevertheless, they have all the cool factor of re-implementing framesets from the 90s.
  3. Server Side Includes (SSIs) – you can inject content using this approach if you use a proxy such as Nginx. It can work and even provide for some level of caching, but not the programmatic and fine-grained control that is desirable when different shared areas need different TTL (time to live) values.
  4. Edge Side Includes (ESIs) – a more complete implementation that unfortunately locks you into Varnish or Akamai.

Obviously for Clifton’s team (and ourselves), none of these approaches quite delivers, which is why services like Compoxure exist in the first place.

Direct composition approach

Before I had an opportunity to play with Compoxure, we spent a lot of time wrestling with this problem in our own project. Our current thinking is illustrated in the following diagram:

composition1

The key aspects of this approach are:

  1. Common areas are served by individual composition services.
  2. Common area service(s) are proxied by Nginx so that they can later be reached via Ajax calls. This allows the same partials to be reused after the initial page has rendered (hence ‘isomorphic’).
  3. A common area service can also serve CSS and JavaScript. Unlike the hoops we need to go through to stitch HTML together, CSS and JavaScript can simply be linked in the HEAD of the micro-service page. Nginx helps make the URLs nice, for example ‘/common/header/style.css’ and ‘/common/header/header.js’.
  4. Each micro-service is responsible for making a server-side call, fetching the common area response and passing it into the view for inlining.
  5. Each micro-service takes advantage of a shared Redis instance to cache the responses from each common service. Responses from common services that require authentication and deliver personalized content are stored in Redis on a per-user basis.
  6. Common areas are responsible for publishing messages to the message broker when something changes. Any dynamic content injected into the response is monitored and, if changed, a message is fired to ensure cached values are invalidated. At a minimum, common areas should publish a general ‘drop cache’ message on restart (to ensure new service deployments that contain changes are picked up right away).
  7. Micro-services listen to invalidation messages and drop the cached values when they arrive.

This approach has several things going for it. It uses caching, allowing micro-services to have something to render even when common area services are down. There are no intermediaries – the service is directly responding to the page request, so the performance should be good.

The downside is that each service is responsible for making the network calls and doing it in a resilient manner (circuit breaker, exponential back-off and such). If all services are using Node.js, a module that encapsulates Redis communication, circuit breaker etc. would help abstract out this complexity (and reduce bugs). However, if micro-services are in Java or Go, we would have to duplicate this using language-specific approaches. It is not exactly rocket science, but it is not DRY either.
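
To make this concrete, here is a minimal sketch of what such a shared Node.js module could look like (the module name, channel name and timeout are assumptions; a production version would add a proper circuit breaker and exponential back-off):

// common-area.js - a minimal sketch of a shared module each Node.js
// micro-service could use to fetch and cache a common area fragment
var request = require('request');
var redis = require('redis');

var client = redis.createClient();

// subscribe to invalidation messages and drop cached values when they arrive
var sub = redis.createClient();
sub.subscribe('common-area-invalidate');
sub.on('message', function (channel, cacheKey) {
  client.del(cacheKey);
});

module.exports.fetch = function (url, cacheKey, ttl, callback) {
  client.get(cacheKey, function (err, cached) {
    if (cached) {
      return callback(null, cached);
    }
    // cache miss - call the common area service over the network
    request({ url: url, timeout: 1000 }, function (err, res, body) {
      if (err) {
        // this is where a circuit breaker and exponential back-off
        // would kick in for a production-grade module
        return callback(err);
      }
      client.setex(cacheKey, ttl, body);
      callback(null, body);
    });
  });
};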

The Compoxure approach

Clifton and his team have taken a route that mimics ESI/SSI while addressing their shortcomings. They have their own diagrams, but I put together another one to better illustrate the difference from the direct composition diagram above:

composition2

In this approach, composition is actually performed in the Compoxure proxy that is inserted between Nginx and the micro-services. Instead of making its own network calls, each micro-service adds special attributes to the DIV where the common area fragment should be injected. These attributes control parameters such as what to include, which cache TTLs to employ, which cache key to use etc. There is a lot of detail in the way these properties are set (RTFM), but suffice it to say that the Compoxure proxy serves as an HTML filter that injects the content from the common areas into these DIVs as instructed.

<div cx-url='{{server:local}}/application/widget/{{cookie:userId}}'
     cx-cache-ttl='10s' cx-cache-key='widget:user:{{cookie:userId}}'
     cx-timeout='1s' cx-statsd-key="widget_user">
This content will be replaced on the way through
</div>

This approach has many advantages:

  1. The whole business of calling the common area service(s), caching the response according to TTLs, dealing with network failure etc. is handled by the proxy, not by the micro-services.
  2. Content injection is stack-agnostic – it does not matter how the micro-service that serves the HTML is written (in Node.js, Java, Go etc.) as long as the response contains the expected tags.
  3. Even in a system written entirely in Node.js, writing micro-services is easier – there is no special code to add to each controller.
  4. Compoxure is used only to render the initial page. After that, Ajax takes over and the composition service is hit with Ajax calls directly.

Contrasting the approach with direct composition, we identified the following areas of concern:

  1. Compoxure parses HTML in order to locate DIVs with special tags. This adds a performance hit, although practical results imply it is fairly small.
  2. The special tags are not HTML5 compliant (a ‘data-’ prefix would work). If this bothers you, you can configure Compoxure to completely replace the DIV carrying these tags with the injected content, so this is likely a non-issue.
  3. Obviously Compoxure inserts itself in front of the micro-services and must not go down. It goes without saying that you need to run multiple instances and practice ZDD (Zero-Downtime Deployment).
  4. Caching is static, i.e. content is cached based on TTLs. This makes picking the TTL values tricky – our approach that involves pub/sub allows us to use higher TTL values because we will be told when to drop the cached value.
  5. During development, the direct composition approach requires that you have your own micro-service up, as well as the common area services. Compoxure adds another process to start and configure locally in order to see your page with all the common areas rendered. If you hit your micro-service directly, all the DIVs with the ‘cx-’ properties will be empty (or contain the placeholder content).

Discussion

Direct composition and Compoxure proxy are two valid approaches to the server-side document component model problem. They both work well, with different tradeoffs. Compoxure is more comfortable for developers – they just configure a special placeholder div and magic happens on the way to the browser. Direct composition relies on fewer moving parts, but makes each controller repeat the same code (unless that code is encapsulated in a shared Node.js module).

An approach that bridges both worlds, and something we are seriously considering, is to write a Dust.js helper that further simplifies inclusion of the common areas. Instead of importing a module, you would import the helper and then just use it in your markup:

<div>
{@import url="{headerUrl}" cache-ttl="10s"
cache-key="widget:user:{userid}" timeout="1s"}
</div>
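
For illustration, here is a minimal sketch of how such a helper could be implemented on top of the shared module from the direct composition section (the helper name, parameter handling and module path are assumptions – this is not a published module):

// a minimal sketch of the proposed {@import} helper
var dust = require('dustjs-linkedin');
require('dustjs-helpers'); // provides dust.helpers.tap for resolving params
var commonArea = require('./common-area'); // hypothetical shared module

dust.helpers.import = function (chunk, context, bodies, params) {
  var url = dust.helpers.tap(params.url, chunk, context);
  var cacheKey = dust.helpers.tap(params['cache-key'], chunk, context);
  var ttl = parseInt(dust.helpers.tap(params['cache-ttl'], chunk, context), 10) || 10;

  // chunk.map lets the helper write asynchronously once the fragment arrives
  return chunk.map(function (chunk) {
    commonArea.fetch(url, cacheKey, ttl, function (err, html) {
      chunk.write(err ? '' : html);
      chunk.end();
    });
  });
};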

Of course, Compoxure has some great properties that are not easy to replicate with this approach. For example, it does not pass TTL values to Redis directly because that would cause the cached content to disappear after the countdown, and Compoxure prefers to keep the last content past its TTL in case the service is down (better to serve slightly stale content than no content at all). This is a great feature and would need to be replicated here. I am sure I am missing other great features and Clifton will probably remind me of them.

Conclusion

In the end, I like both approaches for different reasons, and I can see a team use both successfully. In fact, I could see a solution where both are available – a Dust.js helper for Node.js/Dust.js micro-services, and Compoxure for everybody else (as a fallback for services that cannot or do not want to fetch common areas programmatically). Either way, the result is superior to the alternatives – I strongly encourage you to try it in your next micro-service project.

You don’t even have to give up your beloved client-side MVCs – we have examples where direct composition is used in a page with Angular.js apps and another with a Backbone app. These days, we are spoiled for choice.

© Dejan Glozic, 2014

Sharing micro-service authentication using Nginx, Passport and Redis

Abgeschlossen_1
Wikimedia Commons, Abgeschlossen 1, by Montillona

And we are back with the regularly scheduled programming – I haven’t talked about micro-services in a while. Here is what is occupying my days now: securing a micro-service system. Breaking down a monolith into a collection of micro-services has some wonderful properties, but also some nasty side-effects. One of them is authentication.

The key problem of a micro-service system is ensuring that its federated nature is transparent to the users. This is easier to accomplish in the areas that are naturally federated (e.g. a collection of API end points). Alas, there are two areas where it is very hard to hide the modular nature of the system: composing Web pages from contributions coming from multiple services, and security. I will cover composition in one of the next posts, which leaves us with the topic of the day.

In a nutshell, we want a secure system, but not at the expense of user experience. Think about the European Union before 1990. In some parts of Europe, you could sit in a car, drive in any direction and cross five countries before sunset. In those days, waiting in line at the customs checkpoint would get old fast and could even turn into quite an ordeal in extreme cases.

Contrast that with today – once you enter the EU, you just keep driving, passing countries as if they were Canadian provinces. Much better user experience.

We want this experience for our micro-service system – we want to hop between micro-services and be secure, yet not be actively aware that the system is not a monolith.

It starts with a proxy

The first step in securing a micro-service system is a proxy such as Nginx. Placing a multi-purpose proxy in front of our services has several benefits (a minimal Nginx sketch follows the list):

  1. It allows us to practice friendly URL architecture – we can proxy nice front end URLs such as http://foobar.io/users or http://foobar.io/projects to separate micro-services (‘users’ and ‘projects’, respectively). It also works around the fact that each Node.js service runs on a separate port (something that JEE veterans tend to hate in Node.js, since several apps running in the same JEE container can share ports).
  2. It allows us to enable load balancing – we can proxy the same front end location to a collection of service instances running in different VMs or virtual containers (unless you are using a PaaS, at which point you just need to increment the instance counter).
  3. It represents a single domain to the browser – this is beneficial when it comes to sharing cookies, as well as making Ajax calls without tripping the browser’s ‘same origin’ policy (I know, CORS, but this is much easier).
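
Here is a minimal sketch of what the first three points look like in Nginx configuration (service names, ports and the domain are made up):

# front-end proxy: friendly URLs, load balancing, one domain for the browser
upstream users_service {
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;   # second instance for load balancing
}

upstream projects_service {
    server 127.0.0.1:3003;
}

server {
    listen 80;
    server_name foobar.io;

    # friendly URLs proxied to separate micro-services on one domain
    location /users/ {
        proxy_pass http://users_service;
        proxy_set_header Host $host;
    }

    location /projects/ {
        proxy_pass http://projects_service;
        proxy_set_header Host $host;
    }
}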

If we wanted to be really cheap, we could tack on another role for the front-end proxy – authentication. Since all the requests pass through it, we could configure a module to handle authentication as well and make all the micro-services behind it handle only authenticated requests. A basic auth module is readily available for Nginx, but anything more sophisticated is normally not done this way.

Use Passport

Most people will need something better than basic auth, and since we are using Node.js for our micro-service system, Passport is a natural choice. It has support for several different authentication strategies, including OAuth 1.0 and OAuth 2.0, and several well known providers (Twitter, Facebook, LinkedIn). You can easily start with a stock strategy and extend it to handle your unique needs (this will most likely be needed for OAuth 2.0, which is not a single protocol but an authentication framework, much to the dismay of Eran Hammer).

Passport is a module that you insert into your Node.js app as middleware and hook up to the Express session. You need to expose two endpoints: ‘/auth/<provider>’ and ‘/auth/<provider>/callback’. The former is where you redirect the flow in order to start user login, while the latter is where the Auth server will call back after authenticating the user, bringing in some kind of authorization code. You can use the code to go to the token endpoint and obtain some kind of access token (e.g. a bearer token in the case of OAuth 2.0). With the access token, you can make authenticated calls to downstream services, or call into the profile service and fetch the user info. Once this data is obtained, Passport will tack it onto the request object and also serialize it in the session for subsequent use (so that you don’t need to authenticate for each request).
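
As a rough sketch of that wiring (the provider name, URLs and credentials are placeholders; it assumes an Express ‘app’ with session middleware already configured and uses the generic passport-oauth2 strategy):

var passport = require('passport');
var OAuth2Strategy = require('passport-oauth2').Strategy;

passport.use('acme', new OAuth2Strategy({
    authorizationURL: 'https://auth.foobar.io/authorize',
    tokenURL: 'https://auth.foobar.io/token',
    clientID: 'our-client-id',
    clientSecret: 'our-client-secret',
    callbackURL: 'https://foobar.io/users/auth/acme/callback'
  },
  function (accessToken, refreshToken, profile, done) {
    // keep the access token around so downstream calls can use it
    done(null, { profile: profile, accessToken: accessToken });
  }
));

// what ends up in the (shared) session store
passport.serializeUser(function (user, done) { done(null, user); });
passport.deserializeUser(function (user, done) { done(null, user); });

// the two endpoints every micro-service exposes
app.get('/auth/acme', passport.authenticate('acme'));
app.get('/auth/acme/callback',
  passport.authenticate('acme', { failureRedirect: '/login' }),
  function (req, res) { res.redirect('/'); });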

The Passport module has a very nice web site with plenty of examples, so we don’t need to rehash them here. Instead, I want us to focus on our unique situation of securing a number of micro-services working together. What that means is:

  1. Each micro-service needs to be independently configured with Passport. In the case of OAuth (1.0 or 2.0), they can share the client ID and secret.
  2. Each micro-service can have a different callback URL as long as they all have the same domain. This is where we reap the benefit of the proxy – all the callback URLs do have the same domain thanks to it. In order to make this work, you should register your client’s callback_uri with the authentication provider as the shared root for each service’s callback. The actual callback passed to the authentication endpoint by each micro-service can be longer than the registered callback_uri as long as they all share that common root.
  3. Each micro-service should use the same shared authentication strategy and user serialization/deserialization.

Using this approach, we can authenticate paths served by different micro-services, but we still don’t have Single Sign-On. This is because Express session is configured using an in-memory session store by default, which means that each micro-service has its own session.

Shared session cookie

This is not entirely true: since we are using the default session key (or can provide it explicitly when configuring the session), and we are using a single domain thanks to the proxy, all Node.js micro-services share the same session cookie. Look in Firebug and you will notice a cookie called ‘connect.sid’ once you authenticate. So this is good – we are getting there. As I said, the problem is that while the session cookie is shared, it is used to store and retrieve session data that is kept in memory, and that data is private to each micro-service instance.

This is not good: even different instances of the same micro-service will not share session data, let alone different micro-services. We will be stuck in a pre-1990 Europe, metaphorically speaking – asked to authenticate over and over as we hop around the site.

Shared session store

In order to fix this problem, we need to configure Express session to use an external session store as a service. Redis is wonderfully easy to set up for this and works well as long as you don’t need to persist your session forever (if you restart Redis, you will lose session data and will need to authenticate again).


var express = require('express');
var passport = require('passport');
// connect-redis provides a Redis-backed session store for Express
var RedisStore = require('connect-redis')(express);

// connection options for the shared Redis instance
var ropts = {
   host: "localhost",
   port: 5556,
   pass: "secret"
};

...

    // every micro-service uses the same cookie key, secret and Redis store,
    // so they all see the same session data
    app.use(express.session({ key: 'foobar.sid',
                             store: new RedisStore(ropts),
                             secret: 'secret'}));
    app.use(passport.initialize());
    app.use(passport.session());

I am assuming here that you are running Redis somewhere (which could range from trivial if you are using a PaaS, to somewhat less trivial if you need to install and configure it yourself).

What we now have is a system joined at both ends – Nginx proxy ensures session cookie is shared between all the micro-service instances it proxies, and Redis store ensures actual session data is shared as well. The corollary of this change is that no matter which service initiated the authentication handshake, the access token and the user profile are stored in the shared session and subsequent micro-services can readily access it.

micro-authentication

Single Sign Off

Since we have Redis already configured, we can also use it for pub/sub to propagate the ‘logout’ event. In case there is state kept in Passport instances in the micro-services, a system-wide logout for the session ensures that we don’t end up with a ‘partially logged on’ system after logging out in one service.

I mentioned Redis just for simplicity – if you are writing a micro-service system, you most likely have some kind of a message broker, and you may want to use it instead of Redis pub/sub for propagating logout events for consistency.
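
A minimal sketch of what this could look like with Redis pub/sub (the channel name is an assumption, and ‘app’ is an Express app as before):

var redis = require('redis');

// the service where the user clicked 'log out'
var pub = redis.createClient();
app.get('/logout', function (req, res) {
  var sessionId = req.sessionID;
  req.logout();                       // Passport: remove user from session
  pub.publish('logout', sessionId);   // tell everybody else
  res.redirect('/');
});

// every other micro-service listens and drops any local state it keeps
var sub = redis.createClient();
sub.subscribe('logout');
sub.on('message', function (channel, sessionId) {
  // clear any per-session state cached outside the shared session store
  console.log('session %s logged out', sessionId);
});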

Calling downstream services

Not all micro-services will need the full Passport configuration. Services that only require an access token can simply look for the ‘Authorization’ header and refuse to do anything if it is not present. For example, for OAuth 2.0 authentication, the app will expect something like:


Authorization: Bearer 0b79bab50daca910b000d4f1a2b675d604257e42

The app can go back to the authentication server and verify that the token is still valid, or go straight to the profile endpoint and obtain user profile using the token (this doubles as token validation because the profile service will protest if the token is not valid). API services are good candidates for this approach, at least as one of the authentication mechanisms (they normally need another way for app-to-app authentication that does not involve an actual user interacting with the site).
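
A minimal sketch of such a guard as Express middleware (the profile endpoint URL is made up):

var request = require('request');

function requireBearerToken(req, res, next) {
  var header = req.headers['authorization'] || '';
  var match = header.match(/^Bearer (.+)$/);
  if (!match) {
    return res.status(401).send('Missing Authorization header');
  }
  // validate the token by fetching the user profile with it
  request({
    url: 'https://auth.foobar.io/profile',
    headers: { 'Authorization': 'Bearer ' + match[1] },
    json: true
  }, function (err, response, profile) {
    if (err || response.statusCode !== 200) {
      return res.status(401).send('Invalid token');
    }
    req.user = profile;
    next();
  });
}

app.use('/api', requireBearerToken);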

What about Java or Go?

This solution obviously works great if all the micro-services are written in Node.js. In a real-world system, some services may be written using other popular stacks. For example, what will happen if we write a Java service and try to participate?

Obviously, running a proxy to the Java micro-service will ensure it too has access to the same session cookie. Using an open source Redis client like Jedis will allow it to connect to the same session store. However, the picture is marred slightly by the fact that Express signs the session cookie with an HMAC-SHA256, base64-encoded digest, plus some additional tweaking. This is obviously a very Express-centric approach and while it can be re-created on the Java side, there is this lingering feeling that we created a Node-centric system and not a stack-agnostic one.
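
For reference, this is roughly the signing logic that would need to be replicated on the other stack (a sketch of what the cookie-signature module does; prefix and encoding handling are simplified):

var crypto = require('crypto');

// the 'connect.sid' cookie value is 's:' + sessionId + '.' + signature,
// URL-encoded, where the signature is an HMAC-SHA256 of the session id
function sign(sessionId, secret) {
  var hmac = crypto.createHmac('sha256', secret)
                   .update(sessionId)
                   .digest('base64')
                   .replace(/=+$/, '');   // trailing '=' padding is stripped
  return 's:' + sessionId + '.' + hmac;
}

function unsign(cookieValue, secret) {
  var raw = decodeURIComponent(cookieValue).slice(2); // strip 's:'
  var sessionId = raw.slice(0, raw.lastIndexOf('.'));
  return sign(sessionId, secret) === 's:' + raw ? sessionId : false;
}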

Java has its own session management system and you can see the JSESSIONID cookie sitting next to the one created by Express. I will need to study this more to see if I can make Java and Node share the session cookie creation and signing in a more stack-neutral way. In a system that is mostly Node.js with a Java service here and there, signing and unsigning the session cookie the way Express likes it may not be a big deal.

In addition, my current experimentation with Java points at creating a JEE filter (middleware) that checks for the session cookie and redirects to the authentication endpoints if a user is not found in the session. True Java authentication solutions are not used, which may or may not be a problem for you if you are a JEE veteran. JEE filters provide for wrapping HTTP requests, so methods such as ‘getRemoteUser()’ can be implemented to provide the expected results to Java servlets.

I mentioned Go because it is an up-and-coming language that people seem to use for writing micro-services more and more these days. I have no idea how to write the session support for Go in this context, so I am tossing this in just as food for thought. If you know more, drop me a line.

Closing thoughts

There are many alternative ways to solve this problem, and I confess my mixed feelings about authentication. For me, it is like broccoli – I understand that it is important and good for me, but I cannot wait to get to the ice cream, or whatever the ice cream equivalent is in writing micro-service systems. Spending time with Passport and OAuth 2.0, I had to learn more than I ever wanted to know about authentication, but I am fairly pleased with how the system works now. What I like the most is its relative simplicity (to the extent that authentication can be simple). My hope is that by avoiding overly clever solutions, the chances of the system actually working well and not presenting us with difficult edge cases every day are pretty good. I will report back if I was too naive.

© Dejan Glozic, 2014

Nodeconf.eu 2014: Trip Report (Part 3)

nodeconfeu-day3alt

This is the third and final installment of my Nodeconf.eu report. There are also part 1 and part 2, which you should probably read first for continuity.

Day three of Nodeconf.eu started with a ‘Mad Science Act’, with all the presenters trading the prerequisite lab coat (to add ‘science’, and also a bit of ‘mad’, to their presentations).

First off, James Halliday (also known as substack). Demonstrating ultra hard-core cred with a Linux laptop surrounded by a sea of Macs, substack repeated the cherished Node.js ethos of creating many small modules that can be combined and re-combined ad nauseam, forming your growing tool box. This approach does not need to end with modules – you can apply it to data, in the form of data streams. Substack demoed running code where multiple streams are combined by piping data to different routes of the server. You can read more and play with the demo at the dataplex GitHub repo.

nodeconfeu-23

Feross Aboukhadijeh took us on a wild ride of using WebRTC to replicate P2P BitTorrent communication between browsers, using the WebRTC data channels. The presentation was a lot of fun, with the audience participating by sharing obligatory pictures of cats from their browsers in real time. This feature is made available in the open source library called WebTorrent, together with a BitTorrent bridge (a direct connection to BitTorrent is currently not possible due to the need for a TCP connection; Feross is currently working on a ‘hybrid’ client to address this).

nodeconfeu-24

Travell Perkins, Fidelity Investments CTO, had a talk that I could not immediately place in this block, and that got me confused until I realized that it was originally supposed to be on Monday, as part of the ‘Node.js in the enterprise’ section (lab coat notwithstanding). I *think* his talk was about databases, possibly his SQLdown NPM module, but sadly I cannot locate my notes on the talk (too much coffee?).

nodeconfeu-25

Dominic Tarr (dubbed ‘our resident Jesus’, which is not hard to understand – just look at the picture!) was another of the hard-core speakers using a Linux laptop. The notes on his talk are in the same MIA file as those for Travell, so only a picture here.

nodeconfeu-26

The last talk in this long-ish section was Mikola Lysenko on the challenges of developing multi-player online games using Web technologies alone. A key problem to solve is replication, because multi-player games are distributed yet share state, and this state needs to be replicated in a way that does not ruin the real-time nature of the game. He demonstrated several options for modeling real-time physical systems and their pros and cons (which, in the case of ‘eventual consistency’ and error correction, can yield some comical side effects, as seen in the demo). You can read more on his work in his multi-part blog post.

nodeconfeu-27

After a needed sugar and caffeine coma break, we switched to the hardware track. The first speaker was Colin Vernon, a designer on a strange mission – to bring NPM-style componentization to the hardware world. I was a hobbyist in my youth, and no stranger to the soldering gun. I had a long pause in dealing with hardware other than in the very limited way of ‘adding cards to PC slots’, so this resurgence of interest in hardware definitely brings back memories. Colin introduced us to the amazing world of littlebits, where the Internet of Things can be manually put together using small modules that can be combined with the cloud and mobile remote control apps in many interesting ways. This presentation reminded me of how Twitter looked like a bizarre and pointless toy until it turned into something much bigger and more powerful – this could be another example.

nodeconfeu-28

Raquel Vélez brought us a Node.js-driven Bat-bot that she configured with a pen, with some freedom as to where to go on the canvas and when to draw. The philosophical question was – at which point can we call the result ‘art’? The practical demonstration almost succeeded, but the artist committed robocide by falling off the drawing table before finishing the painting (which made the price of the painting skyrocket as a result). Modest starts like this may seem of little value, but Raquel reminded us that the Mars Rover is essentially just a much bigger and more complex bot.

nodeconfeu-29

Afternoon workshops reduced our collective age to pre-school, with all the toys lying around – drones, Korg bits (you can assemble an analog synth by connecting the bits together) and other goodies to wake up the kid in you.

nodeconfeu-30

All in all, it was a great conference, in a format that made TJ Fontaine call it a ‘Node.js vacation’, but it was of course much more than that. There was a lot of time for informal discussions (and an environment that encouraged them because, you know, we were on an island with nothing else to do :). Being able to talk to the people who wrote the Node.js modules we use daily was absolutely a treat, as was meeting some internet contacts in person for the first time. I came back home with a bag full of memories and lots of ideas for new projects.

I will close with a view of the Waterford Castle ferry that we got to know well during our conference. See you all again next year!

nodeconfeu-day3

© Dejan Glozic, 2014

Nodeconf.eu 2014: Trip Report (Part 2)

nodeconfeu-14

Read part 1 of the report.

Before I continue to part 2, a word about timing. Some of you may ask why it took me so long to publish my impressions of an event that happened last week (which, in Internet time, is ‘way in the past’). The thing is, I prefer to live reality, rather than stand outside it and look at it through a phone or tablet screen. As Louis CK would say, ‘the resolution on reality is amazing, it is super HD’, which is way better than even the new iPhone 6. Hence I make records of events as quickly as I can and get back to, you know, being in them.

On with the event. Day 2 of nodeconf.eu started with the front-end track, with Alex Liu from Netflix dazzling us with the radical approach they took with Node.js and Dust.js when it comes to A/B testing various designs. What is unique in this approach is that A/B testing is not done at the load balancer level. A traditional approach would see two apps serving the same routes, with the balancer (such as Nginx) routing requests to instance A or B using some routing rule, and then measuring the outcome. Netflix took Dust.js partials to a completely new level by testing out many different combinations of designs within the same app or page, and testing more than A and B (there are C and D and E etc.). Of course, common sense suggests this is very hard to manage manually, hence their approach of using a registry that applies packaging rules and puts together distinct combinations of parts that form a particular test (these are served from a CDN for the next person participating in the test, to cut down on processing latency). That, plus I got a kick out of Netflix being among the users of Dust.js, which we are also using.

Alex Liu from Netflix on Node.js/Dust.js A/B/C/D/E/F testing.

You may already know Matteo Collina from my blog posts because he wrote the MQTT NPM module we are using in our own code. This time around, Matteo talked about the new work that is attempting to make communication between micro-services possible without message brokers. This work is inspired by libchan by the Docker team, which is written in Go. Matteo and my internet buddy Adrian Rossouw are collaborating on providing a Node.js version as part of the project Graft.

nodeconfeu-16
Matteo Collina talks about project Graft.

Jake Verbaten shared the nitty-gritty of writing Node.js services at Uber, where everything you do needs to be up all the time and run on a multitude of machines. He highlighted all the extra steps that separate anything you make from the day it is ‘productized’. Jake showed the tools Uber uses for this process, including the tool they call ‘potter’ (to be open-sourced at some point). It gets all the scaffolding going for a project, including the repo, continuous integration server, and monitoring.

nodeconfeu-17
Jake Verbaten on production-grade Node.js services at Uber.

Thorsen Lorentz went into the details of the Chrome JavaScript engine (V8) and JavaScript performance. This is not something you normally think about right away when writing JavaScript, but it can come in handy when doing performance tuning. However, his examples left me with mixed feelings. We like our abstractions, and the fact that declaring variables and later assigning values (as opposed to initializing them in the first statement) can have a detrimental effect on performance filled me with dread. I don’t want to break the black-box approach to Node.js if possible. Don’t get me wrong, it is perfectly understandable that V8 will work better if helped by certain coding patterns, but it means that I need to become aware of the actual JavaScript engine running my code, which further breaks the Node.js promise of front end and back end unity (unless you restrict yourself to only serving your Chrome clients, that is). Of course, all this is not Thorsen’s fault – we should not shoot the messenger.

nodeconfeu-18
Thorsen Lorentz on disturbing secrets about how V8 goes about running your code.

After the break, Bert Belder from StrongLoop kicked off the Node Core track. StrongLoop is a company built around helping others succeed with Node.js, and it has the most corporate contributions to the upcoming 0.12 version, as well as to the current 0.10 (outside of Joyent, of course). Bert invited the audience to ask for fixes in node, express, node inspector and other areas.

nodeconfeu-19
Bert Belder from StrongLoop on the upcoming 0.12, Node.js community, and what StrongLoop can do to help.

Fedor Indutny lived up to the stereotype that Russians are scary good at math by taking us down the rabbit hole of TLS encryption. His credentials (see what I did there): he is the author of the TLS module for Node.js. Of course, most of us just want encryption to work and don’t care how, but it was fun to peek behind the curtain at RFC 5246 and find out more about it. He spent the rest of the talk walking us through code snippets of setting up the server with tls.js and exchanging encrypted hellos.

nodeconfeu-20
Fedor Indutny on the wonderful world of TLS encryption.

For the second time at this conference, TJ Fontaine went into the dark alleys of tracing Node and finding out what is happening when you launch ‘node app.js’. Apart from replicating Node graphics on T-shirts and the backs of his laptops, TJ’s favorite pastime is debugging V8 core dumps on OSX and Linux using lldb-v8. I got to see more V8 detritus than I have ever seen before. Golden takeaway – please, please, please name your functions – debugging anonymous functions postmortem is no fun.

nodeconfeu-21

After lunch, there were more workshops – more debugging with TJ and hands-on time with NearForm’s own nscale deployment solution. Unfortunately, my jet lag finally caught up with me and I had to crash.

Our afternoon event involved visiting a local hurling club (hurling is an Irish sport involving an ash bat and a ball, although hitting a fellow player instead is fine too). We got to take a nice group photo on the green.

Our evening entertainment took us to Waterford on the seashore. Lots of Guinness (I swear it tastes better here than from a can back home!), and some traditional Irish music, including wonderfully quirky Irish pipes.

nodeconfeu-22

Continue on to the third and final installment of the report.

© Dejan Glozic, 2014