Socket.io for Tardy React Status Indicators

The White Rabbit, Wikimedia Commons

In one of my favorite movies “The Blues Brothers”, the wife of the trucker pit stop owner proudly exclaims that they play ‘both kinds of music – Country and Western’. Readers of my blog know I too tend to serve two kinds of articles. In think-pieces, I tend to pontificate on a topic that buzzes between my ears that week. In ‘hey, look at this code we wrote’, I share something we learned by doing. I am sure there is an audience for both kinds but if you are in the latter camp, this is your day.

I’m late, I’m really really late

The problem we had to solve recently was as follows: a React store contains a list of items. These items are rendered in a hierarchy of React components, with a ‘dummy’ React component rendering a card for each item. The item has a number of visual elements on it that can be easily rendered from the item’s properties, but it also contains a status indicator.


This status indicator was killing us: when we fetch the array of items via the XHR API call, we get them back fairly quickly. However, each item represents an entity that can be working or not working (and should therefore have a green or red status indicator). Computing this status takes a while, needs to be done separately for each item, and cannot hold up the main XHR call.

We solved this problem with the help of the socket.io module (at this point I need to remind you we use Node.js for our server side). You can apply the solution even if you are not using socket.io, or Node at all – just use whatever Web Sockets library comes with your stack of choice.

The devil is in the pudding

OK, on to the details. What we do is as follows:

  • Whenever we want to update the status of the store, we issue an XHR.
  • The XHR fetches the items on the server via a downstream API call.
  • Before sending the response, the server endpoint initiates an array of async parallel calls to obtain the status of each item. It sends the response to the client without waiting (see the sketch after this list).
  • The client renders the cards, with the status rendered as a gray circle.
  • When the status API calls return, the result is sent back via the Web Socket channel as an event.
  • On the client, this event is converted into a Flux action, which in turn updates the Store, which updates the views.
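
The following is a rough sketch of the server endpoint from the third bullet. The URL and the downstream helper name are assumptions made for illustration:

var express = require('express');
var app = express();

app.get('/api/items', function(req, res) {
  // fetchItemsFromDownstreamApi is a hypothetical downstream API call
  fetchItemsFromDownstreamApi(function(err, items) {
    if (err) {
      return res.status(500).json({ error: 'Could not fetch items' });
    }
    // Respond right away; statuses follow later over the Web Socket
    res.json(items);
    emitStatus(items);
  });
});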


Produce the code

OK, enough architecture, produce the code. In the interest of space conservation, I am going to assume you have rudimentary knowledge of both React and Flux. If not, Google away until React components, dispatcher, actions and stores are all concepts that make sense to you.

Our action creator is capable of dispatching two actions – fetching items and initiating a service status update:

var AppDispatcher = require("./dispatcher");

var ServiceConstants = require("./action-constants");

var ServiceActions = {
	fetchItems: function(data) {
		// handleAction is assumed to dispatch { action: ... } (see the store below)
		AppDispatcher.handleAction({
			actionType: ServiceConstants.FETCH_ITEMS,
			data: data
		});
	},

	serviceStatusUpdate: function(data) {
		AppDispatcher.handleAction({
			actionType: ServiceConstants.SERVICE_STATUS_UPDATE,
			data: data
		});
	}
};

module.exports = ServiceActions;

OK, on to the ItemStore. It is a fairly standard affair. When asked to fetch items, it makes an XHR request to the endpoint on the Node.js server that returns the server items. When a status update arrives, it merges the provided status with the items and fires a change event again. This drives the view to re-render:

// Modules

var Constants = require("./action-constants");
var EventEmitter = require("events").EventEmitter;

// Globals

var AppDispatcher = require("./dispatcher");
var XhrUtils = require("./xhr-util");

var EVENT_CHANGE = "items";
var items;
var emitter = new EventEmitter();
var listeningForSocket = false;

// Public Methods ------------------------------------------------------------->

var ItemStore = {

	getItems: function(type) {
		return items;
	},

	emitChange: function(data) {
		emitter.emit(EVENT_CHANGE, data);
	},

	addChangeListener: function(callback) {
		emitter.on(EVENT_CHANGE, callback);
	},

	removeChangeListener: function(callback) {
		emitter.removeListener(EVENT_CHANGE, callback);
	}
};

module.exports = ItemStore;

// Private Methods ------------------------------------------------------------>

function fetchItems(url) {
	if (!url) {
		console.warn("Cannot fetch items - no URL provided");
		return;
	}

	// Fetch content
	XhrUtils.doXhr({url: url, json: true}, [200], function(err, result) {
		if (err) {
			console.warn("Error fetching assets from url: " + url);
			// STATE_ERROR/STATE_SUCCESS are assumed action constants
			// (the view checks for STATE_ERROR)
			ItemStore.emitChange(Constants.STATE_ERROR);
			return;
		}

		items = result;
		ItemStore.emitChange(Constants.STATE_SUCCESS);
	});
}

function serviceStatusUpdate(data) {
	for (var i in data) {
		var s = data[i];
		_updateStatus(s);
	}
	ItemStore.emitChange(Constants.STATE_SUCCESS);
}

function _updateStatus(s) {
	for (var i in items) {
		var item = items[i];
		// Match the status record to its item by id
		if (item.id === s.id) {
			item.status = s.status;
			break;
		}
	}
}
// Dispatcher ------------------------------------------------------------>

// Register dispatcher callback
AppDispatcher.register(function(payload) {
	var action = payload.action;

	// Define what to do for certain actions
	switch (action.actionType) {
		case Constants.FETCH_ITEMS:
			fetchItems(action.data);
			break;
		case Constants.SERVICE_STATUS_UPDATE:
			serviceStatusUpdate(action.data);
			break;
	}

	return true;
});

Finally, the view. It fires the action that fetches items upon mounting, and proceeds to render the initial state with the loading indicator. Once the items are ready, it re-renders itself with the server items in place. Finally, when the status update arrives, it just re-renders and lets React diff the DOM for changed properties. Which is why we love React in the first place:

var React = require('react');
var Router = require('react-router');
var ItemStore = require('../flex/item-store');
var ActionCreator = require('../flex/action-creator');
var ActionConstants = require('../flex/action-constants');

module.exports = React.createClass({
  getInitialState: function() {
    return { loading: true, items: [] };
  },

  componentDidMount: function() {
    ItemStore.addChangeListener(this._handleAssetsChanged);
    // The items endpoint URL is an assumption for this example
    ActionCreator.fetchItems("/api/items");
  },

  componentWillUnmount: function() {
    ItemStore.removeChangeListener(this._handleAssetsChanged);
  },

  _handleAssetsChanged: function(type) {
    if (type === ActionConstants.STATE_ERROR) {
      this.setState({
        error: "Error while loading servers"
      });
    } else {
      this.setState({
        loading: false,
        error: null,
        items: ItemStore.getItems() || []
      });
    }
  },

  render: function render() {
    var loading;
    var items;
    var error;

    if (this.state.loading) {
      loading = (
        <div className="items-loading">
          <div className="LoadingSpinner-dark"></div>
        </div>
      );
    }
    else {
      items = (
        <ul className="service-list">
          {this.state.items.map(function(item) {
            var statusClass = "status-indicator";
            if (item.status === "active")
              statusClass += " status-active";
            else if (item.status === "error")
              statusClass += " status-error";
            else
              statusClass += " status-loading";
            return (
              <li className="service-card" key={item.id}>
                <div className="service-card-status">
                  <div className={statusClass}/>
                </div>
                <div className="service-card-name">{item.title}</div>
                <div className="service-card-type">{item.dist}</div>
              </li>
            );
          })}
        </ul>
      );
    }
    if (this.state.error) {
      error = (
        <div className="error-box">{this.state.error}</div>
      );
    }
    return (
      <div id='list'>
        {error}
        {loading}
        {items}
        <div>
          This list shows currently available servers and their status (
          <span className="text-loading">gray</span> - loading,&nbsp;
          <span className="text-active">green</span> - active,&nbsp;
          <span className="text-error">red</span> - error)
        </div>
      </div>
    );
  }
});

The status update portion involves the socket.io module. When the status is computed, it is emitted via the Web Socket. Of course, for this example we are faking the delay, but you can imagine an actual status API request being involved:

var io = require('socket.io')(server);

function emitStatus(items) {
  for (var i in items) {
    var item = items[i];
    var s = {};
    // Fake the status computation - an actual status API call would go here
    s.status = (Math.random() < .5) ? "active" : "error";
    s.id = item.id;
    // Emit each status individually, after a random 100-1000ms delay
    setTimeout(function(status) {
      io.emit("status", [status]);
    }, Math.floor((Math.random() * 900) + 100), s);
  }
}

This status event is picked up on the client, producing the status update action that is dispatched, picked up by the store, and eventually causing the view to update:

var socket = io.connect(location.origin);
socket.on('status', function (data) {
  ActionCreator.serviceStatusUpdate(data);
});

Conclusion and commentary

One of the many realizations we have come to while using React in production is that one can get very far with views alone. A top-level view can make XHR requests, then pass the data down to child components as props. Child components can capture low-level events, and translate them into events at a higher level of abstraction via callbacks passed in as props.

We realized many simple pages can be done entirely this way. What we have noticed, though, is that when Web Sockets and server-side events are involved, Flux architecture really comes into its own and shows its potential. Mixing server-side events with events produced by users interacting with the UI is much better done using Flux.

The problem we tried to solve is rendering the initial page. If the user stays on the page for a prolonged period of time, status changes of the items can continue, but the code above can handle it without any changes. Of course, if others can create or remove server items, new events and actions are needed to model that.

As usual, the entire example is available as a Node.js app on GitHub, and I have deployed it on Bluemix so that you can see the app running. Just keep refreshing the browser to see the three-step rendering (page, then items, and finally status).

© Dejan Glozic, 2016

Vive la Révolution App

Source: Wikimedia Commons

This post is based on a presentation I made on a dare – something a former colleague proposed with only a title and a description, and it was up to me as the replacement to provide the actual content. It sort of reminds me of a debate club, where you are told that the topic is ‘App Revolution’, and you have 20 minutes to argue the ‘Pro’ position. What follows is my attempt to do it justice. Have fun (and mercy).

When we are confronted with the topic of revolutions, most of my North American friends immediately conjure up the sound of Yankee Doodle and the picture of George Washington crossing the Delaware River (I saw it last year in The Met – boy, is that painting big!). Being of European descent, my thoughts give preference to the French Revolution. It has essentially given us modern European society, with milestone documents such as the ‘Declaration of the Rights of Man and of the Citizen’ shown above. It has also given us the guillotine, which is sad but, as Jacques Mallet du Pan famously quipped, all revolutions devour their children. What can you do – it’s a revolution, so 16,594 people are bound to lose their heads, give or take.

One of the indispensable aspects of revolutions is slogans, something you can easily chant at large group gatherings. Something catchy, such as ‘Freedom, Equality and Fraternity’ in the case of the French Revolution. Or as Blackadder interpreted it, ‘Freedom, Equality and fewer fat bastards eating all the pie’.

As you correctly noticed, these slogans often call for three things. If that is true, and we are indeed witnessing an App Revolution, what would our slogan be? What three things would we want from our oppressors?

We are fighting for the freedom and abundance of data, infrastructure and architecture.

– Oppressed developers everywhere.

Note that when I say ‘freedom’, I don’t necessarily mean ‘completely free’. We know we will need to pay for some of it, hence the word ‘abundance’. While food in Western society is not exactly free, it is definitely abundant. You can go into any supermarket and leave with a whole rotisserie chicken for a few dollars. During the French Revolution, only the aforementioned fat bastards could afford it. That’s progress.

Hence, let me try to explain why we are fighting for these three things.

Freedom of Data

You have probably heard the phrase that we are living in an age of ‘API Economy’. What does that actually mean? In the past, data was a by-product of people using your application. Over time, your app’s database would fill up with data. The thinking was that the app is the product, and data is just internal by-product, a consequence of app usage. More recently, data started to take off as something that can be as important, or in some cases the only product you provide.

While in the past tacking on an API to your app would be an afterthought, something you may consider for partner or customer integrations, most modern systems are now built by first building the API for the data, then building up various clients that consume it. Your own clients are just ‘reference implementation’ of hopefully many other consumers of your APIs that will follow.

Source: IBM

Even music is going API these days. Sound engineers are now expected to provide stems of mastered music (drums, bass, guitars, keyboards, vox) so that remixers can easily provide derivative value without the hassle of sampling fully mixed songs (the audio equivalent of screen-scraping). What are stems but audio APIs?

Why is this important to us? Because when you open up your APIs, you become a platform, and platforms foster app eco-systems, with apps creating new value in many unexpected ways. Today, the most coveted place for any company is not to create a consumer product, but to create a platform that offers data and APIs, and creates a flourishing eco-system of apps built to take advantage of it. API discovery is now in vogue, catalogs are sprouting, and all you need is to subscribe, obtain the authentication key and start building your innovative abstraction on top of it, or combine multiple data sources in an innovative way. You can be data mining, providing innovative interfaces, analytics, or integrations with other systems.

If you are building a mobile app, all you need is a laptop and a phone to test your app. However, if you need anything in the back end you need to build a companion server-side app, which leads us to…

Freedom of Infrastructure

When I was a child, my parents bought me a Meccano kit. In those days, giving a child a box full of tiny sharp metal objects was considered totally cool. I quickly built all the possible toys based on the accompanying booklet, but the sneaky bastards from Meccano also put a picture of a crane on the box that would require something like 10 sets to build. Since then, I developed this realization that I need to find a discipline in which I will not be limited by a box with a finite number of parts.

Source: Meccano Beam Engine, Liskeard Museum

That’s why I chose software engineering – it is rare you will run out of files, or classes, or functions or variables the way you can run out of Meccano panels or tiny nuts and bolts.

However, once you venture into Web development, you hit the infrastructure version of Meccano. Your database, your server, your front end proxy all need to be hosted on physical boxes, and Mordac The Preventer from Information Services can make your life miserable in a hurry.

This is why Cloud is so important for our revolution. Regardless of where you fall on your ‘as a Service’ comfort level, you can use an IaaS or PaaS or SaaS to stand up your apps in minutes. Assuming you have found free or abundant source of data, your app can now be running and stay running without the need to worry about the messy sysadmin details or melted boards.

It does not end with just seeing your app running, either – you can jump into the third freedom that is the final cornerstone of our revolution.

Freedom of Architecture

In the dark ages of IT, it used to be that architecture was for the rich, and The Big Ball of Mud was for the rest of us. While you instinctively know that you should not be caching those objects in memory, who is going to stand up, maintain and cluster Redis for it? You know that a message broker would be the real answer for your particular problem, but you don’t have the stomach to stand up and administer RabbitMQ, or any of the popular alternatives. It is no accident that the famous Martin Fowler book from 2002 is called Patterns of Enterprise Application Architecture. At that time, only an enterprise could afford to provision and maintain all the boxes that such an architecture requires.

Source: Dejan Glozic

That same Martin Fowler now talks about Polyglot Persistence – the approach where apps in a distributed system choose different types of databases that perfectly suit their diverse needs, instead of an underpowered MySQL for everything. And he is not using the word ‘enterprise’ this time, fully aware that a nerd hacking away on his Mac in a Starbucks can provision such a system in minutes. App revolution indeed.

All together now

When we put our three demands together, great things can happen. To illustrate how far we have come, consider the system that I made the attendees of an IBM Interconnect 2015 lab build over the course of 2 hours:

Source: Dejan Glozic

This system is just a toy, designed to teach modern micro-service architecture, and yet not long ago it would have required us to stand up several servers, install and configure a ton of software, and build our own user management system. Instead:

  1. It uses Facebook for delegated authentication and to tap into Facebook’s data. No need to stand up anything, just register as a Facebook developer, obtain your client ID and secret and off you go.
  2. It deploys complex infrastructure (two Node.js app servers, a proxy, a data cache) to Bluemix PaaS within a matter of minutes, all using just a Web browser. In a pinch you could do it on a bus using your iPad, while also debating someone totally wrong on the Internet.
  3. It uses serious architecture (OAuth2 provider, Nginx proxy, Node.js micro-services, session sharing via Redis store) that was unheard of for non-institutional developers in the past.

Platforms everywhere

Of course, the notion of a platform is not limited to the Web. In fact, some of you may have initially thought the article is about mobile apps. Phones are huge app ecosystems, and so are the upcoming wearable platforms, of which iWatch is just the latest example.

Venturing further away from the classic Web apps, cars are now becoming rife with platforms, unleashing an app revolution of sorts. Consider Apple’s CarPlay that Scott Rich wrote about in O’Reilly Radar – a platform for apps in your car, tapping into the latent and closed data world and opening it up as a new app ecosystem. It is a different context but the model seems to be the same: create a platform, open up the data through APIs, and unleash the inventions of app revolutionaries hunched over their laptops around the world.

Means of production

In the past, the control of data, infrastructure and architecture was a limiting factor for the masses of developers around the world. Creativity and ideas are dispersed far more equitably than the control over resources would make you believe. At least in the area of software development, the true app revolution is in removing these control points and allowing platforms and eco-systems to let the best ideas bubble up.

Whether you are a guy at a reclaimed wood desk overlooking San Francisco’s Mission district, or a girl in Africa at a reclaimed computer in a school built by a humanitarian mission, we are approaching the time when we will only be limited by our creativity, and by our ability to dream and build great apps. And that, my fellow developers, is worth fighting for.

© Dejan Glozic, 2015

Should I Build a Site or an App? Yes!

Minnesota State Capitol Woodworkers Toolbox, circa 1900, Wikimedia Commons.

Yes, I know. I stopped blogging to take a desperately needed break. Then I returned only to be hit with a mountain of fresh, ‘hit the ground running’, honest to God January work that knocked the air out of my lungs and pinned me down for a while. Then an IBM colleague tried to ask me a Dust.js question, my doors were closed due to a meeting, and he found his answer in one of my blog posts.

So my blog is actually semi-useful, but it will stop being so without new content, so here is the first 2015 instalment. It is about one of my favorite hobbies – being annoyed with people being Wrong on the Internet. Judging by various discussion threads, developers are mostly preoccupied by these topics:

  1. All the reasons why AngularJS is awesome/sucks and will be the next jQuery/die in agony when 2.0 ships (if it ever ships/it will be awesome/cannot wait).
  2. Picking the right client side MVC framework (lots of people out there frozen into inaction while looking at the subtle differences of TODO app implementations in 16 different incarnations).
  3. Declaring client side single-page apps ‘the cool way’ and server side rendering ‘the old way’ of Web development.

These topics are all connected, because if you subscribe to the point of view in (3), you either pray at the church of AngularJS (1) or you didn’t drink the Kool-Aid and subsequently need to pick an alternative framework (2).

Dear fellow full-stack developers and architects, that’s pure nonsense. I didn’t put an image of a toolbox at the top because @rands thinks it would nicely fit the Restoration Hardware catalog. It is a metaphor for all the things we learn along the way and stash in our proverbial toolbox.

Sites and apps

The boring and misleading discussion ‘server or client side apps’ has its origin in the evolution of the Web development. The Web started as a collection of linked documents with strong emphasis on indexing, search and content. Meanwhile, desktop applications were all about programming – actions, events, widgets, panes. Managing content in desktop apps was not as easy as on the Web. As a flip side, having application-like behaviour on the Web was hard to achieve at first.

When Ajax burst onto the scene, this seemed possible at last, but many Ajax apps were horrible – they broke the Back button, didn’t respect the Web, were slow to load due to tons of JavaScript (the dreaded blank page), and the less I say about hashes and hash bangs in URLs, the better.

It is 2015 now and the situation is much better (and at least one IBM Fellow concurs). Modern Ajax apps are created with more predictable structure thanks to the client side MV* frameworks such as BackboneJS, AngularJS, EmberJS etc. HTML5 pushState allows us to go back to deep linking. That still does not mean that you should use a hammer to drill a hole in the wall. Right tool for the right job.

And please don’t look at native mobile apps in envy (they talk to the server using JSON APIs only, I should do that too). They are physically installed on the devices, while your imposter SPA needs to be sent over mobile networks before anything can be seen on the screen (every bit of your overbuilt, 1MB+ worth of JavaScript fatness). Yes, I know about caching. No, your 1MB+ worth of JavaScript still needs to be parsed every time with the underpowered JavaScript engine of the mobile browser.

But I digress.

So, when do you take out site tools instead of Web app tools? There are a few easy questions to ask:

  1. Can people reach pages of your app without authenticating?
  2. Do you care about search engine optimization of those pages? (I am curious to find people who answer ‘No’ to this question)
  3. Are your pages mostly linked content with a little bit of interactivity?

If this describes your project, you would be better off writing a server-side Web app (say, using NodeJS, express and a rendering engine like Handlebars or Dust.js), with a bit of jQuery and Bootstrap with a custom theme to round things up.

Conversely, these may be the questions to ask if you think you need a single-page app:

  1. Do people need to log in in order to use my site?
  2. Do I need a lot of complex interactive behaviour with smooth transition similar to native apps?
  3. Do I expect users to spend a lot of time in my app doing something creative and/or collaborative?

What if I need both?

Most people actually need both. Your site must have a landing page, some marketing content, documentation, support – all mostly content based, open to search engine crawlers and must be quick to download (i.e. no large JS libraries please).

Then there is the walled up section where you need to log in, and then interact with stuff you created. This part is an app.

The thing is, people tend to think they need to pick an approach first, then do everything using that single approach. When site people and app people argue on the Internet, they sound to me like Abbott and Costello’s ‘Who’s on First?’ routine. Site people want the home page to be fast, and don’t want to wait for AngularJS to download. They also don’t want content people to have to learn Angular to produce new pages. App people shudder at the thought of implementing all the complex interactions by constantly redrawing the entire page (sooner or later Web 1.0 is mentioned).

The thing is, they are both right and wrong at the same time. It may appear they want to have their cake and eat it too, but that is fairly easy to do. All you need to do is apply some care in how your site is structured, and give up on the ideological prejudice. Once you view server and client side techniques as mere tools in the toolbox, all kinds of opportunities open up.

Mixing and matching

The key in mixing sites and apps is your navigational structure. Where SPA people typically lose it is when they assume EVERYTHING in their app must be written in their framework of choice. This is not necessary, and most frameworks are embeddable. If you construct your site navigation using normal deep links, you can construct your navigational areas (for example, your site header) on the server and just use these links as per usual. Your ‘glue’ navigational areas should not be locked in the client side MV* component model because they will not work on the server for the content pages.

What this means is that you should not write your header as an Angular directive or a jQuery plug-in. Send it as plain HTML from the server, with some vanilla JavaScript for dynamic effects. Keep your options wide open.

For this to work well, the single page apps that are folded into this structure need to enable HTML5 mode in their routers so that you can transparently mix and match server and client side content.
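
A minimal sketch of what that looks like in an Angular 1.x router configuration (the module name and template paths are assumptions, and the ngRoute module is assumed to be loaded):

angular.module('app').config(['$locationProvider', '$routeProvider',
  function($locationProvider, $routeProvider) {
    // HTML5 mode gives the SPA normal deep links instead of #/hash routes
    $locationProvider.html5Mode(true);
    $routeProvider
      .when('/angular-seed/view1', { templateUrl: 'partials/view1.html' })
      .when('/angular-seed/view2', { templateUrl: 'partials/view2.html' });
  }]);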

Now add micro-services and stir for 10 minutes

To make things even more fun, these links can be proxied to different apps altogether if your site is constructed using micro-services. In fact, you can create a complex site that mixes server-side content with several SPAs (handled by separate micro-services). This is the ultimate in flexibility, and if you are careful, you can still maintain a single site experience for the user.

To illustrate the point, take a look at the demo I created for the Full Stack Toronto conference last year. It is still running on Bluemix, and the source code is on GitHub. If you look at the header, it has several sections listed. They are powered by multiple micro-services (Node apps with an Nginx proxy in front). It uses the UI composition technique described in one of the previous posts. The site looks like this when you click on the ‘AngularJS’ link:


The thing is, this page is really a single-page app folded in, and a NodeJS micro-service sends AngularJS content to the browser, where it takes over. In the page, there are two Angular ‘pages’ that are selectable with two tabs. Clicking on the tabs activates Angular router with HTML5 mode enabled. As a result, these ‘pages’ have normal URLs (‘/angular-seed/view1’ and ‘/angular-seed/view2’).

Of course, when clicking on the links in the browser, Angular router will handle them transparently, but if you bookmark the deep URL and paste in the browser address bar, the browser will now hit the server first. The NodeJS service is designed to handle all links under /angular-seed/* and will simply serve the app, allowing Angular router to take over when loaded.
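
A minimal sketch of that catch-all route in the Node.js service (the view name is an assumption):

var express = require('express');
var app = express();

// Any deep link under /angular-seed/* serves the same SPA shell;
// once loaded, the Angular router (in HTML5 mode) takes over
app.get('/angular-seed/*', function(req, res) {
  res.render('angular-seed');
});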

The really nice thing is that Angular SPA links can sit next to links such as ‘About’ that are a plain server-side page rendered using express and Dust.js. Why wrestle with Angular when a straightforward HTML page will do?

Floor wax and dessert topping

There you go – move along, nothing to see here. There is no point in wasting time on Reddit food fights. A modern Web project needs elements of server and client side approaches because most projects have heterogeneous needs. Once you accept that, real fun begins when you realize you can share between the server and the client using a technique called ‘isomorphic apps’. We will explore these techniques in one of the future posts.

© Dejan Glozic, 2015

Full Stack Toronto Conference 2014


We at IBM are not strangers to large, well capitalized conferences. As things go in the conference-industrial complex, it is a big deal when one of your keynote speakers is Kevin Spacey, or Imagine Dragons entertain you after hours. So to say that the first Full Stack Toronto Conference was on the opposite side of the spectrum would be an understatement.

How about ‘no food’, ‘no drinks except coffee in the morning’, and ‘no entertainment’ of any kind except finding parking around Ryerson University building? OK, not fair, there was a get together in the nearby Irish pub the first night that I didn’t go to because I was tired.

This conference was really just a meetup making the next step. And on a weekend. Starting at 8:45am. Who does that? Our keynote speaker Anila Arthanari, Director of Software Development at Infusionsoft, wore a t-shirt saying ‘I am not a morning person’, expressing the mood of most of the audience. By all accounts, this was supposed to be a flop.

And yet. When you peel the layers of conference pageantry, what remains is the kernel of it all – good talks. When talks are good, people will not mind walking out to a nearby panini store to buy lunch, or walk a block up the Church Street to get a Starbucks hit. Nothing matters if talks are good.

And they were. We had multiple tracks, and I could not go to all the talks, but those I attended were very informative, thought-provoking and eminently applicable. After every talk I had tons of things written down to try out or catch up on later. So what were my key takeaways from the conference?

  1. Lots of people use Angular.js. If you need a client side MVC, you can do worse, with a caveat that version 2.0 is on the slowly approaching horizon, and to say that migration will be interesting would be an understatement.
  2. More and more people use Browserify over RequireJS. Put off by RequireJS’s weird configuration syntax, and feeling the need for easier code reuse with Node.js, people seem to prefer to just ‘require’ their modules. It makes it easier to go back and forth. I am definitely going to try using it soon. This blog might help as well.
  3. Micro-services are everywhere. In the tongue-in-cheek ‘Show Us Your Stack’ track, multiple presenters described their journey from monoliths to micro-service systems. I like how people are now past the hype and deep in the gory details on standing up such systems in practice. Many were openly asking the audience for their feedback and red flags if they see any.
  4. Not everybody uses Angular.js. I know this is a contradiction, but people who value control and being able to grow into a client side MVC still value Backbone.js and its modular approach. If anything, Angular.js 2.0 promises to be more modular and less arrogant, for the lack of a better word. Here is hoping that in the future, considering Angular.JS will not be such an ‘all or nothing’ dilemma.
  5. Isomorphic and ‘federation of single-page apps’ is a thing. I thought I would be the only one pushing for rendering stuff on both the server and the client using the same templates, but Matthew Conlen from New York Data Company talked about exactly such an approach. Personally, I find it funny that people are happy to partition the API space into micro-services, but don’t feel the need to do the same with the Web apps. As the system grows, a single one-page app providing all the UI is going to be the bottleneck of the system. Which is ironic, because user interfaces are the most transient and need to be evolved at a rapid pace. In essence, we are creating a system where API services can move at a rapid speed, but the UI is one big ball of MVC mud.
  6. React can help with isomorphic. Once you decide rendering on both sides of the fence is important to you, React is an attractive proposition because it can do exactly that, and plays nice with Node.js.
  7. Internet Of Things is still in its infancy. It seems like we all feel this weird excitement over turning the lights on and off with Node.js apps, sending messages to robots, and receiving MQTT messages that it is now 24C in the room. It is apparent that all this stuff will matter one day and great things will come, but I don’t see what to do with it today other than marvel at the possibilities. I guess once somebody does something really awesome with IoT, we will all slap our collective foreheads and say ‘but of course, so elegant’.

By the way, yours truly presented as well. You can see my slides up on Slideshare, and the source code of my demo on GitHub. You may find it interesting – I got Angular.js to fit into the micro-service driven Web UI, use normal URLs (no horrible hashes or hash bangs), and share a common header with other pages. I also demonstrated SSO using Facebook as the identity provider, lively UI using Web Sockets and isomorphic approach using Dust.js rendered on both client and server. The best part was when an audience member posted a todo into the demo running live on Bluemix, and his entry popped up on screen as I was demoing it. Audience participation, live demo, unexpected proof that the code actually works as designed – priceless!

So there you have it – you can make the attendees feed themselves, only give them coffee in the morning (what is life even), and dispense with most of the usual conference perks, and they will still come if the talks are good. I would say that the first FullStack TO conference focused on the most important thing, and succeeded. Good talks first, creature comforts to follow – good priorities in my book. Looking forward to the next year!

© Dejan Glozic, 2014

Micro-Services and Page Composition Problem


Despite many desirable properties, micro-services carry two serious penalties to be contended with: authentication (which we covered in the previous post) and Web page composition, which I intend to address now.

Imagine you are writing a Node.js app and use Dust.js for the V of the MVC, as we are doing. Imagine also that several pages have shared content you want to inject. It is really easy to do using partials, and practically every templating library has a variation of that (and not just for Node.js).
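
For example, a Dust.js template can pull in a shared area with a one-line partial reference (the partial name here is just an illustration):

{! the "shared/header" template is resolved and inlined by Dust !}
{>"shared/header"/}
<h1>{title}</h1>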

However, if you build a micro-service system and your logical site is spread out between several micro-services, you have a complicated problem on your hands. Now partial inclusion needs to happen across the network, and another service needs to serve the shared content. Welcome to the wonderful world of distributed composition.

This topic came into sharp focus during 2014. Clifton Cunningham presented the work of his team in this particular area, and the resulting project Compoxure they have open-sourced and shared with us. Clifton has written about it in his blog and it is a very interesting read.

Why bother?

At this point I would like to step back and look at the general problem of document component model. For all their sophistication and fantastic feature set, browsers are stubbornly single document-oriented. They fight with us all the time when it comes to where the actual content on the page comes from. It is trivially easy to link to a number of stylesheets and JavaScript files in the HEAD section of the document, but you cannot point at a page fragment and later use it in your document (until Web Components become a reality, that is – including page fragments that contain custom element templates and associated styles and scripts is the whole point of this standard).

Large monolithic server-side applications were mostly spared from this problem because it was fairly easy to include shared partials within the same application. More recently, single page apps (SPAs) have dealt with this problem using client side composition. If everything is a widget/plug-in/addon, your shared area can be similarly included into your page from the client. Some people are fine with this, but I see several flaws in this approach:

  1. Since there is no framework-agnostic client side component model, you end up stuck with the framework you picked (e.g. Angular.js headers, footers or navigation areas cannot be consumed in Backbone micro-services)
  2. The pause until the page is assembled in SPAs due to JavaScript downloading and parsing can range from a short blip to a seriously annoying blank page stare. I understand that very dynamic content may need some time to be assembled but shared areas such as headers, footers, sidebars etc. should arrive quickly, and so should the initial content (yeah, I don’t like large SPAs, why do you ask?)

The approach we have taken can be called ‘isomorphic’ – we like to initially render on the server for SEO and fast first content, and later progressively enhance using JavaScript ‘on the fly’, and dynamically load with Require.js. If you use Node.js and JavaScript templating engine such as Dust.js, the same partials can be reused on the client (something Airbnb has demonstrated as a viable option). The problem is – we need to render a complete initial page on the server, and we would like the shared areas such as headers, sidebars and footers to arrive as part of that first page. With a micro-service system, we need a solution for distributed document model on the server.


Clifton and I talked about options at length, and he has a nice breakdown of the alternatives on the Compoxure GitHub home page. For your convenience, I will briefly call out some of them:

  1. Ajax – this is a client-side MVC approach. I already mentioned why I don’t like it – it is bad for SEO, and you need to stare at the blank page while JavaScript is being downloaded and/or parsed. We prefer to use JavaScript after the initial hit.
  2. iFrames – you can fake a document component model by using seamless iframes. Bad for SEO again, there is no opportunity for caching (and therefore performance problems due to latency), content in iFrames is clipped at the edges, and cross-frame communication is a problem (although there are window.postMessage workarounds). They do, however, work around the single-domain restriction browsers impose on Ajax. Nevertheless, they have all the cool factor of re-implementing framesets from the 90s.
  3. Server Side Includes (SSIs) – you can inject content using this approach if you use a proxy such as Nginx. It can work and even provide for some level of caching, but not the programmatic and fine grain control that is desirable when different shared areas need different TTL (time to live) values.
  4. Edge Side Includes (ESIs) – a more complete implementation that unfortunately locks you into Varnish or Akamai.

Obviously for Clifton’s team (and ourselves), none of these approaches quite delivers, which is why services like Compoxure exist in the first place.

Direct composition approach

Before I had an opportunity to play with Compoxure, we spent a lot of time wrestling with this problem in our own project. Our current thinking is illustrated in the following diagram:

The key aspects of this approach are:

  1. Common areas are served by individual composition services.
  2. Common area service(s) are proxied by Nginx so that they can later be called by Ajax calls. This allows the same partials to be reused after the initial page has rendered (hence ‘isomorphic’).
  3. Common area service can also serve CSS and JavaScript. Unlike the hoops we need to go through to stitch HTML together, CSS and JavaScript can simply be linked in HEAD of the micro-service page. Nginx helps making the URLs nice, for example ‘/common/header/style.css’ and ‘/common/header/header.js’.
  4. Each micro-service is responsible for making a server-side call, fetching the common area response and passing it into the view for inlining (steps 4 and 5 are sketched in code after this list).
  5. Each micro-service takes advantage of shared Redis to cache the responses from each common service. Common services that require authentication and can deliver personalized response are stored in Redis on a per-user basis.
  6. Common areas are responsible for publishing messages to the message broker when something changes. Any dynamic content injected into the response is monitored and if changed, a message is fired to ensure cached values are invalidated. At the minimum, common areas should publish a general ‘drop cache’ message on restart (to ensure new service deployments that contain changes are picked up right away).
  7. Micro-services listen to invalidation messages and drop the cached values when they arrive.
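
A minimal sketch of steps 4 and 5 in a Node.js micro-service (all names, ports and URLs are assumptions made for illustration):

var express = require('express');
var request = require('request');
var redis = require('redis');

var app = express();
var client = redis.createClient();

var HEADER_URL = 'http://localhost:3001/common/header';
var CACHE_KEY = 'common:header';
var CACHE_TTL = 60; // seconds

function fetchHeader(callback) {
  // Try the shared Redis cache first
  client.get(CACHE_KEY, function(err, cached) {
    if (!err && cached) {
      return callback(null, cached);
    }
    // Cache miss - call the common area service directly
    request.get(HEADER_URL, function(err, res, body) {
      if (err || res.statusCode !== 200) {
        // Degrade gracefully - render the page without the header
        return callback(null, '');
      }
      client.setex(CACHE_KEY, CACHE_TTL, body);
      callback(null, body);
    });
  });
}

// In a controller: fetch the common header and pass it to the view for inlining
app.get('/', function(req, res) {
  fetchHeader(function(err, header) {
    res.render('index', { header: header });
  });
});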

This approach has several things going for it. It uses caching, allowing micro-services to have something to render even when common area services are down. There are no intermediaries – the service is directly responding to the page request, so the performance should be good.

The downside is that each service is responsible for making the network calls and doing it in a resilient manner (circuit breaker, exponential back-off and such). If all services are using Node.js, a module that encapsulates Redis communication, circuit breaker etc. would help abstract out this complexity (and reduce bugs). However, if micro-services are in Java or Go, we would have to duplicate this using language-specific approaches. It is not exactly rocket science, but it is not DRY either.

The Compoxure approach

Clifton and his team have taken a route that mimics ESI/SSI while addressing their shortcomings. They have their own diagrams, but I put together another one to better illustrate the difference from the direct composition diagram above:

In this approach, composition is actually performed in the Compoxure proxy, inserted between Nginx and the micro-services. Instead of making its own network calls, each micro-service adds special attributes to the DIV where the common area fragment should be injected. These attributes control parameters such as what to include, what cache TTLs to employ, which cache key to use etc. There is a lot of detail in the way these properties are set (RTFM), but suffice it to say that the Compoxure proxy serves as an HTML filter that injects the content from the common areas into these DIVs as instructed.

<div cx-url='{{server:local}}/application/widget/{{cookie:userId}}'
     cx-cache-ttl='10s' cx-cache-key='widget:user:{{cookie:userId}}'
     cx-timeout='1s' cx-statsd-key="widget_user">
  This content will be replaced on the way through
</div>

This approach has many advantages:

  1. The whole business of calling the common area service(s), caching the response according to TTLs, dealing with network failure etc. is handled by the proxy, not by micro-services.
  2. Content injection is stack-agnostic – it does not matter how the micro-service that serves the HTML is written (in Node.js, Java, Go etc.) as long as the response contains the expected tags.
  3. Even in a system written entirely in Node.js, writing micro-services is easier – there is no special code to add to each controller.
  4. Compoxure is used only to render the initial page. After that, Ajax takes over and composition service is hit with Ajax calls directly.

Contrasting the approach with direct composition, we identified the following areas of concern:

  1. Compoxure parses HTML in order to locate DIVs with special tags. This adds a performance hit, although practical results imply it is fairly small.
  2. Special tags are not HTML5 compliant (‘data-‘ prefix would work). If this bothers you, you can configure Compoxure to completely replace the DIV with these tags with the injected content, so this is likely a non-issue.
  3. Obviously Compoxure inserts itself in front of the micro-services and must not go down. It goes without saying that you need to run multiple instances and practice ZDD (Zero-Downtime Deployment).
  4. Caching is static i.e. content is cached based on TTLs. This makes picking the TTL values tricky – our approach that involves pub/sub allows us to use higher TTL values because we will be told when to drop the cached value.
  5. When you develop, direct composition approach requires that you have your own micro-service up, as well as common area services. Compoxure adds another process to start and configure locally in order to be able to see your page with all the common areas rendered. If you hit your micro-service directly, all the DIVs with the ‘cx-‘ properties will be empty (or contain the placeholder content).


Direct composition and Compoxure proxy are two valid approaches to the server-side document component model problem. They both work well, with different tradeoffs. Compoxure is more comfortable for developers – they just configure a special placeholder div and magic happens on the way to the browser. Direct composition relies on fewer moving parts, but makes each controller repeat the same code (unless that code is encapsulated in a shared Node.js module).

An approach that bridges both worlds and something we are seriously thinking of doing is to write a Dust.js helper that further simplifies inclusion of the common areas. Instead of importing a module, you would import a helper and then just use it in your markup:

{@import url="{headerUrl}" cache-ttl="10s"
cache-key="widget:user:{userid}" timeout="1s"}

Of course, Compoxure has some great properties that are not easy to replicate with this approach. For example, it does not pass TTL values to Redis directly because that would cause the cached content to disappear after the countdown; Compoxure prefers to keep the last content past its TTL in case the service is down (better to serve slightly stale content than no content at all). This is a great feature and would need to be replicated here. I am sure I am missing other great features and Clifton will probably remind me about them.


In the end, I like both approaches for different reasons, and I can see a team use both successfully. In fact, I could see a solution where both are available – a Dust.js helper for Node.js/Dust.js micro-services, and Compoxure for everybody else (as a fallback for services that cannot or do not want to fetch common areas programmatically). Either way, the result is superior to the alternatives – I strongly encourage you to try it in your next micro-service project.

You don’t even have to give up your beloved client-side MVCs – we have examples where direct composition is used in a page with Angular.js apps and another with a Backbone app. These days, we are spoiled for choice.

© Dejan Glozic, 2014

Sharing micro-service authentication using Nginx, Passport and Redis

Wikimedia Commons, Abgeschlossen 1, by Montillona

And we are back with the regularly scheduled programming, and I didn’t talk about micro-services in a while. Here is what is occupying my days now – securing a micro-service system. Breaking down a monolith into a collection of micro-services has some wonderful properties, but also some nasty side-effects. One of them is authentication.

The key problem of a micro-service system is ensuring that its federated nature is transparent to the users. This is easier to accomplish in the areas that are naturally federated (e.g. a collection of API end points). Alas, there are two areas where it is very hard to hide the modular nature of the system: composing Web pages from contributions coming from multiple services, and security. I will cover composition in one of the next posts, which leaves us with the topic of the day.

In a nutshell, we want a secure system, but not at the expense of user experience. Think about the European Union before 1990. In some parts of Europe, you could sit in a car, drive in any direction and cross five countries before sunset. In those days, waiting in line at a customs checkpoint would get old fast and could even turn into quite an ordeal in extreme cases.

Contrast it to today – once you enter EU, you just keep driving, passing countries as if they are Canadian provinces. Much better user experience.

We want this experience for our micro-service system – we want to hop between micro-services and be secure, yet not be actively aware that the system is not a monolith.

It starts with a proxy

A first step in securing a micro-service system starts at a proxy such as Nginx. Placing a multi-purpose proxy in front of our services has several benefits:

  1. It allows us to practice friendly URL architecture – we can proxy nice front end URLs (say, ‘/users’ and ‘/projects’) to separate micro-services (‘users’ and ‘projects’, respectively). It also goes around the fact that each Node.js service runs on a separate port (something that JEE veterans tend to hate in Node.js, since several apps running in the same JEE container can share ports).
  2. It allows us to enable load balancing – we can proxy the same front end location to a collection of service instances running in different VMs or virtual containers (unless you are using a PaaS, at which point you just need to increment the instance counter).
  3. It represents a single domain to the browser – this is beneficial when it comes to sharing cookies, as well as making Ajax calls without tripping the browser’s ‘same origin’ policy (I know, CORS, but this is much easier)

If we wanted to be really cheap, we could tack on another role for the front end proxy – authentication. Since all the requests are passing through it, we can configure a module to handle authentication as well and make all the micro-services behind it handle only authenticated requests. Basic auth module is readily available for Nginx, but anything more sophisticated is normally not done this way.

Use Passport

Most people will need something better than basic auth, and since we are using Node.js for our micro-service system, Passport is a natural choice. It has support for several different authentication strategies, including OAuth 1.0 and OAuth 2.0, and several well known providers (Twitter, Facebook, LinkedIn). You can easily start with a stock strategy and extend it to handle your unique needs (this will most likely be needed for OAuth 2.0 which is not a single protocol but an authentication framework, much to the dismay of Eran Hammer).

Passport is a module that you insert in your Node.js app as middleware, and hook up to the Express session. You need to expose two endpoints: ‘/auth/<provider>’ and ‘/auth/<provider>/callback’. The former is where you redirect the flow in order to start user login, while the latter is where the Auth server will call back after authenticating the user, bringing in some kind of authorization code. You can use the code to go to the token endpoint and obtain some kind of access token (e.g. bearer token in case of OAuth 2.0). With the access token, you can make authenticated calls to downstream services, or call into the profile service and fetch the user info. Once this data is obtained, Passport will tack it on the request object and also serialize it in the session for subsequent use (so that you don’t need to authenticate for each request).
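
To make this concrete, here is a minimal sketch of the two endpoints using the Facebook strategy (the IDs, secrets and URLs are placeholders, and the usual Express session plus passport.initialize()/passport.session() middleware are assumed to be configured):

var express = require('express');
var passport = require('passport');
var FacebookStrategy = require('passport-facebook').Strategy;

passport.use(new FacebookStrategy({
    clientID: 'FACEBOOK_APP_ID',
    clientSecret: 'FACEBOOK_APP_SECRET',
    callbackURL: 'https://example.com/auth/facebook/callback'
  },
  function(accessToken, refreshToken, profile, done) {
    // Keep the access token around for downstream calls
    done(null, { profile: profile, accessToken: accessToken });
  }
));

// Serialize the whole user into the session for simplicity;
// a real app might store only a user id
passport.serializeUser(function(user, done) { done(null, user); });
passport.deserializeUser(function(user, done) { done(null, user); });

var app = express();

// Redirect here to start the login flow
app.get('/auth/facebook', passport.authenticate('facebook'));

// The Auth server calls back here with the authorization code
app.get('/auth/facebook/callback',
  passport.authenticate('facebook', {
    successRedirect: '/',
    failureRedirect: '/login'
  }));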

Passport module has a very nice web site with plenty of examples, so we don’t need to rehash them here. Instead, I want us to focus on our unique situation of securing a number of micro-services working together. What that means is:

  1. Each micro-service needs to be independently configured with Passport. In case of OAuth (1.0 or 2.0), they can share client ID and secret.
  2. Each micro-service can have a different callback URL as long as they all have the same domain. This is where we reap the benefit of the proxy – all the callback URLs do have the same domain thanks to it. In order to make this work, you should register your client’s callback_uri with the authentication provider as the shared root for each services’ callback. An actual callback passed to the authentication endpoint for each micro-service can be longer than the registered callback_uri as long as they all share common root.
  3. Each micro-service should use the same shared authentication strategy and user serialization/deserialization.

Using this approach, we can authenticate paths served by different micro-services, but we still don’t have Single Sign-On. This is because Express session is configured using an in-memory session store by default, which means that each micro-service has its own session.

Shared session cookie

This is not entirely true: since we are using the default session key (or can provide it explicitly when configuring the session), and we are using a single domain thanks to the proxy, all Node.js micro-services are sharing the same session cookie. Look in Firebug and you will notice a cookie called ‘connect.sid’ once you authenticate. So this is good – we are getting there. As I said, the problem is that while the session cookie is shared, this cookie is used to store and retrieve session data that is in memory, and this is private to each micro-service instance.

This is not good: even different instances of the same micro-service will not share session data, let alone different micro-services. We will be stuck in a pre-1990 Europe, metaphorically speaking – asked to authenticate over and over as we hop around the site.

Shared session store

In order to fix this problem, we need to configure Express session to use an external session store as a service. Redis is wonderfully easy to set up for this and works well as long as you don’t need to persist your session forever (if you restart Redis, you will lose session data and will need to authenticate again).

// connect-redis provides a Redis-backed session store for Express
var RedisStore = require('connect-redis')(express);

var ropts = {
   host: "localhost",
   port: 5556,
   pass: "secret"
};

app.use(express.session({ key: 'foobar.sid',
                          store: new RedisStore(ropts),
                          secret: 'secret' }));

I am assuming here that you are running Redis somewhere (which could range from trivial if you are using a PaaS, to somewhat less trivial if you need to install and configure it yourself).

What we now have is a system joined at both ends – Nginx proxy ensures session cookie is shared between all the micro-service instances it proxies, and Redis store ensures actual session data is shared as well. The corollary of this change is that no matter which service initiated the authentication handshake, the access token and the user profile are stored in the shared session and subsequent micro-services can readily access it.


Single Sign Off

Since we have Redis already configured, we can also use it for pub/sub to propagate the ‘logout’ event. In case there is state kept in Passport instances in micro-services, a system-wide logout for the session ensures that we don’t have a “partially logged on” system after a log out in one service.

I mentioned Redis just for simplicity – if you are writing a micro-service system, you most likely have some kind of a message broker, and you may want to use it instead of Redis pub/sub for propagating logout events for consistency.
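
A minimal sketch of the Redis pub/sub variant (the channel name and routes are assumptions):

var express = require('express');
var redis = require('redis');

var app = express();

// Publisher - in the service handling the logout request
var pub = redis.createClient();
app.get('/logout', function(req, res) {
  var sessionId = req.sessionID;
  req.logout();                       // Passport: clear the login state
  req.session.destroy(function() {
    pub.publish('logout', sessionId); // tell everybody else
    res.redirect('/');
  });
});

// Subscriber - in every micro-service
var sub = redis.createClient();
sub.subscribe('logout');
sub.on('message', function(channel, sessionId) {
  // Drop any state tied to this session (caches, Passport instances etc.)
});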

Calling downstream services

Not all micro-services will need the full Passport configuration. You can configure services to simply require an access token – they can just look for the ‘Authorization’ header and refuse to do anything if it is not present. For example, for OAuth 2.0 authentication, the app will need something like:

Authorization: Bearer 0b79bab50daca910b000d4f1a2b675d604257e42

The app can go back to the authentication server and verify that the token is still valid, or go straight to the profile endpoint and obtain user profile using the token (this doubles as token validation because the profile service will protest if the token is not valid). API services are good candidates for this approach, at least as one of the authentication mechanisms (they normally need another way for app-to-app authentication that does not involve an actual user interacting with the site).
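
For illustration, a minimal sketch of such a check as Express middleware (header parsing only; the verification step is indicated by a comment):

var express = require('express');
var app = express();

function requireBearerToken(req, res, next) {
  var header = req.headers['authorization'] || '';
  var match = header.match(/^Bearer (.+)$/);
  if (!match) {
    return res.status(401).send('Missing or malformed Authorization header');
  }
  req.accessToken = match[1];
  // Here the service would verify the token with the authentication
  // server, or use it against the profile endpoint
  next();
}

app.get('/api/data', requireBearerToken, function(req, res) {
  // ... serve the protected resource
});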

What about Java or Go?

This solution obviously works great if all the micro-services are written in Node.js. In a real-world system, some services may be written using other popular stacks. For example, what will happen if we write a Java service and try to participate?

Obviously, running a proxy to the Java micro-service will ensure it too has access to the same session cookie. Using open source Redis clients like Jedis will allow it to connect to the same session store. However, the picture is marred slightly by the fact that Express session signs the cookie with a combination of HMAC-Sha256 and ‘base64’ digest, plus some additional tweaking. This is obviously a very Express-centric approach and while it can be re-created on the Java side, there is this lingering feeling we created a Node-centric system and not a stack-agnostic one.
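
For reference, this is approximately what Express (via the cookie-signature module) does to the session cookie value; the ‘s:’ prefix marks the cookie as signed:

var crypto = require('crypto');

function signSessionCookie(sessionId, secret) {
  var hmac = crypto.createHmac('sha256', secret)
    .update(sessionId)
    .digest('base64')
    .replace(/\=+$/, '');   // trailing '=' padding is stripped
  return 's:' + sessionId + '.' + hmac;
}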

Java has its own session management system, and you can see its JSESSIONID cookie sitting next to the one created by Express. I will need to study this more to see if I can make Java and Node share session cookie creation and signing in a more stack-neutral way. In a system that is mostly Node.js with a Java service here and there, signing and unsigning the session cookie the way Express likes it may not be a big deal.

In addition, my current experimentation with Java points toward creating a JEE filter (middleware) that checks for the session cookie and redirects to the authentication endpoints if a user is not found in the session. True Java authentication mechanisms are not used, which may or may not be a problem for you if you are a JEE veteran. JEE filters allow HTTP requests to be wrapped, so methods such as 'getRemoteUser()' can be implemented to return the expected results to Java servlets.

I mentioned Go because it is an up-and-coming language that people seem to be using for writing micro-services more and more these days. I have no idea how to write session support for Go in this context, so I am tossing this in just as food for thought. If you know more, drop me a line.

Closing thoughts

There are many alternative ways to solve this problem, and I confess my mixed feelings about authentication. For me, it is like broccoli – I understand that it is important and good for me, but I cannot wait to get to the ice cream, or whatever the ice cream equivalent is in writing micro-service systems. Spending time with Passport and OAuth 2.0, I had to learn more than I ever wanted to know about authentication, but I am fairly pleased with how the system works now. What I like the most is its relative simplicity (to the extent that authentication can be simple). My hope is that by avoiding overly clever solutions, the chances are pretty good that the system will actually work well and not present us with difficult edge cases every day. I will report back if I was too naive.

© Dejan Glozic, 2014

NodeConf.eu 2014: Trip Report (Part 3)


This is the third and final installment of my NodeConf.eu 2014 report. There were also part 1 and part 2, which you should probably read first for continuity.

Day three of NodeConf.eu started with a 'Mad Science Act', with all the presenters trading the requisite lab coat (to add 'science' and also a bit of 'mad' to their presentations).

First off was James Halliday (also known as substack). Demonstrating ultra-hard-core cred with a Linux laptop surrounded by a sea of Macs, substack repeated the cherished Node.js ethos of creating many small modules that can be combined and re-combined ad nauseam, forming an ever-growing tool box. This approach does not need to end with modules – you can apply it to data, in the form of data streams. Substack demoed running code where multiple streams are combined by piping data to different routes of the server. You can read more and play with the demo at the dataplex GitHub repo.

Feross Aboukhadijeh took us on a wild ride of using WebRTC to replicate P2P BitTorrent communication between browsers, using WebRTC data channels. The presentation was a lot of fun, with the audience participating by sharing obligatory pictures of cats from their browsers in real time. This feature is made available in the open source library called WebTorrent, together with a BitTorrent bridge (a direct connection to BitTorrent is currently not possible due to the need for a TCP connection, so there needs to be a 'hybrid' client, which Feross is currently working on).


Travell Perkins, Fidelity Investments CTO, had a talk that I could not immediately place in this block, which got me confused until I realized it was originally supposed to run on Monday, as part of the 'Node.js in the enterprise' section (lab coat notwithstanding). I *think* his talk was about databases, possibly his SQLdown NPM module, but sadly I cannot locate my notes on the talk (too much coffee?).

Dominic Tarr (dubbed 'our resident Jesus', which is not hard to understand – just look at the picture!) was another of the hard-core speakers using a Linux laptop. The notes on his talk went missing along with those on Travell's, so only a picture here.

The last talk in this long-ish section was Mikola Lysenko on the challenges of developing multi-player online games using Web technologies alone. A key problem to solve is replication: multi-player games are distributed but share state, and this state needs to be replicated in a way that does not ruin the real-time nature of the game. He demonstrated several options for modeling real-time physical systems and their pros and cons (which, in the case of 'eventual consistency' with error correction, can yield some comical side effects, as seen in the demo). You can read more about his work in his multi-part blog post.

After a needed sugar-and-caffeine coma break, we switched to the hardware track. The first speaker was Colin Vernon, a designer on a strange mission – to bring NPM-style componentization to the hardware world. I was a hobbyist in my youth, and no stranger to the soldering gun. I had a long pause in dealing with hardware other than in the very limited sense of adding cards to PC slots, so this resurgence of interest in hardware definitely brings back memories. Colin introduced us to the amazing world of littlebits, where the Internet of Things can be manually put together using small modules that can be combined with the cloud and mobile remote-control apps in many interesting ways. This presentation reminded me of how Twitter looked like a bizarre and pointless toy until it turned into something much bigger and more powerful – this could be another such example.

Raquel Vélez brought us a Node.js-driven Bat-bot that she equipped with a pen, giving it some freedom as to where to go on the canvas and when to draw. The philosophical question was: at which point can we call the result 'art'? The practical demonstration almost succeeded, but the artist committed robocide by falling off the drawing table before finishing the painting (which made the price of the painting skyrocket). Modest starts like this may seem of little value, but Raquel reminded us that the Mars Rover is essentially just a much bigger and more complex bot.

Afternoon workshops reduced our collective age to pre-school, with all the toys lying around – drones, Korg bits (you can assemble an analog synth by connecting the bits together) and other goodies to wake up the kid in you.

All in all, it was a great conference, in a format that made TJ Fontaine call it a 'Node.js vacation', but it was of course much more than that. There was a lot of time for informal discussions (and an environment that encouraged them because, you know, we were on an island with nothing else to do :). Being able to talk to the people who wrote the Node.js modules we use daily was an absolute treat, as was meeting some internet contacts in person for the first time. I came back home with a bag full of memories and lots of ideas for new projects.

I will close with a view of the Waterford Castle ferry that we got to know well during our conference. See you all again next year!


© Dejan Glozic, 2014