Components Are (Still) Hard on the Web

[Image: Matryoshka dolls]

Here’s Johnny! I know, I know. It’s been a while since I posted something. In my defence, I was busy gathering real-world experience – the kind that lets me forget the imposter syndrome for a while. We have been busy shipping a complex microservice system, which gave me a chance to test a number of my blog posts in real life. Most are holding up pretty well, thank you very much. However, one thing continues to bother me: writing reusable components is still maddeningly hard on the Web in 2016.

Framing the problem

First, let’s define the problem. In a Web app of sufficient complexity, there will be a number of components that you will want to reuse. You already know we are a large Node.js shop – reusing modules via NPM is second nature to us. It would be so nice to be able to npm install a Web component and just drop it into your app. In fact, we do exactly that with React components. Alas, it is much harder once you leave the React world.

First of all, let’s define what ‘reusing a Web component’ would mean:

  1. You can put a component written by somebody else somewhere on your page.
  2. The component will work well and will not do something nasty.
  3. The component will not look out of place in your particular design.
  4. You will know what is happening inside the component, and will be able to react to its lifecycle.

First pick a buffet

Component (or widget) reuse was for the longest time a staple of desktop UI development. You bought into the component model by using the particular widget toolkit. That is not a big deal on Windows or macOS – you have no choice if you want to make native applications. The same applies to native mobile development. However, on the Web there is no single component model. Beyond components that are an intrinsic part of HTML, in order to create custom components you need to buy into one of the popular Web frameworks first. You need to pick the proverbial buffet before you can sample from it.

In 2016, the key battle is between abstracting away and embracing the Web platform. You can abstract the platform (HTML, CSS, DOM) using JavaScript. This is the approach used by React (and my team by extension). Alternatively, you can embrace the platform and use HTML as your base (what Web Components, Polymer and perhaps Angular 2 propose). You cannot mix and match – you need to pick your approach first.

OK, I lied. You CAN mix and match, but it becomes awkward and heavy. React abstracts out HTML, but if you use a custom element instead of the built-in HTML, React will work fine. All the React traits (diff-ing the two incremental iterations of the virtual DOM, then applying the difference to the actual DOM) work for custom components as well. Therefore, it is fine to slip a Web Component into a React app.
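For illustration, here is a minimal sketch of slipping a custom element into a React render tree. The <fancy-gauge> element and its 'gauge-changed' event are hypothetical, standing in for any Web Component you might reuse:

var React = require('react');

// Sketch only: <fancy-gauge> and 'gauge-changed' are made-up names.
var GaugePanel = React.createClass({
  componentDidMount: function() {
    // Custom element events bypass React's synthetic event system,
    // so we attach a plain DOM listener to the rendered node.
    this.refs.gauge.addEventListener('gauge-changed', this.props.onGaugeChanged);
  },
  componentWillUnmount: function() {
    this.refs.gauge.removeEventListener('gauge-changed', this.props.onGaugeChanged);
  },
  render: function() {
    // React diffs the custom element like any other DOM node.
    return React.createElement('fancy-gauge', { ref: 'gauge', value: this.props.value });
  }
});

module.exports = GaugePanel;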

The opposite does not really work well – consuming a React component in Angular or Polymer is awkward and rarely worth it. Not that the original direction is necessarily worth it either – you need to load the Web Components JavaScript AND the React JavaScript.

Don’t eat the poison mushroom

One of the ways people loaded components into their pages in the past was by using good old iframes. Think what you will about them, but you could really lock the components down that way. If you load a component into your own DOM, you need to really trust it. The same-origin policy and CORS are supposed to help you prevent a component from leaking data from your page to the mother ship. Nevertheless, particularly when it comes to more complex components, it pays to know what they are doing, go through the source code etc. This is where open source really helps – don’t load a black box component into your DOM.

The shoes don’t match the belt

One of the most complex problems to deal with when consuming a Web component of any type is the design. When you are working in a native SDK, the look and feel of the component is defined by the underlying toolkit. All iOS components have the ‘right’ look and feel out of the box when you consume them. However, Web apps have their own themes, which creates a combinatorial explosion. A reusable component needs to do one of the following things:

  1. Be configurable and themeable so that you can either set a few parameters to better blend it into your style guide, or provide an entire template to really dial it in
  2. Be generic and inoffensive enough to be equidistant from any parent theme
  3. Be instantly recognizable (think youtube player) in a way that makes it OK that it has its own look and feel.

A very complex reusable component with a number of elements can be very hard for consumers to dial in visually. In corporations, this can be mitigated by reducing the number of themes in play: a large component may take it upon itself to support two or three widely used design style guides. Then all you need to do is provide a single parameter (the style guide name) to make the component use the right styles across the board.

What is going on inside?

Adding a component into your page is not only a matter of placement and visual style. Virtually all reusable components are interactive. A component can be self-contained (for example, all activity in a youtube player is confined to its bounding box), or expected to interact with the parent. If the component must interact with the parent, you need to consider the abstraction chain. Consider the simple countdown timer as a reusable component. Here is how the abstraction chain works:

[Diagram: the timer abstraction chain]

The timer itself uses two low-level components – ‘Start’ and ‘Stop’ buttons. Inside the timer, the code will add click listeners for both buttons. The listeners will add semantic meaning to the buttons by doing things according to their role – starting and stopping the timer.

Finally, when this component is consumed by your page, only one listener is available – ‘onTimerCountdown()’. Users will interact with the timer, and when the timer counts down to 0, the listener you registered will be notified. You should be able to expect events at the right semantic level from all reusable components, from the simplest calendars to large complex components.
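To make the chain concrete, here is a rough sketch in plain JavaScript (the markup and internals are made up for illustration; the point is that the page only ever registers the semantic callback):

// Illustrative sketch: Start/Stop click listeners live inside the component;
// the consuming page only supplies the semantic onTimerCountdown callback.
function createCountdownTimer(container, seconds, onTimerCountdown) {
  var remaining = seconds;
  var handle = null;

  container.innerHTML =
    '<span class="remaining">' + remaining + '</span>' +
    '<button class="start">Start</button>' +
    '<button class="stop">Stop</button>';

  container.querySelector('.start').addEventListener('click', function() {
    handle = setInterval(function() {
      remaining--;
      container.querySelector('.remaining').textContent = remaining;
      if (remaining === 0) {
        clearInterval(handle);
        onTimerCountdown();   // the only event the page ever sees
      }
    }, 1000);
  });

  container.querySelector('.stop').addEventListener('click', function() {
    clearInterval(handle);
  });
}

// Usage: createCountdownTimer(document.getElementById('timer'), 60, function() { console.log('Done'); });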

If a component can be made part of a larger document, the two things you will care about the most are serialization and dirty state. When users interact with the component and make a modification, you want to be told that the component has changed. This should trigger the dirty state of the parent. When the user clicks ‘Save’, you should be able to serialize the component and store this state in the larger document. Inversely, on bootstrap you should be able to pass the serialized state to the component so it can initialize itself.
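The exact API will vary, but the contract could look something like this sketch (all the names here are hypothetical):

// Hypothetical contract between a parent document and an embedded component.
var widget = createEditableWidget(node, {
  onDirty: function() {
    parentDocument.markDirty();              // component changed: flip the parent's dirty state
  }
});

widget.deserialize(parentDocument.state.widget);    // bootstrap the component from stored state

saveButton.addEventListener('click', function() {
  parentDocument.state.widget = widget.serialize(); // store component state in the larger document
  parentDocument.save();
});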

Note that the actual technology used does not matter here – even the components embedded using iframes can use window.postMessage to send events up to the parent (and accept messages from the parent). While components living in your DOM will resize automatically, iframe-ed components will need to also send resizing events via window.postMessage to allow the parent to set the new size of the iframe.
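A sketch of that handshake (origins and element ids are placeholders; a real implementation should always check event.origin):

// Inside the iframe-ed component: report the content size to the parent.
// The { type: 'resize' } message shape is just a convention between the two sides.
window.parent.postMessage(
  { type: 'resize', height: document.body.scrollHeight },
  'https://parent.example.com'
);

// In the parent page: verify the sender, then resize the iframe accordingly.
window.addEventListener('message', function(event) {
  if (event.origin !== 'https://component.example.com') return;
  if (event.data && event.data.type === 'resize') {
    document.getElementById('component-frame').style.height = event.data.height + 'px';
  }
});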

The long tail

More complex reusable components don’t only have a client-side presence. They need to call back to the server and fetch the data they require. You can configure such a component in a few ways:

  1. You can fetch the data the component requires yourself. In that case, the component is fully dependent on the container, and it is the container’s responsibility to perform all the XHR calls to fetch the data and pass it to the component (see the sketch after this list). This approach may be best for pages that want full control of the network calls. As an added bonus, you can fit such a component into a data flow such as Flux, where some of the data may be coming from Web Socket driven server-side push, not just XHR requests.
  2. You can proxy the requests that the component is performing. This approach is also acceptable because it allows the proxy to control which third-party servers are going to be whitelisted.
  3. You can configure CORS so that the component can make direct calls on its own. This needs to be done carefully to avoid the component siphoning data from the page to servers you don’t approve.
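Here is a sketch of the first option, where the container owns the network call and the component stays network-free (the endpoint and the OrderList component are made up):

var React = require('react');
var ReactDOM = require('react-dom');

// The container performs the network call...
fetch('/api/orders?status=open')
  .then(function(response) { return response.json(); })
  .then(function(orders) {
    // ...and the reusable component (OrderList, hypothetical) only renders what it is handed.
    ReactDOM.render(
      React.createElement(OrderList, { orders: orders }),
      document.getElementById('orders')
    );
  });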

In all of these cases you may still want to be told about the events inside the component, using the component events discussed above.

Frameworks are just the beginning

So there you go – all the problems you need to wrestle with when trying to reuse components in a larger project. Chances are the component is written in the ‘wrong’ framework, but trying to make the component load in your page is only the beginning. Fitting the component into the page visually, figuring out what is happening in it events-wise, and feeding it data from the server is the real battle. Unless you are trying to load a calendar widget, this is where you will spend most of your time.

© Dejan Glozic, 2016

Pessimism as a Service

As far as I can remember, I was forgetful (ironic, I know). I could be driving for ten minutes, wondering why I don’t see the phone Bluetooth symbol on my car’s dash, and realizing I forgot my cellphone at home. Lately, when I reach the door, I ask myself: “OK, what have you forgotten?” Not “have you forgotten anything” but “what”, assuming that an affirmative answer is a foregone conclusion. Such a negative, “guilty until proven innocent” approach has saved me many times, but taxed my soul. Am I really that predictable? Is cynicism the only way?

As our super cool, micro-service packed, React supercharged project is picking up steam, I am looking at everything we have done and counting the ways we have deployed ‘Pessimism as a Service’ to production. These examples may seem disconnected to you, but I assure you, there is a cold, calculated thread binding them. Hey, it’s a totally accepted artistic form – my own omnibus, as it were.

Micro services and human nature

I said it before, and I will say it again – micro services are more about people and process than about technology. In his strained attempt to disguise his distaste for micro services, Martin Fowler has still mustered some faint praise for the way micro services tend to enforce code modularity.

The trouble is that, with a monolithic system, it’s usually pretty easy to sneak around the barrier. Doing this can be a useful tactical shortcut to getting features built quickly, but done widely they undermine the modular structure and trash the team’s productivity. Putting the modules into separate services makes the boundaries firmer, making it much harder to find these cancerous workarounds.

Martin Fowler on Strong Module Boundaries

If this does not inject you with a healthy dose of Weltschmerz, nothing will. What he is saying is that reaching directly into modules instead of using proper interfaces is a tech version of a cookie jar, and instead of counting on your maturity and discipline, micro services simply hide the cookie jar or put it on a top shelf, where you can’t reach it because you skipped the gym too many times.

Large systems are built by real-world organizations, and people are messy, petty, complicated, full of hidden agendas and desires. Engineers who try to look at micro services as a purely rational system fail to grasp the potent property that requires high emotional intelligence to understand. And it is nothing new – in fact I posit that the first micro service architecture was practiced by the Nipmuk Indians, living near a lake in today’s Massachusetts with the impossible name Chargoggagoggmanchauggagoggchaubunagungamaugg. Translated, it is really a module boundary protocol:

You fish on your side [of the lake], I fish on mine, nobody fishes in the middle.


– Full Indian name for the lake Manchaug, shortened by locals not familiar with micro-service architecture

So, yeah. Ideally, a monolithic system could be highly modular and clean if implemented by highly disciplined, rational people impervious to human foibles. When you manage to hire a teamful of such people, do let me know. In the meantime, the jaded micro service system we are using is humming in production.

AKKA is not a true micro service system

True story – I went to present at the first Toronto Reactive meetup because: (a) I mixed up Reactive with React and (b) I wanted to learn what the whole Reactive Manifesto was about by presenting on it. Hey, learning by doing!

As such, I was exposed to the AKKA framework. You can read all about Reactive in one of my previous blogs, but suffice to say that AKKA is a framework based on the ‘actor’ pattern and designed specifically to foster an asynchronous, dynamic and flexible architecture that can be deployed to a single server, and then spread out across any number of clusters as the needs grow.

There is a lot to like in AKKA, but I must sadly posit here that it is not a true representative of a micro service system. It is a system inspired by micro services, implementing many of their tenets and with some really nice properties. And yet it betrays one of the key aspects of micro services in that it is not pessimistic. In order to get the benefits of it, you need to lock yourself into a Scala/AKKA stack, paraphrasing the famous Ford Model T joke (you could order it in any color as long as it was black). You lose the ability to choose your stack per micro service.

This property is often misunderstood as a licence for anarchy – a recipe for disaster, cobbling together a concoction of languages, platforms, stacks and runtimes that nobody will be able to keep running and maintain. Of course that unchecked freedom has its price: a real world microservice system will most likely be using only 2-3 stacks (in our case, they ended up being Node.js and Java) and a small number of client side frameworks (for our extended team, React and AngularJS). But there is an ocean of separation between one and two platforms – the former representing lock-in, the latter being freedom.

As I always assume I forgot something, we should always assume that something better is just around the corner, and we don’t want to be hopelessly locked in when it arrives. But we also don’t want to bet our farm on it just yet. This is where the ability to start small is vital: we can try out new approaches in a single micro service without the obligation of a wholesale switch. AKKA requires that we profess our undying love to it and its Scala/JVM stack. Your mileage may vary, but I cannot put all my money in that or any other single basket.

React is smart so you can be dumb

On to the client side of the full stack. My readers know I have expressed my reservations about AngularJS before. I always found its syntax weird and its barrier to entry too high for a practical working system, and that’s before we even mention the version 2.0 schism. However, I always feared I would be viewed as the ‘old man yells at cloud’ type for not recognizing Angular’s genius – until React arrived.

You see, I got React instantly. I didn’t have to scratch my head and re-read its examples. When you read React code, you know exactly what is happening. Of course, that’s because it does less – just the View part. You need to implement Flux for coordinating actions, data stores and views, but Flux is even simpler, and consists of a single dispatcher module you fetch from NPM. You also need something like react-router in order to handle client side page switching. Then you need something like react-engine if you want isomorphic apps (I was told the new term is ‘universal’; I will use both for fun).
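To illustrate how small that surface is, here is a minimal sketch of the Flux wiring using the dispatcher from the 'flux' NPM module (the action and store names are made up):

var Dispatcher = require('flux').Dispatcher;
var AppDispatcher = new Dispatcher();

// A store registers with the dispatcher and reacts to the actions it cares about.
var timerStore = { remaining: 0 };
AppDispatcher.register(function(action) {
  if (action.type === 'TIMER_TICK') {
    timerStore.remaining = action.remaining;
    // ...emit a change event here so that React views re-render.
  }
});

// An action creator dispatches a plain object; views never touch stores directly.
function tick(remaining) {
  AppDispatcher.dispatch({ type: 'TIMER_TICK', remaining: remaining });
}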

You may not fathom the difference in approaches between AngularJS and React until you watch the video explaining React’s design philosophy. You can tell that Facebook deploys React to production. In my opinion, Angular suffers from being designed by rock stars for other rock stars. Once you start getting real and deploying non-trivial apps to production, you need to scale, and that means increasing the number of people that can be productive with your framework of choice. React was designed with the assumption that if the framework is predictable and relatively simple, the velocity can be increased without the proportional increase in the bug rate. Otherwise, what’s the point?

React designers took human nature into account, assumed that we are all dumb at various times of day or week, and ensured that even in those unhappy moments, we can still read our React code and understand what it is doing with relative ease. It feels like a rotten compromise, but it is pure genius.

Web Components just around the corner

Ah, Web Components. The ultimate native component model that will solve Everything. Three years ago there was a lot of excitement, and people jumped on the polyfills to ‘temporarily’ shim the browsers until everybody implemented them natively. Fast-forward to November 2015, and today you still cannot bet your project on them in production. Yes, they are natively implemented in Chrome, but if you didn’t want to use IE-only browser extensions 15 years ago, why would you do it now, when Google, and not Microsoft, is the vendor trying to sell its agenda as a standard?

Yes, there has been some movement on cross-browser support for Web Components, at least when Shadow DOM is concerned. Nevertheless, nothing stands still, and now some aspects of the ES6 module loading are at odds with HTML Imports (an important part of Web Components spec).

And of course, what has also happened in the last three years is that we got React. It has a very strong component model (albeit one that you can only peruse if you lock yourself into React), and more importantly, it extends to the server and even native rendering. This makes React attractive in ways that Web Components will never be able to match.

A year ago, we seriously toyed with the idea of just using shims until Web Components, clearly the future of the component models, arrive. I am glad I allowed my jaded self to prevail and instead used React – it helped us ship to production, with no performance compromises coming from shims, and looking back, we would be nowhere close to the promised glorious future if we allowed exuberance to sway our better judgement.

I am not saying ‘No’ to Web Components forever – they are actually not incompatible with React, and in fact a low-level Web Component can be used just like a native component in a React application, reaping the benefits of the DOM diffing. However, we don’t intend to write Web Components ourselves – we are fully isomorphic and server-side rendering gives us benefits that a comparable Web Component would not.

I predict that Web Components will be the way for incompatible frameworks to co-exist, the way to ‘fish in the middle’ of the Nipmuk lake mentioned above.

Optimism dreams, pessimism ships

These four examples show why enthusiasm and optimism rule the prototypes, meetups and articles, but pessimism takes over in production. Taking human nature into account, rolling with the imperfections of reality, expecting and preparing for the worst pays off tenfold once the projects get serious.

Now, if I can only remember if I turned the stove off before leaving home.

© Dejan Glozic, 2015

With React, I Don’t Need to Be a Ninja Rock Star Unicorn

San Diego Comic-Con 2011 - Lego Ninja, The Conmunity - Pop Culture Geek from Los Angeles, CA, USA

Followers of my blog may remember my microservice period. There was a time I could not shut up about them. Now, several blogs and not a peep. Am I over microservices? Not by a long stretch.

For us, microservices are gravity now. I remember an interview with Billy Corgan of The Smashing Pumpkins where, when pressed about his choice of guitar strings, he answered: “I use them”. That’s how I feel about microservices – now that we live and breathe them every day, they are not exciting, they are air. The only people who get excited about air are SCUBA divers, I suppose, particularly if they are running low.

ReactJS, on the other hand, is interesting to us because we are still figuring it out. For years we were trying to have our cake and eat it too – merge the benefits of server and client side rendering. I guess my answer to ‘vanilla or chocolate ice cream’ is ‘yes please’, and with React, I can have my chocolate sundae for breakfast, lunch and dinner.

The problem with ninja rock star unicorns

The magical creature from the title is of course the sought after 10x developer. He/she knows all the modern frameworks, sometimes reads about the not so modern ones just for laughs, and thrives where others are banging their heads against their desks repeatedly.

Rock star developers not only do not shy away from frameworks with a high barrier to entry such as Angular, they often write their own, even more sophisticated and intricate. Nothing wrong with that, except that you cannot hire a full team of them, and even if you could, I doubt the team dynamics would be particularly great. The reality of today’s developer job market is that you will likely staff a team with great, competent and potentially passionate developers. I say potentially because in many cases their passion will depend on you as a leader and your ability to instill it.

The React connection

This is where React comes into play. Careful readers of this blog may remember my aversion to JavaScript frameworks in general. For modestly interactive sites, you can go a long way with just Node.js, Express, a templating library such as Dust.js and a sprinkle of jQuery for a good measure. However, a highly dynamic app driven by REST APIs is too much of a challenge for jQuery or vanilla JS alone. I am not saying it cannot be done, but by the same token, while you can cut your grass with box cutters, it is massively less efficient than a lawn mower. At the end of the day, you need the right tools for the right job, and that means some kind of a JavaScript library or a framework.

What kept me away from Angular for the longest time was the opinionated nature of it, and the extent to which it seeks to define your entire world. Angular is a cult – you cannot be in it part time.

“Angular is a cult – you cannot be only a part time member.”

Not an Angular cult member

I have already written about why I got attracted to React from a technical point of view. There are great things you can do with isomorphic apps when you combine great React libraries. But these are all technical reasons. The main reason I am attracted to React is its philosophy.

We are all idiots at times

Angular used to pride itself as a ‘super-heroic JavaScript framework’. Last time I checked, they removed it from the home page (although it still appears in Google searches – ironic, I know). I presume they meant that the framework itself gives you super-hero powers, not that you need to be a super-hero developer in order to use it, but sometimes I felt that way.

I am singling out Angular somewhat unfairly – most MVC JavaScript frameworks approach the problem by giving you tools to carefully wire up elements on the page with events, react to watched variables, surgically change styles, properties, collections and so on. It sounds great in the beginning, until you scale to a real-world application, and things become really complex.

This complexity may not be a big deal while you are in the thick of it, coding like a beast. You may be a great developer. The problem is that the moment you turn your head away from that code, you start the ‘idiot’ clock – until that time when you no longer remember how everything fits together.

Now, if you are looking at your own code and cannot figure out how it works, what are the chances another team member will? I long ago proclaimed that dumb code is good and smart code is bad. Not bad code, just straightforward, easy to understand, ‘no software patent here’ code. Your future self will be grateful, future maintainers doubly so.

React allows us to write boring code

Let me summarize the key React philosophy in one sentence:

“Something changed in my application’s state. Better re-render it.”

React in a nutshell

I cannot emphasize enough the importance of this approach. It does not say “watch 50 variables and surgically change DOM elements and properties when something happens in one component, then cascade those surgical changes to other components also watching those variables”. Behind this, of course, is React’s ingenious approach of using a virtual DOM and only updating the real DOM with the actual changes between the two. You can read about it on React’s web page.
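As an illustration, here is a tiny sketch of that mindset (the component is made up; notice there is no DOM surgery anywhere):

var React = require('react');

var Counter = React.createClass({
  getInitialState: function() {
    return { count: 0 };
  },
  handleClick: function() {
    // Something changed in the application's state. Better re-render it.
    this.setState({ count: this.state.count + 1 });
  },
  render: function() {
    // Re-run on every state change; React diffs the virtual DOM and only
    // touches the real DOM where the output actually differs.
    return React.createElement('button', { onClick: this.handleClick },
      'Clicked ' + this.state.count + ' times');
  }
});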

After many years of ‘surgical’ JavaScript DOM manipulation, there is something completely counter-intuitive about the ‘just re-render’ approach. It feels like it should not work. It feels wasteful to keep creating all those JavaScript objects, until you realize that they are really cheap, and that the true cost is the actual DOM manipulation.

In fact, you can use this approach with any JavaScript rendering engine – Mustache, Handlebars, Dust. The only problem is – if you try the ‘something changed, re-render the component’ approach there, templates will re-render into inner HTML, and that is wasteful. It is also potentially disruptive if users are interacting with form elements you just recycled under their feet. React, on the other hand, will not do it – it will carefully update the DOM elements and properties around the form controls.

Increase velocity without increasing bug rate

The key design goal of React was to help real world projects running code in production. Developers of modern cloud applications are under constant pressure from product management to increase velocity. The same product management of course expects you to maintain quality of your apps, which is very hard. It is not hard to imagine that shortening the cycles will increase the bug rate, unless the code we write is simplified. Writing ‘surgical’, intricate code in less time is asking for trouble, but React is easy to understand. There is uniformity in its approach, a repeatability that is reassuring and easy for people who didn’t write the code originally to understand when they pick it up.

Developers take great pride in their work, and sometimes they get carried away by thinking that code is their deliverable. They are wrong. What you are delivering are user experiences, and code is just a means to an end. In the future, we will just explain what we want to some more powerful Siri or Cortana and our app will come into existence. Until then, we should use whatever allows us to deliver it with high velocity but without the bugs that would normally come with it.

For my team, React is just the ticket. As is often in life, YMMV.

© Dejan Glozic, 2015

Micro-Services and Page Composition Problem

[Image: Sintra]

Despite many desirable properties, micro-services carry two serious penalties to be contended with: authentication (which we covered in the previous post) and Web page composition, which I intend to address now.

Imagine you are writing a Node.js app and use Dust.js for the V of the MVC, as we are doing. Imagine also that several pages have shared content you want to inject. It is really easy to do using partials, and practically every templating library has a variation of that (and not just for Node.js).
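As a sketch of how little it takes inside a single app (assuming an Express app and templates compiled and registered with dustjs-linkedin; the names are illustrative), the 'page' template simply references the shared partial and it is pulled in at render time:

var dust = require('dustjs-linkedin');

// 'page' contains {>"shared/header"/} in its markup, so rendering the page
// inlines the shared header automatically: trivial within one application.
app.get('/', function(req, res, next) {
  dust.render('page', { title: 'Home' }, function(err, html) {
    if (err) return next(err);
    res.send(html);
  });
});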

However, if you build a micro-service system and your logical site is spread out between several micro-services, you have a complicated problem on your hands. Now partial inclusion needs to happen across the network, and another service needs to serve the shared content. Welcome to the wonderful world of distributed composition.

This topic came into sharp focus during Nodeconf.eu 2014. Clifton Cunningham presented the work of his team in this particular area, and the resulting project Compoxure they have open-sourced and shared with us. Clifton has written about it in his blog and it is a very interesting read.

Why bother?

At this point I would like to step back and look at the general problem of a document component model. For all their sophistication and fantastic feature set, browsers are stubbornly single-document oriented. They fight us all the time when it comes to where the actual content on the page comes from. It is trivially easy to link to a number of stylesheets and JavaScript files in the HEAD section of the document, but you cannot point at a page fragment and later use it in your document (until Web Components become a reality, that is – including page fragments that contain custom element templates and associated styles and scripts is the whole point of that standard).

Large monolithic server-side applications were mostly spared from this problem because it was fairly easy to include shared partials within the same application. More recently, single page apps (SPAs) have dealt with this problem using client side composition. If everything is a widget/plug-in/addon, your shared area can be similarly included into your page from the client. Some people are fine with this, but I see several flaws in this approach:

  1. Since there is no framework-agnostic client side component model, you end up stuck with the framework you picked (e.g. Angular.js headers, footers or navigation areas cannot be consumed in Backbone micro-services)
  2. The pause until the page is assembled in SPAs due to JavaScript downloading and parsing can range from a short blip to a seriously annoying blank page stare. I understand that very dynamic content may need some time to be assembled but shared areas such as headers, footers, sidebars etc. should arrive quickly, and so should the initial content (yeah, I don’t like large SPAs, why do you ask?)

The approach we have taken can be called ‘isomorphic’ – we like to initially render on the server for SEO and fast first content, and later progressively enhance using JavaScript ‘on the fly’, and dynamically load with Require.js. If you use Node.js and JavaScript templating engine such as Dust.js, the same partials can be reused on the client (something Airbnb has demonstrated as a viable option). The problem is – we need to render a complete initial page on the server, and we would like the shared areas such as headers, sidebars and footers to arrive as part of that first page. With a micro-service system, we need a solution for distributed document model on the server.

Alternatives

Clifton and myself talked about options at length and he has a nice breakdown of alternatives at the Compoxure GitHub home page. For your convenience, I will briefly call out some of these alternatives:

  1. Ajax – this is a client-side MVC approach. I already mentioned why I don’t like it – it is bad for SEO, and you need to stare at the blank page while JavaScript is being downloaded and/or parsed. We prefer to use JavaScript after the initial hit.
  2. iFrames – you can fake a document component model by using seamless iframes. Bad for SEO again, there is no opportunity for caching (therefore, performance problems due to latency), content in iFrames is clipped at the edges, and there are problems with cross-frame communication (although there are window.postMessage workarounds). They do however solve the single-domain restriction browsers impose on Ajax. Nevertheless, they have all the cool factor of re-implementing framesets from the 90s.
  3. Server Side Includes (SSIs) – you can inject content using this approach if you use a proxy such as Nginx. It can work and even provide for some level of caching, but not the programmatic and fine grain control that is desirable when different shared areas need different TTL (time to live) values.
  4. Edge Side Includes (ESIs) – a more complete implementation that unfortunately locks you into Varnish or Akamai.

Obviously for Clifton’s team (and ourselves), none of these approaches quite delivers, which is why services like Compoxure exist in the first place.

Direct composition approach

Before I had an opportunity to play with Compoxure, we spent a lot of time wrestling with this problem in our own project. Our current thinking is illustrated in the following diagram:

[Diagram: the direct composition approach]

The key aspects of this approach are:

  1. Common areas are served by individual composition services.
  2. Common area service(s) are proxied by Nginx so that they can later be called by Ajax calls. This allows the same partials to be reused after the initial page has rendered (hence ‘isomorphic’).
  3. Common area service can also serve CSS and JavaScript. Unlike the hoops we need to go through to stitch HTML together, CSS and JavaScript can simply be linked in HEAD of the micro-service page. Nginx helps making the URLs nice, for example ‘/common/header/style.css’ and ‘/common/header/header.js’.
  4. Each micro-service is responsible for making a server-side call, fetching the common area response and passing it into the view for inlining.
  5. Each micro-service takes advantage of shared Redis to cache the responses from each common service. Responses from common services that require authentication and can deliver a personalized response are stored in Redis on a per-user basis.
  6. Common areas are responsible for publishing messages to the message broker when something changes. Any dynamic content injected into the response is monitored and if changed, a message is fired to ensure cached values are invalidated. At the minimum, common areas should publish a general ‘drop cache’ message on restart (to ensure new service deployments that contain changes are picked up right away).
  7. Micro-services listen to invalidation messages and drop the cached values when they arrive.

This approach has several things going for it. It uses caching, allowing micro-services to have something to render even when common area services are down. There are no intermediaries – the service is directly responding to the page request, so the performance should be good.

The downside is that each service is responsible for making the network calls and doing it in a resilient manner (circuit breaker, exponential back-off and such). If all services are using Node.js, a module that encapsulates Redis communication, circuit breaker etc. would help abstract out this complexity (and reduce bugs). However, if micro-services are in Java or Go, we would have to duplicate this using language-specific approaches. It is not exactly rocket science, but it is not DRY either.
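For illustration, this is roughly the code each Node.js micro-service ends up repeating (module names, URLs and keys are placeholders; a production version would add a circuit breaker and exponential back-off):

var request = require('request');
var redis = require('redis');
var cache = redis.createClient();

function getCommonHeader(userId, callback) {
  var key = 'common:header:' + userId;
  cache.get(key, function(err, cached) {
    if (cached) return callback(null, cached);            // serve from shared Redis
    request('http://common-areas.internal/header?user=' + userId,
      function(err, response, body) {
        if (err) return callback(null, '');               // degrade gracefully if the service is down
        cache.set(key, body);                             // dropped later via pub/sub invalidation
        callback(null, body);
      });
  });
}

// Inside an Express controller: fetch the fragment and pass it to the view for inlining.
getCommonHeader(req.user.id, function(err, headerHtml) {
  res.render('page', { header: headerHtml });
});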

The Compoxure approach

Clifton and guys have taken a route that mimics ESI/SSI, while addressing their shortcomings. They have their own diagrams but I put together another one to better illustrate the difference to the direct composition diagram above:

[Diagram: the Compoxure proxy approach]

In this approach, composition is actually performed in the Compoxure proxy that is inserted between Nginx and the micro-services. Instead of making its own network calls, each micro-service adds special attributes to the DIV where the common area fragment should be injected. These attributes control parameters such as what to include, what cache TTLs to employ, which cache key to use etc. There is a lot of detail in the way these properties are set (RTFM), but suffice to say that the Compoxure proxy will serve as an HTML filter that injects the content from the common areas into these DIVs as instructed.

<div cx-url='{{server:local}}/application/widget/{{cookie:userId}}'
     cx-cache-ttl='10s' cx-cache-key='widget:user:{{cookie:userId}}'
     cx-timeout='1s' cx-statsd-key="widget_user">
This content will be replaced on the way through
</div>

This approach has many advantages:

  1. The whole business of calling the common area service(s), caching the response according to TTLs, dealing with network failure etc. is handled by the proxy, not by micro-services.
  2. Content injection is stack-agnostic – it does not matter how the micro-service that serves the HTML is written (in Node.js, Java, Go etc.) as long as the response contains the expected tags
  3. Even in a system written entirely in Node.js, writing micro-services is easier – no special code to add to each controller
  4. Compoxure is used only to render the initial page. After that, Ajax takes over and composition service is hit with Ajax calls directly.

Contrasting the approach with direct composition, we identified the following areas of concern:

  1. Compoxure parses HTML in order to locate DIVs with special tags. This adds a performance hit, although practical results imply it is fairly small
  2. Special tags are not HTML5 compliant (‘data-‘ prefix would work). If this bothers you, you can configure Compoxure to completely replace the DIV with these tags with the injected content, so this is likely a non-issue.
  3. Obviously Compoxure inserts itself in front of the micro-services and must not go down. It goes without saying that you need to run multiple instances and practice ZDD (Zero-Downtime Deployment).
  4. Caching is static i.e. content is cached based on TTLs. This makes picking the TTL values tricky – our approach that involves pub/sub allows us to use higher TTL values because we will be told when to drop the cached value.
  5. When you develop, direct composition approach requires that you have your own micro-service up, as well as common area services. Compoxure adds another process to start and configure locally in order to be able to see your page with all the common areas rendered. If you hit your micro-service directly, all the DIVs with the ‘cx-‘ properties will be empty (or contain the placeholder content).

Discussion

Direct composition and Compoxure proxy are two valid approaches to the server-side document component model problem. They both work well, with different tradeoffs. Compoxure is more comfortable for developers – they just configure a special placeholder div and magic happens on the way to the browser. Direct composition relies on fewer moving parts, but makes each controller repeat the same code (unless that code is encapsulated in a shared Node.js module).

An approach that bridges both worlds and something we are seriously thinking of doing is to write a Dust.js helper that further simplifies inclusion of the common areas. Instead of importing a module, you would import a helper and then just use it in your markup:

<div>
{@import url="{headerUrl}" cache-ttl="10s"
cache-key="widget:user:{userid}" timeout="1s"}
</div>
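A rough sketch of what such a helper could look like (the helper name and parameters mirror the markup above; caching and circuit breaking are left out for brevity):

var dust = require('dustjs-helpers');   // dustjs-linkedin plus the standard helpers
var request = require('request');

dust.helpers['import'] = function(chunk, context, bodies, params) {
  var url = dust.helpers.tap(params.url, chunk, context);   // resolve {headerUrl} and friends
  // chunk.map() lets the helper finish asynchronously while the rest
  // of the template keeps rendering.
  return chunk.map(function(chunk) {
    request({ url: url, timeout: 1000 }, function(err, response, body) {
      chunk.write(err ? '' : body).end();    // degrade to empty content on failure
    });
  });
};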

Of course, Compoxure has some great properties that are not easy to replicate with this approach. For example, it does not pass TTL values to Redis directly because that would cause the cached content to disappear after the countdown; Compoxure prefers to keep the last content past the TTL in case the service is down (better to serve slightly stale content than no content at all). This is a great feature and would need to be replicated here. I am sure I am missing other great features and Clifton will probably remind me about them.

Conclusion

In the end, I like both approaches for different reasons, and I can see a team use both successfully. In fact, I could see a solution where both are available – a Dust.js helper for Node.js/Dust.js micro-services, and Compoxure for everybody else (as a fallback for services that cannot or do not want to fetch common areas programmatically). Either way, the result is superior to the alternatives – I strongly encourage you to try it in your next micro-service project.

You don’t even have to give up your beloved client-side MVCs – we have examples where direct composition is used in a page with Angular.js apps and another with a Backbone app. These days, we are spoiled for choice.

© Dejan Glozic, 2014

Should We Fight or Embrace the DOM?

Dom Tower, Utrecht, 2013, Anitha Mani (Wikimedia Commons)

Now, my story is not as interesting as it is long.

Abe Simpson

One of the privileges (or curses) of experience is that you amass a growing number of cautionary tales with which you can bore your younger audience to death. On the other hand, knowing the history of slavery came in handy for Captain Picard to recognize that what the Federation was trying to do by studying Data was to create an army of Datas, not necessarily to benefit humanity. So experience can come in handy once in a while.

So let’s see if my past experience can inform a topic du jour.

AWT, Swing and SWT

The first Java windowing system (AWT) was based on whatever the underlying OS had to offer. The original decision was to ensure the same Java program runs anywhere, necessitating a ‘least common denominator’ approach. This translated to UIs that sucked equally on all platforms, not exactly something to get excited about. Nevertheless, they embraced the OS, inheriting both the shortcomings and receiving the automatic improvements.

The subsequent Swing library took a radically different approach, essentially taking on the responsibility of rendering everything. It was ‘fighting the OS’, or at least side-stepping it by creating and controlling its own reality. In the process, it also became responsible for keeping up with the OS. The Eclipse project learned that fighting the OS is trench warfare that is never really ‘won’. Using an alternative system (SWT) that accepted the windowing system of the underlying OS turned out to be a good strategic decision, both in terms of the elusive ‘look and feel’, and for riding the OS version waves as they sweep in.

The 80/20 of custom widgets

When I was working on the Eclipse project, I had my own moment of ‘sidestepping’ the OS by implementing Eclipse Forms. Since browsers were not ready yet, I wrote a rudimentary engine that gave me text reflows, hyperlinks and images. This widget was very useful when mixed with other normal OS widgets inside the Eclipse UI. As you could predict, I got the basic behavior fairly quickly (the ’80’ part). Then I spent a couple of years (including help from younger colleagues) doing the ‘last mile’ (the ’20’) – keyboard, accessibility, BIDI. It was never ‘finished’, it was never quite as good as the ‘real’ browser, and it was not nearly as powerful.

One of the elements of that particular custom widget was managing the layout of its components. In essence the container was managing a collection of components but the layout of the components was delegated to the layout manager that could be set on the container. This is an important characteristic that will come in handy later in the article. I remember the layout class as one of the trickiest and hardest to get done right and fully debug. After it was ‘sort of’ working correctly, everybody dreaded touching it, and consequently forgot how it worked.

DOM is awesome

I gave you this Abe Simpson moment of reflection to set the stage for a battle that is raging today between people who want to work with the browser’s DOM, and people who think it is the root of all evil and should be worked around. As is often the case these days, both points of view came across my twitter feed from different directions.

In the ’embrace the DOM’ corner, we have the Web Components crowd, who are thinking that DOM is just fine. In fact, they want us to expand it to turn it into a universal component model (instead of buying into the ‘bolt on’ component models of widget libraries). I cannot wait for it: I always hated the barrier of entry for Web libraries. In order to start reusing components today, you first need to buy into the bolt-on component model (not unlike needing to buy another set top box in order to start enjoying programming from a new content provider).

‘Embracing the DOM’ means a lot of things, and in a widely retweeted article about React.js, Reto Schläpfer argued that the current MV* client side frameworks treat the DOM as the view, managing data event flow ‘outside the DOM’. Reto highlights the React.js library as an alternative, where the DOM that already manages the layout of your view can be pressed into double duty, serving as the ‘nervous system’.

This is not entirely new, and has been used successfully elsewhere. I wrote previously about DOM event bubbling used in Bootstrap, which we used successfully in our own code. Our realization that with it we didn’t feel the need for MVC is now echoed by React.js. In both cases, layout and application events (as opposed to data events) are fused – the layout hierarchy is used as a scaffolding for the event paths to flow, using the built-in DOM behavior.
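A sketch of that fusion in its simplest form: one listener on a container, with descendants declaring their intent through data- attributes, the same trick Bootstrap uses (the handler names are made up):

// One delegated listener rides on DOM event bubbling; no per-element wiring.
document.getElementById('toolbar').addEventListener('click', function(event) {
  var action = event.target.getAttribute('data-action');
  if (!action) return;                 // a click we don't care about
  if (action === 'save') {
    saveDocument();                    // hypothetical handlers, for illustration
  } else if (action === 'refresh') {
    refreshView();
  }
});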

For completeness, not all people would go as far as claim that React.js obviates the need for client-side MVC – for example, Backbone.js has been shown to play nicely with React.js.

DOM is awful

In the other corner are those that believe that DOM (or at least the layout part of it) is broken beyond repair and should be sidestepped. My micro-service camarade de guerre Adrian Rossouw seems to be quite smitten with the Famo.us framework. This being Adrian, he approached this is the usual comprehensive way, collecting all the relevant articles using wayfinder.co (I am becoming increasingly spoiled/addicted to this way of capturing Internet wisdom on a particular topic).

Studying Famo.us is an archetype of a red herring – while its goal is to allow you to build beautiful apps using JavaScript, transformations and animation, the element relevant to this discussion is that it sidesteps the DOM as the layout engine. You create trees and use transforms, which Famo.us uses to manage the DOM as an implementation detail, mostly as a flat list of nodes. Now recall my Abe Simpson story about SWT containers and components – doesn’t it sound familiar? A flat list of components and a layout manager on top of it, controlling the layout as a manifestation of the strategy pattern.

Here is what Famo.us has to say about their approach to the DOM for layouts and events:

If you inspect a website running Famo.us, you’ll notice the DOM is very flat: most elements are siblings of one another. Inspect any other website, and you’ll see the DOM is highly nested. Famo.us takes a radically different approach to HTML from a conventional website. We keep the structure of HTML in JavaScript, and to us, HTML is more like a list of things to draw to the screen than the source of truth of a website.

Developers are used to nesting HTML elements because that’s the way to get relative positioning, event bubbling, and semantic structure. However, there is a cost to each of these: relative positioning causes slow page reflows on animating content; event bubbling is expensive when event propagation is not carefully managed; and semantic structure is not well separated from visual rendering in HTML.

They are not the only one with ‘the DOM is broken’ message. Steven Wittens in his Shadow DOM blog post argues a similar position:

Unfortunately HTML is crufty, CSS is annoying and the DOM’s unwieldy. Hence we now have libraries like React. It creates its own virtual DOM just to be able to manipulate the real one—the Agile Bureaucracy design pattern.

The more we can avoid the DOM, the better. But why? And can we fix it?

……

CSS should be limited to style and typography. We can define a real layout system next to it rather than on top of it. The two can combine in something that still includes semantic HTML fragments, but wraps layout as a first class citizen. We shouldn’t be afraid to embrace a modular web page made of isolated sections, connected by reference instead of hierarchy.

Beware what you are signing up for

I would have liked to have a verdict for you by the end of the article, but I don’t. I feel the pain of both camps, and can see the merits of both approaches. I am sure the ‘sidestep the DOM’ camp can make their libraries work today, and demonstrate how they are successfully addressing the problems plaguing the DOM in the current browser implementations.

But based on my prior experience with the sidestepping approach, I call for caution. I will also draw on my experience as a father of two. When a young couple goes through the first pregnancy, they focus on the first 9 months, culminating in the delivery. This focus is so sharp and short-sighted that many of the couples are genuinely bewildered when the hospital hands them their baby and kicks them to the hospital entrance, with newly purchased car seat safely secured in the back. It only dawns on them at that point that baby is forever – that the bundle of joy is now their responsibility for life.

With that metaphor in mind, I worry about taking over the DOM’s responsibility for layout. Not necessarily for what it means today, but couple of years down the road when both the standards and the browser implementations inevitably evolve. Will it turn into a trench warfare that cannot be won, a war of attrition that drains resources and results in abandoned libraries and frameworks?

Maybe I can figure that one out after a nap.

© Dejan Glozic, 2014

The Rise of the Full-Stack Architect

Pebble stack, Wikimedia Commons, Zzubnik

Full-Stack Web Architect needed with the experience with web services, back end Web platforms, databases, cloud based hosting such as Heroku and AWS, visual design, UX/UI, experience of mobile web development, all in agile environment.


An actual 2014 posting for a job based in London, UK

Ta-da – the moment has come. In post #2 of this blog, I promised to formally apologize for calling myself ‘an architect’. Only 39 weeks later, I am making good on the promise. Procrastinating much?

The hate on architects ebbs and flows, but never fully goes away. This creates a horrible problem for experienced developers who are fascinated with technology and are not overjoyed with becoming managers, yet clearly need to be in a leadership position of some kind because they are kicking them out of the developers’ kindergarten (what do you mean I am too old to hang around the playground, and why is it ‘creepy’?).

Management has been a traditional career growth path, and some still believe it is the only smart choice for former developers (kids, if you have never seen click bait before, here it is):

From my chair, everything points in a different direction. Wherever I look (including my own company), hierarchies are being flattened, leading by example and by inspiration is replacing pulling rank, and some companies are eliminating management altogether. So management is receding as a leadership model to aspire to, but architects are being openly mocked:

So what is an experienced developer with so much to offer to younger colleagues to do? Looks like we are stranded between Scylla and Charybdis of the software leadership career paths. But it is a false dilemma. Bill was clear about this in his blog post – you should aspire to be a leader, to have a vision that inspires people and makes them follow you of their own will, not because they report to you. Somebody else can handle TPS reports, and you can focus on leading a growing group of people in making awesome products, and enjoying coming to work every morning because working in such an environment is intoxicating. Then you move the hell out of their way and let them do the work – they can do it better than you anyway.

OK, figured it all out – have a breathtaking vision, generate a growing following, solve world hunger, and still have time to catch up on the new episodes of Silicon Valley on HBO. Got it.

Meanwhile on planet Earth…

For us lesser souls, some more attainable pointers:

  1. Be a wise sage – your experience with waves of technology allows you to notice repetition – new generations of developers re-solving problems from 5 or 10 years ago. When a puppy developer comes to you with a very cool client side MVC framework, point at the geological layers of MVC in the past technologies. Do that not to disparage the new framework, but to look at it for its own merit, not because MVC is a revolutionary new concept (you will know it is not, but chances are your junior colleague will not). Warning: don’t be an old fart for whom ‘things are not as good as they used to be’. That’s nonsense – a lot of things are many times better now, so be on the lookout for the first signs of being stuck in your own ways and nip it in the bud.
  2. Look over their heads into the future – there is always so much to do every day, and so few hours before your caffeine-induced code turns into garbage. Allow developers to code like beasts, solving the ‘now’, while you look ahead, planning where you need to be in 3 months, 6 months, a year. Don’t try to go too far – we are all in a transition, and things are insane now. Long term predictions have become somewhat useless lately, so ensure you have enough vision to prevent the team from going idle, and be prepared to tweak the vision when course-correcting data becomes available.
  3. Resolve technical disputes – as fiery technical debates would indicate, there are many ways to skin any technical cat (a metaphor, I don’t condone skinning actual cats). One of your key jobs as a technical leader is to ensure the entire system works, and that can only happen if all the parts of the system speak the same language to each other. In other words, as long as everybody uses the same protocol, and that protocol does not suck too much, it is better than when parts of the system use superior but incompatible protocols.
  4. Prefer practical over pure – architects are often derided for making everything more complicated than it needs to be. Don’t do it. Look at the most popular APIs today – they may not satisfy the REST police but they are clean, they work, they are stable and well documented, and the deviations from the ideal actually solve some real world problems for developers.
  5. Maintain street cred – you can only lead developers if they respect you as one of their own, and that can only happen if you continue to code. Don’t put yourself on the critical path because you will spend too much time heads down, which will prevent you from doing (2), but you need to know what you are talking about. That requires that for every library or framework or protocol you want to recommend, you educate yourself and write some real code (sorry, more than just Hello, World) – give it a serious test drive.
  6. Don’t practice drive-by architecting – nothing irks developers more than an architect who designs a system, provides all the specifications, drops the finished docs on their poor heads and disappears, leaving them to suffer the consequences of the poor choices. You need to see your architecture live in the running code, solving problems. Nothing makes me happier than developers gobbling up the new architecture and moving to it as fast as they can because it addresses their long standing problems. As a colleague of mine would say, ‘the devil is in the pudding’ – if the system using your architecture is not faster, more scalable, more maintainable, your architecture sucks.
  7. Be a connector – just because people don’t officially report to you, it does not mean that you are free from practicing your soft skills. You need to talk to design, engineering, operations, product management – all speaking their own distinct languages. More importantly, you must be able to do it without strangling somebody with a Cat 6 cable. That privilege is reserved for developers, you need to be above it.
  8. Have cloud, DevOps and Web-scale in mind – practically all modern systems today are distributed systems running in data centers. They need to be evolved using Continuous Integration, features need to be promptly deployed using Continuous Deployment, services need to be clusterable, scalable and redundant. Architecting a system today without keeping these in mind is a recipe for a career spiral of death.

If you follow the suggestions above, you are really a ‘system-developer-in-chief’. Note that you are not a team lead even though it may sound like it – you are a ‘system lead’ serving a number of teams that already have team leads. You are a ‘how everything fits together lead’, which is too long to write, so for practical reasons, we will call you an ‘architect’. There – we have come full circle.

What about full-stack?

In the olden days, developers were segregated by their skills, because different tiers of a complex system were built with such divergent technologies that it was really hard to keep it all in your head. SQL experts were not very good at building backend systems with SOA and other bloatware, which was very different from writing Web sites with server-side MVC, which was very different from client side Ajax. Conversely, architects followed suit – they specialized as well.

With the dawn of the ‘JavaScript everywhere’, with NoSQL databases storing and passing around unstructured JSON, Node.js servers with JavaScript templating engines running on both sides of the fence, and MVC client libraries, a developer can theoretically write a whole system – soup to nuts. I say ‘theoretically’ because while the language context switching has been removed, the problem domains are still very different. There is actually a growing need for a breed of architects to ‘make sense of it all’ and devise and evolve a system that does not explode from the very wealth of choices that are now around us at every level.

Ad sense

Look at the ad at the top again – nothing in it surprises me, and I understand the motivation, but boy – that guy or girl is going to be one shiny unicorn, and I don’t know if there are very many prancing around with all those skills.

You know what, let’s have fun: let’s see if I could get that job by the very scientific method of dissecting the ad and trying to match it with my blog posts. For this exercise we will use the simplifying assumption that I actually possess in-depth knowledge of the topics I write about (which is not a foregone conclusion, but let’s not go there):

As Magnus Pyke would yell – ‘Science!’. If this IBM thing does not work out, I should give them a call. My point was that there is a dire need for a ‘making-sense-of-it-all-in-chief’ in every large team. Again, this is too long, so we will call this job ‘full-stack architect’. If your company is large enough, there could be additional career ladder titles such as ‘Senior Technical Staff Member’, ‘Distinguished Engineer’, ‘Fellow’, but I really dig full-stack architect, so I will call myself that from now on.

Here is hoping that one day soon some future George Costanza will lie to people that he is ‘Art Vandelay, Full-Stack Architect’.

© Dejan Glozic, 2014

Pushy Node.js

Push-fact_Michael_N_Erickson_2011

Last week I hoped to blog-shame Guillermo Rauch into releasing Socket.io v1.0 for my own convenience. Alas, it didn’t work (gotta work on my SEO), but I see a lot of traffic from @rauchg on the corresponding GitHub project, so my spirits are high. Meanwhile, I realized that for my own dabbling, v0.91 is pretty darn good on its own. Good enough, in fact, to socketize our example Node.js app in time for this post.

Why would I need Socket.io in the first place? Because of mobile phones. In a straightforward web server implementation, requests always originate from the client. The client pulls, the server obliges and sends markup and other resources back, and then returns to listening on the port, awaiting further requests (a Node.js server does the exact same thing – this is not just ‘old tech’). With Ajax, the nature of requests is different but not the direction. To add liveliness, XHR calls are made to fetch data and update portions of the page without a full refresh, but those XHR calls again originate in the client. Even when it looks as if the server is pushing, it is all a sham – a technique called ‘long polling’, where the server holds a client-initiated connection open and pushes data nuggets through it instead of responding and closing right away. Finally, with Web Sockets it is possible to have true server push: the socket is still opened by the client, but from then on the server can send data whenever it is good and ready. Enough of the client acting like bored kids on a family trip (“are we there yet? are we there yet?”).
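To make the contrast concrete, here is a minimal client-side sketch (the ‘/updates’ endpoint, the ‘update’ event name and the render() helper are all hypothetical, purely for illustration): with long polling the client keeps re-issuing a request that the server holds open, while with Socket.io the client connects once and the server emits whenever it pleases.

// Long polling: the client asks, the server sits on the request until it
// has data, then the client immediately asks again.
function poll() {
   $.get('/updates', function (data) {   // '/updates' is a hypothetical endpoint
      render(data);                       // render() is a hypothetical helper
      poll();                             // immediately re-establish the pending request
   });
}
poll();

// True push with Socket.io: connect once, then just listen.
var socket = io.connect('http://localhost');
socket.on('update', function (data) {    // 'update' is a hypothetical event name
   render(data);
});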

OK, but why mobile phones? Because they conditioned us to expect push notifications. We are so used to our phones telling us when there is something new to observe that we are almost offended when we need to explicitly refresh an app to get new content. It is no surprise that developers now want that kind of lively experience on the desktop as well. It is also a prerequisite for a mobile Web app hoping to convince customers that it is just as good as its native counterpart.

I decided to add a new page to the example app I have been evolving since the first post on Node.js – too lazy to create a new app. It is also a good way to go beyond ‘Hello, World’ because the examples on the Socket.io home page are all in app.js. Since I am using express.js and have a few pages with their corresponding controllers and views, I decided to move most of the socket action to a dedicated page. This is a realistic scenario – not all of your pages will need server push. This is all assuming you are not writing a Single Page App (SPA between friends), at which point all bets are off.

The Socket.io home page definitely fits this ‘everything in app.js’ approach, which means you get the endorphin kick of getting the code to work, but then immediately need to make changes for a real-world app. In my case, I was using Require.js, and the page where the client code needs to go is namespaced for jQuery. Here is what I needed to do in the shared Dust.js partial:

   requirejs.config({
      // socket.io's client file is not an AMD module – shim it and
      // expose the global 'io' object it creates
      shim: {
         'socketio': {
             exports: 'io'
         }
      },
      paths: {
         // resolved relative to baseUrl ('/js'), ending up at '/socket.io/socket.io.js'
         socketio: '../socket.io/socket.io'
      },
      baseUrl: "/js"
   });

Now we are ready to write some server push code. I have decided to create a mockup of something we are currently working on in the context of JazzHub – a build happening somewhere on the server that our page is watching. The page is simple – we want a button to start the build, a progress bar to watch it working, and a failure in the build at some point along the way just to mix it up.

We will start by using NPM to fetch the socket.io module and hook it up in app.js. Socket.io is designed to coexist peacefully with express.js and to piggy-back on the HTTP server that serves the Express app:

var express = require('express')
, routes = require('./routes')
, dust = require('dustjs-linkedin')
, helpers = require('dustjs-helpers')
, cons = require('consolidate')
, user = require('./routes/user')
, simple = require('./routes/simple')
, widgets = require('./routes/widgets')
, http = require('http')
, sockets = require('./routes/sockets')
, io = require('socket.io')
, path = require('path');

We have required another controller for the new page (‘./routes/sockets’) as well as the library itself (‘socket.io’). We can now hook it up to the server:

var server = http.createServer(app);
sockets.io = io.listen(server);

In the last line we have passed the Socket.io root object to the new page’s controller so that we can access it there.
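For completeness, here is roughly how this fits into the rest of app.js (a sketch only – the port setup and listen callback follow common Express 3 conventions and are my assumptions, not code lifted from the example app):

var app = express();
app.set('port', process.env.PORT || 3000);

// ... view engine, middleware and routes are configured here ...

// create the HTTP server explicitly so Socket.io can attach to it
var server = http.createServer(app);
sockets.io = io.listen(server);

server.listen(app.get('port'), function() {
   console.log('Express server listening on port ' + app.get('port'));
});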

The new page needs a view, and we will again use Dust.js template and Bootstrap for our button and progress bar:

{>layout/}
{<head}
   <script src="/socket.io/socket.io.js"></script>
{/head}
{<content}
	<h2>Web Sockets</h2>
	<p>
		This page demonstrates the use of Socket.io to push data from the Node.js server.
	</p>
	<p><button type="button" class="btn btn-primary" id="playButton" data-state="start">
		<span class="glyphicon glyphicon-play" id="playButtonIcon"></span></button>
	</p>
	<div class="progress" style="width: 50%">
       <div id="progress" class="progress-bar" role="progressbar" aria-valuenow="100" aria-valuemin="0" aria-valuemax="100" style="width: 100%;">
          <span class="sr-only">100% Complete</span>
       </div>
    </div>

	<p>This page is served by server {pid}</p>
{/content}
{<script}
	<script src="/js/sockets/sockets-page.js"></script>
{/script}

Here is something that took me a while to figure out, and judging by the Stack Overflow questions, it is puzzling to many a developer. If you take a look at how we are referencing the client-side portion of the Socket.io library, it makes no sense:

<script src="/socket.io/socket.io.js"></script>

All we did was install Socket.io using NPM. The library contains the client-side portion as well, but we didn’t put it in ‘/public’ where our styles and other static client-side files live. Nevertheless, our server was somehow finding and serving this file to the client. It wasn’t until I looked at the server-side console that I noticed this line in the sea of Socket.io debug chatter:

debug: served static content /socket.io.js

Apparently, Socket.io not only handles requests from its client-side code, it also intercepts the request for that client-side file and serves it itself. A bit magical for my taste, but OK.

You may have noticed that I don’t have JavaScript inlined in the Dust template for the page. I did this for cleanliness, but also because curly braces in JavaScript code need to be escaped in Dust.js (because curly braces are special characters), making JavaScript exceedingly ugly. Mental note to talk to Dust.js guys about finding a better way to handle inlined JavaScript.
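For the curious, here is a contrived sketch of what inlining would look like – every literal brace in the JavaScript has to be spelled out with Dust’s special tags {~lb} and {~rb} (the refresh() function is made up purely for illustration):

{<script}
   <script>
      function refresh() {~lb}
         location.reload();
      {~rb}
   </script>
{/script}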

The content of the referenced file ‘sockets-page.js’ is here:

require(["jquery", "socketio"], function($, io) {
    // connect to socket
    var socket = io.connect('http://localhost'); // this needs to change in the real code
    socket.on('build', function (build) {
    	if (build.progress==0)
           _resetProgress();
    	else {
           $("#progress").attr("aria-valuenow", ""+build.progress)
           .css("width", build.progress+"%");
   	   if (build.errors) {
              $("#progress").removeClass("progress-bar-success")
              .addClass("progress-bar-danger");
           }
        }
    	var state = (build.running)?"stop":"start";
    	if ($("#playButton").data("state")!=state) {
           if (state=="stop") {
              $("#playButtonIcon").removeClass("glyphicon-play")
             .addClass("glyphicon-stop");
           } else {
       	      $("#playButtonIcon").removeClass("glyphicon-stop")
             .addClass("glyphicon-play");
           }
           $("#playButton").data("state", state);
       }
    });

    // bind event listeners
    $("#playButton").on("click", _handleButtonClick);

    // private function
    function _handleButtonClick(evt) {
       var state = $("#playButton").data("state");
       $.post("sockets", { action: state });
    }

    function _resetProgress() {
       $("#progress").removeClass("progress-bar-danger")
       .attr("aria-valuenow", "0")
       .css("width", "0%")
       .removeClass("progress-bar-danger")
       .addClass("progress-bar-success");
    }
});

The code above does the following: it registers a listener for the dual-purpose button we placed on the page. Its initial function is to start the build; once the build is in progress, a subsequent click will stop it (we change the icon glyph to reflect this). We handle the button click by POST-ing to the same controller that handles the GET request that renders the page, passing the action in the request body (it is exceedingly easy to do this in jQuery, and equally easy to access it on the other end in Express).

In order to handle both GET and POST, we will register our controller in app.js thusly:

	app.get('/sockets', sockets.get);
	app.post('/sockets', sockets.post);

If you recall, we shimmed Socket.io so that we can use it with Require.js. In the client code above we are requiring both jQuery and the shimmed socketio module. The moment this code runs, it establishes a handshake with the Socket.io code on the server, and once it does, messages can start flowing in both directions. We will define one custom message, ‘build’, and pass a JavaScript object containing the build status (running/not running), the percentage done (0-100) and whether there are errors. This information will in turn affect how we render the Bootstrap button and progress bar.
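In other words, each ‘build’ message carries a plain object along these lines (the values are just a snapshot of one moment during a run):

{
   running: true,    // is the build currently in progress
   progress: 40,     // percentage done, 0-100
   errors: false     // has the build hit a problem yet
}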

Meanwhile on the server, the router for the page contains the other half of the code. We will create fake build activity. In the real world, this information would arrive from another app where the build is actually running. In fact, it is common to use some kind of message broker for app-to-app messaging on the server (a topic for a future post). For now, we will fake the build by making it last 10 seconds, with progress sent to the client every second:

exports.get = function(req, res) {
  res.render('sockets', { title: 'Web Sockets', active: 'sockets', pid: process.pid });
};

var build = {
   running: false,
   progress: 0,
   errors: false
};

var _lastTimeout;

exports.post = function(req, res) {
   var action = req.body.action;

   if (action === "stop") {
      // stop the build and cancel the pending tick
      build.running = false;
      if (_lastTimeout)
         clearTimeout(_lastTimeout);
      _pushEvent("build");
   }
   else if (action === "start") {
      // reset the build and start ticking from 0
      build.running = true;
      build.errors = false;
      build.progress = 0;
      _pushEvent("build");
      _lastTimeout = setTimeout(_buildWork, 1000);
   }
   // acknowledge the POST so the client-side XHR completes
   res.send(200);
};

function _buildWork() {
   build.progress += 10;
   if (build.progress == 70)
      build.errors = true;
   if (build.progress < 100) {
      _pushEvent("build");
      _lastTimeout = setTimeout(_buildWork, 1000);
   }
   else {
      build.running = false;
      _pushEvent("build");
   }
}

function _pushEvent(event) {
   // emit to all connected clients via the io object attached in app.js
   exports.io.sockets.emit(event, build);
}

The code above should be fairly easy to read – we are faking the build with a 1000ms timeout per 10% ‘tick’. We advance the ‘build.progress’ property and ‘emit’ a message to all the active sockets (if you recall, we are using the ‘io’ object we attached in app.js). Any number of clients looking at this page will see the build in progress and will be able to start and stop it.
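One thing the example does not do is react to the handshake itself – a client that connects between ticks simply waits for the next ‘build’ event. If that bothered you, a hypothetical helper in ‘./routes/sockets’ (not part of the example, just a sketch) could greet every new connection with the current state; app.js would call it right after attaching io:

// routes/sockets.js – hypothetical addition, invoked from app.js
// right after 'sockets.io = io.listen(server)':
exports.initPush = function() {
   exports.io.sockets.on('connection', function(socket) {
      // send the current build state to just this client
      socket.emit('build', build);
   });
};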

When we start the server and navigate to the newly added ‘Sockets’ page, we can see the progress bar and the button. Pressing the button starts the build, and the page is updated as the build progresses, turning from green to red at the 70% mark, as expected:

web-sockets-picture

You can observe the whole dance in action in this animated GIF.

Time for the post-demo discussion. Readers following this blog may remember my concerns about Node.js that kept me on the fence for a while. Node.js and a JavaScript templating library such as Dust.js offer a very fast cycle of experimentation and exploration that Bill Scott from PayPal, among others, has found instrumental to the process of Lean UX. However, it is hard to make such a tectonic shift for that reason alone. For me, adding server push to the mix is what tipped the scales in Node.js’ favor. It is hard to match the efficiency and scale possible this way, and alternative technologies that consume a process or a thread per request will have a very hard time matching the number of simultaneous connections possible with Node. Not to mention how easy and enjoyable the whole coding experience is, if you care about the state of mind of your developers.

Of course, this is not a ground-breaking revelation – the fact that Node.js is particularly suitable for DIRT-y apps was the major driving force for its explosive growth. Nevertheless, I will repeat it here in case you missed all the other mentions. If you are a JEE developer considering moving from servlets and JSPs to Node, a combination of Node.js, express.js and one of the JavaScript-based templating libraries will make for a fairly painless transition. Still, you will find yourself with a nagging feeling that the new stack is not so much better as different, particularly since you will not immediately feel an improvement in scalability as you are testing your new code in isolation. Only when you start adding server push code will you find yourself in a truly new territory and will be able to justify the expense and the effort.

Now I feel bad for halfheartedly ranting against Guillermo Rauch for not shipping Socket.io v1.0 fast enough for my liking. This experiment convinced me that if you don’t do push, you will not get Node.

© Dejan Glozic, 2014