Micro-Services and Page Composition Problem

Despite many desirable properties, micro-services carry two serious penalties that must be contended with: authentication (which we covered in the previous post) and Web page composition, which I intend to address now.

Imagine you are writing a Node.js app and use Dust.js for the V of the MVC, as we are doing. Imagine also that several pages have shared content you want to inject. It is really easy to do using partials, and practically every templating library has a variation of that (and not just for Node.js).

However, if you build a micro-service system and your logical site is spread out between several micro-services, you have a complicated problem on your hands. Now partial inclusion needs to happen across the network, and another service needs to serve the shared content. Welcome to the wonderful world of distributed composition.

This topic came into sharp focus during Nodeconf.eu 2014. Clifton Cunningham presented his team’s work in this particular area, and the resulting project, Compoxure, which they have open-sourced and shared with us. Clifton has written about it on his blog and it is a very interesting read.

Why bother?

At this point I would like to step back and look at the general problem of a document component model. For all their sophistication and fantastic feature set, browsers are stubbornly single document-oriented. They fight us all the time when it comes to where the actual content on the page comes from. It is trivially easy to link to a number of stylesheets and JavaScript files in the HEAD section of the document, but you cannot point at a page fragment and later use it in your document (until Web Components become a reality, that is – including page fragments that contain custom element templates and associated styles and scripts is the whole point of this standard).

Large monolithic server-side applications were mostly spared from this problem because it was fairly easy to include shared partials within the same application. More recently, single page apps (SPAs) have dealt with this problem using client side composition. If everything is a widget/plug-in/addon, your shared area can be similarly included into your page from the client. Some people are fine with this, but I see several flaws in this approach:

  1. Since there is no framework-agnostic client side component model, you end up stuck with the framework you picked (e.g. Angular.js headers, footers or navigation areas cannot be consumed in Backbone micro-services)
  2. In SPAs, the pause before the page is assembled, caused by JavaScript downloading and parsing, can range from a short blip to a seriously annoying blank page stare. I understand that very dynamic content may need some time to be assembled, but shared areas such as headers, footers and sidebars should arrive quickly, and so should the initial content (yeah, I don’t like large SPAs, why do you ask?)

The approach we have taken can be called ‘isomorphic’ – we initially render on the server for SEO and fast first content, then progressively enhance with JavaScript ‘on the fly’ and dynamically load modules with Require.js. If you use Node.js and a JavaScript templating engine such as Dust.js, the same partials can be reused on the client (something Airbnb has demonstrated as a viable option). The problem is – we need to render a complete initial page on the server, and we would like the shared areas such as headers, sidebars and footers to arrive as part of that first page. With a micro-service system, we need a solution for a distributed document model on the server.

Alternatives

Clifton and I talked about options at length, and he has a nice breakdown of alternatives on the Compoxure GitHub home page. For your convenience, I will briefly call out some of these alternatives:

  1. Ajax – this is a client-side MVC approach. I already mentioned why I don’t like it – it is bad for SEO, and you need to stare at the blank page while JavaScript is being downloaded and/or parsed. We prefer to use JavaScript after the initial hit.
  2. iFrames – you can fake a document component model by using seamless iframes. Bad for SEO again, there is no opportunity for caching (and therefore performance suffers due to latency), content in iframes is clipped at the edges, and cross-frame communication is awkward (although there are window.postMessage workarounds). They do, however, sidestep the single-domain restriction browsers impose on Ajax. Nevertheless, they have all the cool factor of re-implementing framesets from the 90s.
  3. Server Side Includes (SSIs) – you can inject content this way if you use a proxy such as Nginx. It can work and even provide some level of caching, but not the programmatic, fine-grained control that is desirable when different shared areas need different TTL (time to live) values.
  4. Edge Side Includes (ESIs) – a more complete implementation that unfortunately locks you into Varnish or Akamai.

Obviously for Clifton’s team (and ourselves), none of these approaches quite delivers, which is why services like Compoxure exist in the first place.

Direct composition approach

Before I had an opportunity to play with Compoxure, we spent a lot of time wrestling with this problem in our own project. Our current thinking is illustrated in the following diagram:

composition1

The key aspects of this approach are:

  1. Common areas are served by individual composition services.
  2. Common area service(s) are proxied by Nginx so that they can later be reached via Ajax. This allows the same partials to be reused after the initial page has rendered (hence ‘isomorphic’).
  3. Common area services can also serve CSS and JavaScript. Unlike the hoops we need to jump through to stitch HTML together, CSS and JavaScript can simply be linked in the HEAD of the micro-service page. Nginx helps make the URLs nice, for example ‘/common/header/style.css’ and ‘/common/header/header.js’ (a sketch of such a proxy rule follows this list).
  4. Each micro-service is responsible for making a server-side call, fetching the common area response and passing it into the view for inlining.
  5. Each micro-service takes advantage of a shared Redis instance to cache the responses from each common service. Responses from common services that require authentication and deliver personalized content are cached in Redis on a per-user basis.
  6. Common areas are responsible for publishing messages to the message broker when something changes. Any dynamic content injected into the response is monitored and, if it changes, a message is fired to ensure cached values are invalidated. At a minimum, common areas should publish a general ‘drop cache’ message on restart (to ensure new service deployments that contain changes are picked up right away).
  7. Micro-services listen to invalidation messages and drop the cached values when they arrive.
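
As an illustration of points 2 and 3, a hypothetical Nginx fragment that proxies a header service behind a nice URL could look like this (the upstream name and port are made up):

location /common/header/ {
    proxy_pass http://header-service:3001/;
}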

This approach has several things going for it. It uses caching, allowing micro-services to have something to render even when common area services are down. There are no intermediaries – the service is directly responding to the page request, so the performance should be good.

The downside is that each service is responsible for making the network calls and doing so in a resilient manner (circuit breaker, exponential back-off and such). If all services use Node.js, a module that encapsulates Redis communication, the circuit breaker etc. would help abstract out this complexity (and reduce bugs). However, if micro-services are written in Java or Go, we would have to duplicate this logic using language-specific approaches. It is not exactly rocket science, but it is not DRY either.
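
To make this concrete, here is a minimal sketch of what such a shared Node.js module could look like, using the ‘redis’ and ‘request’ npm modules (all names are hypothetical, and a real implementation would add a proper circuit breaker and exponential back-off):

var request = require('request');
var redis = require('redis');

var cache = redis.createClient();
var subscriber = redis.createClient();

// Points 6 and 7 above: listen for invalidation messages and drop cached values.
subscriber.subscribe('common:invalidate');
subscriber.on('message', function (channel, cacheKey) {
    cache.del(cacheKey);
});

// Fetch a common area fragment, consulting the shared Redis cache first.
function fetchCommonArea(url, cacheKey, ttlSeconds, callback) {
    cache.get(cacheKey, function (err, cached) {
        if (!err && cached) {
            return callback(null, cached); // cache hit - no network call needed
        }
        request({ url: url, timeout: 1000 }, function (err, response, body) {
            if (err || response.statusCode !== 200) {
                // Common area service is down; a real module would trip a circuit
                // breaker here and fall back to stale content if available.
                return callback(err || new Error('status ' + response.statusCode));
            }
            cache.setex(cacheKey, ttlSeconds, body); // cache with a TTL
            callback(null, body);
        });
    });
}

module.exports.fetchCommonArea = fetchCommonArea;

A controller would then call fetchCommonArea with the common area URL, a cache key and a TTL, and pass the resulting HTML into the view for inlining.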

The Compoxure approach

Clifton and his team have taken a route that mimics ESI/SSI while addressing their shortcomings. They have their own diagrams, but I put together another one to better illustrate the difference from the direct composition diagram above:

composition2

In this approach, composition is performed in the Compoxure proxy, which is inserted between Nginx and the micro-services. Instead of making its own network calls, each micro-service adds special attributes to the DIV where the common area fragment should be injected. These attributes control parameters such as what to include, which cache TTLs to employ, which cache key to use etc. There is a lot of detail in the way these properties are set (RTFM), but suffice it to say that the Compoxure proxy serves as an HTML filter that injects the content from the common areas into these DIVs as instructed.

<div cx-url='{{server:local}}/application/widget/{{cookie:userId}}'
     cx-cache-ttl='10s' cx-cache-key='widget:user:{{cookie:userId}}'
     cx-timeout='1s' cx-statsd-key="widget_user">
This content will be replaced on the way through
</div>

This approach has many advantages:

  1. The whole business of calling the common area service(s), caching the response according to TTLs, dealing with network failure etc. is handled by the proxy, not by micro-services.
  2. Content injection is stack-agnostic – it does not matter how the micro-service that serves the HTML is written (in Node.js, Java, Go etc.) as long as the response contains the expected tags
  3. Even in a system written entirely in Node.js, writing micro-services is easier – no special code to add to each controller
  4. Compoxure is used only to render the initial page. After that, Ajax takes over and the composition service is hit directly.

Contrasting the approach with direct composition, we identified the following areas of concern:

  1. Compoxure parses HTML in order to locate DIVs with special attributes. This adds a performance hit, although practical results suggest it is fairly small
  2. The special attributes are not HTML5 compliant (a ‘data-’ prefix would fix that). If this bothers you, you can configure Compoxure to completely replace the DIV bearing these attributes with the injected content, so this is likely a non-issue.
  3. Obviously Compoxure inserts itself in front of the micro-services and must not go down. It goes without saying that you need to run multiple instances and practice ZDD (Zero-Downtime Deployment).
  4. Caching is static i.e. content is cached based on TTLs. This makes picking the TTL values tricky – our approach that involves pub/sub allows us to use higher TTL values because we will be told when to drop the cached value.
  5. When you develop, the direct composition approach requires that you have your own micro-service up, as well as the common area services. Compoxure adds another process to start and configure locally in order to see your page with all the common areas rendered. If you hit your micro-service directly, all the DIVs with the ‘cx-’ attributes will be empty (or contain the placeholder content).

Discussion

Direct composition and Compoxure proxy are two valid approaches to the server-side document component model problem. They both work well, with different tradeoffs. Compoxure is more comfortable for developers – they just configure a special placeholder div and magic happens on the way to the browser. Direct composition relies on fewer moving parts, but makes each controller repeat the same code (unless that code is encapsulated in a shared Node.js module).

An approach that bridges both worlds and something we are seriously thinking of doing is to write a Dust.js helper that further simplifies inclusion of the common areas. Instead of importing a module, you would import a helper and then just use it in your markup:

<div>
{@import url="{headerUrl}" cache-ttl="10s"
cache-key="widget:user:{userid}" timeout="1s"}
</div>
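
For illustration, here is a minimal sketch of such a helper, reusing the hypothetical fetchCommonArea module from the direct composition section (the helper name and parameters mirror the markup above, and the ‘dustjs-helpers’ module provides the tap utility for resolving parameters):

var dust = require('dustjs-helpers'); // dustjs-linkedin extended with helpers
var common = require('./common-areas'); // the hypothetical module from earlier

dust.helpers.import = function (chunk, context, bodies, params) {
    // Resolve parameters that may contain Dust references such as {headerUrl}.
    var url = dust.helpers.tap(params.url, chunk, context);
    var key = dust.helpers.tap(params['cache-key'], chunk, context);
    var ttl = parseInt(params['cache-ttl'], 10) || 60; // '10s' parses as 10

    // chunk.map lets the helper write its output asynchronously.
    return chunk.map(function (chunk) {
        common.fetchCommonArea(url, key, ttl, function (err, html) {
            chunk.write(err ? '' : html); // degrade to empty content on failure
            chunk.end();
        });
    });
};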

Of course, Compoxure has some great properties that are not easy to replicate with this approach. For example, it does not pass TTL values to Redis directly, because that would cause the cached content to disappear after the countdown; Compoxure prefers to keep the last content past its TTL in case the service is down (better to serve slightly stale content than no content at all). This is a great feature and would need to be replicated here. I am sure I am missing other great features and Clifton will probably remind me of them.

Conclusion

In the end, I like both approaches for different reasons, and I can see a team using both successfully. In fact, I could see a solution where both are available – a Dust.js helper for Node.js/Dust.js micro-services, and Compoxure for everybody else (as a fallback for services that cannot or do not want to fetch common areas programmatically). Either way, the result is superior to the alternatives – I strongly encourage you to try it in your next micro-service project.

You don’t even have to give up your beloved client-side MVCs – we have examples where direct composition is used in a page with Angular.js apps and another with a Backbone app. These days, we are spoiled for choice.

© Dejan Glozic, 2014

The Year of Blogging Dangerously

Wow, has it been a year already? I am faking surprise, of course, because WordPress has notified me well ahead of time that I need to renew my dejanglozic.com domain. So in actuality I said ‘wow, will it soon be a year of me blogging’. Nevertheless, the sentiment is genuine.

It may be worthwhile to look back at the year, if only to reaffirm how quickly things change in this industry of ours, and also to notice some about-faces, changes of direction and mind.

I started blogging with the intent to stay true to the etymological sense of the word ‘blog’ (Web log). As a weekly diary of sorts, it was supposed to chronicle the trials and tribulations of our team as it boldly goes into the tumultuous waters of writing Web apps in the cloud. I settled on a weekly delivery, which is at times doable, at other times a nightmare. I could definitely do without the onset of panic when I realize that it is Monday and I forgot to write a new entry.

Luckily, the issues we deal with daily in our work provide plenty of material for the blog. In that regard, we are like a person who just moved into a new condo after his old apartment went up in flames and went to Ikea. If an eager clerk asks him ‘what do you need in particular’, his genuine answer must be ‘everything – curtains, rugs, a new mattress, a table, chairs, a sofa, a coffee table …’.

At least that’s how we felt – we were re-doing everything in our distributed system and we were able to re-use very little from our past lives, having boldly decided to jump ahead as far as possible and start clean.

Getting things out of the system

That does not mean that the blog actually started with a theme or a direction. In the inaugural post The Turtleneck and The Hoodie, I proudly declared that I care both about development AND the design and refuse to choose. But that is not necessarily a direction to sustain a blog. It was not an issue for a while due to all these ideas that were bouncing in my head waiting to be written down. Looking back, I think it sort of worked in a general-purpose, ‘good advice’ kind of way. Posts such as Pulling Back from Extreme AJAX or A Guide to Storage for ADD Types were at least very technical and based on actual research and hands-on experience.

Some of the posts were just accumulated professional experience that I felt the need to share. Don’t Get Attached to Your Code or Dumb Code Good, Smart Code Bad were crowd pleasers, at least in the ‘yeah, it happened to me too’ way. Kind of like reading that in order to lose weight you need to eat smart and go outside. Makes a lot of sense except for the execution, which is the hard part.

Old man yells at the cloud

Funnily enough, some of my posts, after using up all the accumulated wisdom to pass on, sound somewhat cranky in hindsight. I guess I disagreed with some ideas and directions I noticed, and the world ignored my disagreement and continued, unimpressed. How dare people do things I don’t approve of!

Two cranky posts that are worth highlighting are Swimming Against the Tide, in which I am cranky regarding client side MVC frameworks, and Sitting on the Node.js Fence, in which I argue with myself on pros and cons of Node.js. While my subsequent posts clearly demonstrate that I resolved the latter dilemma and went down the Node.js route hook, line and sinker, I am still not convinced that all that JavaScript required to write non-trivial Single Page Apps (SPAs) is a very good idea, particularly if you have any ambition to run them on mobile devices. But it definitely sounds funny to me now – as if I was expressing an irritated disbelief that, after publishing all the bad consequences of practicing extreme Ajax, people still keep doing it!

I heart Node.js

Of course, once our team went down the Node.js route (egged on and cajoled by me), you could not get me to shut up about it. In fact, the gateway drug to it was my focus on templating solutions, and our choice of Dust.js (the LinkedIn fork). By the way, it is becoming annoying to keep adding ‘LinkedIn fork’ all the time – that’s the only version that is actively worked on anyway.

Articles from this period more or less set the standard for my subsequent posts: they are about 1500 words long, with a mix of outgoing links, a focused technical topic, and illustrative embedded tweets (thanks to @cra, who taught me how not to embed tweets as images like a loser). And since no story about Node.js apps is complete without Web Sockets and clustering, both were duly covered.

I know micro-services!

Of course, it was not until I attended NodeDay in February that a torrent of posts on micro-services was unleashed. The first half of 2014 was all ablaze with posts and tweets about micro-services around the world anyway, which my new Internet buddy Adrian Rossouw duly documented in his Wayfinder field guide. It was at times comical to follow the food fights about who would provide the bestest definition of them all:

If you follow a micro-services tag for my blog, the list of posts is long and getting longer every week. At some point I will stop tagging posts with it, because if everything is about them, nothing is – I need to be more specific. Nevertheless, I am grateful for the whole topic – it did after all allow me to write the most popular post so far: Node.js and Enterprise – Why Not?

What does the future hold?

Obviously Node.js, messaging and micro-services will continue to dominate our short-term horizon as we are wrestling with them daily. I spoke about them at the recent DevCon5 in NYC and intend to do the same at the upcoming nodeconf.eu in September.

Beyond that, I can see some possible future topics (although I can’t promise anything – it is enough to keep up as it is).

  • Reactive programming – I recently presented at the first Toronto Reactive meetup, and noticed this whole area of Scala and Akka that is a completely viable alternative for implementing micro-services and scalable distributed systems that conform to the tenets of the Reactive Manifesto. I would like to probe further.
  • Go language – not only because TJ decided to go that route, having an alternative to Node.js while implementing individual micro-services is a great thing, particularly for API and back-end services (I still prefer Node.js for Web serving apps).
  • Libchan – Docker’s new project (like Go channels over the network) currently requires Go (duh), but I am sure a Node.js version will follow.
  • Famo.us – I know, I know, I have expressed my concerns about their approach, but I did the same with Node.js and look at me now.
  • Swift – I am a registered Xcode developer and have the Swift-enabled update to it. If only I could find some time to actually create some native iOS apps. Maybe I will like Swift more than I do Objective-C.

I would like to read this post in a year and see if any of these bullets panned out (or were instead replaced with a completely different list of even newer and cooler things). In this industry, I would not be surprised.

Whatever I am writing about, I would like to thank you for your support and attention so far and hope to keep holding it just a little bit longer. Now if you excuse me, I need to post this – I am already late this week!

© Dejan Glozic, 2014

Release the Kraken.js – Part I

Image credit: Yarnington 2011

Of course when you’re a kid, you can be friends with anybody. […] You like Cherry Soda? I like Cherry Soda! We’ll be best friends!

Jerry Seinfeld

We learned from Renée Zellweger that sometimes you can have people at “Hello”. In the case of myself and the project Kraken.js, it had me at “Node.js/express.js/Dust.js/jQuery/Require.js/Bootstrap”. OK, not fit for a memorable movie quote, but fairly accurate – that list of libraries and platforms is what we at JazzHub eventually arrived at as our new stack for writing cloud development tools for BlueMix. Imagine my delight when all these checkboxes were ticked in a presentation describing PayPal’s experience of moving from Java to Node for their Lean UX needs, and announcing project Kraken. You can say I had my ‘Cherry Soda’ moment.

Of course, blessed with ADD as I am, I made a Flash-like scan of the website, never spending time to actually, you know, study it. Part of it is our ingrained fear and loathing of frameworks in general, and also our desire to start clean and very slowly add building blocks as we need them. This approach is also advocated by Christian Heilmann in his article ‘The Vanilla Web Diet’ in the Smashing Magazine Book 4.

Two things changed my mind. First, as part of NodeDay I had the pleasure of meeting and talking to Bill Scott, Senior Director of UX Engineering, Jeff Harrell, Director of Engineering Architecture, and Richard Ragan, Principal Engineer and ‘the Dust.js guy’, all from PayPal, and I now feel retroactively bad for not giving the fruit of their labor more attention. Second, I had a chat with a friend from another IBM group also working on a Node.js project, and he said: “If we were to start again today, I would definitely give Kraken.js a go – I wish we had it back then, because we had to essentially re-implement most of it”. Strike two – a blog post is in order.

So now, with a combination of guilt, intra-company encouragement and the natural affinity of our technologies, I started making an inventory of what Kraken has to offer. At this point, hilarity ensued: the Getting Started page requires installation of Yeoman, and then the Kraken generator. Problem: there is no straightforward way to install Yeoman on Windows. I noticed at a recent NodeDay that when it comes to Node, Macs outnumber Windows machines 99 to 1, but considering that Kraken is targeting enterprise developers switching to Node, chances are many of them are running Windows on their work machines. A Yeoman for Windows would definitely help spread the good word.

Update: it turns out I was working from outdated information – after posting the article, I tried again and managed to install Yeoman on a Windows 7 machine.

Using Yeoman is of course convenient for bootstrapping your apps – you create a directory, type ‘yo kraken’, answer a couple of questions and the scaffolding of your app is set up for you. At this point our team lead Curtis said ‘I would be fine with a zip file and no need to install Yeoman’. He would have been right if using Yeoman were a one-time affair, but it turns out that the Kraken Yeoman generator is a gift that keeps on giving. You can return to it to incrementally build up your app – you can add new pages, models, controllers, locales etc. (as sketched below). It is a major boost to productivity – I can see why they went through the trouble.
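
A later session with the generator could look something like this (the sub-generator names are from memory and may well differ in current versions):

yo kraken:page about      # scaffold a new page (template, controller, test)
yo kraken:model user      # add a new model
yo kraken:locale es_ES    # add a new locale for translation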

Anyway, having switched to my MacBook Pro, I installed Yeoman, then the Kraken generator, then typed ‘yo kraken’ and my app was all there. Once you start analyzing the app structure, you see the Kraken overhead – there are more directories and more files, and Kraken has decided how things should be structured and tries to stick to it. On the surface it is less clear for the beginner, but I am already past the pure express ‘Hello, World’ apps. I am interested in how to write real-world production apps for the enterprise, and Kraken delivers. It addresses the following enterprise-y needs:

  1. Structure – the app is clearly structured – models, controllers, templates, client and server side – it is all well labelled and easy to figure out
  2. i18n – Kraken is serious about enterprise needs, so internationalization is baked in right from the start
  3. Errors – it provides translated files for common HTTP errors (404, 500, 503) as a pattern
  4. Templates – Kraken uses Dust, which fits our needs like a glove, and the templates are located in the public folder so that they can be served to both Node.js and the browser, allowing template sharing. Of course, templates are pre-compiled into JS files for performance reasons.
  5. Builds – Kraken uses Grunt to automate all the boring tasks of preparing your app for deployment, and also incorporates Mocha tests as a standard part. Having a consistent approach to testing (and the ability to incrementally add tests for new pages) will improve quality of your apps in the long run.

Kraken.js HelloWorld app directory structure

One thing that betrays that PayPal deals with financial transactions is how serious they are about security. Thanks to its Lusca module, Kraken bakes protection against the most prevalent types of attacks right into your app. All enterprise developers know this must be done before going into production, so why tack it on in a panicky scramble at the last minute when you can bake it into the code from the start?

Of course, the same goes for i18n – the Makara module handles translations. At this point I paused, looking at a familiar file format from the Java world – *.properties files. My gut reaction would be to make all the supporting files JSON (and to the Kraken authors’ credit, they did do so for all the configuration files). However, after thinking about the normal translation process, I realized that NLS files are passed to a translation team that is not necessarily very technical. Properties files are brain-dead simple – key/value pairs with comments. This makes them easy to author – you don’t want to receive JSON NLS files with missing quotes, commas or curly braces.
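
For illustration, a hypothetical properties file sent to a translation team could look like this (keys and values are made up):

# Comments like this one give translators context for the strings below
index.title=Kraken Example
index.greeting=Hello and welcome!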

Makara is designed to work with Dust.js templates, and Kraken provides a Dust helper to inline locale-based substitutions into the template directly (for static cases), as well as an alternative for more dynamic situations.

As for the main file itself, it did give me pause because the familiar express app was nowhere to be found (I absent-mindedly typed ‘node app.js’, not realizing the app is really in index.js):

'use strict';

var kraken = require('kraken-js'),
    app = {};

app.configure = function configure(nconf, next) {
    // Async method run on startup.
    next(null);
};

app.requestStart = function requestStart(server) {
    // Run before most express middleware has been registered.
};

app.requestBeforeRoute = function requestBeforeRoute(server) {
    // Run before any routes have been added.
};

app.requestAfterRoute = function requestAfterRoute(server) {
    // Run after all routes have been added.
};

if (require.main === module) {
    kraken.create(app).listen(function (err) {
        if (err) {
            console.error(err.stack);
        }
    });
}

module.exports = app;

This is where the ‘framework vs library’ debate can be legitimate – with a true library, I would require express, dust etc. in addition to kraken, start the express server and somehow hook kraken up to it. I am sure the express server is somewhere under the hood, but I didn’t spend enough time to find it (Lenny Markus is adamant this is possible). This makes me wonder how I would approach using cluster with kraken – forking workers to use all the machine cores. Another use case – how would I use Socket.io here? Socket.io piggy-backs on the express server and reuses the port. I will report my findings in the second instalment.

I would be remiss not to include Kappa here – it proxies the NPM repository, allowing control over what is pulled in via NPM, and provides for intra-enterprise sharing of internal modules. Every enterprise will want to reuse code but not necessarily immediately make it Open Source on GitHub. Kappa fits the bill, if only for providing an option – I have not tested it and don’t know if it would work in our particular case. If anything, it does not suffer from NIH – it essentially wraps the well regarded npm-delegate module.

Let’s not forget builds: as part of scaffolding, Yeoman creates a Gruntfile.js configured to build your app and do an amazing amount of work you will get for free:

  1. Run JSHint to verify your JS files
  2. Build Require.js modules
  3. Run less compiler on the css files
  4. Build locale-specific versions of your dust templates based on i18n properties
  5. Compile dust templates into *.js files for efficient use on the client
  6. Run Mocha tests

I cannot stress this enough – after the initial enthusiasm with Node.js, you are back to doing all these boring but necessary tasks, and having them all hooked up in a new app allows teams to scale while retaining quality standards and consistency. Remember, as a resident Node.js influencer you will deal with many a well-intentioned but inexperienced Java, PHP or .NET developer – some structural reinforcement of good practices is good insurance against everything unravelling into chaos. There is a tension here – you want to build a modern system based on Node.js micro-services, but you don’t want each micro-service to be written in a different way.

Kraken is well documented, and there are plenty of examples showing how to do most of the usual real-world tasks. For an enterprise developer ready to jump into Node.js, the decision tree would look like this:

  1. If you prefer frameworks and like opinionated software for the sake of consistency and repeatability, use Kraken in its entirety
  2. If you are starting with Node.js, learn to write express.js apps first – you want to understand how things work before adding abstractions on top of them. Then, if you are OK with losing some control to Kraken, go to (1). Otherwise, consider continuing with your express app but cherry-picking Kraken modules as you go. To be fair, if Kraken is a framework at all (a very lightweight one as frameworks go), it falls into the class of harvested frameworks – something useful extracted out of the process of building applications. This is the good kind.
  3. If you ‘almost’ like Kraken but some details bother you or don’t quite work in your particular situation, consider participating in the Open Source project – helping Kraken improve is much cheaper than writing your own framework from scratch.

A friend of mine once told me (seeing me dragging my guitar for a band rehearsal) that she “must sit down one afternoon and learn how to play the guitar already”. I wisely opted out of such a mix of arrogance and cluelessness here and called this ‘Part I’ of the Kraken.js article. I am sure I will be able to get a better idea of its relative strengths and weaknesses after a prolonged use. For now, I wanted to share my first impressions before I lose the innocence and lack of bias.

Meanwhile, in the words of “the most interesting man in the world”: I don’t always use frameworks for Node.js development, but when I do, I prefer Kraken.js.

© Dejan Glozic, 2014

Dust.js: Such Templating

Last week I started playing with Node.js and LinkedIn’s fork of Dust.js for server side templating. I think I am beginning to see the appeal that made LinkedIn choose Dust.js over a number of alternatives. Back when LinkedIn had a templating throwdown and chose Dust.js, they classified it in the ‘logic-less’ group, in contrast to the ‘embedded JavaScript’ group that allows you to write arbitrary JavaScript in your templates. Here is the list of reasons I like Dust.js over the alternatives:

  1. Less logic instead of logic-less. Unlike Mustache, it allows you to have some sensible view logic in your templates (and add more via helpers).
  2. Single instead of double braces. For some reason Mustache and Handlebars decided that if one set of curly braces is good, two sets must be better. It is puzzling that people insisting on DRY see no irony in needing two sets of delimiters for template commands and variables. Typical Dust.js templates look cleaner as a result.
  3. DRY but not to the extreme (I am looking at you, Jade). I can fully see the HTML markup that will be rendered. Life is too short to have to constantly map the shorthand notation of Jade to what it will eventually produce (and bang your head on the table when it barfs at you or spits out wrong things).
  4. Partials and partial injection. Like Mustache or Handlebars, Dust.js has partials. Like Jade, Dust.js has injection of partials, where the partial reserves a slot for the content that the caller will define. This makes it insanely easy to create ‘skeleton’ pages with common areas, then inject payload. One thing I did was set up three injection areas: in HEAD, in BODY where the content should go and at the end of BODY for injections of scripts.
  5. Helpers. In addition to very minimal logic of the core syntax, helpers add more view logic if needed (‘dustjs-helpers’ module provides some useful helpers that I took advantage of; you can write your own helpers easily).
  6. Stuff I didn’t try yet. I didn’t try streaming and asynchronous rendering, or complex paths to the data objects, but I intend to study how I can take advantage of them. They seem like something that will come in handy once I get to more advanced use cases. And since JSPs support streaming too, having it in Dust means we are not giving up anything by moving to Node/Dust.
  7. Client side templating. This also falls under ‘stuff I didn’t try yet’, but I am singling it out because it requires a different approach. So far our goal was to replace our servlet/JSP server side with Node/Dust pair. Having the alternative to render on the client opens up a whole new avenue of use cases. We have been burnt in the past with too much JavaScript on the client, so I want to approach this carefully, but a lot of people made a case for client side rendering. One thing I would like to try out is adding logic in the controller to sniff the user agent and render the same template on the server or on the client (say, render on the server for lesser browsers or search engine bots). We will definitely try this out.
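
As a sketch of that last idea, an Express controller could branch on the user agent like this (the bot check and template names are hypothetical):

exports.page = function (req, res) {
    var ua = req.headers['user-agent'] || '';
    var isBot = /googlebot|bingbot/i.test(ua);

    if (isBot) {
        // Search engine bots and lesser browsers get fully server-rendered HTML.
        res.render('page', { title: 'Page' });
    } else {
        // Capable browsers get a shell page; the same Dust template, compiled
        // to JavaScript, is then loaded and rendered on the client.
        res.render('shell', { title: 'Page' });
    }
};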

In our current project we use a number of cooperating Java apps with plain servlets for controllers and JSPs for views. We were very disciplined about not using direct Java expressions in JSPs, only JSTL/EL (Tag Library and Expression Language). JSPs took a lot of flak in the dark days of spaghetti JSPs with a lot of Java code embedded in views, essentially driving the current aversion to logic in templates to absurd levels in Mustache. It is somewhat ironic that you can easily create similar spaghetti Jade templates with liberal use of embedded JavaScript, so that monster is alive and well.

Because of our discipline, porting our example app to Node.js, with Dust.js for views and Express.js for middleware and routing, was easy. Our usual client stack (jQuery, Require.js, Bootstrap) was perfectly usable – we just copied the client code over.

Here is the shared ‘layout.dust’ file that is used by each page in the example app:

<!DOCTYPE html>
<html>
  <head>
    <title>{title}</title>
	<meta http-equiv="X-UA-Compatible" content="IE=edge">
	<meta name="viewport" content="initial-scale=1.0, maximum-scale=1.0, user-scalable=0" />
	<link rel="stylesheet" href="/stylesheets/base.css">
	{+head/}
  </head>

  <body class="jp-page">
	{>navbar/}
	<div class="jp-page-content">
	   {+content/}
	</div>

	<script src="/js/base.min.js"></script>
	<script>
		requirejs.config({
			baseUrl: "/js"
		});
	</script>
	{+script/}
</body>
</html>

Note the {+head/}, {+content/} and {+script/} sections – they are placeholders for content that will be injected from templates that include this partial. This ensures the styles, meta properties, content and scripts are injected in the proper places in the template. One thing to note is that placeholders don’t have to be empty – you can place default content between the opening and closing tags of the section, but we didn’t have any default content to provide here. You can view an empty tag as an ‘injection point’ (this is where the stuff will go), whereas a placeholder with some default content is more like an ‘overriding point’ (the stuff in the caller template will override this).
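
A tiny hypothetical partial illustrating both flavors:

{! An injection point - renders nothing unless the caller supplies content !}
{+head/}

{! An overriding point - renders the default unless the caller overrides it !}
{+footer}
   Default footer content.
{/footer}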

The header is a partial pulled into the shared template. It was put together quickly – I can easily see the links in the header being passed in as an array of objects (it would make the partial even cleaner). Note the use of the helper for controlling the selected highlight in the header. It simply compares the value of the active link to the static values and adds the CSS class ‘active’ if they match:

<div class="navbar navbar-inverse navbar-fixed-top jp-navbar" role="navigation">
   <div class="navbar-header">
     <button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-ex1-collapse">
        <span class="sr-only">Toggle Navigation</span>
        <span class="icon-bar"></span>
        <span class="icon-bar"></span>
        <span class="icon-bar"></span>
     </button>
	<a class="navbar-brand" href="/">Examples<div class="jp-jazz-logo"></div></a>
   </div>
   <div class="navbar-collapse navbar-ex1-collapse collapse">
      <ul class="nav navbar-nav">
	<li {@eq key=active value="simple"}class="active"{/eq}><a href="/simple">Simple</a></li>
	<li {@eq key=active value="i18n"}class="active"{/eq}><a href="/i18n">I18N</a></li>
	<li {@eq key=active value="paging"}class="active"{/eq}><a href="/paging">Paging</a></li>
	<li {@eq key=active value="tags"}class="active"{/eq}><a href="/tags">Tags</a></li>
	<li {@eq key=active value="widgets"}class="active"{/eq}><a href="/widgets">Widgets</a></li>
	<li {@eq key=active value="opensocial"}class="active"{/eq}><a href="/opensocial">OpenSocial</a></li>
      </ul>
      <div class="navbar-right">
	<ul class="nav navbar-nav">
	   <li><a href="#">Not logged in</a></li>
	</ul>
      </div>
   </div>
</div>

Finally, here is the sample page using these partials:

{>layout/}
{<content}
  <h2>Simple Page</h2>
    <p>
      This is a simple HTML page generated using Node.js and Dust.js, which loads CSS and JavaScript.<br/>
      <a id="myLink" href="#">Click here to say hello</a> 
    </p>
{/content}
{<script}
   <script src="/js/simple/simple-page.js"></script>
{/script}

In this page, we include the shared partial ‘layout.dust’, then inject the content area into the ‘content’ placeholder and some script into the ‘script’ placeholder.

The Express router for this page is very short – all it does is render the template. Note how we are passing the title of the page and also the value of the ‘active’ property to ensure the proper link is highlighted in the header partial:

exports.simple = function(req, res){
  res.render('simple', { title: 'Simple', active: 'simple' });
};

Running the Node app gives you the following in the browser:

dust-simple-page

Since we are using Bootstrap, we also get responsive design thrown in for free:

dust-simple-page-responsive

There you go. Sometimes it pays to follow in the footsteps of those who did the hard work before you – LinkedIn’s fork of Dust.js is definitely a very comfortable and capable templating engine and a great companion to Node.js and Express.js. We feel confident using it in our own projects. In fact, we have decided to write one of the apps in our project using this exact stack. As usual, you will be the first to know what we learned as we are going through it.

© Dejan Glozic, 2014

On the LinkedIn’s Dusty Trail

There is an old New Yorker cartoon (from 1993) where a dog working on the computer tells another dog: “On the Internet, nobody knows you are a dog”. That’s how I occasionally feel about GitHub projects – they could be solid, multi-contributor powerhouses churning out release after release, or they could be great ideas stalled when the single contributor got distracted by something new and shiny. The thing is, you need to inspect all the project artifacts carefully to tell which is which – single contributors can mount a powerful and amplified presence on GitHub (until they leave, that is).

This issue comes with the territory in Open Source (considering how much you pay for all the hard work of others), but nowhere was it more acute than in LinkedIn’s choice of a templating library in 2011. In an often quoted blog post, the LinkedIn engineering team embarked on a quest to standardise around one templating library that could be run both on the server and on the client, and also check a number of other boxes they put forward. Out of several contenders they finally settled on Dust.js. This library uses curly braces for pattern substitution, which puts it in a hipster-friendly category alongside Mustache and Handlebars. But here is the rub: while the library ticks most of LinkedIn’s self-imposed requirements, its community support left much to be desired. The sole contributor seemed unresponsive.

Now, if it were me, I would have moved on, but LinkedIn’s team decided they would not be deterred by this. In fact, it looks like they kind of liked the fact that they could evolve the library as they learned by doing. The problem was that committer rights work by bootstrapping – only the original committer could accept LinkedIn’s changes, which apparently didn’t happen with sufficient snap. Long story short, behold the ‘LinkedIn Fork of Dust.js’, or ‘dustjs-linkedin’ as it is known to NPM.

I followed this story with mild amusement, shaking my head, until the end of 2013, when PayPal saw the Node.js light. As part of their Node.js conversion, they picked LinkedIn’s fork of Dust.js for their templating needs. This reminded me of how penguins jump into water – they all wait until one of them jumps, then they all follow in quick succession. Channeling my own inner penguin, I decided the water was fine and started playing with dustjs-linkedin myself.

This is not my first foray into the world of Node.js, but in my first attempt I used Jade, which is just too DRY for my taste. Being a long-time Eclipse user, I just could not revert to the command line, so I resorted to a collection of Web development tools, then added Nodeclipse, mostly for the project creation wizard and the launcher. Eclipse is very important to me because it answers one of the key issues plaguing Node.js developers who go beyond ‘Hello, World’ – how do I control and structure all the JavaScript files (incidentally one of the hard questions that Zef Hemel posed in his blog post on blank canvas projects).

Then again, Nodeclipse is not perfect, and dustjs-linkedin is not one of the rendering engines covered by its project wizard. I had to create an Express project configured for Jade, turn around and delete Jade from the project, and use NPM to install dustjs-linkedin locally (i.e. in the project tree under ‘node_modules’), like so:

nodeclipse-project

After working with Nodeclipse for a while, and not being able to use its Node launcher (I had to configure my own external tool launcher), I am now questioning its value, but at least it got the initial structure set up for me. Now that I have a good handle on the overall structure, I could create new Node projects myself, so general-purpose Web tooling with HTML, JSON and JavaScript editors and helpers may be all you need (of course, you also need to install Node.js and NPM, but you would need to do that in all scenarios).

Hooking up dustjs-linkedin also requires consolidate.js, in a way that is a bit puzzling to me, but it seems to work well, so I didn’t question it (the author is TJ of Express fame, and exploring the code I noticed that dustjs-linkedin is actually listed as one of the recognized engines). The changes required are to pull in dustjs-linkedin and consolidate, declare dust as a new engine and map it as the view engine for the app:

var express = require('express')
, routes = require('./routes')
, dust = require('dustjs-linkedin')
, cons = require('consolidate')
, user = require('./routes/user')
, http = require('http')
, path = require('path');

var app = express();

// all environments
app.set('port', process.env.PORT || 3000);
app.set('views', __dirname + '/views');
app.engine('dust', cons.dust);
app.set('view engine', 'dust');

That was pretty painless so far. We have configured our views to live in the /views directory, so Dust files placed there can be directly found by Express. Since Express is an MVC framework (although much lighter than what you are normally used to in the JEE world), the C part is handled by routers, placed in the /routes directory. Our small example will have just a landing page rendered by /views/index.dust, and the controller will be /routes/index.js. However, to add a bit of interest, I will toss in block partials, showing how to create a base template and then override the ‘payload’ in the child template.

We will start by defining the base template in ‘layout.dust’:

<!DOCTYPE html>
<html>
  <head>
    <title>{title}</title>
    <link rel='stylesheet' href='/stylesheets/style.css' />
  </head>
  <body>
    <h1>{title}</h1>
	{+content}
	This is the base content.
	{/content}
  </body>
</html>

We can now include this template in ‘index.dust’ and define the content section:

{>layout/}
{<content}
<p>
This is loaded from a partial.
</p>
<p>
Another paragraph.
</p>
{/content}

We now need to define the index.js controller in /routes, because the controller invokes the view for rendering:

/*
* GET home page.
*/
exports.index = function(req, res) {
   res.render('index',
           { title: '30% Turtleneck, 70% Hoodie' });
};

In the code above we are sending the result of rendering the view using Dust to the response. We specify the collection of key/value pairs that will be used by Dust for variable substitution. The only part left is to hook up our controller to the site path (our root) in app.js:

app.get('/', routes.index);

And that is it. Running our Node.js app using the Nodeclipse launcher makes it start listening on localhost port 3000:

dusty_road_browser

So far so good, and it is probably best to stop while we are ahead. We have a Node.js project in Eclipse configured to use Express and LinkedIn’s fork of Dust.js for templating, and everything is working. In my next installment, I will dive into Dust.js in earnest. This is going to be fun!

© Dejan Glozic, 2014