Dust.js: Such Templating


Last week I started playing with Node.js and LinkedIn’s fork of Dust.js for server side templating. I think I am beginning to see the appeal that made LinkedIn choose Dust.js over a number of alternatives. Back when LinkedIn had a templating throwdown and chose Dust.js, they classified it in the ‘logic-less’ group, in contrast to the ‘embedded JavaScript’ group that allows you to write arbitrary JavaScript in your templates. Here is the list of reasons I like Dust.js over the alternatives:

  1. Less logic instead of logic-less. Unlike Mustache, it allows you to have some sensible view logic in your templates (and add more via helpers).
  2. Single instead of double braces. For some reason Mustache and Handlebars decided that if one set of curly braces is good, two sets must be better. It is puzzling that people insisting on DRY see no irony in needing two sets of delimiters for template commands and variables. Typical Dust.js templates look cleaner as a result.
  3. DRY but not to the extreme (I am looking at you, Jade). I can fully see the HTML markup that will be rendered. Life is too short to have to constantly map the shorthand notation of Jade to what it will eventually produce (and bang your head on the table when it barfs at you or spits out wrong things).
  4. Partials and partial injection. Like Mustache or Handlebars, Dust.js has partials. Like Jade, Dust.js has injection of partials, where the partial reserves a slot for the content that the caller will define. This makes it insanely easy to create ‘skeleton’ pages with common areas, then inject payload. One thing I did was set up three injection areas: in HEAD, in BODY where the content should go and at the end of BODY for injections of scripts.
  5. Helpers. In addition to very minimal logic of the core syntax, helpers add more view logic if needed (‘dustjs-helpers’ module provides some useful helpers that I took advantage of; you can write your own helpers easily).
  6. Stuff I didn’t try yet. I didn’t try streaming and asynchronous rendering, or complex paths into the data objects, but I intend to study how I can take advantage of them. They seem like features that will come in handy once I get to more advanced use cases. Also, since JSPs support streaming, Dust’s streaming support means we are not giving up anything by moving to Node/Dust.
  7. Client side templating. This also falls under ‘stuff I didn’t try yet’, but I am singling it out because it requires a different approach. So far our goal has been to replace our servlet/JSP server side with the Node/Dust pair. Having the alternative to render on the client opens up a whole new avenue of use cases. We have been burnt in the past by too much JavaScript on the client, so I want to approach this carefully, but a lot of people have made a case for client side rendering. One thing I would like to try out is adding logic to the controller to sniff the user agent and render the same template on the server or on the client (say, render on the server for lesser browsers or search engine bots); see the sketch after this list. We will definitely try this out.
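
A minimal sketch of that idea, reusing the route that appears later in this post (the user agent regex and the ‘simple-shell’ template are hypothetical, not something we actually built):

exports.simple = function(req, res) {
  var ua = req.headers['user-agent'] || '';
  var model = { title: 'Simple', active: 'simple' };
  // lesser browsers and crawlers get fully server-rendered markup
  if (/MSIE [678]|bot|spider/i.test(ua)) {
    res.render('simple', model);
  } else {
    // capable browsers get a shell page with the model as JSON;
    // the client then renders the same template with dust.render()
    res.render('simple-shell', { model: JSON.stringify(model) });
  }
};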

In our current project we use a number of cooperating Java apps with plain servlets for controllers and JSPs for views. We were very disciplined about not using direct Java expressions in JSPs, only JSTL/EL (Tag Library and Expression Language). JSPs took a lot of flak in the dark days of spaghetti JSPs with lots of Java code embedded in views, an era that essentially drove the current aversion to logic in templates to the absurd levels seen in Mustache. It is somewhat ironic that you can easily create similar spaghetti Jade templates with liberal use of embedded JavaScript, so that monster is alive and well.

Because of our discipline, porting our example app to Node.js with Dust.js for views, Express.js for middleware and routing was easy. Our usual client stack (jQuery, Require.js, Bootstrap) was perfectly usable – we just copied the client code over.

Here is the shared ‘layout.dust’ file that is used by each page in the example app:

<!DOCTYPE html>
<html>
  <head>
    <title>{title}</title>
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="initial-scale=1.0, maximum-scale=1.0, user-scalable=0" />
    <link rel="stylesheet" href="/stylesheets/base.css">
    {+head/}
  </head>

  <body class="jp-page">
    {>navbar/}
    <div class="jp-page-content">
      {+content/}
    </div>

    <script src="/js/base.min.js"></script>
    <script>
      requirejs.config({
        baseUrl: "/js"
      });
    </script>
    {+script/}
  </body>
</html>

Note the {+head/}, {+content/} and {+script/} sections – they are placeholders for content that will be injected by templates that include this partial. This ensures the styles, meta properties, content and scripts are injected in the proper places in the template. Note also that placeholders don’t have to be empty – you can place default content between an opening and a closing tag of the section; we simply had no default content to provide here. You can view an empty tag as an ‘injection point’ (this is where the stuff will go), whereas a placeholder with default content is more of an ‘overriding point’ (content in the caller template will override the default).
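
To illustrate the difference, here is a minimal sketch (the section names are arbitrary):

{! injection point: renders nothing unless the including template contributes a block !}
{+extras/}

{! overriding point: renders the default unless the including template overrides it !}
{+content}
  <p>This shows up when nobody overrides 'content'.</p>
{/content}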

The header is a partial pulled into the shared template. It was put together quickly – I can easily see the links in the header being passed in as an array of objects instead, which would make the partial even cleaner (see the sketch after the code). Note the use of a helper to control which link is highlighted in the header. It simply compares the value of ‘active’ to a static value and adds the CSS class ‘active’ when they match:

<div class="navbar navbar-inverse navbar-fixed-top jp-navbar" role="navigation">
  <div class="navbar-header">
    <button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-ex1-collapse">
      <span class="sr-only">Toggle Navigation</span>
      <span class="icon-bar"></span>
      <span class="icon-bar"></span>
      <span class="icon-bar"></span>
    </button>
    <a class="navbar-brand" href="/">Examples<div class="jp-jazz-logo"></div></a>
  </div>
  <div class="navbar-collapse navbar-ex1-collapse collapse">
    <ul class="nav navbar-nav">
      <li {@eq key=active value="simple"}class="active"{/eq}><a href="/simple">Simple</a></li>
      <li {@eq key=active value="i18n"}class="active"{/eq}><a href="/i18n">I18N</a></li>
      <li {@eq key=active value="paging"}class="active"{/eq}><a href="/paging">Paging</a></li>
      <li {@eq key=active value="tags"}class="active"{/eq}><a href="/tags">Tags</a></li>
      <li {@eq key=active value="widgets"}class="active"{/eq}><a href="/widgets">Widgets</a></li>
      <li {@eq key=active value="opensocial"}class="active"{/eq}><a href="/opensocial">OpenSocial</a></li>
    </ul>
    <div class="navbar-right">
      <ul class="nav navbar-nav">
        <li><a href="#">Not logged in</a></li>
      </ul>
    </div>
  </div>
</div>
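
As an aside, here is roughly what the data-driven variant mentioned above could look like – a sketch only, assuming the controller passes in a hypothetical ‘links’ array such as [{ id: "simple", href: "/simple", label: "Simple" }, ...]:

<ul class="nav navbar-nav">
  {#links}
  <li {@eq key=active value=id}class="active"{/eq}><a href="{href}">{label}</a></li>
  {/links}
</ul>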

Finally, here is the sample page using these partials:

{>layout/}
{<content}
  <h2>Simple Page</h2>
    <p>
      This is a simple HTML page generated using Node.js and Dust.js, which loads CSS and JavaScript.<br/>
      <a id="myLink" href="#">Click here to say hello</a> 
    </p>
{/content}
{<script}
   <script src="/js/simple/simple-page.js"></script>
{/script}

In this page, we include the shared partial ‘layout.dust’, then inject the page content into the ‘content’ placeholder and a script tag into the ‘script’ placeholder.

The Express router for this page is very short – all it does is render the template. Note how we pass in the title of the page, as well as the value of the ‘active’ property that ensures the proper link is highlighted in the header partial:

exports.simple = function(req, res){
  res.render('simple', { title: 'Simple', active: 'simple' });
};
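
For completeness, the handler is hooked to its path in app.js along these lines (assuming the routes module is required as ‘routes’, as is customary in Express projects):

app.get('/simple', routes.simple);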

Running the Node app gives you the following in the browser:

[Screenshot: the simple page rendered in the browser]

Since we are using Bootstrap, we also get responsive design thrown in for free:

[Screenshot: the same page rendered at a narrow, mobile-friendly width]

There you go. Sometimes it pays to follow in the footsteps of those who did the hard work before you – LinkedIn’s fork of Dust.js is definitely a very comfortable and capable templating engine and a great companion to Node.js and Express.js. We feel confident using it in our own projects. In fact, we have decided to write one of the apps in our project using this exact stack. As usual, you will be the first to know what we learned as we are going through it.

© Dejan Glozic, 2014


On the LinkedIn’s Dusty Trail


There is an old New Yorker cartoon (from 1993) in which a dog working on a computer tells another dog: “On the Internet, nobody knows you are a dog”. That is how I occasionally feel about GitHub projects – one could be a solid, multi-contributor powerhouse churning out release after release, while another is a great idea stalled because its single contributor got distracted by something new and shiny. The thing is, you need to inspect all the project artifacts carefully to tell which is which – a single contributor can mount a powerful and amplified presence on GitHub (until they leave, that is).

This issue comes with the territory in Open Source (considering how much you pay for all the hard work of others), but nowhere was it more acute than in LinkedIn’s choice of templating library in 2011. In an often quoted blog post, the LinkedIn engineering team embarked on a quest to standardise on one templating library that could run both on the server and on the client, and that checked a number of other boxes they put forward. Out of several contenders they finally settled on Dust.js. This library uses curly braces for pattern substitution, which puts it in a hipster-friendly category alongside Mustache and Handlebars. But here is the rub: while the library ticks most of LinkedIn’s self-imposed requirements, its community support leaves much to be desired – the sole contributor seemed unresponsive.

Now, if it were me, I would have moved on, but LinkedIn’s team decided they would not be deterred. In fact, it looks like they rather liked the fact that they could evolve the library as they learned by doing. The problem was that committer rights work by bootstrapping – only the original committer could accept LinkedIn’s changes, which apparently didn’t happen with sufficient snap. Long story short, behold the ‘LinkedIn Fork of Dust.js’, or ‘dustjs-linkedin’ as it is known to NPM.

I followed this story with mild amusement and head shaking until the end of 2013, when PayPal saw the Node.js light. As part of their Node.js conversion, they picked LinkedIn’s fork of Dust.js for their templating needs. This reminded me of how penguins jump into the water – they all wait until one of them jumps, then they all follow in quick succession. Channeling my own inner penguin, I decided the water was fine and started playing with dustjs-linkedin myself.

This is not my first foray into the world of Node.js, but in my first attempt I used Jade, which is just too DRY for my taste. Being a long-time Eclipse user, I just could not revert to the command line, so I installed a collection of Web development tools, then added Nodeclipse, mostly for the project creation wizard and the launcher. Eclipse is very important to me because it addresses one of the key issues plaguing Node.js developers who go beyond ‘Hello, World’ – how do I control and structure all the JavaScript files (incidentally one of the hard questions that Zef Hemel posed in his blog post on blank canvas projects).

Then again, Nodeclipse is not perfect, and dustjs-linkedin is not one of the rendering engines its project wizard covers. I had to create an Express project configured for Jade, turn around and delete Jade from the project, and use NPM to install dustjs-linkedin locally (i.e. in the project tree under ‘node_modules’), like so:

[Screenshot: the resulting Nodeclipse project structure]

After working with Nodeclipse for a while and not being able to use its Node launcher (I had to configure my own external tool launcher), I am now questioning its value, but at least it got the initial structure set up for me. Now that I have a good handle on the overall structure, I could create new Node projects myself, so general purpose Web tooling with HTML, JSON and JavaScript editors and helpers may be all you need (of course, you also need to install Node.js and NPM, but you would need to do that in any scenario).

Hooking up dustjs-linkedin also requires consolidate.js in a way that is a bit puzzling to me, but it seems to work well, so I didn’t question it (the author is TJ of Express fame, and exploring the code I noticed that dustjs-linkedin is actually listed as one of the recognized engines). The change required was to pull in dustjs-linkedin and consolidate, declare dust as a new engine and map it as the view engine for the app:

var express = require('express')
  , routes = require('./routes')
  , dust = require('dustjs-linkedin')
  , cons = require('consolidate')
  , user = require('./routes/user')
  , http = require('http')
  , path = require('path');

var app = express();

// all environments
app.set('port', process.env.PORT || 3000);
app.set('views', __dirname + '/views');
app.engine('dust', cons.dust);
app.set('view engine', 'dust');

That was pretty painless so far. We have configured our views to live in the /views directory, so Dust files placed there can be found directly by Express. Since Express is an MVC framework (although much lighter than what you are normally used to in the JEE world), the C part is handled by routers placed in the /routes directory. Our small example will have just a landing page rendered by /views/index.dust, and the controller will be /routes/index.js. However, to add a bit of interest, I will toss in block partials, showing how to create a base template and then override the ‘payload’ in the child template.

We will start by defining the base template in ‘layout.dust’:

<!DOCTYPE html>
<html>
  <head>
    <title>{title}</title>
    <link rel='stylesheet' href='/stylesheets/style.css' />
  </head>
  <body>
    <h1>{title}</h1>
    {+content}
    This is the base content.
    {/content}
  </body>
</html>

We can now include this template in ‘index.dust’ and define the content section:

{>layout/}
{<content}
<p>
This is loaded from a partial.
</p>
<p>
Another paragraph.
</p>
{/content}

We now need to define the index.js controller in /routes, because the controller invokes the view for rendering:

/*
 * GET home page.
 */
exports.index = function(req, res) {
   res.render('index',
           { title: '30% Turtleneck, 70% Hoodie' });
};

In the code above, we render the view using Dust and send the result to the response. We specify the collection of key/value pairs that Dust will use for variable substitution. The only part left is to hook up our controller to the site path (our root) in app.js:

app.get('/', routes.index);

And that is it. Running our Node.js app using the Nodeclipse launcher will make it start listening on localhost port 3000:

[Screenshot: the rendered index page in the browser]

So far so good, and it is probably best to stop while we are ahead. We have a Node.js project in Eclipse configured to use Express and LinkedIn’s fork of Dust.js for templating, and everything is working. In my next installment, I will dive into Dust.js in earnest. This is going to be fun!

© Dejan Glozic, 2014

Rocket Science vs System Integration

[Photo: ‘Integration’ by certified su from The Atherton Tablelands, Queensland, Australia, CC-BY-2.0, via Wikimedia Commons]

Note: In case you missed it, there is still a chance to get a free eBook of RESS Essentials from Packt Publishing. All you need to do is check the book out at the Packt Publishing web site, drop a line about the book in the comments section of my article along with your email address, and enter a draw for one of the three free e-books. And now, back to regular programming.

When I was in the army, my mother told me that she kept noticing soldiers on the streets. I mean, they didn’t suddenly appear – they were always there – but she only started noticing them because of me. Once I returned from my regular 12-month service, the soldiers around her receded into oblivion again.

This is a neat trick that you can try yourself. Have you noticed that you start spotting other cars like your own on the road as soon as you buy it, even though you could not have cared less about them before (unless you really, really wanted that car, in which case you were spotting it as an object of desire, anticipating the purchase)? If you care to learn another useless phrase, the scholarly term for this phenomenon is ‘perceptual vigilance’.

When we were building web apps in the 00s, a number of technologies were considered a ‘solved problem’. You get yourself an Apache server, install it on a Linux box, strap on Tomcat for servlets (all right, you can use PHP instead) and add MySQL for storage. Of course, there were commercial versions that offered, well, ‘bigger’, but the basic principles were the same. Nobody was excited about serving pages, or storing data, or doing anything commoditized that way. The action was in solving the actual end-user problem, and the sooner you could assemble a system out of off-the-shelf components and get to the problem-solving part, the better.

Well, once you need to serve millions upon millions of users, strike the ‘solved’ part. You start hitting the limits of your components, and you need to start paying attention to the things you stopped noticing years ago. That’s what is now happening across the software development world. The ‘solved’ problems are being solved again, from the very bottom. Web servers, data storage, caching, page templates, message queues – everything is being revisited and reopened. It is actually fascinating to watch – like a total gutting and rebuilding of a house whose owners grew tired of the basement flooding every time it rains. It is as if millions of developers suddenly noticed the components that they were taking for granted all these years and said ‘let’s rebuild everything from scratch’.

When you look at a Node.js ‘Hello, World’ server example, it is fascinating in its simplicity until you consider how much stuff you need to add to reach the level of functionality actually required in production. You are really looking at several dozen JS modules pulled in by NPM. Using the house paradigm again, these modules replace all the stuff you had in your house before you reduced it to a hole in the ground. And in this frenzy of solving the old problems in the new world, I see two kinds of approaches – those of idealists and those of pragmatists.

There is a feeding frenzy on GitHub right now to become the famous author of a great new library or framework. To stay in the Node.js neighborhood, the basement is taken by @ryah, most of the walls have been solidly put up by @tjholowaychuk (although some new walls are being added), and so on. Idealists fear that if they do not hurry, the house will be rebuilt and there will be no glory left in the component business – that they will have to go back to being pragmatists, assemblers of parts.

For some people that is almost an insult. Consider the following comment by Bhaskar Ghosh, LinkedIn’s senior director of data infrastructure engineering, in a recent article by Gigaom:

Ghosh, well, he shot down the idea of relying too heavily on commercial technologies or existing open source projects almost as soon as he suggests it’s a possibility. “We think very carefully about where we should do rocket science,” he told me, before quickly adding, “[but] you don’t want to become a systems integration shop.”

Of course, Bhaskar is exaggerating to make the point – LinkedIn is both a heavy provider and a heavy consumer of Open Source projects. Consider this (incomplete) list, for example – they are not exactly shy about using off the shelf code where it makes sense. Twitter is no stranger to Open Source either – it shares the use of Netty with LinkedIn, along with many other (albeit heavily modified) open source projects. Twitter also shares the desire to give back to the Open Source world – my team, together with many others, is using their awesome Bootstrap toolkit, and their modifications are shared with the community. You can read all about it at the Twitter Open Source page, run by our good friend @cra.

Recently, PayPal switched a lot of their Java production servers to Node.js and documented the process. The keyboards are still hot from the effort, and they are already open sourcing many of the libraries they wrote for it. They even joined forces with LinkedIn on the fork of the Dust.js templating library that the original owner abandoned (kudos to Richard Ragan from PayPal for writing a great tutorial).

There are many other examples that paint a picture of a continuum. Idealists see this as a great time to get their name attached to a popular Open Source project and cement their place in the source code Hall of Fame – it’s 1997 all over again, baby! On the other hand, startups have no problem being pragmatists – it is very hard to build something from scratch even with off the shelf Open Source software; who has time to write new libraries when existing ones are good enough to drive your startup into the ground ten times over?

In the space between the two extremes, companies with something to lose and with skin in the game stride the cautious middle road – use what works, modify what ‘almost’ works, write your own when no existing software fits the bill, fully aware of the immediate and ongoing cost. And the grateful among them give back to the Open Source community as a way of saying ‘thanks’ for getting a boost when they themselves were but a fledgling startup.

© Dejan Glozic, 2014

Sitting on the Node.js Fence

[Illustration: ‘Sat on the Fence’ by J. M. Staniforth]

I am a long standing fan of Garrison Keillor and his Prairie Home Companion. I fondly remember a Saturday on which he delivered his widely popular News From Lake Wobegon containing the following conundrum: “Is ambivalence a bad thing? Well, yes and no”. That also accurately describes my feelings towards Node.js and the exploding Node community.

As I am writing this, I can hear the great late Joe Strummer singing Should I Stay or Should I Go in my head (“If I stay it will be trouble, if I go it will be double”). It is really hard to ignore the passion that Node.js has garnered from its enthusiastic supporters, and if the list of people you follow on Twitter is full of JavaScript lovers, Node.js becomes the Kim Kardashian of your Twitter feed (as in, every third tweet is about it).

In case you were somehow too preoccupied by, say, living life to the fullest to notice or care about Node.js: it is a server-side framework written in JavaScript, sitting on a bedrock of C++, compiled and executed by Google’s Chrome V8 engine. A huge cottage industry has sprung up around it, and whatever you may need for your Web development, chances are there is a module that does it somewhere on GitHub. Two years ago, in a widely re-tweeted event, Node.js overtook Ruby on Rails as the most watched GitHub repository.

Recently, in a marked trend of maturation from exploration and small side projects, some serious companies have started deploying Node.js into production. I heard all about how, this US Thanksgiving, most traffic to Walmart.com came from mobile devices and was handled by Node.js servers.

Then I heard from Nick Hart how PayPal switched from JEE to Node.js. A few months ago I stumbled upon an article about Node.js taking over the enterprise (like it or not, to quote the author). I am sure more heartwarming stories were traded at the recent #NodeSummit. Still, not to get carried away: apart from mobile traffic, Node.js only serves a tiny panel of the Walmart.com web site, so this is more of a ‘foot in the door’ than a total rewrite for the desktop. Even earlier, the LinkedIn engineering team blogged about their use of Node.js for LinkedIn Mobile.

It is very hard to cut through the noise and get to the core of this surge in popularity. When you dig deeper, you find multiple reasons, not all of them technical. That’s why discussions about Node.js sometimes sound to me like Abbott and Costello’s ‘Who’s on First’ routine: you may be discussing technology while other participants are discussing hiring perks, or the skill profile of their employees, or context switching. So let’s count the ways in which Node.js is popular, and note how only one of them is actually technical:

  1. Node.js is written from the ground up to be asynchronous. It is optimized to handle problems where there is a lot of waiting for I/O: processing tasks that are not waiting for I/O while others are queued up can increase throughput without the processing and memory overhead of the traditional ‘one blocking thread per request’ model (or, even worse, ‘one process per request’); see the sketch after this list. The sweet spot is when all you need to do is lightly process the result of I/O and pass it along. If you need to spend more time on procedural number-crunching, you need to spawn a ‘worker’ child process, at which point you are re-implementing threading.
  2. Node.js uses JavaScript, allowing front-end developers already familiar with it to extend their reach into the server. Most new companies (i.e. not the Enterprise) have a skill pool skewed towards JavaScript developers (compared to the Enterprise world, where Java, C/C++, C# and similar are in the majority).
  3. New developers love Node.js and JavaScript and are drawn to it like moths to a candelabra, so having Node.js projects is a significant attraction when competing for resources in a developer-starved new IT economy.
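
To make the first point concrete, here is a minimal sketch using only Node.js core modules (the file name is made up): the single thread is free to pick up other requests while the file read is in flight.

// a single-threaded server that stays responsive because
// the file read below does not block the event loop
var http = require('http');
var fs = require('fs');

http.createServer(function(req, res) {
  // the read is handed off to the OS; Node moves on to the next request
  fs.readFile('data.json', function(err, data) {
    if (err) {
      res.writeHead(500);
      return res.end('Something went wrong');
    }
    // lightly process the result of I/O and pass it along
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(data);
  });
}).listen(3000);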

Notice how only the first point is actually engineering-related. This observation is so prevalent that it can fit into a tweet.

Then there is the issue of storage. Many libraries are available for Node.js to connect to SQL databases, but using one would be like showing up at an Arcade Fire concert in a suit. Oh, wait, that actually happened.

Never mind then. Back to Node.js: why not go all the way and use something like MongoDB, storing JSON documents and using JavaScript for DB queries? Then not only do front end developers not need to chase down server-side Java guys to change the served HTML or JSON output, they don’t have to chase DBAs to change the DB models either. My point is that once you add Node.js, it is highly likely you will revamp your entire stack and architecture, and that is a huge undertaking even without deadlines to spice up your life with a wallop of stress (making you park in the wrong spot in your building’s garage, as I did recently).
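
For illustration, this is the kind of all-JavaScript query the paragraph above alludes to (mongo shell syntax; the collection and field names are made up):

// JSON documents in, JavaScript queries out – no SQL in sight
db.articles.find({ author: 'dejan', published: true })
           .sort({ date: -1 })
           .limit(10);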

Now let’s try to apply all this to a talented team in a big corporation:

  1. The team is well versed in both Java (on the server) and JavaScript (on the client), and has no need to chase down Java developers to ‘add a link between pages’, as in PayPal’s case. The team also knows how to write decent SQL queries and handle the model side of things.
  2. The team uses IDEs chock-full of useful tools, debuggers, incremental builders and other tooling that make for a great workflow, particularly when the going gets tough.
  3. The team needs to deliver something new on a tight schedule and is concerned about adding to the uncertainty of ‘what to build’ by also figuring out ‘how to build it’.
  4. There is a fear that things the team has grown to expect as gravity (debugging, logging etc.) are still missing or immature.
  5. While there is a big deal made about ‘context switching’, the team does it naturally – they go from Java to JavaScript without missing a beat.
  6. There is a whole slew of complicated licensing reasons why some Open Source libraries cannot be used (a problem startups rarely face).

Mind you, this is a team of very bright developers who eat JavaScript for breakfast and feel very comfortable around jQuery, Require.js, Bootstrap etc. When we overlay the reasons to use Node.js onto this team, the only one that remains is handling a huge number of requests without paying the RAM/CPU tax of blocking I/O with one thread per request.

As if things were not murky enough, the Servlet 3.1 spec supported by JEE 7.0 now comes with asynchronous processing of requests (using NIO) as well as protocol upgrade (from HTTP/S to WebSockets). Together, these mean that the team above can use non-blocking I/O from the comfort of their Java environment, and can write non-blocking code that pushes data to the client over WebSockets – currently the two key technical reasons to jump ship from Java to Node.js. I am sure Oracle added both features feeling the heat from Node.js, but now that they are here, they may be just enough reason not to make the transition yet, considering the cost and overhead.

Update: after posting the article, I remembered that Twitter has used Netty for a while now to get the benefit of asynchronous I/O coupled with the performance of the JVM. Nevertheless, the preceding paragraph is for JEE developers who can now easily start playing with async I/O without changing stacks. Move along, nothing to see here.

Then there is also the ‘Facebook effect’. Social scientists have noticed the emergence of a new kind of depression, caused by feeling bad about your life compared to the carefully curated projections of other people’s lives on Facebook. I have yet to see a posting like “my life sucks and I did nothing interesting today” or “I think my life is passing me by” (I subsequently learned that Sam Roberts did say that in Brother Down, but he is just pretending to be depressed, so he does not count). Is it possible that I am only hearing about Node.js success stories, while failed Node.js projects are quietly shelved, never to be spoken of again?

Well, not everybody is suffering in silence. We have all heard about the famous Walmart memory leak that is now thankfully plugged. Or, since I already mentioned MongoDB, how about going knee-deep into BSON to recover your data after your MongoDB had a hardware failure? Or the brouhaha about the MongoDB 100GB scalability warning? Or the sane article by Felix Geisendörfer on when to use and, more importantly, when NOT to use Node.js (as a Node.js core alumnus, he should know better than many)? These are all stories from the front wave of adopters, and the inevitable rough edges will be filed down over time. The question is simply – should we be the ones to do the filing, or can somebody do it for us?

In case I sound like I am justifying reasons why we are not using Node.js yet, the situation is quite the opposite. I completely love the all-JavaScript stack and play with it constantly in my pet projects. In a hilarious twist, I am acting like a teenage Justin Bieber fan and the (younger) team lead of the aforementioned team needs to bring me down with a cold dose of reality. I don’t blame him – I like the Node.js-based stack in principle, while he would have to burn the midnight oil making it work at the enterprise scale, and debugging hard problems that are inevitable with anything non-trivial. Leaving aside my bad case of FOMO (Fear Of Missing Out), he will only be persuaded with a problem where Node.js is clearly superior, not just another way of doing the same thing.

Ultimately, it boils down to whether you are trying to solve a unique problem or just play with a new and shiny technology. There is a class of problems that Node.js is perfectly suitable for. Then there are problems where it is not a good match. Leaving your HR and skills reasons aside, the proof is in the pudding (or as a colleague of mine would say, ‘the devil is in the pudding’). There has to be a unique problem where the existing stack is struggling, and where Node.js is clearly superior, to tip the scales.

While I personally think that Node.js is a big part of an exciting future, I have no choice but to agree with Zef Hemel that it is wise to pick your battles. If we were to rebuild something for the third time and knew exactly what we were building, Node.js would be a great way to make that project fun. However, since we are in ‘white space’ territory, choosing tried and true (albeit somewhat boring) building blocks and focusing the uncertainty on what we want to build is a good tradeoff.

And that’s where we are now – with me sitting on the proverbial fence, on a constant lookout for that special and unique problem that will finally open the Node.js floodgates for us. When that happens, you will be the first to know.

© Dejan Glozic, 2013