Is There Life After TJ?


What is going to happen now?
Nothing. We will be sad for a while, then we will move on.

Mad Men: Don Draper discusses the Kennedy assassination with his kids

Every once in a while an event occurs that pushes regular programming aside. If you are CNN, it happens with such annoying regularity that it completely obviates the meaning of the phrase “breaking news”. Nevertheless, if you are at least a bit involved with the Node.js community, the news that TJ Holowaychuk turned his back on Node in favour of Go restored the phrase’s original meaning.

Up to this point, Node.js had a bit of a TJ problem: as soon as TJ Fontaine assumed the post of Node.js lead, you had to add a last name to disambiguate, leading to a ‘no TJ’ slide at a recent NodeDay. Before Fontaine, the real slim TJ authored such a monstrous number of Node.js modules (some of them, like Express, Jade and Mocha, wildly popular) that there is a semi-serious thread on Quora questioning whether he is a real person at all. The evidence: total absence from the conference-industrial complex, no pictures, no videos, an inhuman coding output that suggests an army of ghost coders, and an abrupt change in import declaration style in 2013.

My first encounter with TJ (metaphorically speaking) was in the summer of 2012, when we evaluated Node.js for our project. Our team lead noticed that an unusually large cluster of the modules we needed to move off our servlet/JSP back end was authored by a single person. There was something unnerving about moving from software written by an army of big-company developers to something written by a kid with an Emo haircut and a colorful t-shirt (it looks like TJ was too busy coding to notice that the official look is now a beard and a plaid shirt, and he does not need prescription glasses to care about Warby Parker).

Unlike many people, I am also not completely gaga about his coding and documentation style. I never warmed up to Jade and Stylus – I spent many an hour tearing my hair out over some unexpected Jade behavior that turned out to be one tab or space too many. Dust.js, which we now use for our templating needs, is much less susceptible to magic in the name of DRY.

Similarly, Express documentation is wonderfully minimalistic until you need some clarification, at which point you would gladly trade it for verbose but useful. Some of the Express APIs suffer from the same problem: you can ‘use’ all kinds of object types, and while ‘set’ is a setter, ‘get’ doubles as both a getter and a way to define a route for an HTTP endpoint, depending on the arguments. Not that it matters much – an army of developers wrote the missing manual on Stack Overflow.
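
To illustrate, here is a minimal sketch against the Express 3.x API as it was at the time (not an excerpt from the official docs):

var express = require('express');
var app = express();

// 'use' mounts any middleware
app.use(express.logger());

// 'set' is a plain setter for application variables...
app.set('title', 'My App');

// ...but 'get' with a path and a callback defines a route...
app.get('/hello', function(req, res) {
  res.send('Hello');
});

// ...while 'get' with a single string argument is the getter
console.log(app.get('title')); // 'My App'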

All this does not diminish the importance and the imprint that TJ left on the Node.js community. The Hacker News discussion was long and involved, with many participants trying to guess the ‘real reason’ behind the switch. Zef Hemel contemplated what he considers a ‘march toward Go’. Other people with an investment in Node.js commented as well:

[Embedded tweet]

For anyone making a transition to Node.js and feeling the cold sweat of ‘buyer’s remorse’, I am offering the following opinion. In a brazen display of self-promotion, I will quote none other than myself from one of my previous posts:

It was always hard for me to make choices. In the archetypical classification of ‘satisficers’ and ‘maximizers’, I definitely fall into the latter camp. I research ad nauseam, read reviews, measure carefully, take everything into account, and then finally make a move, mentally exhausted. Then I burst into the cold sweat of buyer’s remorse, or read a less than glowing review of the product I just purchased, and my happiness is diminished. It sucks to be me.

My point here is as follows: at any point in time, there will be many ways to solve your particular software problem. Chances are, you spent a lot of time researching before pulling the trigger on Node.js. You have made the switch, and your app or site is working well, purring like a kitten. I don’t think your app will suddenly start misbehaving just because TJ got a case of code rage. In fact, the writing was on the wall for quite some time: his Twitter feed was all ‘Node.js errors this, Node.js errors that’ – lots of frustration about some very arcane details I didn’t understand at all.

If anything, this case teaches us that it does not pay to search for a language Messiah – a language/platform to end all languages/platforms. As has become plainly obvious, the future is polyglot, and Go is emerging as a strong alternative for writing apps for distributed systems. Even by TJ’s admission, Node.js is still a great choice for Web sites as well as API services. For example, a whole class of page-rendering template libraries (Mustache, Handlebars, Dust) is hard to replicate in Go, and so are the build and test solutions (Grunt, Mocha, Jasmine).

Node.js was never positioned as a platform for solving all kinds of computational problems, and even before Go, distributed systems had apps written in stacks other than Node.js. Attempts to present Node.js as a solution for every problem were always misguided, and no serious member of the Node.js community made such claims. As long as you use Node.js for what it is very good at – page-serving apps and API services with a lot of I/O (and not too many CPU-intensive, long-running tasks) – TJH’s departure does not somehow make your app less ‘correct’.

If, on the other hand, you are impressionable and must check out Go right away, by all means do. But if you keep putting all your eggs in a new and shiny basket, people like TJH will continue to stress you out as they move on to the next new and shiny language or platform.

If you need to solve a problem that Node.js is failing to solve or is not suitable for, and Go or Scala or any other solution fits the bill, by all means go ahead and use them. Otherwise, move along, people – nothing to see here. We wish TJ all the best in his Go future, while we continue to focus on our own problems and challenges.

I guess that means a ‘Yes, of course’ to the question in the title.

© Dejan Glozic, 2014


Dust.js: Such Templating


Last week I started playing with Node.js and LinkedIn’s fork of Dust.js for server side templating. I think I am beginning to see the appeal that made LinkedIn choose Dust.js over a number of alternatives. Back when LinkedIn had a templating throwdown and chose Dust.js, they classified it in the ‘logic-less’ group, in contrast to the ’embedded JavaScript’ group that allows you to write arbitrary JavaScript in your templates. Here is the list of reasons I like Dust.js over the alternatives:

  1. Less logic instead of logic-less. Unlike Mustache, it allows you to have some sensible view logic in your templates (and add more via helpers).
  2. Single instead of double braces. For some reason Mustache and Handlebars decided that if one set of curly braces is good, two sets must be better. It is puzzling that people insisting on DRY see no irony in needing two sets of delimiters for template commands and variables. Typical Dust.js templates look cleaner as a result.
  3. DRY but not to the extreme (I am looking at you, Jade). I can fully see the HTML markup that will be rendered. Life is too short to have to constantly map the shorthand notation of Jade to what it will eventually produce (and bang your head on the table when it barfs at you or spits out wrong things).
  4. Partials and partial injection. Like Mustache or Handlebars, Dust.js has partials. Like Jade, Dust.js has injection of partials, where the partial reserves a slot for the content that the caller will define. This makes it insanely easy to create ‘skeleton’ pages with common areas, then inject payload. One thing I did was set up three injection areas: in HEAD, in BODY where the content should go and at the end of BODY for injections of scripts.
  5. Helpers. In addition to the very minimal logic of the core syntax, helpers add more view logic if needed (the ‘dustjs-helpers’ module provides some useful helpers that I took advantage of; you can write your own helpers easily – see the sketch after this list).
  6. Stuff I didn’t try yet. I didn’t try streaming and asynchronous rendering, or complex paths to the data objects, but I intend to study how I can take advantage of them. They seem like something that can come in handy once I get to more advanced use cases. That, and the fact that JSPs support streaming, so we feel we are not giving up anything by moving to Node/Dust.
  7. Client side templating. This also falls under ‘stuff I didn’t try yet’, but I am singling it out because it requires a different approach. So far our goal was to replace our servlet/JSP server side with Node/Dust pair. Having the alternative to render on the client opens up a whole new avenue of use cases. We have been burnt in the past with too much JavaScript on the client, so I want to approach this carefully, but a lot of people made a case for client side rendering. One thing I would like to try out is adding logic in the controller to sniff the user agent and render the same template on the server or on the client (say, render on the server for lesser browsers or search engine bots). We will definitely try this out.
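
Since helpers came up a few times above, here is a sketch of what writing a custom one looks like (a hypothetical ‘shout’ helper; this assumes requiring ‘dustjs-helpers’ augments the same dust instance, which is how it worked for us):

var dust = require('dustjs-linkedin');
require('dustjs-helpers'); // adds @eq, @if and friends, plus the 'tap' utility

// A made-up helper that upper-cases its 'value' parameter
dust.helpers.shout = function(chunk, context, bodies, params) {
  // 'tap' resolves the parameter in case it is a context reference
  var value = dust.helpers.tap(params.value, chunk, context);
  return chunk.write(String(value).toUpperCase());
};

// Template usage: {@shout value=title /}
dust.renderSource('<h1>{@shout value=title /}</h1>', { title: 'hello' },
  function(err, out) {
    console.log(out); // <h1>HELLO</h1>
  });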

In our current project we use a number of cooperating Java apps using plain servlets for controllers and JSPs for views. We were very disciplined about not using direct Java expressions in JSPs, only JSTL/EL (Tag Library and Expression Language). JSPs took a lot of flak in the dark days of spaghetti JSPs with a lot of Java code embedded in views, essentially driving the current aversion to logic in templates to absurd levels in Mustache. It is somewhat ironic that you can easily create similar spaghetti Jade templates with liberal use of embedded JavaScript, so that monster is alive and well.

Because of our discipline, porting our example app to Node.js with Dust.js for views, Express.js for middleware and routing was easy. Our usual client stack (jQuery, Require.js, Bootstrap) was perfectly usable – we just copied the client code over.

Here is the shared ‘layout.dust’ file that is used by each page in the example app:

<!DOCTYPE html>
<html>
  <head>
    <title>{title}</title>
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="initial-scale=1.0, maximum-scale=1.0, user-scalable=0" />
    <link rel="stylesheet" href="/stylesheets/base.css">
    {+head/}
  </head>

  <body class="jp-page">
    {>navbar/}
    <div class="jp-page-content">
      {+content/}
    </div>

    <script src="/js/base.min.js"></script>
    <script>
      requirejs.config({
        baseUrl: "/js"
      });
    </script>
    {+script/}
  </body>
</html>

Note the {+head/}, {+content/} and {+script/} sections – they are placeholders for content that will be injected by templates that include this partial. This ensures the styles, meta properties, content and script are injected in the proper places in the template. Note also that placeholders don’t have to be empty – you can place default content between an opening and a closing tag of the section; we simply had no default content to provide here. You can view an empty tag as an ‘injection point’ (this is where the stuff will go), whereas a placeholder with default content is more of an ‘overriding point’ (the content in the caller template will override the default).
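
To make the distinction concrete, here is a contrived template fragment showing both flavours:

{! An 'injection point' – empty by default, the caller provides everything !}
{+head/}

{! An 'overriding point' – shows default content unless the caller overrides it !}
{+content}
  <p>Default content rendered when the caller injects nothing.</p>
{/content}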

The header is a partial pulled into the shared template. It was put together quickly – I can easily see the header links being passed in as an array of objects (it would make the partial even cleaner). Note the use of the @eq helper for controlling the highlighted link in the header: it simply compares the value of ‘active’ to a static value and adds the CSS class ‘active’ if they match:

<div class="navbar navbar-inverse navbar-fixed-top jp-navbar" role="navigation">
   <div class="navbar-header">
     <button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-ex1-collapse">
        <span class="sr-only">Toggle Navigation</span>
        <span class="icon-bar"></span>
        <span class="icon-bar"></span>
        <span class="icon-bar"></span>
     </button>
	<a class="navbar-brand" href="/">Examples<div class="jp-jazz-logo"></div></a>
   </div>
   <div class="navbar-collapse navbar-ex1-collapse collapse">
      <ul class="nav navbar-nav">
	<li {@eq key=active value="simple"}class="active"{/eq}><a href="/simple">Simple</a></li>
	<li {@eq key=active value="i18n"}class="active"{/eq}><a href="/i18n">I18N</a></li>
	<li {@eq key=active value="paging"}class="active"{/eq}><a href="/paging">Paging</a></li>
	<li {@eq key=active value="tags"}class="active"{/eq}><a href="/tags">Tags</a></li>
	<li {@eq key=active value="widgets"}class="active"{/eq}><a href="/widgets">Widgets</a></li>
	<li {@eq key=active value="opensocial"}class="active"{/eq}><a href="/opensocial">OpenSocial</a></li>
      </ul>
      <div class="navbar-right">
	<ul class="nav navbar-nav">
	   <li><a href="#">Not logged in</a></li>
	</ul>
      </div>
   </div>
</div>

Finally, here is the sample page using these partials:

{>layout/}
{<content}
  <h2>Simple Page</h2>
    <p>
      This is a simple HTML page generated using Node.js and Dust.js, which loads CSS and JavaScript.<br/>
      <a id="myLink" href="#">Click here to say hello</a> 
    </p>
{/content}
{<script}
   <script src="/js/simple/simple-page.js"></script>
{/script}

In this page, we include the shared partial ‘layout.dust’, then inject the content area into the ‘content’ placeholder and some script into the ‘script’ placeholder.

The Express router for this page is very short – all it does is render the template. Note how we are passing the title of the page and also the value of the ‘active’ property to ensure the proper link is highlighted in the header partial:

exports.simple = function(req, res){
  res.render('simple', { title: 'Simple', active: 'simple' });
};

Running the Node app gives you the following in the browser:

[Screenshot: the simple page rendered in the browser]

Since we are using Bootstrap, we also get responsive design thrown in for free:

[Screenshot: the same page in a narrow viewport]

There you go. Sometimes it pays to follow in the footsteps of those who did the hard work before you – LinkedIn’s fork of Dust.js is definitely a very comfortable and capable templating engine and a great companion to Node.js and Express.js. We feel confident using it in our own projects. In fact, we have decided to write one of the apps in our project using this exact stack. As usual, you will be the first to know what we learned as we are going through it.

© Dejan Glozic, 2014

On the LinkedIn’s Dusty Trail


There is an old New Yorker cartoon (from 1993) where a dog working on the computer tells another dog: “On the Internet, nobody knows you are a dog”. That’s how I occasionally feel about GitHub projects – some are solid, multi-contributor powerhouses churning out release after release, while others are great ideas stalled because the single contributor got distracted by something new and shiny. The thing is, you need to inspect all the project artifacts carefully to tell which is which – single contributors can mount a powerful and amplified presence on GitHub (until they leave, that is).

This issue comes with the territory in Open Source (considering how much you pay for all the hard work of others), but nowhere was it more acute than in LinkedIn’s choice of templating library in 2011. In an often quoted blog post, the LinkedIn engineering team embarked on a quest to standardise on one templating library that could run both on the server and on the client, and also check a number of other boxes they put forward. Out of several contenders they finally settled on Dust.js. This library uses curly braces for pattern substitution, which puts it in a hipster-friendly category alongside Mustache and Handlebars. But here is the rub: while the library ticked most of LinkedIn’s self-imposed requirements, its community support left much to be desired. The sole contributor seemed unresponsive.

Now, if it were me, I would have moved on, but LinkedIn’s team decided they would not be deterred. In fact, it looks like they rather liked the fact that they could evolve the library as they learned by doing. The problem was that committer rights work by bootstrapping – only the original committer could accept LinkedIn’s changes, which apparently didn’t happen with sufficient snap. Long story short, behold the ‘LinkedIn Fork of Dust.js’, or ‘dustjs-linkedin’ as it is known to NPM.

I followed this story with mild amusement and some head-shaking until the end of 2013, when PayPal saw the Node.js light. As part of their Node.js conversion, they picked LinkedIn’s fork of Dust.js for their templating needs. This reminded me of how penguins jump into the water – they all wait until one of them jumps, then they all follow in quick succession. Channeling my own inner penguin, I decided the water was fine and started playing with dustjs-linkedin myself.

This is not my first foray into the world of Node.js, but in my first attempt I used Jade, which is just too DRY for my taste. Being a long-time Eclipse user, I just could not revert to a command line, so I resorted to a collection of Web development tools, then added Nodeclipse, mostly for the project creation wizard and the launcher. Eclipse is very important to me because it answers one of the key issues plaguing Node.js developers who move beyond ‘Hello, World’ – how do I control and structure all the JavaScript files (incidentally one of the hard questions that Zef Hemel posed in his blog post on blank canvas projects).

Then again, Nodeclipse is not perfect, and dustjs-linkedin is not one of the rendering engines they cover in the project wizard. I had to create an Express project configured for Jade, turn around and delete Jade from the project, and use NPM to install dustjs-linkedin locally (i.e. in the project tree under ‘node_modules’), like so:

[Screenshot: the Nodeclipse project structure, with dustjs-linkedin under node_modules]

After working with Nodeclipse for a while, and not being able to use their Node launcher (I had to configure my own external tool launcher), I am now questioning its value, but at least it got the initial structure set up for me. Now that I have a good handle on the overall structure, I could create new Node projects myself, so general purpose Web tooling with HTML, JSON and JavaScript editors and helpers may be all you need (of course, you also need to install Node.js and NPM, but you would need to do that in all scenarios).

Hooking up dustjs-linkedin also requires consolidate.js, in a way that is a bit puzzling to me, but it seems to work well, so I didn’t question it (the author is TJ of Express fame, and exploring the code I noticed that dustjs-linkedin is actually listed as one of the recognized engines). The changes required: pull in dustjs-linkedin and consolidate, declare dust as a new engine, and map it as the view engine for the app:

var express = require('express')
  , routes = require('./routes')
  , dust = require('dustjs-linkedin')
  , cons = require('consolidate')
  , user = require('./routes/user')
  , http = require('http')
  , path = require('path');

var app = express();

// all environments
app.set('port', process.env.PORT || 3000);
app.set('views', __dirname + '/views');
app.engine('dust', cons.dust);
app.set('view engine', 'dust');

That was pretty painless so far. We have configured our views to be in the /views directory, so Dust files placed there can be directly found by Express. Since Express is an MVC framework (although much lighter than what you are normally used to in the JEE world), the C part is handled by routers, placed in the /routes directory. Our small example will have just a landing page rendered by /views/index.dust and the controller will be /routes/index.js. However, to add a bit of interest, I will toss in block partials, showing how to create a base template, then override the ‘payload’ in the child template.

We will start by defining the base template in ‘layout.dust’:

<!DOCTYPE html>
<html>
  <head>
    <title>{title}</title>
    <link rel='stylesheet' href='/stylesheets/style.css' />
  </head>
  <body>
    <h1>{title}</h1>
    {+content}
    This is the base content.
    {/content}
  </body>
</html>

We can now include this template in ‘index.dust’ and define the content section:

{>layout/}
{<content}
<p>
This is loaded from a partial.
</p>
<p>
Another paragraph.
</p>
{/content}

We now need to define the index.js controller in /routes, because the controller invokes the view for rendering:

/*
 * GET home page.
 */
exports.index = function(req, res) {
  res.render('index', { title: '30% Turtleneck, 70% Hoodie' });
};

In the code above we are sending the result of rendering the view with Dust to the response. We specify the collection of key/value pairs that Dust will use for variable substitution. The only part left is to hook up our controller to the site path (our root) in app.js:

app.get('/', routes.index);

And that is it. Running our Node.js app using the Nodeclipse launcher will make it start listening on localhost port 3000:

[Screenshot: the page served at localhost:3000]

So far so good, and it is probably best to stop while we are ahead. We have a Node.js project in Eclipse configured to use Express and LinkedIn’s fork of Dust.js for templating, and everything is working. In my next installment, I will dive into Dust.js in earnest. This is going to be fun!

© Dejan Glozic, 2014

Design for the T-Rex Vision


Note: In case you missed it, there is currently a chance to get a free eBook of RESS Essentials from Packt Publishing. All you need to do is check the book out at the Packt Publishing web site, drop a line about the book in the comments section of my article with your email address, and you will enter a draw for one of the three free books. And now, back to regular programming.

As a proud owner of a rocking home theater system, I think I played Jurassic Park about a gazillion times (sorry, neighbors). There I got some essential life lessons, for example that T-Rex’s vision is based on movement – it does not see you if you don’t move. While I am still waiting to put that knowledge into actual use, it got me thinking recently about design, and the need to update and change the visuals of our Web applications.

Our microwave recently died, and we had to buy a new one. Once it was installed, the difference was large enough to make me very conscious of its presence in the kitchen. But a funny thing happened – after a few days of getting used to it, I stopped seeing it. I mean, I did see it, otherwise where did I warm up my cereal in the morning? But I don’t see it – it just blends into the overall kitchen-ness around me. There’s that T-Rex vision in practice.

Recently an article by Addison Duvall – Is Cool Design Really Uncool – caught my attention because it posits that new design trends get old much too soon thanks to their quick spread via the Internet. This puts pressure on designers to constantly chase the next trend, making them perennial followers. Addison offers an alternative (the ‘true blue’ approach) in which designers stick to their guns as a personal mark or style, but concedes that only a select few can actually be trend setters instead of followers – probably because they would turn into purveyors of a ‘niche’ product, and their annoying need to eat and clothe themselves would nudge them back into the follower crowd.

According to Mashable, Apple reported that 74% of their devices are already running iOS 7. This means that 74% of iPhone and iPad users look at flat, Retina-optimized and non-skeuomorphic designs every day (yours truly included). When I showed examples of iOS 7 around the office after updating to it, I got a mixed reaction, with some lovers of the old-school Apple design truly shocked, and in a bad way. Not me though – as a true Steven Spielberg disciple, I knew it would be just a matter of time until this became ‘the new normal’ (whether there will be tears on my cheeks and the realization that ‘I love Big Brother’ is still moot). My vision is based on movement, and after a while, iOS 7 stopped moving and I don’t see it any more. You know what I did see? The iPad Retina that arrived with iOS 6 installed, with the horrible, horrible 3D everywhere (yes, I missed iPad Air by a month – on the upside, it is company-provided, so the price is right). I could not update to iOS 7 soon enough.

I guess we established that for many people (most people, in fact), the only way around T-Rex vision is movement. That’s why we refresh designs ever so often. In the words of Don Draper and his ‘Greek mentor’, the most exciting word in advertising is ‘new’. Not ‘better’, ‘nicer’, ‘more powerful’, just ‘new’ – different, not the same old thing you have been staring at forever (well, for a year anyway, which seems like eternity in hindsight). Designs do not spoil like a (Greek) yogurt in your fridge, they just become old to us. This is not very different from my teenage daughter staring at a full closet and declaring ‘I have nothing to wear’. It is not factually true, but in her teenage T-Rex eyes, the closet is empty.

OK then, so we need to keep refreshing the look of our Web apps. What does that mean in practice? Nothing good, it turns out. You would think that in a true spirit of separation of style and semantics, all you need to do is update a few CSS files and off you go.

Not so fast. You will almost certainly hit the following bummers along the way:

  1. Look is not only ‘skin deep’. In the olden days when we used Dojo/Dijit, custom widgets were made out of DOM elements (DIVs, SPANs) and then styled. This was when these widgets needed to run on IE6 (shudder). It means that updating the look requires changing the DOM structure of the widgets, and the accompanying JavaScript.
  2. If the widget is reused, there is a high likelihood that upstream clients have reached into the DOM and made references to its DOM nodes. Why? Because they can (there is no ‘private’ visibility in JavaScript). I really, really hate that and cannot wait for shadow DOM to become a reality. Until then, try updating one of those reused widgets and wait for the horrified upstream shrieks. Every time we moved to a new MINOR version of Dojo/Dijit, there was blood everywhere. This is no way to live.
  3. Aha, you say, but the newer way is so much better. Look at Bootstrap – it is mostly CSS, with only 8kB of minified, gzipped JavaScript. Yes, I agree – the new world we live in is nicer, with native HTML elements directly styleable. Nevertheless, as we now use Bootstrap 3.0, we wonder what would happen if we modified the vanilla CSS and then tried to move to 4.0 when it arrives – how much work will that be? And please don’t start plugging Zurb Foundation now (as in “ooh, it has more subtle styles, it is easier to skin than Bootstrap”). I know it is nice, but in my experience, nothing is ever easy – it would just be difficult in a somewhat different way.
  4. You cannot move to the new look only partially. That’s the wonder of the new app-based, Cloud-based world that never ceases to amaze me. Yes, you can build a system out of loosely coupled services running in the Cloud, and it will work great as long as these services are giving you JSON or some other data. But if you try to assemble a composite UI, the browser is where the metaphorical rubber meets the road – woe unto you if your components are not all moved to the new look in lockstep – you will have a case of the Gryphon UI on your hands.

I wish I had some reassuring words for you, but I don’t. This is a hard problem, and it is depressing that the federated world of the Cloudy future hasn’t reduced that complexity even a bit. In the end, everything ends up in a single browser window, and needs to match. Unless all your apps adopt something like Bootstrap, don’t change the default styles and move to each new version diligently, you will suffer the periodic pain of re-skinning so that the T-Rexes that use your UIs can see them again (for a short while). It also helps to have sufficient control over all the moving parts so that coordinated moves can actually be made and followed through, the way LinkedIn moved from several stacks and server-side rendering to shared Dust.js templates on the client. Taking advantage of HTML5, CSS3 and shadow DOM will lessen the pain in the future by increasing the separation between style and semantics, but it will never entirely eliminate it.

If you think this is limited to the desktop, think again. Yes, in theory you need to be visually consistent only within the confines of your mobile app, which is a much smaller world, but you also need to not look out of place after a major mobile OS update. Some apps on my iPhone caught up, some still look awkward. The Facebook app has been flat for a while already. Twitter is mostly content anyway, so they didn’t need to do much. Recently the Dropbox app refreshed its look and is now all airy, with lightweight icons and hairlines everywhere, while the slate.com app is still chunky with 3D gradients (I guess they didn’t budget for frequent app refreshes). Oh, well – I guess that’s the price of doing pixel business – you need to budget for new virtual clothes regularly.

Oh, look – my WordPress updated the Dashboard – looks so pretty, so – new!

© Dejan Glozic, 2013

Sitting on the Node.js Fence


I am a long-standing fan of Garrison Keillor and his Prairie Home Companion. I fondly remember a Saturday on which he delivered his widely popular News From Lake Wobegon containing the following conundrum: “Is ambivalence a bad thing? Well, yes and no”. It also accurately describes my feelings towards Node.js and the exploding Node community.

As I am writing this, I can hear the late, great Joe Strummer singing Should I Stay or Should I Go in my head (“If I stay it will be trouble, if I go it will be double”). It is really hard to ignore the passion that Node.js has garnered from its enthusiastic supporters, and if the list of people you follow on Twitter is full of JavaScript lovers, Node.js becomes the Kim Kardashian of your Twitter feed (as in, every third tweet is about it).

In case you were somehow too preoccupied by, say, living life to the fullest to notice or care about Node.js: it is a server-side framework written in JavaScript, sitting on a bedrock of C++, compiled and executed by Google’s V8 engine. A huge cottage industry sprang up around it, and whatever you may need for your Web development, chances are there is a module that does it somewhere on GitHub. Two years ago, in a widely retweeted event, Node.js overtook Ruby on Rails as the most watched GitHub repository.

Recently, in a marked trend of maturation beyond exploration and small side projects, some serious companies started deploying Node into production. I heard all about how, this past US Thanksgiving, most traffic to Walmart.com came from mobile devices and was handled by Node.js servers.

Then I heard from Nick Hart how PayPal switched from JEE to Node.js. A few months ago I stumbled upon an article about Node.js taking over the enterprise (like it or not, to quote the author). I am sure more heartwarming stories were traded at the recent #NodeSummit. Still, not to get carried away: apart from mobile traffic, Node.js only serves a tiny panel on the Walmart.com web site, so it is more of a ‘foot in the door’ than a total rewrite for the desktop. Even earlier, the LinkedIn engineering team blogged about their use of Node.js for LinkedIn Mobile.

It is very hard to cut through the noise and get to the core of this surge in popularity. When you dig deeper, you find multiple reasons, not all of them technical. That’s why discussions about Node.js sometimes sound to me like Abbott and Costello’s ‘Who’s on First’ routine. You may be discussing technology while other participants are discussing hiring perks, the skill profile of their employees, or context switching. So let’s count the ways why Node.js is popular, and note how only one of them is actually technical:

  1. Node.js is written from the ground up to be asynchronous. It is optimized to handle problems where there is a lot of waiting for I/O. Processing tasks that are not waiting for I/O while others are queued up can increase throughput without adding the processing and memory overhead of the traditional ‘one blocking thread per request’ model (or even worse, ‘one process per request’). The sweet spot is when all you need to do is lightly process the result of I/O and pass it along (see the sketch after this list). If you need to spend more time on procedural number-crunching, you need to spawn a ‘worker’ child process, at which point you are re-implementing threading.
  2. Node.js is using JavaScript, allowing front-end developers already familiar with it to extend their reach into the server. Most new companies (i.e. not the Enterprise) have a skill pool skewed towards JavaScript developers (compared to the Enterprise world where Java, C/C++, C# etc. and similar are in the majority).
  3. New developers love Node.js and JavaScript and are drawn to it like moths to a candelabra, so having Node.js projects is a significant attraction when competing for resources in a developer-starved new IT economy.
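
As a quick illustration of the first point, here is the canonical non-blocking server (a sketch, not production code):

var http = require('http');
var fs = require('fs');

// Each request kicks off a non-blocking file read; the single thread
// is free to serve other requests while the file system does its work.
http.createServer(function(req, res) {
  fs.readFile('index.html', function(err, data) {
    if (err) {
      res.writeHead(500);
      return res.end('Something went wrong');
    }
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end(data);
  });
}).listen(3000);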

Notice how only the first point is actually engineering-related. This is so prevalent that it can fit into a tweet, like so:

Then there is the issue of storage. There have been many libraries made available for Node.js to connect to SQL databases, but that would be like showing up at an Arcade Fire concert in a suit. Oh, wait, that actually happened:

Never mind then. Back to Node.js: why not go all the way and use something like MongoDB, storing JSON documents and using JavaScript for DB queries? Now not only do front-end developers not need to chase down server-side Java guys to change the served HTML or JSON output, they don’t need to chase DBAs to change the DB models either. My point is that once you add Node.js, it is highly likely you will revamp your entire stack and architecture, and that is a huge undertaking even without deadlines to spice up your life with a wallop of stress (making you park in the wrong parking spot in your building’s garage, as I did recently).
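
For a flavour of what that looks like, here is a sketch using the Node.js MongoDB driver of the era (the database, collection and field names are made up):

var MongoClient = require('mongodb').MongoClient;

// The query is a JavaScript object and the documents come back as JSON –
// no SQL or DBA in sight
MongoClient.connect('mongodb://localhost:27017/dashboard', function(err, db) {
  if (err) throw err;
  db.collection('widgets')
    .find({ owner: 'dejan', active: true })
    .toArray(function(err, docs) {
      console.log(docs);
      db.close();
    });
});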

Now let’s try to apply all this to a talented team in a big corporation:

  1. The team is well versed in both Java (on the server) and JavaScript (on the client), and has no need to chase down Java developers to ‘add a link between pages’ as in PayPal’s case. The team also knows how to write decent SQL queries and handle the model side of things.
  2. The team uses IDEs chock-full of useful tools, debuggers, incremental builders and other tooling that make for a great workflow, particularly when the going gets tough.
  3. The team needs to deliver something new on a tight schedule and is concerned about adding to the uncertainty of ‘what to build’ by also figuring out ‘how to build it’.
  4. There is a fear that things the team has grown to expect as gravity (debugging, logging etc.) are still missing or immature.
  5. While there is a big deal made about ‘context switching’, the team does it naturally – they go from Java to JavaScript without missing a beat.
  6. There is a whole slew of complicated licensing reasons why some Open Source libraries cannot be used (a problem startups rarely face).

Mind you, this is a team of very bright developers who eat JavaScript for breakfast and feel very comfortable around jQuery, Require.js, Bootstrap etc. When we overlay the reasons to use Node.js, the only one that remains is handling a huge number of requests without paying the RAM/CPU tax of blocking I/O with one thread per request.

As if things were not murky enough, the Servlet 3.1 spec supported by JEE 7 now comes with asynchronous processing of requests (using NIO) and protocol upgrade (from HTTP/S to WebSockets). These two things together mean that the team above has the option of using non-blocking I/O from the comfort of their Java environment, and can write non-blocking code that pushes data to the client over WebSockets. Both are currently the key technical reasons to jump ship from Java to Node. I am sure Oracle added both features feeling the heat from Node.js, but now that they are here, they may be just enough of a reason not to make the transition yet, considering the cost and overhead.

Update: after posting the article, I remembered that Twitter has used Netty for a while now to get the benefit of asynchronous I/O coupled with the performance of the JVM. Nevertheless, the preceding paragraph is for JEE developers who can now easily start playing with async I/O without changing stacks. Move along, nothing to see here.

Then there is also the ‘Facebook effect’. Social scientists have noticed the emergence of a new kind of depression caused by feeling bad about your life compared to the carefully curated projections of other people’s lives on Facebook. I have yet to see a posting like “my life sucks and I did nothing interesting today” or “I think my life is passing me by” (I subsequently learned that Sam Roberts did say that in Brother Down, but he is just pretending to be depressed, so he does not count). Is it possible that I am only hearing about Node.js success stories, while failed Node.js projects are quietly shelved, never to be spoken of again?

Well, not everybody is suffering in silence. We all heard about the famous Walmart memory leak that is now thankfully plugged. Or, since I already mentioned MongoDB, how about going knee-deep into BSON to recover your data after your MongoDB had a hardware failure? Or the brouhaha about the MongoDB 100GB scalability warning? Or a sane article by Felix Geisendörfer on when to use and, more importantly, when NOT to use Node.js (as a Node.js core alumnus, he should know better than many)? These are all stories from the front wave of adopters, and the inevitable rough edges will be filed down over time. The question is simply – should we be the ones to do the filing, or can somebody do it for us?

In case I sound like I am justifying reasons why we are not using Node.js yet, the situation is quite the opposite. I completely love the all-JavaScript stack and play with it constantly in my pet projects. In a hilarious twist, I am acting like a teenage Justin Bieber fan and the (younger) team lead of the aforementioned team needs to bring me down with a cold dose of reality. I don’t blame him – I like the Node.js-based stack in principle, while he would have to burn the midnight oil making it work at the enterprise scale, and debugging hard problems that are inevitable with anything non-trivial. Leaving aside my bad case of FOMO (Fear Of Missing Out), he will only be persuaded with a problem where Node.js is clearly superior, not just another way of doing the same thing.

Ultimately, it boils down to whether you are trying to solve a unique problem or just play with a new and shiny technology. There is a class of problems that Node.js is perfectly suitable for. Then there are problems where it is not a good match. Leaving your HR and skills reasons aside, the proof is in the pudding (or as a colleague of mine would say, ‘the devil is in the pudding’). There has to be a unique problem where the existing stack is struggling, and where Node.js is clearly superior, to tip the scales.

While I personally think that Node.js is a big part of an exciting future, I have no choice but to agree with Zef Hemel that it is wise to pick your battles. I have to agree with Zef that if we were to rebuild something for the third time and know exactly what we are building, Node.js would be a great way to make that project fun. However, since we are in the ‘white space’ territory, choosing tried and true (albeit somewhat boring) building blocks and focusing the uncertainty on what we want to build is a good tradeoff.

And that’s where we are now – with me sitting on the proverbial fence, on a constant lookout for that special and unique problem that will finally open the Node.js floodgates for us. When that happens, you will be the first to know.

© Dejan Glozic, 2013

Beam My Model Up, Scotty!

[Image: the Star Trek TNG transporter room]

I know, I know. Scotty is from the original series, and the picture above is from TNG. Unlike Leonard from The Big Bang Theory, I prefer TNG over the original, and also Picard over Kirk. Please refrain from hate mail.

Much real and virtual ink has been spilled over Star Trek transporter technology and quirks. Who can forget the episode in which Scotty preserved himself as a transporter pattern in an endless loop, only to be found and freed 70 years later by the TNG crew (see, there’s your Scotty-TNG link)? But this article is about honouring all the hours I spent on various Star Trek instalments by applying the transporter principles to Web development.

As the Web development collective mind matures, a consensus is forming that spaghetti are great on your dining table but bad for your code. MVC design pattern on the server is just accepted as gravity, and several JavaScript frameworks are claiming it is necessary on the client as well. But is it always?

Here is a case in point. Our team recently rebuilt a dashboard – a relatively complex piece of interactive technology. This is our third attempt at it (third time’s the charm, right?). Armed with all the experience of the previous two versions, plus our general desire to pull back from the extreme Ajax, we were confident this would be it. Why build a dashboard? Yes, I know, everybody and his uncle has one, and there are perfectly good all-singing, all-dancing commercial ones to be had. Here is a counter-question: why do you have a kitchen in your house when you can eat at perfectly good restaurants? Everybody has a dashboard because it is much more convenient to have your own (Google Analytics has one, the WordPress I am writing this on has one). They can be simple because they don’t need to cater to all possible scenarios, and they can be much more tightly integrated. Not to mention cheaper (like eating at home vs. eating out). But I digress.

In the previous version, we sent a lot of JavaScript to the client, and after transporting and parsing it, JavaScript turned around and used XHR to fetch the model from the server as JSON. Then we wired up the model and the view and every time users made a change, we would update the model to keep it in sync. Backbone.js or Angular.js would have come in handy for this, only they didn’t exist when we wrote that code. Plus we hate frameworks.

In this version we wanted the dashboard page to arrive mostly assembled from the server. That part was clear. But we wanted to avoid the MV* complexity and performance consequences if possible. Plus it is really hard to even apply the usual data binding with dashboards because you cannot just wire the template with the model and let the framework do its magic. In a dashboard, the template itself is editable – users can drag and drop widgets around, delete them and add new ones. Essentially both the template and the data are built up from the model. This is not your grandmother’s MVC, that’s for sure.

Then we remembered Star Trek transporters and figured – why don’t we disperse the model and embed the model shards into the DOM as we build the initial HTML on the server? We kept waiting to hit a brick wall, but it never happened – this is actually a viable option. Here is why this approach (I will call it M/V) works for us like a charm:

  1. Obviously it is easy to do on the server – when we are building the initial response, embedding model shards into the HTML is trivial using HTML5 custom data properties (those attributes that start with ‘data-‘).
  2. The model arrives ready to use on the client – no need for an extra request to fetch the model after the fact. OK, Backbone.js has a way of bootstrapping models so that they arrive as payload with the initial response. However, that approach qualifies as CrazyHacks™ at best, and you save nothing payload-wise – the sum of the shards equals one whole model.
  3. When part of the DOM needs to change due to user interaction, it is easy to keep the associated ‘data-*’ properties in sync.
  4. When moving entire DOM branches (say, when dragging widgets between columns or changing layouts), model shards stay with their DOM elements – this is a real winner in our use case.
  5. We use DOM custom event bubbling to notify about the change – one listener at the top of the DOM branch is all that is needed to keep track of the sharded model’s dirty state. This helps us to stay lean because there is no need for JavaScript pub/sub implementations that increase size. It is laughably trivial to fire and listen to these events using jQuery, and they are fast, being implemented natively by the browser. Bootstrap uses the same approach.
  6. When the time comes to save, we traverse the DOM, collect the shards and assemble them again into a transient JavaScript object, then send it as JSON to the server via XHR. We use PUT/POST when we have the entire model, but more often than not, the view only renders a fraction of it (lazy loading), so we use PATCH instead (see the sketch after this list).
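
For the curious, here is a stripped-down sketch of the whole cycle (hypothetical markup, endpoint and names – not our actual code):

// Each widget carries its model shard in an HTML5 data attribute:
//   <div class="widget" data-model='{"id":"w1","col":0}'>

var dirty = false;

// One listener at the top of the DOM branch catches the bubbling event
$('#dashboard').on('modelchanged', function() {
  dirty = true;
});

// On user interaction, update the shard and fire a custom event –
// jQuery custom events bubble natively
function moveWidget(widget, newColumn) {
  var shard = JSON.parse(widget.attr('data-model'));
  shard.col = newColumn;
  widget.attr('data-model', JSON.stringify(shard));
  widget.trigger('modelchanged');
}

// On save, traverse the DOM, reassemble the shards into a transient
// object and ship it to the server as JSON
function save() {
  var widgets = $('#dashboard .widget').map(function() {
    return JSON.parse($(this).attr('data-model'));
  }).get();
  $.ajax({
    url: '/api/dashboard', // hypothetical endpoint
    type: 'PUT',
    contentType: 'application/json',
    data: JSON.stringify({ widgets: widgets })
  });
}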

This approach bought us some unreal savings in performance and size. Right now, we have brought total JavaScript down from ~245kB to ~70kB minified and gzipped (this includes jQuery, Require.js and Bootstrap). The best part is that this JavaScript loads asynchronously, after the initial content has already been displayed. The page is not only objectively fast, it is subjectively even faster because the initial content (what the Twitter team calls ‘time to first tweet’) is sent in the first response.

As it is often the case in life, this approach may not work for everybody. A pattern where true MVC really shines is when you have multiple views listening to the same model. Changing the model updates all the views simultaneously, without the need for the pub/sub hell. But if you have exactly one view and one model, using M/V becomes a real option.

I can hear the collective gasp of architects in the audience. There are many things you can say about M/V, but architecturally pure it is not. But here is the problem: speed and page size should be valid architectural concerns too. In fact, if you want your Web app to be usable on mobile devices, they are practically the only concerns that matter. Nobody is going to wait for your architecturally pure pig of a page to load while their coffee brews – life is too short for 1MB+ web pages, pure or not.

Say No to layer cake, embrace small, live long and prosper!

© Dejan Glozic, 2013

Swimming Against The Tide

True story: I visited the Ember.js web site and saw three required hipster artifacts: ironic mustaches, cute animals and Ray-Ban Wayfarer glasses (on a cute animal). A tweet was in order, and within minutes it was favorited by a competing client-side framework by Google (Angular). Who would have guessed client-side frameworks are so catty? I can almost picture the Angular News guy clicking the ‘Favorite’ button and yelling ‘Oh, Burn!!’ And it wasn’t even a burn – I actually like the Ember web site. It is so …. cute.

The reason I visited Ember (and Angular, and Backbone, and Knockout) was to figure out what was going on. There is a scene in the 2002 movie Gangs of New York where Leonardo DiCaprio leads his gang of Dead Rabbits to fight the competing gang (the Natives), and he has to wade through a river of people running in the opposite direction to avoid cannons fired by the Navy from the harbor. Leonardo and his opponent, a pre-Lincoln Daniel Day-Lewis, were so caught up in their epic fight that they missed the wider context of the New York Draft Riots happening around them. Am I like Leo (minus the looks, fame and fortune), completely missing the wider historic context around me?

Not long ago, I posted a repentant manifesto of a recovered AJAX addict. I swore off the hard stuff and pledged to only consume client-side script in moderation. The good people from Twitter and 37signals all went through the same trials and tribulations and adopted a similar approach (or I adopted theirs). Most recently, Thomas Fuchs, the author of Zepto.js, expressed a similar change of heart based on his experiences getting the fledgling product Charm off the ground. Against that backdrop, the noise around the client-side MVC frameworks mentioned above is reaching deafening levels, with all these people apparently not caring about the problems that burned us so badly. So what gives?

There are currently two major camps in Web development, and they mostly differ in the role they allocate to the server side. The server-side guys (e.g. Twitter, Basecamp, Thomas, yours truly) have been burned by heavy JavaScript clients and want to render the initial page on the server, subsequently using PJAX and modest amounts of JavaScript for delicious interactivity and crowd pleasers. Meanwhile, a large population of developers still wants to develop one-page Web apps where the script takes over from the browser for long periods of time, relegating the server to the role of a REST service provider. I don’t want to repeat myself here (kindly read my previous article), but the issues of JavaScript size, parsing time, performance, memory leaks, browser history and SEO didn’t go away – they still exist. Nevertheless, judging by the interest in Angular, Backbone, Ember and other client-side JavaScript frameworks, a lot of people think the tradeoffs are worth it.

To be fair, there is a third camp, populated mostly by the LinkedIn engineering team. They are in a category of their own because they are definitely not a one-page app, yet they do use Dust.js for client-side rendering. But they also use a whole mess of in-house libraries for binding pages to services, assembling them, delaying rendering when below the fold, and so on. You can read about it on their blog – suffice to say that, similar to Facebook’s BigPipe, the chances you can repeat their architecture in your project are fairly slim, so I don’t think their camp is of practical value to this discussion.

Mind you, nobody is arguing the return to the dark ages of Web 1.0. There is no discussion whether JavaScript is needed, only whether all the action is on the client or there is a more balanced division of labor with the server.

I thought long and hard (that is, a couple of days tops) about the rise of JavaScript MVC frameworks. So far, this is what I have come up with:

  1. Over the last few years, many people have written a lot of crappy, unmaintainable, messy jumble of JavaScript. They now realize the value of architecture, structure and good engineering (client or server).
  2. A lot of people realize that the really smart people writing modern JavaScript frameworks will probably do a better job providing this structure than they would themselves.
  3. Many projects are simply not large enough to hit the general client-side scripting turning point. This puts them in a sweet spot for client-side MVC – large enough to be a mess and benefit from structure, not large enough to be a real pig that makes desktop browsers sweat and mobile browsers kill your script for consuming too much RAM.
  4. These projects are also not easily partitioned into smaller contexts that can be loaded as separate Web pages. As a result, they rely on MVC JavaScript frameworks to perform data binding, partitioning, routing and other management.
  5. Modern templating engines such as Mustache or Handlebars can run both on the client and on the server, opening up the option of rendering the initial page server side (see the sketch after this list).
  6. The JavaScript community is following the same path that Web 1.0 server-side MVC went through: the rise of opinionated and prescriptive MVC frameworks that try to box you into good practices and increase your productivity at the price of control and freedom.
  7. The people using these frameworks don’t really have performance as their first priority.
  8. The people using these frameworks plan to write a separate or native mobile client.
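
To illustrate the fifth point: the same template source compiles with the same API on either side (a sketch, with Handlebars standing in for any of the engines mentioned):

// On the server (Node.js):
var handlebars = require('handlebars');
var template = handlebars.compile('<li>{{name}}</li>');
console.log(template({ name: 'Rendered on the server' }));

// In the browser, once handlebars.js is loaded via a script tag:
//   var template = Handlebars.compile('<li>{{name}}</li>');
//   list.innerHTML = template({ name: 'Rendered on the client' });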

There could be truth in this, or I could be way off base. Either way, my team has no intention of changing course. To begin with, we are allergic to the loss of control these frameworks demand – we subscribe to the camp that says frameworks are bad. More importantly, we like how snappy our pages are now and want to keep them that way. We intend to keep an eye on the client MVC frameworks, and maybe one day we will hit a use case where client-side data binding and templates will prove useful (say, if we attempt something like Gmail or Google Docs or Google Calendar). If that happens, we will limit it to that particular use case, instead of going all in.

Meanwhile, @scottjehl perfectly describes my current state of mind thusly:

[Embedded tweet by @scottjehl]

© Dejan Glozic, 2013