Meanwhile, in Nodeland…

Hieronymus Bosch, “The Garden of Earthly Delights”, between 1480 and 1505.

In India, we don’t call it “Indian food”. We just call it “food”.

Rajesh Koothrappali, “The Big Bang Theory”

It was always hard for me to make choices. In the archetypal classification of ‘satisficers’ and ‘maximizers’, I definitely fall into the latter camp. I research ad nauseam, read reviews, measure carefully, take everything into account, and then finally make a move, mentally exhausted. Then I break into a cold sweat of buyer’s remorse, or read a less than glowing review of the product I just purchased, and my happiness is diminished. It sucks to be me.

It appears it can be passed on as well. My 14-year-old daughter recently went to the mall to buy clothes (alone, a rite of passage of sorts). She returned empty-handed – ‘there was nothing nice’. I think we confirmed paternity right then and there.

Of course, this affliction carries over into my professional life. Readers who have followed me for a while can attest to my hemming and hawing about Node.js – should I stay or should I go, what if it is not ready, what if it turns out to be just a fad, what if I read a bad review after we port everything?

At least in my professional choices I appear to be more decisive than when looking at Banana Republic sweaters. After we jumped into Node.js waters in January this year, there was no turning back. It is almost April now and my mid-term report is due.

One of the benefits of ending the agony and giving Node a chance is that you stop worrying about HOW things are done and focus on GETTING them done. And once you do, like Rajesh above, Node.js recedes into the background and the actual problems you are trying to solve become your focus (you are not solving problems in Node, you are just, you know, solving problems). Xenophobic people who get to know ‘the other’ are often surprised that under the surface of difference there is a sea of sameness – people around the world have similar worries, hopes, fears and joys. In Node, you still have to worry about security, i18n, performance, code structure and user experience, and Node is not the only player involved – it is just one tool in your toolbox.

One of the most important roles in converting an organization of any size to Node.js is that of influencers. People like myself who push for it, look for opportunities to get ‘the camel’s nose under the tent’ (to borrow from Bill Scott), find a crevice where Node can squeeze in and then expand like ice in road potholes (I should know – I keep avoiding them all the time this cold March in Toronto). It is a startup of sorts, an inverted pyramid that can easily fall sideways if the founder flinches or loses hope and vision.

On the other hand, it can remind you of the locked potential of people. There is a dark lake of passion in all of us, often trapped under a layer of day-to-day stress and ‘things that need to be done’. It is heart-warming to see that when you succeed in winning people over to the Node.js side, the passion bursts out and they jump in with an enthusiasm and vigor that is great to experience.

Example: a member of the JazzHub team heard my pitch for Node.js and started experimenting with tutorials written in markdown. Like everything else, these tutorials are currently served using JSPs and client-side JavaScript. He wrote a small Node app that takes the tutorials, converts them to HTML using a nice little module and renders them using Dust.js. He did it in one afternoon and proudly demoed it the next time we met – it is one page of JavaScript – you don’t even need to scroll to see the entire code. Needless to say, he was hooked.
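For flavor, here is roughly how small such an app can be. This is a toy sketch – it hand-rolls the markdown-to-HTML step instead of using an off-the-shelf module, skips the Dust.js rendering entirely, and every name in it is made up:

```javascript
// Toy stand-in for the afternoon hack: convert the simplest markdown
// (headings and paragraphs) to HTML. The real app used a proper
// markdown module and rendered the result through Dust.js templates.
function mdToHtml(md) {
    return md.split(/\n\n+/).map(function (block) {
        var heading = block.match(/^(#{1,6})\s+(.*)$/);
        if (heading) {
            var level = heading[1].length;
            return '<h' + level + '>' + heading[2] + '</h' + level + '>';
        }
        return '<p>' + block + '</p>';
    }).join('\n');
}

console.log(mdToHtml('# Tutorial\n\nWelcome to JazzHub.'));
// → <h1>Tutorial</h1>
//   <p>Welcome to JazzHub.</p>
```

A real converter handles much more (lists, links, code spans), which is exactly why you reach for a module instead of writing your own.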

One of the most important jobs of an influencer is to prevent failure on stupid technicalities. People ready to yell ‘I told you Node was not ready’ are everywhere. Node is awesome, but we all know that a single uncaught exception can crash the process. A JEE container can throw exceptions like a leaky barge, filling up the logs, but it will stay up. In Node, it is your responsibility to prop Node up, a Weekend at Bernie’s of sorts. Judging by the frequency of questions about this behaviour, it is hard to stomach for people moving over from other platforms.

JEE is in fact not that different. When an uncaught exception is thrown in a simple Java program with ‘main’, it exits. When the same happens in a JEE container, the runnable running in a thread exits. The thread is reset, returned to the thread pool, and the next request is passed on to another thread. In a production system with Node.js, it is your responsibility to replicate this behaviour.

It is somewhat easier to end up with an uncaught exception in Node.js than it is in Java. Due to the asynchronous nature of Node, trying to surround a piece of code in a try/catch block is often futile. Exceptions can and will escape. This is such a tricky problem that it warranted the high priests of Node to huddle recently and release a detailed and super useful document that anybody trying to write a production-quality Node.js system should read and internalize.
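A tiny illustration of the escape act – by the time the callback throws, the try/catch has long since exited, and only a process-level handler (the safety net of last resort) sees the error:

```javascript
var caughtLocally = false;

// Safety net of last resort. In production you would log the error, flush,
// and let a supervisor restart the process rather than limp on with
// potentially corrupt state.
process.on('uncaughtException', function (err) {
    console.log('uncaught: ' + err.message);
});

try {
    setTimeout(function () {
        throw new Error('boom'); // thrown long after the try block exited
    }, 0);
} catch (e) {
    caughtLocally = true; // never reached
}

setTimeout(function () {
    console.log('try/catch saw it: ' + caughtLocally);
    // → try/catch saw it: false
}, 10);
```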

It goes without saying then that you will never run ‘node myApp.js’ except during development. In production, you want to spread out to cover all the cores of the machine or VM you are running on. You want to restart the process when it exits (and it will, repeatedly). And finally, it would help to monitor CPU, memory and throughput of each running process. We are currently looking at PM2 for this, but as always with Node, there are other ways of addressing this need.

There are other things we discovered while getting serious with Node. For example:

  1. WebSockets – one of the first things you will try to do after getting the CRUD down is server push – after all, Data Intensive Real Time (DIRT) is what initially catapulted Node into our consciousness. After you get the code working on your machine, inspect the entire path to the browser on your system – proxies are surprisingly cranky when it comes to WebSocket support. NGINX did not support it until version 1.3. If you are using Apache, note that it will not let WebSocket connections through unless you install and configure mod_proxy_wstunnel. Of course, this is not Node-specific, but chances are you will hit it with Node because of its affinity for server push.
  2. DevOps – make them your friend. If you thought Continuous Integration and Continuous Deployment were a neat idea, you will find them absolutely indispensable once you start getting real with Node. A well established pattern here is a system composed of micro-services, and it is not uncommon to juggle dozens upon dozens of separate Node apps performing a role in the system. You will go mad trying to manage micro-services by hand. Of course, once you get your DevOps going, the ability to deliver new features into a micro-service with surgical precision will bring no end of joy, particularly if you are replacing a JEE pig that takes minutes to restart. Zero downtime is a reachable goal in a micro-service based system comprised of Node processes. In IBM Rational, we use UrbanCode, which is natural to us since we bought them recently. Do what you need to do to go automatic.
  3. Storage – Node will make you re-evaluate the rest of your system. When I started this blog, I was characteristically wishy-washy about databases. With Node.js, we noticed that we naturally lean towards JSON-based NoSQL databases. Many people use MongoDB with success due to the smooth transition from SQL, but if you don’t like its scalability characteristics or your lawyers don’t like its AGPL license, CouchDB is an easy choice. IBM recently bought Cloudant (a hosted CouchDB fork with Apache Lucene tossed in), so it is hard for us to say ‘no’ to a free, cloud-based and professionally managed NoSQL JSON DB. Again, YMMV – use what works for you, but give JSON-based NoSQL databases a try – going JavaScript all the way down is liberating.
  4. Authentication and Single Sign-On – if there is anything that will remind you of the Ford Model T (you could order it in any colour as long as it was black), it will be authentication and SSO in your system. Nothing will uncover unnecessary complexity of the authentication schemes used in the system like trying to move it to something that is not JEE-based. The complexity is normally hidden in the JEE filters, and is uncovered as soon as you try to replicate it in Node.js. It is technology agnostic as long as you use Java. The good thing is that with a sympathetic team on the other end, this serves as a catalyst to finally simplify and clean up the protocol or switch to something saner.
  5. Legal – if you are in an enterprise of any size, you will not be allowed to just use whatever you find on the Internet (even startups find software promiscuity to have side-effects as soon as they grow enough to attract legal attention). Our legal department probably hates us – the amount of Node.js modules they need to vet is staggering, but the mountain of requests is tapering off. For example, the markdown app I mentioned earlier in the post needed to vet only the markdown module – all the other modules were already encountered by the apps before it. At this rate, the flow will slow down to a trickle soon.
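Back to the WebSockets point above: the proxy fix is usually just a few lines of configuration. A hypothetical NGINX (1.3+) sketch might look like this – the path and port are assumptions, so adjust them to your own app:

```nginx
# Forward WebSocket upgrade requests to the Node.js app (assumed on port 3000)
location /ws/ {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```

The Upgrade/Connection headers are what allow the HTTP connection to be switched over to the WebSocket protocol instead of being silently dropped by the proxy.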

So there you go – three months after our foray into Nodeland, we are lining up several new Node micro-services, the sky didn’t fall, and we are learning new things every day while worrying more about what we are building than how we are building it. Every once in a while we are forced to solve a problem for the first time. We agree on the best practice together, write it down for others coming after us, and move on. Where we can, we rely on the efforts of those before us, allowing suites such as Kraken.js to let us focus on what is really unique about our code rather than the boilerplate. Our lawyers hate us, but if ‘tried and true’ were the only criterion when choosing our tools and languages, we would still be writing everything in COBOL.

If only I were this decisive choosing my last Home Theatre receiver.

© Dejan Glozic, 2014

Node.js and Enterprise – Why Not?


My daughter is 14. Like her father, she loves and plays music. Like all restless teens, she found sitting through multiple hours of the 2014 Grammys a non-starter. Therefore, she installed me as her push notification service. My role was to sit on the sofa (iPad in hand – I am not crazy enough to just watch TV for hours either), and alert her when the acts she was interested in arrived. Her alerts: Lorde, Lindsey Buckingham, Trent Reznor.

In her room she created a ‘shrine’ on the drawer chest, consisting of the CD jewel cases of Mellon Collie and the Infinite Sadness by Smashing Pumpkins, The Greatest Hits by Fleetwood Mac, all Vampire Weekend CDs, Pure Heroine by Lorde, and The Bell Jar by Sylvia Plath (the book, not the CD). A couple of years ago she exited the Beatles phase, where nothing mattered if it was not about John and Paul.

Meanwhile, my 17-year-old son likes to play multiplayer shooters while listening to Mozart’s Requiem. On other days he makes methodical passes through the entire Depeche Mode back catalog. Besides Game of Thrones, his bedtime reading consists of chronicles of some of the most obscure periods in Persian, Egyptian and Chinese history that few have heard of.

Time means nothing to my kids, only quality does. For my children, Mozart, Stevie Nicks, David Gahan, Ezra Koenig and Lorde all exist simultaneously in a vast random-access buffet.

Which brings me to the topic of this blog post. A comment on my blog from a young colleague from the vicinity of my old country got me thinking:

Are there multiple ways of looking at Node.js? Is there a ‘startup-y’ way of looking at it, and an ‘enterprise-y’ way? Is the former more cool, is the latter the more boring, adult, ‘taking all the fun out of it’ way? More importantly, should enterprises only use older, proven technology, while startups and individuals play with the new and shiny?

It is no secret that I work for IBM, the biggest and oldest of all the IT enterprises out there (it is 103 years old!). Should I even be allowed to write about, play with, and (God forbid) push for Node.js in IBM, let alone make it the centerpiece of the new micro-service architecture for JazzHub?

Let’s see how my kids would approach it: because they are young and unaware of stereotypes, they would totally use Node.js for the front end, because they have little patience for tedium and Node.js would allow them to iterate through interface ideas much faster. If they had some high-performance, CPU-bound task to accomplish, they would not hesitate to write it in Java, or even better, C or C++. Or maybe they would look at Go. They would add i18n support so that they can show what they built to their relatives in the old country who are not very good at English. They would add security because some bored hackers would pounce on their site as soon as it went live (my daughter’s Club Penguin account was repeatedly hacked when she was 9). They would do all these things because they are common sense, they are important, they matter, and you cannot be ‘done’ until you have them.

The moment you have customers and feel responsibility for them, you are doing ‘enterprise-y’ stuff. NPM just became a company (they raised $2.6M in seed funding less than two months ago), and you know what one of the first orders of business was? Ordering an external security audit. That sounds very ‘enterprise’ to me. Uber’s business model may sound weird to you*, but they are a serious business (serious enough to create pushback from lobbyists in each new market they expand to), and they have been running Node.js in their dispatching system since 2011. The same goes for Airbnb, although they jumped into Node in 2013.

I am using these two companies as an example because they are relatively new and don’t fit the stereotype of a classic ‘enterprise’. And yet, look at their web sites carefully – both sport a dropdown to choose the language. I can paraphrase the classic joke ‘you know you are a redneck…’ as ‘you know you write enterprise-class software when you need to deal with i18n’. Of course, if you are an enterprise that wants to sell to the US government, it is not a stretch goal – you must meet Section 508 to even be in the running. There is an equivalent set of requirements for the European Commission. But you don’t have to be beaten with a government stick to add i18n and accessibility to your business’ web application – it just makes sense because you can reach more people that way. And it is not limited to Node.js – go through the examples of the wildly popular Bootstrap toolkit and search for the word ‘aria’ – you will see 43 hits, because all the components have WAI-ARIA accessibility support.

Uber (left) and Airbnb (right) language picker.

We are now entering the most exciting phase of Node.js’ young life – Node.js in the enterprise. The signs are everywhere, from testimonials of companies reinventing their systems using Node.js during Node Summit 2013 and NodeDay 2014, to the surge in articles and discussion on enterprise micro-service architecture (Martin Fowler creating a Downton Abbey event by releasing the article on micro-services in mini-episodes, only to be challenged to a post-off with an alternative article – man, this is going to be fun!).

I have seen a marked change in tone in recent weeks around Node.js and the enterprise. In the past, the discussions were heated, as if there were a smidgen of insecurity in the pro-Node.js camp, so the general discourse was ‘is Node.js ready for the enterprise?’ with the verdict left hanging in the balance. The tone has shifted now, coming from the rapid maturation of the Node.js community bolstered by some significant success stories. The discussion is no longer whether Node.js is ready for the enterprise – it IS in the enterprise, and that segment provides the most significant growth now. In a sense, the discourse now is ‘why are we even having this discussion?’, coming from mild annoyance at raising a topic that has been settled in many minds already, and with a resounding ‘Yes’. People are now busy having fun with Node.js at scale.

Sometimes, a historic perspective helps, so here is one. In 1995, Sun made Java and JVM public as an alternative to C++ for desktop applications. It was interpreted, it was slow, it was buggy but it was new, exciting, promised ‘write once, run anywhere’ without recompilation, and the community was growing rapidly. In 1996, I was asked to try to write a prototype of a framework that would allow an IBM middleware development tool with a significant GUI to run on AIX, OS/2 and Windows NT without maintaining three separate code bases. I wrote it in Java and called it JFace. This framework eventually ended up in a much larger body of code (albeit totally rewritten) in what was later known as the Eclipse Platform.

A three-pane JFace window showing three alternative representations of the same model, circa 1998.

The point of this story is that I was asked to solve a problem using Java when Java was only 2 years old, and the huge Eclipse platform effort was based on Java when it was 4 years old. At that time, Java was buggier and slower than Node.js is today (performance inflation-adjusted). In fact, Java was consistently slower than the incumbents, whereas Node.js shows performance improvements when used as designed (for systems with a significant I/O activity). I am sure people had their doubts about Java in its time, but that didn’t stop them from forging ahead and fixing whatever had to be fixed to dispel those doubts.

In a sense, some people in the enterprise sound like parents who had a wild youth, then forgot all about it and turned into old fuddy-duddies. As you see from my example above, we were fearless before, we can be fearless again. In five years, something new will come along, and we will have this discussion again.

So to wrap up, it is very unlikely that any Node.js discussion today will not be ‘enterprise-y’ – we have great HWPS (Hello World Per Second) numbers and now need to solve real problems that involve i18n, security, scale, independent evolution of micro-services by large teams, inter-service messaging, clustering, continuous integration and deployment with zero downtime etc. Let’s approach it the way my kids approach music – open minded, without prejudice, without preconceived notions, on merits only.

Node.js is but a tool in the toolbox; use it as intended and it will repay you handsomely. Feeling reinvigorated, loving your job again, and being excited to get your hands dirty and code are tossed into the package for free as a reward for your open mind.

*Correction – the original article had an erroneous claim that Uber cars have silly pink moustaches on their grills. In fact, they belong to their competition Lyft. Thanks to @bdickason for spotting the error.

© Dejan Glozic, 2014

Release the Kraken.js – Part I

Image credit: Yarnington 2011

Of course when you’re a kid, you can be friends with anybody. […] You like Cherry Soda? I like Cherry Soda! We’ll be best friends!

Jerry Seinfeld

We learned from Renée Zellweger that sometimes you can have people at “Hello”. In the case of myself and the project Kraken.js, it had me at “Node.js/express.js/Dust.js/jQuery/Require.js/Bootstrap”. OK, not fit for a memorable movie quote, but fairly accurate – that list of libraries and platforms was what we at JazzHub eventually arrived at as our new stack for writing cloud development tools for BlueMix. Imagine my delight when all these checkboxes were ticked in a presentation describing PayPal’s experience in moving from Java to Node for their Lean UX needs, and announcing project Kraken. You can say I had my ‘Cherry Soda’ moment.

Of course, blessed with ADD as I am, I made a Flash-like scan of the website, never spending time to actually, you know, study it. Part of it is our ingrained fear and loathing of frameworks in general, and also our desire to start clean and very slowly add building blocks as we need them. This approach is also advocated by Christian Heilmann in his article ‘The Vanilla Web Diet’ in the Smashing Magazine Book 4.

Two things changed my mind. First, as part of NodeDay I had the pleasure of meeting and talking to Bill Scott, Senior Director of UX Engineering, Jeff Harrell, Director of Engineering Architecture, and Richard Ragan, Principal Engineer and ‘the Dust.js guy’, all from PayPal, and I now feel retroactively bad for not giving the fruit of their labor more attention. Second, I had a chat with a friend from another IBM group also working on a Node.js project, and he said “If we were to start again today, I would definitely give Kraken.js a go – I wish we had it back then, because we had to essentially re-implement most of it”. Strike two – a blog post is in order.

So now with a combination of guilt, intra-company encouragement and natural affinity of our technologies, I started making inventory of what Kraken has to offer. At this point, hilarity ensued: the Getting Started page requires installation of Yeoman, and then the Kraken generator. Problem: there is no straightforward way to install Yeoman on Windows. I have noticed during a recent NodeDay that when it comes to Node, Macs outnumber Windows machines 99 to 1, but considering that Kraken is targeting enterprise developers switching to Node, chances are many of them are running Windows on their work machines. A Yeoman for Windows would definitely help spread the good word.

Update: it turns out I was working on the outdated information – after posting the article, I tried again and managed to install Yeoman on a Windows 7 machine.

Using Yeoman is of course convenient for bootstrapping your apps – you create a directory, type ‘yo kraken’, answer a couple of questions and the scaffolding of your app is set up for you. At this point our team lead Curtis said ‘I would be fine with a zip file and no need to install Yeoman’. He would have been right if using Yeoman were a one-time affair, but it turns out that the Kraken Yeoman generator is a gift that keeps on giving. You can return to it to incrementally build up your app – you can add new pages, models, controllers, locales etc. It is a major boost to productivity – I can see why they went through the trouble.

Anyway, having switched to my MacBook Pro, I installed Yeoman, then the Kraken generator, typed ‘yo kraken’ and my app was all there. Once you start analyzing the app structure, you see the Kraken overhead – there are more directories, more files, and Kraken has decided how things should be structured and tries to stick to it. On the surface, it is less clear for the beginner, but I am already past the pure express ‘Hello, World’ apps. I am interested in how to write real-world production apps for the enterprise, and Kraken delivers. It addresses the following enterprise-y needs:

  1. Structure – app is clearly structured – models, controllers, templates, client and server side – it is all well labelled and easy to figure out
  2. i18n – Kraken is serious about enterprise needs, so internationalization is baked in right from the start
  3. Errors – it provides translated files for common HTTP errors (404, 500, 503) as a pattern
  4. Templates – Kraken uses Dust, which fits our needs like a glove, and the templates are located in the public folder so that they can be served to both Node.js and the browser, allowing template sharing. Of course, templates are pre-compiled into JS files for performance reasons.
  5. Builds – Kraken uses Grunt to automate all the boring tasks of preparing your app for deployment, and also incorporates Mocha tests as a standard part. Having a consistent approach to testing (and the ability to incrementally add tests for new pages) will improve quality of your apps in the long run.
Kraken.js HelloWorld app directory structure

One thing that betrays that PayPal deals with financial transactions is how serious they are about security. Thanks to its Lusca module, Kraken bakes protection against the most prevalent types of attacks right into your app. All enterprise developers know this must be done before going into production, so why tack it on in a panicky scramble at the last minute when you can bake it into the code from the start?

Of course, the same goes for i18n – the Makara module handles translations. At this point I paused, looking at a familiar file format from the Java world – *.properties files. My gut reaction would be to make all the supporting files JSON (and to Kraken authors’ credit, they did do so for all the configuration files). However, after thinking about the normal translation process, I realized that NLS files are passed to a translation team that is not necessarily very technical. Properties files are brain-dead – key/value pairs with comments. This makes them easy to author – you don’t want to receive JSON NLS files with missing quotes, commas or curly braces.
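For illustration, a hypothetical NLS bundle (the keys are made up) shows why the format is so hard to get wrong:

```properties
# tutorial.properties – hypothetical keys, for illustration only
tutorial.title=Getting Started with JazzHub
tutorial.greeting=Hello, {name}!
# Translators only touch the text to the right of the '=' sign
```

There is no nesting and no delimiters to balance – the worst a translator can do is mistranslate a string.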

Makara is designed to work with Dust.js templates, and Kraken provides a Dust helper to inline locale-based substitutions into the template directly (for static cases), as well as an alternative for more dynamic situations.

As for the main file itself, it did give me pause because the familiar express app was nowhere to be found (I absent-mindedly typed ‘node app.js’, not realizing the app is really in index.js):

'use strict';

var kraken = require('kraken-js'),
    app = {};

app.configure = function configure(nconf, next) {
    // Async method run on startup.
    next(null);
};

app.requestStart = function requestStart(server) {
    // Run before most express middleware has been registered.
};

app.requestBeforeRoute = function requestBeforeRoute(server) {
    // Run before any routes have been added.
};

app.requestAfterRoute = function requestAfterRoute(server) {
    // Run after all routes have been added.
};

if (require.main === module) {
    kraken.create(app).listen(function (err) {
        if (err) {
            console.error('[ERROR] Unable to start server:', err);
        }
    });
}

module.exports = app;

This is where the ‘framework vs library’ debate can be legitimate – in a true library, I would require express, dust etc. in addition to kraken, start the express server and somehow hook up kraken to it. I am sure the express server is somewhere under the hood, but I didn’t spend enough time to find it (Lenny Markus is adamant this is possible). This makes me wonder how I would approach using cluster with kraken – forking workers to use all the machine cores. Another use case – how would I use socket.io here? It piggy-backs on the express server and reuses the port. I will report my findings in the second instalment.

I would be remiss not to include Kappa here – it proxies NPM repository allowing control over what is pulled in via NPM and provides for intra-enterprise sharing of internal modules. Every enterprise will want to reuse code but not necessarily immediately make it Open Source on GitHub. Kappa fits the bill, if only for providing an option – I have not tested it and don’t know if it would work in our particular case. If anything, it does not suffer from NIH – it essentially wraps a well regarded npm-delegate module.

Let’s not forget builds: as part of scaffolding, Yeoman creates a Gruntfile.js configured to build your app and do an amazing amount of work you will get for free:

  1. Run JSHint to verify your JS files
  2. Build Require.js modules
  3. Run the LESS compiler on the CSS files
  4. Build locale-specific versions of your dust templates based on i18n properties
  5. Compile dust templates into *.js files for efficient use on the client
  6. Run Mocha tests

I cannot stress this enough – after the initial enthusiasm with Node.js, you are back to doing all these boring but necessary tasks, and having them all hooked up in the new app allows teams to scale while retaining quality standards and consistency. Remember, as a resident Node.js influencer you will deal with many a well-intentioned but inexperienced Java, PHP or .NET developer – having some structural reinforcement of good practices is good insurance against everything unravelling into chaos. There is a tension here – you want to build a modern system based on Node.js micro-services, but you don’t want each micro-service to be written in a different way.

Kraken is well documented, and there are plenty of examples showing how to do most of the usual real-world tasks. For an enterprise developer ready to jump into Node.js, the decision tree would be like this:

  1. If you prefer frameworks and like opinionated software for the sake of consistency and repeatability, use Kraken in its entirety
  2. If you are starting with Node.js, learn to write express.js apps first – you want to understand how things work before adding abstractions on top of them. Then, if you are OK with losing some control to Kraken, go to (1). Otherwise, consider continuing with your express app but cherry-picking Kraken modules as you go. To be fair, if Kraken is a framework at all (a very lightweight one as frameworks go), it falls into the class of harvested frameworks – something useful extracted out of the process of building applications. This is the good kind.
  3. If you ‘almost’ like Kraken but some details bother you or don’t quite work in your particular situation, consider participating in the Open Source project – helping Kraken improve is much cheaper than writing your own framework from scratch.

A friend of mine once told me (seeing me dragging my guitar for a band rehearsal) that she “must sit down one afternoon and learn how to play the guitar already”. I wisely opted out of such a mix of arrogance and cluelessness here and called this ‘Part I’ of the Kraken.js article. I am sure I will be able to get a better idea of its relative strengths and weaknesses after a prolonged use. For now, I wanted to share my first impressions before I lose the innocence and lack of bias.

Meanwhile, in the words of “the most interesting man in the world”: I don’t always use frameworks for Node.js development, but when I do, I prefer Kraken.js.

© Dejan Glozic, 2014

Pushy Node.js II: The Mullet Architecture


The post title unintentionally sounds like a movie franchise (e.g. Star Trek II: The Wrath of Khan), but in my defense I DID promise I would return to the topic of pushing events to the browser using socket.io and Node.js in the post about message queues. The idea was to take the example I wrote about in Pushy Node.js and modify it so that the server messages are received from another service using a message queue.

I got the idea for the sequel title from an awesome tweet about ‘the mullet architecture’ that I saw the other day.

Alas, nothing can be hidden under the Google administration – a quick search yields an article from 2010 on mullet architecture (an actual one, with houses, not software). There is even such a thing as a ‘reverse mullet’ – party in the front and business in the back. Frank Lloyd Wright aside, I did mean ‘software architecture’ in this particular post. I wanted to mimic the situation we face these days, where a Node.js front end typically communicates with legacy services in the back that nobody dares to touch. As a result, this post will mix Node.js, socket.io, the MQTT protocol and a Java app (posing as the mullet’s backside). I am all for accuracy in advertising.

To refresh your memory, the example app in the previous article was creating a hypothetical system in which builds are running on the server, and as they are going through their execution, we use WebSockets to drive the Bootstrap progress bar in the browser. To make the original sample palatable, we faked the build by simply using timeouts. We will continue with the fakery, but remove one layer by moving the fake build into a dedicated Java app. This Java app (if you squint, you can imagine it is in fact a Jenkins server running builds) will react to POST requests to start or stop the build, and will publish progress messages via the message broker. Our Node.js app will subscribe to the ‘builds’ topic and pass the information about the build progress to the client using socket.io.

This all provides the ‘how’, but not the ‘why’. Recall that the power of message queues is in avoiding polling (our Node.js app does not need to constantly nag the Java app “are we there yet?”), and also in providing for a flexible architecture. Imagine, for example, that in the future we want to capture trend data on average build times and success/failure rates. We could easily write another service that subscribes to the ‘builds’ topic, stores the data in a DB, and also provides a nice data visualization. I can easily see a nice Node.js app storing data into a NoSQL database such as MongoDB or CouchDB, and a nice page rendered using a Dust.js template and D3.js for data visualization. Why, the app is practically writing itself!
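To make this decoupling concrete, here is a minimal in-memory pub/sub sketch (all names are made up for illustration – the real system would go through a broker such as RabbitMQ): the stats subscriber is added without touching the publisher or the existing progress subscriber.

```javascript
// Minimal in-memory pub/sub, purely to illustrate the decoupling;
// a real deployment would use a broker like RabbitMQ instead.
function Broker() {
	this.topics = {};
}
Broker.prototype.subscribe = function (topic, handler) {
	(this.topics[topic] = this.topics[topic] || []).push(handler);
};
Broker.prototype.publish = function (topic, message) {
	(this.topics[topic] || []).forEach(function (handler) {
		handler(message);
	});
};

var broker = new Broker();

// Existing subscriber: drives the progress bar.
var progressLog = [];
broker.subscribe('builds', function (msg) {
	progressLog.push(msg.progress);
});

// New subscriber added later: collects trend data. The publisher
// is completely unaware of it.
var stats = { count: 0, failures: 0 };
broker.subscribe('builds', function (msg) {
	if (msg.progress === 100) {
		stats.count++;
		if (msg.errors) stats.failures++;
	}
});

// The build publishes progress as before, unchanged.
[0, 50, 100].forEach(function (p) {
	broker.publish('builds', { running: p < 100, progress: p, errors: p === 100 });
});

console.log(progressLog); // [0, 50, 100]
console.log(stats);       // { count: 1, failures: 1 }
```

The same principle holds when the broker is a separate process: publishers and subscribers share only a topic name and a message format.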

The key point here is that you can easily add this feature tomorrow by simply adding another subscriber to the ‘builds’ message topic – you don’t need to modify any of the existing apps. This low-friction way of growing the system by adding micro-services is very powerful, and was also reinforced during the recent NodeDay in the presentation by Richard Rodger, founder of NearForm.

All right then, we know why and we know how, so let’s get busy. First, we will install RabbitMQ as our message broker. I have chosen RabbitMQ because it is open source, supports multiple protocols, scales well and is available as a service in PaaSes such as CloudFoundry or Heroku. As for the protocol, I will use MQTT, a lightweight pub/sub protocol particularly popular for anything M2M. If you want to play with MQTT but don’t want to install RabbitMQ, a lightweight open source MQTT broker called Mosquitto is also available, although I don’t know how well it can hold up in production.

For our Java app, both AMQP and MQTT clients are readily available. RabbitMQ provides the AMQP client, while Eclipse project Paho has the MQTT client. For this example, I will use the latter. When you have time, read the article REST is for sleeping, MQTT is for mobile by J Spee.

Let’s look at the diagram of what the control flow in the original example app looked like:


A Node.js app handles the initial GET request from the browser by rendering the page with a button and a progress bar. A button click issues a jQuery Ajax POST request to the app, initiating a fake build. As the build goes through the paces, it uses socket.io and Web Sockets to push the progress to the browser, moving the progress bar.

Now let’s look at what the new topology will look like:


The flow starts as in the original example, but the Node.js app just proxies the build start/stop command to the Java app listening on a different port. The Java app will kick off a fake build on a worker thread. As the build is running, it will periodically publish messages about its progress to the message broker on a predetermined topic using the MQTT client. The Node.js app, registered as a subscriber to the same topic, will receive the messages and pass them on to the browser using Web Sockets.

In order to save space, I will not post the code snippets that didn’t change – check them out in the original article. The first change here is how we react to the POST request from the browser in the Node.js app. Instead of kicking off or stopping the fake build, we will just proxy the call to the Java app (the exported handler name below is illustrative):

exports.build = function (req, res) {
	var options = {
		hostname: 'localhost',
		port: 9080,
		path: '/builds/build',
		method: 'POST'
	};
	var rreq = http.request(options, function (rres) {
		console.log('Redirect status: ' + rres.statusCode);
		res.send(rres.statusCode);
	});
	// forward the original request body to the Java app
	req.pipe(rreq);
};
On the Java end, we will write a JEE servlet to handle the POST request:


import javax.servlet.ServletInputStream;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.wink.json4j.JSON;
import org.apache.wink.json4j.JSONException;
import org.apache.wink.json4j.JSONObject;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.MqttPersistenceException;

public class BuildService extends HttpServlet {
	private static final String TOPIC = "builds";
	private static FakeBuild build;

	class FakeBuild implements Runnable {
		boolean running;
		boolean errors;

		public void run() {
			MqttClient client;
			try {
				client = new MqttClient("tcp://localhost:1883", "pushynodeII");
				client.connect();
			} catch (MqttException e) {
				e.printStackTrace();
				return;
			}
			running = true;
			errors = false;

			for (int i = 0; i <= 100; i += 10) {
				if (i == 70)
					errors = true;
				if (i == 100)
					running = false;
				// notify state
				publishState(client, running, i, errors);
				if (running == false)
					break;
				try {
					Thread.sleep(1000);
				} catch (InterruptedException e) {
					// interrupted - just continue with the next step
				}
			}
			try {
				client.disconnect();
			} catch (MqttException e) {
				e.printStackTrace();
			}
		}

		private void publishState(MqttClient client, boolean running,
				int progress, boolean errors) {
			try {
				JSONObject body = new JSONObject();
				body.put("running", running);
				body.put("progress", progress);
				body.put("errors", errors);
				MqttMessage message = new MqttMessage(body.toString().getBytes());
				try {
					client.publish(TOPIC, message);
				} catch (MqttPersistenceException e) {
					e.printStackTrace();
				} catch (MqttException e) {
					e.printStackTrace();
				}
			} catch (JSONException e) {
				e.printStackTrace();
			}
		}
	}

	protected void doPost(HttpServletRequest req, HttpServletResponse res) {
		try {
			ServletInputStream bodyStream = req.getInputStream();
			JSONObject body = (JSONObject) JSON.parse(bodyStream);
			// the 'cmd' property name is illustrative - match it to
			// whatever the Node.js proxy sends in the request body
			doToggleBuild("start".equals(body.getString("cmd")));
		} catch (Exception e) {
			e.printStackTrace();
		}
	}

	private void doToggleBuild(boolean start) {
		if (start) {
			build = new FakeBuild();
			Thread worker = new Thread(build);
			worker.start();
		} else {
			if (build != null)
				build.running = false;
		}
	}
}
In the code above, a ‘start’ command will kick off the build on the worker thread, which will sleep for 1 second between steps (as in the JavaScript case) and send an MQTT message to update the subscribers on its status. In case of a ‘stop’ command, we will just flip the ‘running’ flag to false, let the build naturally stop and exit, and update the status one last time. We will pass the build state in the message body as JSON (duh).

Once the Java app is up and running and messages are flowing, it is time to add an MQTT message subscriber to the Node.js app. To that end, I installed the mqtt module using NPM. All we need to do to start listening is create a client in app.js, subscribe to the topic and register a callback function to handle incoming messages:

var mqtt = require('mqtt')
   ,io = require('socket.io');

var server = http.createServer(app);
io = io.listen(server);

var mqttClient = mqtt.createClient(1883, 'localhost');
mqttClient.subscribe('builds');

mqttClient.on('message', function (topic, message) {
   io.sockets.emit('build', JSON.parse(message));
});
The code above will create a client for localhost and the default tcp port 1883. In a real world situation, you will most likely supply these as environment or config variables. The client listens on the topic and simply relays the body of each message, parsed into a JavaScript object, to all the currently opened sockets (clients). One important meta-comment highlighting the difference in approaches between Node and Java: because Node is asynchronous, I was able to create a client and bind it to a TCP port, then proceed to bind the app to another port for HTTP requests. In a JEE app, we would need to handle incoming messages in a separate thread, and possibly manage a thread pool if one thread cannot handle all the incoming message traffic. As if you needed more reasons to switch to Node.js.
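As a sketch of that, a tiny configuration helper (hypothetical – not part of the actual app, and the variable names are made up) could read the broker location from the environment, falling back to the defaults used above:

```javascript
// Hypothetical helper: broker host/port from environment variables,
// with the local defaults from the example as a fallback.
function brokerConfig(env) {
	return {
		host: env.MQTT_HOST || 'localhost',
		port: parseInt(env.MQTT_PORT, 10) || 1883
	};
}

var config = brokerConfig(process.env);
console.log(config);
// ...and then: mqtt.createClient(config.port, config.host);
```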

In terms of behavior, this code in action looks the same in the browser as the previous example, but we made an important step towards an architecture that will work well in today’s mixed-language architectures and has room for future growth. What I particularly like about it is that the Java and Node.js apps can pass the messages in an implementation-agnostic way, using JSON. Note that the Java app in this example is not as far-fetched as it may seem: you can make the example even more real by using a Jenkins service. Jenkins has a REST API that our Node app can call to start and stop build jobs, and once a job is underway, we can install an MQTT notifier plug-in to let us know about the status. The plug-in is simple and uses the same Paho client I used here, so if you don’t like what it does, it is trivial to roll your own.
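For illustration, Jenkins exposes a remote trigger endpoint of the form /job/&lt;name&gt;/build; here is a hedged sketch of constructing such a request from Node (host, port and job name are placeholders, and a real Jenkins setup will typically also require authentication and a token):

```javascript
// Build http.request options for Jenkins' remote build trigger.
// The job name is URL-encoded since job names may contain spaces.
function jenkinsBuildRequest(host, port, jobName) {
	return {
		hostname: host,
		port: port,
		path: '/job/' + encodeURIComponent(jobName) + '/build',
		method: 'POST'
	};
}

var options = jenkinsBuildRequest('localhost', 8080, 'pushy node');
console.log(options.path); // '/job/pushy%20node/build'
// http.request(options, ...) would then kick off the job, much like
// the proxy snippet earlier in this post.
```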

The intent of this post was to demonstrate an architecture you will most likely work with these days, even if you are a total Node-head. Apart from legacy back-end services, even some newly written ones may need to be in some other language (Java, C++, Go). Nobody in his right mind would write a video transcoding app in JavaScript (if that is even possible). Using message brokers will help you arrive at an architecture that looks like you actually meant it that way, rather than fell backwards into it (what I call ‘accidental architecture’). Even if it is a mullet, wear it like you mean(t) it!

© Dejan Glozic, 2014

NodeDay 2014

‘Unicorns farting rainbows’ from Eran Hammer’s presentation ‘Node Inc.’, artwork by @ChrisMCarrasco

This post is the continuation of my trip report that started with part 1 on Node.js on the Road. NodeDay was organized and hosted by PayPal in their building in San Jose, CA, on Feb 28th 2014. The focus of the one-day conference was Node.js in the enterprise, which was a great fit for what we are now doing with Node.js in IBM Rational. Add in the opportunity to talk with Bill Scott, Richard Ragan and Jeff Harrell about their use of Dust.js in production, and it all added up to a promising day.


I drove down 101 South from San Francisco to my hotel in San Jose in a calm evening. Last thing I remember before I fell asleep was muted howling of unusually strong wind outside my hotel window. Then I woke up into a morning of storm reports, flash floods, mud slides and falling rocks throughout the region. I must have been very tired.

Luckily, this was largely over by the time I finished breakfast, and PayPal was only 2 miles away.

We got badges and bracelets at the entrance. Badges doubled as agenda – simple and minimalistic, as you would expect from a Node.js event.


The event was kicked off by Bill Scott, Senior Director of UI Engineering, PayPal and our host for the day. The first keynote was by Eran Hammer, Senior Architect of Walmart Labs, who was singlehandedly responsible for pushing me off the fence and into Node.js development. The watershed event was Black Friday 2013, when 55% of Walmart traffic was from mobile devices, and handled by Node.js servers. The uneventful day for the pager-carrying developers cemented their trust that Node.js has ‘arrived’, resulting in Walmart doubling down on Node and spreading it to the rest of the enterprise.

Eran’s focus was on saving the typical enterprise Java developer from living a double life – passionately using Node at home, and wrestling with the legacy projects behind the firewall at work (illustrated with a contrast between unicorns and rainbows at the top of the post, and the Intranet wall of dead puppies – see the rest of the presentation). A common myth is that Open Source is all done by little guys, individuals and startups – just technology and no politics. In reality, Open Source is sustained and flourishing in the enterprise – nobody is using Open Source more than the enterprise (and Walmart is not exactly a startup either). Why? Because people see the leverage that working for large companies can give them. Hence, we need more ‘Inc.’ in our Node – the community should be one, instead of enterprise people being off on the side.

We (the enterprise) should not feel like we are on the fringes. For example, Joyent has been the best supporter of the project, and one of the biggest consumers of Node. The business of paying for support for Open Source projects is still alive and a valid one. With NPM in the news recently, it is clear that profitability is not a dirty word, it is the only way to sustain the community. Issues such as private NPM repositories that are so important for enterprises were initially hard to take seriously, but it is obvious it is understood now, judging by half a dozen solutions already available. It shows that in many cases, building on what is already available and perhaps sponsoring/adopting core contributors is a cost-effective way to get things done and ensure critical support. Doing everything in-house is an anti-pattern.

Eran Hammer, Senior Architect – Walmart Labs

TJ Fontaine, Node.js Project Lead from Joyent, mostly repeated his talk from ‘Node.js on the Road’, which was a good thing to do since there was no overlap in the crowd between the two events (not counting myself, of course :-). You can read my report in the previous post with no loss of data.

TJ Fontaine, Node.js Core Lead – Joyent and Bill Scott, Senior Director, UI Engineering – PayPal

The funniest part of TJ’s keynote is the extent he went to in order to ensure that he is not mixed up with the express.js author TJ Holowaychuk:

Adam Baldwin, Chief Security Officer – &yet, covered a touchy subject – Node.js security. With all the enthusiasm for the new way of working Node provides, the enterprise is very sensitive about security, and it is a big topic when considering Node adoption. Adam suggests protecting what makes you money, and considering availability an aspect of security (if the app is down, who cares if it is secure). The process of how you handle a vulnerability is the key (but you will screw it up anyway).

When it comes to security, the nodejs-sec mailing list is important because all security-related announcements go there (very few so far, knock on wood). In reality, most of the issues will be in Node modules, not the core – security is built into the core, it is not a bolt-on module. The enterprise is responsible for what you require() – there has to be a process in place for vetting modules, and for private NPM repositories. Linting is recommended (using npm install precommit-hook).

Security-related test cases are important (some orgs only write test cases for the happy path). One useful library that can help – retire.js (scans your web app or Node app for use of vulnerable JS libraries).

With all said and done, the greatest vulnerability in the enterprise?

Adam Baldwin, Chief Security Officer – &yet

Joe McCann, COO of The Node Firm, addressed the business case for Node. There are many signs of Node.js maturity – exponential growth in NPM, the number of books published, enterprises going into production with Node. He posited that a mission statement for Node should read: “An easy way to build fast and scalable applications”. As to ‘Why Now’, he suggested that legacy stacks are impeding innovation and making companies less competitive.

Joe suggested five business tenets when making the case for Node: innovation, productivity, developer joy, hiring/retaining talent and cost savings. He focused on a few success stories:

PayPal had velocity as the key driving point. Node provided a boost to the workflow, allowing faster iteration and innovation. After a long history of Java/C++, developers are energized by their jobs again. A frequently re-tweeted quote:

Groupon moved from a monolithic application that became a burden and a blocker to a service-oriented architecture. Node.js now handles the same amount of traffic using less hardware (direct cost savings – a music to every CFO’s ears). At the same time, page load times decreased by 50%. The use of a common layout service allowed them to retain the ability to make site-wide design changes quickly.

Walmart achieved Black Friday CPU utilization hovering around 1% (you want to be bored when on pager duty). In addition, quality of candidates after switching to Node.js increased – Node is a magnet for talent. This is important – Node could be a differentiator in hiring/retaining.

In Yahoo!, Node.js services are handling 2M requests/minute, 200 developers write Node, there are 500 internal and 800 external Node modules. Node brought speed and ease of development.

Back to business tenets, innovation is helped by building things faster and more efficiently. Productivity is helped by terse syntax and absence of thick layers of bureaucracy that slow developers down. “Productive developer is a happy developer”, which helps with developer joy and helps stability and retention. As for hiring/retaining talent, there is a magnetism of working on Node that is unique. Finally, cost savings are real and palpable: fewer developers writing apps in less amount of time, using less hardware. Using Node in multiple projects multiplies the savings.

This topic generated a lively discussion, and a member of the audience claimed that similar enthusiasm was heard 10 years ago around Ruby on Rails. So what is different with Node? The consensus seems to be that since JavaScript is gravity on the client, using JavaScript ‘all the way down’ provides benefits of simplification and a ‘single stack’ that cannot be duplicated in other stacks.

The reality of enterprise architecture is that mixed stacks (Node.js + Java/PHP) are everywhere. Right now, front end in Node.js is a safe bet, but there is growth in systems around Node.

At the request to cover some failures (to counter the success stories for fairness), Sam offered a counter-argument of why some shops cannot get there just yet – some libraries just do not exist yet (though time can help plug this gap). In the same vein, LinkedIn is still a huge Scala/Java shop, and in some use cases the Play framework is an easier way to move their developers forward than a massive switch to Node (LinkedIn still uses Node for their mobile services, and they are very fast).

The takeaway is that while Node.js is obviously at an inflection point and it is not hard to find success stories, it will never be a magic bullet that will solve all the problems. It is imperative to ensure that the problem to be solved is a good fit for Node.

Joe McCann, COO – The Node Firm

After lunch and an opportunity to socialize with actual people in the 3D world and full HD (what a concept!), Erik Toth, Principal Software Engineer at PayPal, took us through the fuzzy and complicated world of moving a large developer workforce to Node.js. He provided a set of hard-won tips on how to handle resistance and problems along the road. First off, accepting that your rationale might be non-technical is healthy – Node is as much a way of life as a technology. There will be critics; they will parrot questions, represent opinions they’ve read as their own, use outdated information and micro benchmarks, assume you have ulterior motives, and underestimate you (real engineers use something else). They will assume that because they haven’t heard of something, it doesn’t exist – all of this without even looking at the technology.

The secret to overcoming this? Influencers. You need to have an advocate who absolutely believes in it, and things will start happening. Setting limits is key – focus, and avoid inventing problems just to have something to solve, solving other peoples’ problems, or solving solved problems. Only fix what is broken. For PayPal, it was primarily velocity and productivity, so there was a need to scale people, product and technology. This requires an ecosystem: GitHub, npm, tooling.

Erik pointed at silos as a serious impediment (enterprises are like Game of Thrones). They breed distrust, create shallow alliances based on selfish goals, and black-box teams and code. He suggested eliminating the backlog as a concept (it is where features go to die).

A key takeaway for individual developers?

Individuals rarely solve real problems – benefit from the work of others instead. This is not revolutionary, just harder in the enterprise. Finally, the quality of code should not depend on access rights (“The only thing that should differentiate public and private modules is access rights”). Be proud of your code, expect copypasta, learn semver and teach others.

Oh, and the key takeaway from the Q/A: “We treated Node as something that will go away in 5 years, and we didn’t want to impose custom solutions to those that will come after us”.

Erik Toth, Principal Software Engineer – PayPal

Ian Livingstone, VP of Engineering of GoInstant, told a war story of using Node in two apps, and how the second time was informed by all the mistakes made in the first app. The first app was written at a time when Node was unstable. More importantly, a growing code base is a maintenance nightmare, which greatly underscores the importance of writing small modules instead of large monolithic apps (a common theme throughout NodeDay).

Ian offered many tips collected the hard way. For example, state must live in a class instance, not in modules. Furthermore, when a class has state, it must have both ‘initialize’ and ‘destruct’ methods. Another trick that helped was the use of JSON schema, which makes it easy to exhaustively test code. This includes schemas for Request and Response (run only in development, limited in production for performance reasons).
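The first tip can be sketched as follows (the class and method names are hypothetical, purely to illustrate the pattern of instance state with an explicit lifecycle):

```javascript
// State lives on the instance, never at module level, so two
// instances cannot interfere with each other.
function Session(id) {
	this.id = id;
	this.active = false;
}
// Explicit lifecycle pair: initialize allocates/starts...
Session.prototype.initialize = function () {
	this.active = true;
};
// ...and destruct tears down, making leaks easy to spot in tests.
Session.prototype.destruct = function () {
	this.active = false;
};

var a = new Session('a');
var b = new Session('b');
a.initialize();
console.log(a.active, b.active); // true false
a.destruct();
```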

On the topic of UI testing, Ian pointed out that multi-user integration tests are required to exercise user interfaces. Selenium was an obvious first choice, but the tests ended up verbose and very fragile, and took a long time to run. NPM to the rescue: a tool called ‘Barista’ (an extension to Mocha) proved to be a better solution. Lesson: the power at your fingertips through the Node ecosystem is incredible.

The need to avoid one giant code base cannot be stressed enough: like in Java’s “God object”, leaked responsibilities between modules made the code base into a giant ‘house of cards’ that was fragile and very hard to test, greatly diminishing trust in the code. Solution: small, simple and independent modules.

The second time around, applying all these hard-won lessons had a hugely positive effect – it changed the conversation. A small downside – longer install times via NPM (solved by using a proxy backed by S3). Node works better when it has one thing to do, favoring an architecture with many small and simple services. As a convenience, each service has a client library that wraps the transport concerns, making it easy to consume and test. Using upstart and supervisord provided for process lifecycle management.
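A sketch of what such a client wrapper might look like (all names here are hypothetical): by injecting the transport, consumers and tests never deal with HTTP directly.

```javascript
// Hypothetical client library for a 'builds' micro-service. The
// transport function is injected, hiding wire-level details from
// consumers and letting tests substitute a stub.
function BuildsClient(transport) {
	this.transport = transport;
}
BuildsClient.prototype.start = function (cb) {
	this.transport({ path: '/builds/build', method: 'POST' }, cb);
};

// In a test, a fake transport records calls instead of hitting the wire:
var calls = [];
var client = new BuildsClient(function (req, cb) {
	calls.push(req);
	cb(null, { statusCode: 200 });
});
client.start(function (err, res) {
	console.log(res.statusCode); // 200
});
console.log(calls[0].path); // '/builds/build'
```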

Takeaway – Node.js is power, and many problems have already been solved by the community.

Ian Livingstone, VP of Engineering – GoInstant

Trevor Norris, NodeJS Core Maintainer from Mozilla, took us on a journey much closer to the metal, under the layers of abstraction we are so used to in our daily work. The aim of the presentation was not to prove that all abstractions are bad, just to make their cost clear. He offered a number of performance tips related to intimate knowledge of the V8 engine that powers Node (example: inlined functions can’t contain more than 600 characters, including comments; inlined functions must be declared close to the call site, etc.). This level of detail is not very practical for everyday Node developers writing modules at a higher level of the stack, but it is a reality for Node Core developers who must worry about the performance envelope of the bedrock on which everything else resides.

In that spirit, Trevor continued to provide advice on the most obvious ‘sins of our abstractions’ – the fact that some operations like DNS and FS are in fact pseudo-async (they tie up a thread in the background) – and again sternly warned against the new execSync function, a very odd animal in the async world of Node.

Key takeaways: write code that you can maintain, write code that doesn’t crash – the slowest code is the code that doesn’t run. And finally, code for your case. Very few developers actually code with Node Core performance requirements in mind.

Trevor Norris, NodeJS Core Maintainer – Mozilla

Richard Rodger, Founder of NearForm, really drove home the point of keeping Node processes small (a motif of the entire conference). This ensures that services are easy to restart, which makes for a fault-tolerant system. With a number of services running in the system, a communication mechanism is required (messages). Processes need to be small (100 lines of code), a few screens to scroll, something that can fit inside your head. Frameworks are discouraged – use modules and composition.

A good message broker that supports pub/sub is very handy in this architecture – it makes the system loosely coupled and extendable. Micro-services also make rewrites cheap, avoiding project management ceremony. They localize bad coding choices, making them less of an issue because they are confined to a single micro-service.

Not all services are created equal – some are critical and require a lot of attention, some do not. By grouping services by quality, focus can be applied where it counts. As always, fixing a bug later in the cycle carries a higher cost, but with micro-services, the curve is less steep.

Richard provided some real-world examples of approaching a problem using micro-service architecture. The approach flows from user stories, to capabilities, services and messages. After that, decisions need to be made on how to run in production: what performance, scale, behavior and message transport are needed. Of course, production rollout need not be extensive – a good start can be to run all the micro-services on one machine and cluster only the ones that actually need it.

As a final comment, Richard agreed that micro-services can be built in any language – Node.js is simply a very suitable option, but not the only one.

Richard Rodger, Founder – NearForm
NodeDay audience – full house
Mr. JavaScript himself – Douglas Crockford, unaware of my paparazzi tendencies.

All in all, it was a well organized event and an amazing opportunity to talk to a number of key people in the Node.js community, all in the same day. In effect, it was like its topic – I was able to talk to more people and learn more in a short period of time with virtually no downtime – like a true Node app. Kudos to Bill Scott for being a gracious host and to Lenny Markus for ensuring it all went without a hitch.

© Dejan Glozic, 2014

Node.js on the Road – San Francisco


This post is a first of sorts – as with my review of RESS: The Essentials that opened my Reviews category, it will be the first in the Events category. In this context, the plural is actually justified because it covers not one but two events: Node.js on the Road on Feb 27th in San Francisco, hosted by New Relic, and NodeDay on Feb 28th in San Jose, hosted by PayPal. Both posts are now available – you can read the NodeDay post here.

I ended up at ‘Node.js on the Road’ by a stroke of luck. Ever since the days when EclipseCon was being hosted in Santa Clara Convention Center, my preferred travel plan was to fly to SFO, then drive down 101 South for the event. The 11:45am Air Canada YYZ-SFO flight is just perfect for this, and it gives me a free afternoon and evening in San Francisco. Just when I was trying to figure out what to do with the available time, along came a tweet about ‘Node.js on the Road’ to be held 6-9pm PT in the New Relic’s downtown SF location on Feb 27th. Seemed like a perfect way to spend the evening before heading South.

Looking for parking on a work day in downtown San Francisco is not for the faint of heart. I ended up parking in the underground garage of a nearby building. I arrived a bit ahead of time, but it turned out that 6pm was ‘when the food was about to arrive, and also the people’, not the actual start of the event. Just as well, because my jet-lagged mind welcomed some time to pipe down, and it also solved my dinner problem.

Node.js on the Road Agenda
Node.js, food and beer – a perfect combination

The event was opened by Bryan Cantrill, SVP Engineering of Joyent, the corporate steward of Node.js. It was a welcome introduction, explaining the format of the evening, which was a bit vague to me. He also said something counter-intuitive for many people skeptical of ‘JavaScript on the server’: “Node.js is the most debuggable dynamic environment”. Veterans of Java and the JVM would love to disagree, but everybody knows that the huge VM dumps created when a real-world production Java program gives up the ghost are a nightmare to analyze, so it is not hard to imagine where he is coming from.

Bryan Cantrill, SVP Engineering, Joyent

First on the list of presenters was TJ Fontaine, the newly minted Node.js Core team lead, replacing Isaac Schlueter who left to become the CEO of the new NPM startup (short for ‘Node Package Manager’, a registry largely responsible for the thriving Node.js ecosystem). His talk was centered around the road ahead – what is left to finish in Node before it can be declared v1.0, how companies and individuals using it can contribute and inform the process. There are multiple ways to make Node better at this point: reporting bugs as we are hitting them in production, contributing source for either enhancements or bug fixes, helping with the documentation (Node currently has a bit of a documentation problem). There are also multiple ways to ensure the community continues to thrive – Stack Overflow, IRCs, mailing lists, meetups. Not all of these options are equally appealing to all (TJ is not a fan of mailing lists or IRCs) – YMMV.

TJ also spent some time announcing what is coming in Node 0.12, which is ‘very imminent’. It was largely a rehash of what was already mentioned in a related blog post. Some of it is expected (performance improvements), some too detailed for my current knowledge of Node.js (TLS enhancements, buffers, streams), although I am sure there are developers who are awaiting these with anticipation. And then some of it is controversial. Take execSync – after all the effort to explain that Node.js is asynchronous to the core, being able to run blocking execution seems like a compromise with the devil (to TJ’s defense, he recommended extreme caution when using this function, for all the easily understandable reasons).

TJ Fontaine, Node.js Core Team Lead

Next on the agenda was Dav Glass, Node.js architect from Yahoo!. In my chat with the presenters prior to the event, I learned that the idea is to bring different presenters at each stop to tell the story of their own use of Node.js in real world projects. Yahoo! seems to be turning into a massive Node.js shop, and Dav illustrated it with the ‘We love Node’ slide. Yahoo! runs Node both in production and internally, using a huge Continuous Integration (CI) system called Manhattan. At the moment, some of the boxes in Yahoo! pages are served by Node (others use Java or PHP). As expected, the transition to Node in running systems is often gradual, not a big switch.

Dav’s war stories? In his experience, it is rarely Node’s problem – most of the time issues end up being caused by downstream modules deep in the bowels of the running system. Again, since the system needs to keep going, Node needs to be inserted into the giant old infrastructure. Yahoo! is using Node largely unmodified – the only minor changes being done around security and authentication.

Another common theme is the use of NPM to serve internal modules – not all Node modules are open sourced, requiring a way to serve them internally. One of the internal code quality requirements is that you are not allowed to share a library unless it has 80% test code coverage.

Dav demonstrated some impressive throughput numbers, from processes serving more than 1,000 rps (requests per second), to some extreme examples of hot-rod modules hitting insane figures of 25,000-36,000 rps.

Dav Glass, Node.js Architect, Yahoo!, in front of the ‘We Love Node!’ slide.

Jeff Harrell came to PayPal 2 years ago as the new Director of UI Engineering, hired to ‘disrupt the PayPal system’. His story was already somewhat familiar to me from the write-up on Project Kraken. Jeff added new observations, explaining that the goal of their wholesale move from Java to Node was every bit as much about developer performance as app performance. In what is turning into a familiar pattern, their topology has a Node front end hooked up to the legacy infrastructure that is either difficult to change or not a good fit for Node. PayPal is currently running 20+ Node apps, written by 200 developers (both new hires and Java converts).

According to Jeff, converting Java developers was an interesting process. He observed stages not unlike the Kübler-Ross stages of grief, from denial, anger and bargaining to acceptance and, finally, enthusiasm for Node at the end of the curve. Of course, not everything was perfect: PayPal, being a payment processing company, is a heavy user of SSL, and in Node v0.10 SSL cipher performance left much to be desired. This was just a point-in-time problem, however: in v0.11 and the upcoming v0.12 SSL performance is much improved, and it will continue to be a high priority.

Jeff Harrell, Director of UI Engineering, PayPal

The last part of the event was a Q/A panel with all the participants. A lot of questions were for Jeff Harrell – it seems the audience sees PayPal as a good model of a traditional Java shop moving to Node with a clear head, with a desire to improve productivity and make Lean UX real and practical, whereas they have a harder time identifying with Yahoo!’s more complex systems (a lot of participants work for startups that are years away from reaching Yahoo! scale). Some participants expressed the familiar concern about type safety (or lack thereof) in JavaScript. According to the panelists, smaller modules in Node tend to make type safety less of an issue. In fact, Node.js inherits from the Unix philosophy – it forces you to think small and combine (and write tests – smaller components are more testable).

As for real-world robustness, Yahoo! has provided open-source launchers (they are on GitHub) that run one process per core and restart processes if there are problems.

Some participants wanted to know how to get Node introduced into a skeptical organization. The panelists agreed that doing it instead of asking for permission is the way to go (and also making it provable – you can argue technology all day long, but when business people realize you can do it faster, they like the outcome).

Yahoo! is definitely past that phase – with thousands of engineers who all know JavaScript, convincing them is an easier task. With such brain power, it is easy to go crazy with inter-process communication and try to squeeze the last ounce of performance using binary protocols, but the recommendation is to start with HTTP and see if anything else is needed.

Q/A panel – TJ, Jeff and Dav

All in all, it was an enjoyable and not over-long evening, and a perfect prequel to the NodeDay on Friday. Read about NodeDay in my next post.

Nowhere to park on Spear Street

© Dejan Glozic, 2014