Of Mice and Men


The best laid schemes of mice and men
Often go awry.

Robert Burns, 1785.

Dear readers of this blog, you may have noticed a lapse in the rhythm I faithfully followed from the very beginning (a new post every Tuesday, more recently switching to Monday). You may have inferred that I am ‘recharging’, away, or simply taking a holiday break.

Far from it. I had a plan to look back at the blog and the topics I have covered over the last six months. For example, looking back at the post on databases, I have second thoughts about using an SQL database for a problem that is a better fit for something like MongoDB. Our practical experience is that schema changes in a rapidly evolving environment are a real drag, and being able to evolve the data on demand in a schema-less database would be really beneficial to us.

Other posts are holding up. We are as averse to the Extreme AJAX as ever, but we are now seeing use cases where some client-side structure (possibly using Backbone.js) would be beneficial, particularly when using Web Sockets to push updates to the client. We are still allergic to frameworks and their aggressive takeover of your life and control over code. And we are still unhappy with the horrible hacks required to implement client side templating. One day when Web Components are supported by all the modern browsers, we will remember this time as an ugly intermezzo when we had to resort to a lot of JavaScript in lieu of first class Web composition that Web Components provide (you can bookmark this claim and come to laugh at me if Web Components do not deliver on their promise).

As I said, I planned to cover all that, and I also lined up a holiday project. We had recently implemented nice dashboards for our needs in Jazz Platform and Jazz Hub, and I was curious how hard it would be to re-implement the server side using Node.js and MongoDB (instead of JEE and an SQL DB). The client side is already fine, using jQuery, Bootstrap and Require.js, and does not need to change. I also wanted to see how hard it would be to configure Express.js to use the LinkedIn fork of the Dust.js templating library. The PayPal team had a lot of fun recently moving their production applications to exactly that stack, and I was inspired by Bill Scott’s account of that effort.
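For the curious, the wiring I expected to experiment with looks roughly like this. This is a minimal sketch of my own, not something taken from the PayPal effort: the use of the consolidate package to adapt Dust to Express, and the view and model names, are my assumptions.

```javascript
// Hypothetical wiring of Express.js with Dust templating – a sketch only.
// Assumes the 'express', 'consolidate' and 'dustjs-linkedin' npm modules
// are installed; consolidate adapts many template engines to Express.
var express = require('express');
var cons = require('consolidate');

var app = express();

app.engine('dust', cons.dust);           // render *.dust files with Dust
app.set('view engine', 'dust');
app.set('views', __dirname + '/views');

app.get('/', function (req, res) {
  // would render views/index.dust with this model
  res.render('index', { title: 'My Dashboard' });
});

app.listen(3000);
```

The appealing part is that the same Dust templates can in principle be rendered on the client as well, which is one of the reasons LinkedIn picked Dust in the first place.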

The plan was to write the blog post on Sunday the 22nd, and play with Node.js over the next two weeks. On Saturday the 21st, eastern US and Canada were hit by a nasty ice storm. The storm deposited huge amounts of ice on the electrical wires and trees in midtown Toronto where I live. Ice is very heavy, and after a while the huge mature trees that are otherwise great to have around started giving up, losing limbs and knocking down power lines in the process. By 3am on Sunday the 22nd we lost power, along with about a million other people in Toronto at the peak.

Losing power in a low rise means no heat, no Internet or cable, no cooking and no hot water (add ‘no water’ for a high rise). We conserved our phones, following the progress of Toronto Hydro’s power restoration using the #darkTO hashtag on Twitter (everything else was useless). The Toronto Hydro outage map crashed under load, resulting in Toronto Hydro posting the map via Twitter picture updates (kudos to Twitter’s robust infrastructure, which is more than I can say for Toronto Hydro’s servers). After a while, we drained our phones. I charged my phone from a fully charged MacBook Pro (an operation I was able to repeat twice before the MacBook Pro lost its juice). We had four laptops to use as door stops/phone chargers. I could have read some eBooks on a fully charged iPad, but somehow was not in the mood. Dinner was cold sandwiches by candlelight. Not as romantic in a cold room.

By Sunday night, the temperature in the apartment had dropped to 19.5C (that’s 67F for my American friends). We slept fully clothed. On Monday morning we packed up and went to the IBM building in Markham, which had power, to shower, get some core temperature back, eat a warm meal and charge all the devices. We also used the opportunity to book a hotel room in downtown Toronto (no big trees to knock down power lines – yay for the big soul-less downtown buildings). By the time we went back home to pack, the room temperature had dropped to 18C. The temperature outside fell to a bitterly cold -10C, going to -14C overnight.

Overnight on Monday, the power was restored. We returned on Tuesday morning, only to find Internet and cable inoperative. Estimated time for repair – 22 hours. In addition, our building is somewhat old and its hot water furnace does not like going from 0 to full blast quickly, resulting in a temperature 2-3 degrees lower than usual. It was only a matter of time until I succumbed to the common cold.

Internet and cable were restored on Wednesday, 4 days after the outage started. Over the last couple of days the winter outside has let up a bit, allowing the ice on the trees to melt and the furnace to bring the temperature back to normal levels. My cold is on the downswing, enough for me to write this blog post. I will still need to wait for the cold-induced watery eyes to return to normal before taking the planned photos for my 2014 Internet profiles.

Why am I writing all this? Just to show you that the real takeaway message for 2013 is not the horrible first-world problems we grapple with daily (should I use Node.js or not, should I use MongoDB or CouchDB or just stay with RDBs), but that most of us are privileged to live with the trappings of civilization, allowing us to not worry about warm water, heat, food and clean clothing. On Wednesday, when my daughter was seriously unhappy that the Internet was not back yet, I felt a bit like a member of ‘the most ungrateful generation’ from the Louis C.K. routine (“Everything is amazing and nobody is happy”). As I am writing this, there are still fellow Torontonians without power. Their houses are probably icicles by now. My heart goes out to them, and I hope they get power as soon as possible.

As for myself, I can tell you that the fact that writing Node.js code may land you in callback hell didn’t mean much when I was sitting in a cold room, lit by candlelight, frantically refreshing the #darkTO thread while my battery slowly drained. A lesson in humility, and a reason to count your blessings, delivered at the time of year when we normally see re-runs of It’s a Wonderful Life on TV (if you still watch TV, that is).

Therefore, all of you reading this, have a great 2014! If you are in a warm room, have clean clothes and a warm meal, and can read this (i.e. your wifi is operational), your life is amazing and you are better off than many of your fellow human beings. Now go back to the most pressing topics of your lives, like this one:

© Dejan Glozic, 2013

Design for the T-Rex Vision


Note: In case you missed it, there is currently a chance to get a free eBook of RESS Essentials from Packt Publishing. All you need to do is check the book out at the Packt Publishing web site, drop a line about the book in the comments section of my article along with your email address, and enter a draw for one of the three free books. And now, back to regular programming.

As a proud owner of a rocking home theater system, I think I played Jurassic Park about a gazillion times (sorry, neighbors). There I got some essential life lessons, for example that T-Rex’s vision is based on movement – it does not see you if you don’t move. While I am still waiting to put that knowledge into actual use, it got me thinking recently about design, and the need to update and change the visuals of our Web applications.

Our microwave recently died, and we had to buy a new one. Once it was installed, the difference was large enough to make me very conscious of its presence in the kitchen. But a funny thing happened – after a few days of getting used to it, I stopped seeing it. I mean, I did see it, otherwise where did I warm up my cereal in the morning? But I don’t see it – it just blends into the overall kitchen-ness around me. There’s that T-Rex vision in practice.

Recently an article by Addison Duvall – Is Cool Design Really Uncool – caught my attention because it posits that new design trends get old much too soon thanks to their quick spreading via the Internet. This puts pressure on designers to constantly chase the next trend, making them perennial followers. Addison offers an alternative (the ‘true blue’ approach) in which a designer ‘sticks to his/her guns’ as a personal mark or style, but concedes that only a select few can actually be trend setters instead of followers. The rest, by turning into purveyors of a ‘niche’ product and thanks to their annoying need to eat and clothe themselves, are nudged back into the follower crowd.

According to Mashable, Apple reported that 74% of their devices are already running iOS 7. This means that 74% of iPhone and iPad users look at flat, Retina-optimized and non-skeuomorphic designs every day (yours truly included). When I showed examples of iOS 7 around the office after I updated to it, I got a mixed reaction, with some lovers of the old-school Apple design truly shocked, and in a bad way. Not myself, though – as a true Steven Spielberg disciple, I knew it would be just a matter of time until this became ‘the new normal’ (whether there will be tears on my cheeks and the realization that ‘I love Big Brother’ is still moot). My vision is based on movement, and after a while, iOS 7 stopped moving and I stopped seeing it. You know what I did see? The Retina iPad that arrived with iOS 6 installed, and the horrible, horrible 3D everywhere (yes, I missed the iPad Air by a month – on the upside, it is company-provided, so the price is right). I could not update to iOS 7 soon enough.

I guess we have established that for many people (most people, in fact), the only way around T-Rex vision is movement. That’s why we refresh our designs every so often. In the words of Don Draper and his ‘Greek mentor’, the most exciting word in advertising is ‘new’. Not ‘better’, ‘nicer’, ‘more powerful’, just ‘new’ – different, not the same old thing you have been staring at forever (well, for a year anyway, which seems like eternity in hindsight). Designs do not spoil like a (Greek) yogurt in your fridge, they just become old to us. This is not very different from my teenage daughter staring at a full closet and declaring ‘I have nothing to wear’. It is not factually true, but in her teenage T-Rex eyes, the closet is empty.

OK then, so we need to keep refreshing the look of our Web apps. What does that mean in practice? Nothing good, it turns out. You would think that in a true spirit of separation of style and semantics, all you need to do is update a few CSS files and off you go.

Not so fast. You will almost certainly hit the following bummers along the way:

  1. Look is not only ‘skin deep’. In the olden days when we used Dojo/Dijit, custom widgets were made out of DOM elements (DIVs, SPANs) and then styled. This was when these widgets needed to run on IE6 (shudder). It means that updating the look requires changing the DOM structure of the widgets, and the accompanying JavaScript.
  2. If the widget is reused, there is a high likelihood that upstream clients have reached into the DOM and made references to its DOM nodes. Why? Because they can (there is no ‘private’ visibility in JavaScript). I really, really hate that and cannot wait for shadow DOM to become a reality. Until then, try updating one of those reused widgets and wait for the horrified upstream shrieks. Every time we moved to a new MINOR version of Dojo/Dijit, there was blood everywhere. This is no way to live.
  3. Aha, you say, but the newer way is so much better. Look at Bootstrap – it is mostly CSS, with only 8kB of minified, gzipped JavaScript. Yes, I agree – the new world we live in is nicer, with native HTML elements directly styleable. Nevertheless, as we now use Bootstrap 3.0, we wonder what would happen if we modified the vanilla CSS and then tried to move to 4.0 when it arrives – how much work will that be? And please don’t start plugging Zurb Foundation now (as in “ooh, it has more subtle styles, it is easier to skin than Bootstrap”). I know it is nice, but in my experience, nothing is ever easy – it would just be difficult in a somewhat different way.
  4. You cannot move to the new look only partially. That’s the wonder of the new app-based, Cloud-based world that never ceases to amaze me. Yes, you can build a system out of loosely coupled services running in the Cloud, and it will work great as long as these services are giving you JSON or some other data. But if you try to assemble a composite UI, the browser is where the metaphorical rubber meets the road – woe unto you if your components are not all moved to the new look in lockstep – you will have a case of the Gryphon UI on your hands.

I wish I had some reassuring words for you, but I don’t. This is a hard problem, and it is depressing that the federated world of the Cloudy future didn’t change that complexity even a bit. In the end, everything ends up in a single browser window, and needs to match. Unless all your apps adopt something like Bootstrap, leave the default styles unchanged and move to each new version diligently, you will suffer the periodic pain of re-skinning so that the T-Rexes that use your UIs can see them again (for a short while). It also helps to have sufficient control over all the moving parts so that coordinated moves can actually be made and followed through, in the way LinkedIn moved from several stacks and server-side rendering to shared Dust.js templates on the client. Taking advantage of HTML5, CSS3 and shadow DOM will lessen the pain in the future by increasing the separation between style and semantics, but it will never entirely eliminate it.

If you think this is limited to the desktop, think again. Yes, I know that in theory you need to be visually consistent only within the confines of your mobile app, which is a much smaller world, but you also need to not look out of place after a major mobile OS update. Some apps on my iPhone have caught up, some still look awkward. The Facebook app has been flat for a while already. Twitter is mostly content anyway, so they didn’t need to do much. Recently the Dropbox app refreshed its look and is now all airy, with lightweight icons and hairlines everywhere, while the slate.com app is still chunky with 3D gradients (I guess they didn’t budget for frequent app refreshes). Oh, well – I guess that’s the price of doing pixel business – you need to budget for new virtual clothes regularly.

Oh, look – my WordPress updated the Dashboard – looks so pretty, so – new!

© Dejan Glozic, 2013

Free Copies of RESS Essentials by Packt Publishing


As my blog claims in the title, I care 70% about Web development and 30% about Web design. Nevertheless, 100% of me is always hungry for knowledge, and I devour large quantities of articles, books, blogs and tweets as part of my balanced diet. I was thinking: there is no reason my blogging and my reading should be two separate compartments – I could blog about what I read. As a result, I am starting a ‘Reviews’ category of my blog. And what better way to start this category in the midst of the holiday season than with a free book giveaway!

As I have recently blogged about RESS, the good people from Packt Publishing have sent me a copy of RESS Essentials to review. While I am busily reading the book, they have more free copies for a book giveaway.

Three (3) lucky winners stand a chance to win digital copies of the book. According to Packt, in the book you will find:

  • Easy-to-follow tutorials on implementing RESS application patterns
  • Information flow diagrams that will help you understand various RESS architectures with ease
  • Guidance on performing browser feature detection and storing this information on the server side

How to Enter?

All you need to do is head on over to the book page and look through the product description of the book. When done, come back and drop a line via the comments below this post to let us know what interests you the most about this book. And that’s it.

Deadline:

The contest will close in one or two weeks’ time, depending on the response. Winners will be contacted by email, so be sure to use your real email address!

© Dejan Glozic, 2013

Sitting on the Node.js Fence


I am a long-standing fan of Garrison Keillor and his Prairie Home Companion. I fondly remember a Saturday on which he delivered his widely popular News From Lake Wobegon, which contained the following conundrum: “Is ambivalence a bad thing? Well, yes and no”. It also accurately describes my feelings towards Node.js and the exploding Node community.

As I am writing this, I can hear the late, great Joe Strummer singing Should I Stay or Should I Go in my head (“If I go there will be trouble, and if I stay it will be double”). It is really hard to ignore the passion that Node.js has garnered from its enthusiastic supporters, and if the list of people you follow on Twitter is full of JavaScript lovers, Node.js becomes the Kim Kardashian of your Twitter feed (as in, every third tweet is about it).

In case you were too preoccupied by, say, living life to the fullest to notice or care about Node.js: it is a server-side framework written in JavaScript, sitting on a bedrock of C++ and executed by Google’s Chrome V8 engine. A huge cottage industry has sprung up around it, and whatever you may need for your Web development, chances are there is a module that does it somewhere on GitHub. Two years ago, in a widely re-tweeted event, Node.js overtook Ruby on Rails as the most watched GitHub repository.

Recently, in a marked trend of maturation from exploration and small side projects, some serious companies have started deploying Node.js into production. I heard all about how, this US Thanksgiving, most traffic to Walmart.com came from mobile devices and was handled by Node.js servers.

Then I heard from Nick Hart how PayPal switched from JEE to Node.js. A few months ago I stumbled upon an article about Node.js taking over the enterprise (like it or not, to quote the author). I am sure there were more heartwarming stories traded at the recent #NodeSummit. Still, not to get carried away: apart from mobile traffic, Node.js only serves a tiny panel of the Walmart.com web site, so it is more of a ‘foot in the door’ than a total rewrite for the desktop. Even earlier, the LinkedIn engineering team blogged about their use of Node.js for LinkedIn Mobile.

It is very hard to cut through the noise and get to the core of this surge in popularity. When you dig deeper, you find multiple reasons, not all of them technical. That’s why discussions about Node.js sometimes sound to me like Abbott and Costello’s ‘Who’s on First’ routine. You may be discussing technology while other participants are discussing hiring perks, or the skill profile of their employees, or context switching. So let’s count the ways in which Node.js is popular, and note how only one of them is actually technical:

  1. Node.js is written from the ground up to be asynchronous. It is optimized to handle problems where there is a lot of waiting for I/O. Processing tasks that are not waiting for I/O while others are queued up can increase throughput without adding the processing and memory overhead of the traditional ‘one blocking thread per request’ model (or, even worse, ‘one process per request’). The sweet spot is when all you need to do is lightly process the result of I/O and pass it along. If you need to spend more time on procedural number-crunching, you need to spawn a ‘worker’ child process, at which point you are re-implementing threading.
  2. Node.js uses JavaScript, allowing front-end developers already familiar with it to extend their reach into the server. Most new companies (i.e. not the Enterprise) have a skill pool skewed towards JavaScript developers (compared to the Enterprise world, where Java, C/C++, C# and similar are in the majority).
  3. New developers love Node.js and JavaScript and are drawn to it like moths to a candelabra, so having Node.js projects is a significant attraction when competing for resources in a developer-starved new IT economy.

Notice how only the first point is actually engineering-related. This is so prevalent that it can fit into a tweet, like so:

Then there is the issue of storage. There have been many libraries made available for Node.js to connect to SQL databases, but that would be like showing up at an Arcade Fire concert in a suit. Oh, wait, that actually happened:

Never mind then. Back to Node.js: why not go all the way and use something like MongoDB, storing JSON documents and using JavaScript for DB queries? Now not only do front-end developers not need to chase down server-side Java guys to change the served HTML or JSON output, they don’t have to chase DBAs to change the DB models either. My point is that once you add Node.js, it is highly likely you will revamp your entire stack and architecture, and that is a huge undertaking even without deadlines to spice up your life with a wallop of stress (making you park in the wrong spot in your building’s garage, as I did recently).

Now let’s try to apply all this to a talented team in a big corporation:

  1. The team is well versed in both Java (on the server) and JavaScript (on the client), and has no need to chase down Java developers to ‘add a link between pages’ as in PayPal’s case. The team also knows how to write decent SQL queries and handle the model side of things.
  2. The team uses IDEs chock-full of useful tools, debuggers, incremental builders and other tooling that make for a great workflow, particularly when the going gets tough.
  3. The team needs to deliver something new on a tight schedule and is concerned about adding to the uncertainty of ‘what to build’ by also figuring out ‘how to build it’.
  4. There is a fear that things the team has grown to expect as gravity (debugging, logging etc.) are still missing or immature.
  5. While there is a big deal made about ‘context switching’, the team does it naturally – they go from Java to JavaScript without missing a beat.
  6. There is a whole slew of complicated licensing reasons why some Open Source libraries cannot be used (a problem startups rarely face).

Mind you, this is a team of very bright developers who eat JavaScript for breakfast and feel very comfortable around jQuery, Require.js, Bootstrap etc. When we overlay the reasons to use Node.js on this team, the only one that still remains is handling a huge number of requests without paying the RAM/CPU tax of blocking I/O and one thread per request.

As if things were not murky enough, the Servlet 3.1 spec supported by JEE 7.0 now comes with asynchronous processing of requests (using NIO) and also protocol upgrade (from HTTP/S to WebSockets). These two things together mean that the team above has the option of using non-blocking I/O from the comfort of their Java environment, and that they can write non-blocking I/O code that pushes data to the client over WebSockets. Both are currently the key technical reasons to jump ship from Java to Node.js. I am sure Oracle added them feeling the heat from Node.js, but now that they are here, they may be just enough of a reason not to make the transition yet, considering the cost and overhead.

Update: after posting the article, I remembered that Twitter has been using Netty for a while now to get the benefit of asynchronous I/O coupled with the performance of the JVM. Nevertheless, the preceding paragraph is for JEE developers who can now easily start playing with async I/O without changing stacks. Move along, nothing to see here.

Then there is also the ‘Facebook effect’. Social scientists have noticed the emergence of a new kind of depression caused by feeling bad about your life compared to carefully curated projections of other people’s lives on Facebook. I am yet to see a posting like “my life sucks and I did nothing interesting today” or “I think my life is passing me by” (I subsequently learned that Sam Roberts did say that in Brother Down, but he is just pretending to be depressed, so he does not count). Is it possible that I am only hearing about Node.js success stories, while failed Node.js projects are quietly shelved never to be spoken of again?

Well, not everybody is suffering in silence. We have all heard about the famous Walmart memory leak that is now thankfully plugged. Or, since I already mentioned MongoDB, how about going knee-deep into BSON to recover your data after your MongoDB has had a hardware failure? Or the brouhaha about the MongoDB 100GB scalability warning? Or a sane article by Felix Geisendörfer on when to use and, more importantly, when NOT to use Node.js (as a Node.js core alumnus, he should know better than many)? These are all stories from the front wave of adopters, and the inevitable rough edges will be filed down over time. The question is simply – should we be the ones to do the filing, or can somebody else do it for us?

In case I sound like I am justifying reasons why we are not using Node.js yet, the situation is quite the opposite. I completely love the all-JavaScript stack and play with it constantly in my pet projects. In a hilarious twist, I am acting like a teenage Justin Bieber fan, and the (younger) lead of the aforementioned team needs to bring me down with a cold dose of reality. I don’t blame him – I like the Node.js-based stack in principle, while he would have to burn the midnight oil making it work at enterprise scale, and debugging the hard problems that are inevitable with anything non-trivial. Leaving aside my bad case of FOMO (Fear Of Missing Out), he will only be persuaded by a problem where Node.js is clearly superior, not just another way of doing the same thing.

Ultimately, it boils down to whether you are trying to solve a unique problem or just play with a new and shiny technology. There is a class of problems that Node.js is perfectly suitable for. Then there are problems where it is not a good match. Leaving your HR and skills reasons aside, the proof is in the pudding (or as a colleague of mine would say, ‘the devil is in the pudding’). There has to be a unique problem where the existing stack is struggling, and where Node.js is clearly superior, to tip the scales.

While I personally think that Node.js is a big part of an exciting future, I have no choice but to agree with Zef Hemel that it is wise to pick your battles. If we were rebuilding something for the third time and knew exactly what we were building, Node.js would be a great way to make that project fun. However, since we are in ‘white space’ territory, choosing tried and true (albeit somewhat boring) building blocks and focusing the uncertainty on what we want to build is a good tradeoff.

And that’s where we are now – with me sitting on the proverbial fence, on a constant lookout for that special and unique problem that will finally open the Node.js floodgates for us. When that happens, you will be the first to know.

© Dejan Glozic, 2013