Book Review: RESS Essentials


It is with mixed emotions that I have decided to start reviewing books as part of my blog activity. Our industry is so fast moving, so ephemeral, that books seem to fight a losing battle to stay relevant the day they are published, never mind 6 months or a year down the road. Books on actual concrete products are very often little more than the missing manuals (so much so that O’Reilly actually has a ‘Missing Manuals’ series, with a ‘The book that should have been in the box’ tagline – burn!). Just check the new titles – ‘Windows 8.1’, ‘HTML5’, ‘iPad’!!

A somewhat better fate is reserved for books that attempt to describe an approach or technique rather than any particular platform, language or library. Sound principles tend to outlive the actual building blocks used to implement them. The book I am covering in this review falls into that category, since it addresses a fairly new approach to wrestling with the explosion of Web clients.

The book is ‘RESS Essentials’ by Joanna Krenz-Kurowska and Jerzy Kurowski, brought to us by Packt Publishing. I already wrote about RESS in RESS to the Rescue. In a nutshell, responsive Web design (or RWD) is used to make a Web application adapt to the various aspects of the client, reformatting the experience to best fit each client in question (be it desktop, tablet or phone). RESS (REsponsive design + Server Side components) goes further by involving the server. The rationale is that CSS can hide or reformat content depending on the client, but everything is still sent by the server, whether it will be needed or not. RESS attempts to address the performance side of responsive design by choosing what to send before any formatting decisions are made on the client.

Even in Internet years, RESS is a young technique, and a very good illustration of that is that the authors felt the need to devote most of chapter 1 to the controversies surrounding RESS. To put it simply, not everybody is convinced that RESS is a good idea. Nevertheless, enough people are (including the godfather of RESS, Luke Wroblewski – incidentally another Pole; in a bit of Polish trivia, my photographer told me that his last name means ‘sparrow-like’). As such, the rest of the book assumes you have been sufficiently convinced that RESS is a good and useful thing and you want to get on with it.

Chapter 1 also brought an unexpected insight into RESS and responsive design: it can actually reach the other end of the client spectrum. When it comes to RWD, most of the focus is on how your site reacts to non-desktop clients. However, for eons desktop design assumed a certain maximum width and happily ignored the side bands, making browser window maximization futile. On high quality, large monitors these side bands seem like a waste of real estate (compared to how desktop applications or games use it). RESS and responsive Web design can unlock the full potential of your high end monitor – there is no reason why we cannot add rules that keep adding more content as you maximize your browser on a 27” or 30” monitor.

In chapter 2, the authors pay homage to the client side by diving into a real world example, using Gridpak for the responsive grid and Twitter’s Bootstrap for everything else.

The real action starts in chapter 3, where the focus moves to the server – the key value add of the RESS technique. In order to determine what to send to the client, server-side device detection is used. The two most popular solutions discussed in the chapter are WURFL and DeviceAtlas. There is a problem – these services are not available for free, or if they are, the free (or even cheap Cloud-based) version is not a viable option for anything but a test site. For completeness, the authors cover some Open Source alternatives such as YABFDL (also known as Dave Olsen’s Detector) – more modest in its database but still usable and free with a permissive license (and with coverage that keeps up with new devices through the use of the modernizr.js library).
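
For flavour, here is roughly what server-side detection boils down to, whichever library supplies the device knowledge. This is my own toy sketch in Node.js-flavoured JavaScript (the book’s examples are in PHP), with a naive user-agent check standing in for WURFL or Detector:

    // Toy device classification - a real RESS setup would consult WURFL,
    // DeviceAtlas or Dave Olsen's Detector instead of these naive regexes.
    function classifyDevice(userAgent) {
      var ua = (userAgent || '').toLowerCase();
      if (/ipad|tablet|kindle|silk/.test(ua)) {
        return { type: 'tablet', maxImageWidth: 1024 };
      }
      if (/iphone|ipod|blackberry|windows phone/.test(ua) ||
          (/android/.test(ua) && /mobile/.test(ua))) {
        return { type: 'phone', maxImageWidth: 480 };
      }
      return { type: 'desktop', maxImageWidth: 1920 };
    }

    // Inside a request handler you would call it with the User-Agent header:
    // var device = classifyDevice(req.headers['user-agent']);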

Chapter 4 puts server side detection into action: different markup for different clients (to optimize for ability, support for modern features and JavaScript), different media sizes based on screen width, and manual selection of page versions. The chapter mostly deals with the first of the three goals, with plenty of code snippets using both WURFL and Dave Olsen’s Detector.
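
The book’s snippets are in PHP; purely as an illustration of the pattern (and not the authors’ actual code), here is how the same decision could look in a bare-bones Node.js server, reusing the classifyDevice function from the previous sketch. The template file names are invented for the example:

    var http = require('http');
    var fs = require('fs');
    // classifyDevice is the toy user-agent classifier from the previous sketch
    var classifyDevice = require('./classify-device');

    http.createServer(function (req, res) {
      var device = classifyDevice(req.headers['user-agent']);
      // Serve lean markup to phones, richer markup to everything else
      var template = device.type === 'phone' ? 'page-mobile.html' : 'page-full.html';
      fs.readFile(template, 'utf8', function (err, html) {
        if (err) {
          res.writeHead(500);
          return res.end('Could not load template');
        }
        res.writeHead(200, { 'Content-Type': 'text/html' });
        res.end(html);
      });
    }).listen(3000);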

Chapter 5 moves on to dealing with image resolution on the server. Images comprise a large portion of the overall site payload, and sending images that are too large or of unnecessarily high resolution is costly both performance-wise and by eating into the monthly bandwidth allowance of mobile users. The RESS solution for handling image sizes and resolution is in line with the overall approach and has some interesting properties compared to the known alternatives.
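
Without giving away the book’s PHP implementation, the gist of the approach can be sketched in a few lines: rather than sending one big image and letting CSS scale it down, the server picks the smallest pre-generated rendition that satisfies the detected device width. The rendition list below is made up for illustration:

    // Hypothetical pre-generated renditions of the same image
    var renditions = [
      { maxWidth: 480,  file: 'hero-480.jpg'  },
      { maxWidth: 1024, file: 'hero-1024.jpg' },
      { maxWidth: 1920, file: 'hero-1920.jpg' }
    ];

    // Pick the smallest rendition that is still big enough for the device
    function pickRendition(deviceWidth) {
      for (var i = 0; i < renditions.length; i++) {
        if (renditions[i].maxWidth >= deviceWidth) {
          return renditions[i].file;
        }
      }
      return renditions[renditions.length - 1].file;
    }

    console.log(pickRendition(480));  // 'hero-480.jpg' - what a phone gets
    console.log(pickRendition(1920)); // 'hero-1920.jpg' - what a desktop gets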

As said, all this effort around sending right-sized images to the client is at least partially driven by performance considerations. In chapter 6, the authors note that client screen size is not the only parameter to take into account – connection bandwidth is just as important. Unfortunately, as of today there are no media queries for connection quality and bandwidth that could be used to tailor image size and compression. In their absence, the authors offer a number of techniques to optimize images or reduce the need to use them in the first place.

One thing I really liked in this chapter was a graph showing that, as screen size shrinks (and with the RESS solution for images enabled), CSS and JavaScript files start to overtake images, to the point where images were only a third of the CSS/JavaScript payload at iPhone portrait resolution (320px). This was a good reminder that RESS needs to be combined with other good practices if performance is a concern. I particularly liked the tidbit that the average webpage reached 1585 KB on September 15, 2013. That’s a lot of KBs to send to any client, particularly mobile phones, and it underlines the need for techniques such as RESS to improve Web performance.

Changing gears, chapter 7 switches back to the client and focuses on using and extending jQuery and Bootstrap to make your site responsive when dealing with elements such as tables and carousels. jQuery has essentially become the equivalent of gravity for JavaScript libraries (i.e. you cannot avoid it and must play nice with it), and Bootstrap is insanely popular in its third incarnation. While not necessarily limited to RESS (the code in this chapter could easily be used in a client-only RWD solution), we should not forget that RESS is a combination of server and client side techniques for responsive design, and the examples in this chapter are definitely useful for Web developers who seek responsive design solutions.

I am a bit puzzled by chapter 8, which introduces REST and combines it with RWD. It is not strictly RESS in that it simply uses Ajax to call REST services. There is nothing wrong with what is proposed in this chapter, but REST is another kind of ‘gravity’ for many teams including mine, and tossing it here on the pile with RESS and RWD seems odd, particularly because it involves PHP in the code examples. Since this was the last chapter in the book, I was left wanting a chapter more fit for wrapping it all up and reinforcing the key messages of the book (an epilogue, as it were).

In the end, where does it leave me, the reader and the reviewer? I would say that RESS Essentials is part of the new breed of ‘mini-books’, considering its size (120 pages including the index). It is definitely not the first of its kind – Responsive Web Design by Ethan Marcotte is 143 pages, and Mobile First by Luke Wroblewski is 123 pages. This class of books covers areas and topics that are too complex to be covered in one blog post or article, but not so complex as to require a book that can double as a weapon and consume years of the author’s life. The format definitely allows hot topics to be covered in a time frame that at least gives them a fighting chance of seeing some decent use before the inevitable progress renders them obsolete.

I think the authors did a good job covering the ground on the topic of RESS, and I definitely learned a lot by reading the book. It is part of the ‘Community Experience Distilled’ series, and it definitely fits that description: the authors provide a number of examples and code snippets that look like something they arrived at through personal experience on real world projects. Whether RESS itself has staying power remains to be seen. I am concerned about the limited choices when it comes to server side agent detection (at least in the Open Source world), which is why Dave Olsen’s Detector effort should be commended.

When it comes to code snippets, I was more interested in the general concepts than the server side code examples, because the latter were universally in PHP. There is nothing wrong with PHP, but I would prefer examples in Java or, even better, in JavaScript using Node.js. Using Java or Node.js would also work better with the final chapter about combining REST with RESS.

As far as the format and language are concerned, I have two minor nits to pick. While the book is generally well written, Joanna’s and Jerzy’s Polish background occasionally creeps into their sentence structure. Not being a native speaker myself, I cannot quite put my finger on why I noticed; I just felt that way as I was reading. Another mild nit is that the PDF version of the eBook results in too small a font, and when I tap to zoom in, the text is single-spaced, making it less than perfect for reading on iPads. On the flip side, single spacing made for faster vertical scrolling. YMMV if you use a different reader or format.

That’s it for my first review – next on the docket is Smashing Magazine Book #4 – now that’s a book you can use for self defense if need be!

© Dejan Glozic, 2014

Rocket Science vs System Integration


Note: In case you missed it, there is still a chance to get a free eBook of RESS Essentials from Packt Publishing. All you need to do is check the book out at the Packt Publishing web site, drop a line about the book in the comments section of my article along with your email address, and enter the draw for one of the three free e-books. And now, back to regular programming.

When I was in the army, my mother told me that she kept noticing soldiers on the streets. I mean, they didn’t suddenly appear, they were always there, but she only started noticing them because of me. Once I returned from my regular 12 month service, soldiers around her receded into oblivion again.

This is a neat trick that you can try yourself. Have you noticed that you start spotting other cars like your own on the road as soon as you buy one, even though you couldn’t have cared less about them before (unless you really, really wanted that car, in which case you were spotting it as an object of desire, anticipating the purchase)? If you care to learn another useless phrase, the scholarly term for this phenomenon is perceptual vigilance.

When we were building web apps in the 00s, a number of technologies were considered a ‘solved problem’. You got yourself an Apache server, installed it on a Linux box, strapped on Tomcat for servlets (all right, you could use PHP instead) and added MySQL for storage. Of course, there were commercial versions that offered, well, ‘bigger’, but the basic principles were the same. Nobody was excited about serving pages, or storing data, or doing anything commoditized that way. The action was in solving the actual end-user problem, and the sooner you could assemble a system out of off-the-shelf components and get to the problem solving part, the better.

Well, once you need to serve millions upon millions of users, strike the ‘solved’ part. You start hitting the limits of your components, and you need to start paying attention to the things you stopped noticing years ago. That’s what is now happening across the software development world. The ‘solved’ problems are being solved again, from the very bottom. Web servers, data storage, caching, page templates, message queues – everything is being revisited and reopened. It is actually fascinating to watch – like a total gutting and rebuilding of a house whose owners grew tired of the basement flooding every time it rains. It is as if millions of developers suddenly noticed the components that they were taking for granted all these years and said ‘let’s rebuild everything from scratch’.

When you look at a Node.js ‘Hello, World’ server example, it is fascinating in its simplicity, until you consider how much stuff you need to add to get to the level of functionality that is actually needed in production. You are really looking at several dozen JS modules pulled in by NPM. Using the house analogy again, these modules are replacing all the stuff your house had before you reduced it to a hole in the ground. And in this frenzy of solving the old problems in the new world I see two kinds of approaches – those of idealists and those of pragmatists.
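
For reference, the example in question is only a handful of lines – roughly the stock snippet you will find in every Node.js tutorial, and a far cry from anything you would ship:

    // The canonical Node.js 'Hello, World' HTTP server
    var http = require('http');

    http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('Hello, World\n');
    }).listen(3000);

    console.log('Server running at http://localhost:3000/');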

There is a feeding frenzy on GitHub right now to be the famous author of a great new library or framework. To stay in the Node.js neighborhood, the basement is taken by @ryah, most of the walls have been solidly put up by @tjholowaychuk (although some new walls are being added), and so on. Idealists fear that if they do not hurry, the house will be rebuilt and there will be no glory left in the component business – that they will need to go back to being pragmatists, assemblers of parts.

For some people that is almost an insult. Consider the following comment by Bhaskar Ghosh, LinkedIn’s senior director of data infrastructure engineering, in a recent article by Gigaom:

Ghosh, well, he shot down the idea of relying too heavily on commercial technologies or existing open source projects almost as soon as he suggests it’s a possibility. “We think very carefully about where we should do rocket science,” he told me, before quickly adding, “[but] you don’t want to become a systems integration shop.”

Of course, Bhaskar is exaggerating to prove the point – LinkedIn is both a heavy provider and a heavy consumer of Open Source projects. Consider this (incomplete) list, for example – they are not exactly shy about using off the shelf code where it makes sense. Twitter is no stranger to Open Source either – they share the use of Netty with LinkedIn, along with many other (albeit heavily modified) open source projects. Twitter also shares the desire to give back to the Open Source world – my team, together with many others, is using their awesome Bootstrap toolkit, and their modifications are shared with the community. You can read it all at the Twitter Open Source page, run by our good friend @cra.

Recently, PayPal switched a lot of their Java production servers to Node.js and documented the process. The keyboards are still hot from the effort and they are already open sourcing many of the libraries they wrote for it. They even joined forces with LinkedIn on the fork of the Dust.js templating library that the original owner abandoned (kudos to Richard Ragan from PayPal for writing a great tutorial).

There are many other examples that paint a picture of a continuum. Idealists see this as a great time to have their name attached to a popular Open Source project and cement their place in the source code Hall of Fame – it’s 1997 all over again, baby! On the other hand, startups have no problem being pragmatists – it is very hard to build something from scratch even with off the shelf Open Source software, and who has time to write new libraries when the existing ones are good enough to drive your startup into the ground ten times over?

In the space between the two extremes, companies with something to lose and with skin in the game stride the cautious middle road – use what works, modify what ‘almost’ works, write your own when no existing software fits the bill, fully aware of the immediate and ongoing cost. And the grateful among them give back to the Open Source community as a way of saying ‘thanks’ for getting a boost when they themselves were but a fledgling startup.

© Dejan Glozic, 2014

Free Copies of RESS Essentials by Packt Publishing


As my blog claims in the title, I care 70% about Web development and 30% about Web design. Nevertheless, 100% of me is always hungry for knowledge, and I devour large quantities of articles, books, blogs and tweets as part of my balanced diet. I was thinking: there is no reason my blogging and my reading should be two separate compartments – I could blog about what I read. As a result, I am starting a ‘Reviews’ category of my blog. And what better way to start this category in the midst of the holiday season than with a free book giveaway!

Since I recently blogged about RESS, the good people from Packt Publishing sent me a copy of RESS Essentials to review. While I am busy reading the book, they have offered more free copies for a book giveaway.

Three (3) lucky winners stand a chance to win digital copies of the book. According to Packt, in the book you will find:

  • Easy-to-follow tutorials on implementing RESS application patterns
  • Information flow diagrams which will help you understand various RESS architectures with ease
  • Tutorials on performing browser feature detection and storing this information on the server side

How to Enter?

All you need to do is head on over to the book page and look through the product description of the book. When done, come back and drop a line via the comments below this post to let us know what interests you the most about this book. And that’s it.

Deadline:

The contest will close in one or two weeks, depending on the response. Winners will be contacted by email, so be sure to use your real email address!

© Dejan Glozic, 2013

Sitting on the Node.js Fence


I am a long standing fan of Garrison Keillor and his Prairie Home Companion. I fondly remember a Saturday on which he delivered his widely popular News From Lake Wobegon, which contained the following conundrum: “Is ambivalence a bad thing? Well, yes and no”. It also accurately describes my feelings towards Node.js and the exploding Node community.

As I am writing this, I can hear the late, great Joe Strummer singing Should I Stay or Should I Go in my head (“If I stay it will be trouble, if I go it will be double”). It is really hard to ignore the passion that Node.js has garnered from its enthusiastic supporters, and if the list of people you follow on Twitter is full of JavaScript lovers, Node.js becomes the Kim Kardashian of your Twitter feed (as in, every third tweet is about it).

In case you were somehow too preoccupied by, say, living life to the fullest to notice or care about Node.js, it is a server-side JavaScript runtime sitting on a bedrock of C++, with the JavaScript compiled and executed by Google’s V8 engine. A huge cottage industry has sprung up around it, and whatever you may need for your Web development, chances are there is a module that does it somewhere on GitHub. Two years ago, in a widely re-tweeted event, Node.js overtook Ruby on Rails as the most watched GitHub repository.

Recently, in a marked trend of maturation from exploration and small side projects, some serious companies have started deploying Node into production. I heard all about how, this US Thanksgiving, most traffic to Walmart.com came from mobile devices and was handled by Node.js servers.

Then I heard from Nick Hart how PayPal switched from JEE to Node.js. A few months ago I stumbled upon an article about Node.js taking over the enterprise (like it or not, to quote the author). I am sure more heartwarming stories were traded at the recent #NodeSummit. Still, not to get carried away: apart from mobile traffic, Node.js only serves a tiny panel of the Walmart.com web site, so it is more of a ‘foot in the door’ than a total rewrite for the desktop. Even earlier, the LinkedIn engineering team blogged about their use of Node.js for LinkedIn Mobile.

It is very hard to cut through the noise and get to the core of this surge in popularity. When you dig deeper, you find multiple reasons, not all of them technical. That is why discussions about Node.js sometimes sound to me like Abbott and Costello’s ‘Who’s on First’ routine. You may be discussing technology while other participants are discussing hiring perks, the skill profile of their employees, or context switching. So let’s count the ways in which Node.js is popular and note how only one of them is actually technical:

  1. Node.js is written from the ground up to be asynchronous. It is optimized to handle problems where there is a lot of waiting for I/O. Processing tasks that are not waiting for I/O while others are queued up can increase throughput without adding the processing and memory overhead of the traditional ‘one blocking thread per request’ model (or even worse, ‘one process per request’). The sweet spot is when all you need to do is lightly process the result of I/O and pass it along. If you need to spend more time on procedural number-crunching, you need to spawn a ‘worker’ child process, at which point you are re-implementing threading (a sketch follows this list).
  2. Node.js uses JavaScript, allowing front-end developers already familiar with it to extend their reach into the server. Most new companies (i.e. not the Enterprise) have a skill pool skewed towards JavaScript developers (compared to the Enterprise world, where Java, C/C++, C# and similar are in the majority).
  3. New developers love Node.js and JavaScript and are drawn to them like moths to a candelabra, so having Node.js projects is a significant attraction when competing for resources in a developer-starved new IT economy.
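
To illustrate the first (and only technical) point, here is a minimal sketch of both sides of it: requests are served from the non-blocking event loop, and anything that smells like number-crunching is handed off to a forked ‘worker’ child process. The file names and message format are my own invention for the example:

    // server.js - I/O-bound requests stay on the event loop,
    // CPU-bound work is delegated to a forked child process
    var http = require('http');
    var fork = require('child_process').fork;

    http.createServer(function (req, res) {
      if (req.url === '/crunch') {
        var worker = fork(__dirname + '/worker.js'); // hypothetical worker, see below
        worker.send({ upTo: 1e8 });
        worker.on('message', function (result) {
          res.writeHead(200, { 'Content-Type': 'application/json' });
          res.end(JSON.stringify(result));
        });
      } else {
        // The sweet spot: lightly process the result of I/O and pass it along
        res.writeHead(200, { 'Content-Type': 'text/plain' });
        res.end('Served without blocking anyone\n');
      }
    }).listen(3000);

    // worker.js would contain the blocking loop, e.g.:
    // process.on('message', function (msg) {
    //   var sum = 0;
    //   for (var i = 0; i < msg.upTo; i++) { sum += i; }
    //   process.send({ sum: sum });
    // });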

Notice how only the first point is actually engineering-related. This is so prevalent that it can fit into a tweet, like so:

Then there is the issue of storage. There have been many libraries made available for Node.js to connect to SQL databases, but that would be like showing up at an Arcade Fire concert in a suit. Oh, wait, that actually happened:

Never mind then. Back to Node.js: why not go all the way and use something like MongoDB, storing JSON documents and using JavaScript for DB queries? Now not only do front end developers not need to chase down server-side Java guys to change the served HTML or JSON output, they don’t need to chase DBAs to change the DB models either. My point is that once you add Node.js, it is highly likely you will revamp your entire stack and architecture, and that is a huge undertaking even without deadlines to spice up your life with a wallop of stress (making you park in the wrong parking spot in your building’s garage, as I did recently).
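
To show why this is so seductive, here is a hedged sketch using the MongoDB driver for Node.js, with made-up collection and field names – the query is a plain JavaScript object, and the result is JSON-ready for the browser without any mapping layer in between:

    var MongoClient = require('mongodb').MongoClient;

    MongoClient.connect('mongodb://localhost:27017/dashboard', function (err, db) {
      if (err) throw err;
      // The query is just a JavaScript object - no SQL, no ORM
      db.collection('widgets')
        .find({ owner: 'dejan', type: 'chart' })
        .toArray(function (err, docs) {
          if (err) throw err;
          console.log(JSON.stringify(docs)); // ready to hand to the client as-is
          db.close();
        });
    });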

Now let’s try to apply all this to a talented team in a big corporation:

  1. The team is well versed in both Java (on the server) and JavaScript (on the client), and has no need to chase down Java developers to ‘add a link between pages’, as in PayPal’s case. The team also knows how to write decent SQL queries and handle the model side of things.
  2. The team uses IDEs chock-full of useful tools, debuggers, incremental builders and other tooling that make for a great workflow, particularly when the going gets tough.
  3. The team needs to deliver something new on a tight schedule and is concerned about adding to the uncertainty of ‘what to build’ by also figuring out ‘how to build it’.
  4. There is a fear that things the team has grown to expect as gravity (debugging, logging etc.) are still missing or immature.
  5. While there is a big deal made about ‘context switching’, the team does it naturally – they go from Java to JavaScript without missing a beat.
  6. There is a whole slew of complicated licensing reasons why some Open Source libraries cannot be used (a problem startups rarely face).

Mind you, this is a team of very bright developers who eat JavaScript for breakfast and feel very comfortable around jQuery, Require.js, Bootstrap etc. When we overlay the reasons to use Node.js onto this team, the only one that still remains is handling a huge number of requests without paying the RAM/CPU tax of blocking I/O with one thread per request.

As if things were not murky enough, the Servlet 3.1 spec supported by JEE 7.0 now comes with asynchronous processing of requests (using NIO) as well as protocol upgrade (from HTTP/S to WebSockets). These two things together mean that the team above has the option of using non-blocking I/O from the comfort of their Java environment. They also mean the team can write non-blocking I/O code that pushes data to the client using WebSockets. Both are currently the key technical reasons to jump ship from Java to Node. I am sure Oracle added them feeling the heat from Node.js, but now that they are here, they may be just enough reason not to make the transition yet, considering the cost and overhead.

Update: after posting the article, I remembered that Twitter has used Netty for a while now to get the benefit of asynchronous I/O coupled with the performance of the JVM. Nevertheless, the preceding paragraph is for JEE developers who can now easily start playing with async I/O without changing stacks. Move along, nothing to see here.

Then there is also the ‘Facebook effect’. Social scientists have noticed the emergence of a new kind of depression caused by feeling bad about your life compared to the carefully curated projections of other people’s lives on Facebook. I have yet to see a posting like “my life sucks and I did nothing interesting today” or “I think my life is passing me by” (I subsequently learned that Sam Roberts did say that in Brother Down, but he is just pretending to be depressed, so he does not count). Is it possible that I am only hearing about Node.js success stories, while failed Node.js projects are quietly shelved, never to be spoken of again?

Well, not everybody is suffering in silence. We all heard about the famous Walmart memory leak that is now thankfully plugged. Or, since I already mentioned MongoDB, how about going knee-deep into BSON to recover your data after your MongoDB had a hardware failure. Or the brouhaha about the MongoDB 100GB scalability warning? Or a sane article by Felix Geisendörfer on when to use and, more importantly, when NOT to use Node.js (as a Node.js core alumnus, he should know better than many). These are all stories from the front wave of adopters, and the inevitable rough edges will be filed down over time. The question is simply – should we be the ones to do the filing, or can somebody do it for us?

In case I sound like I am justifying reasons why we are not using Node.js yet, the situation is quite the opposite. I completely love the all-JavaScript stack and play with it constantly in my pet projects. In a hilarious twist, I am acting like a teenage Justin Bieber fan, and the (younger) team lead of the aforementioned team needs to bring me down with a cold dose of reality. I don’t blame him – I like the Node.js-based stack in principle, while he would have to burn the midnight oil making it work at enterprise scale and debugging the hard problems that are inevitable with anything non-trivial. Leaving aside my bad case of FOMO (Fear Of Missing Out), he will only be persuaded by a problem where Node.js is clearly superior, not just another way of doing the same thing.

Ultimately, it boils down to whether you are trying to solve a unique problem or just play with a new and shiny technology. There is a class of problems that Node.js is perfectly suitable for. Then there are problems where it is not a good match. Leaving your HR and skills reasons aside, the proof is in the pudding (or as a colleague of mine would say, ‘the devil is in the pudding’). There has to be a unique problem where the existing stack is struggling, and where Node.js is clearly superior, to tip the scales.

While I personally think that Node.js is a big part of an exciting future, I have no choice but to agree with Zef Hemel that it is wise to pick your battles. If we were rebuilding something for the third time and knew exactly what we were building, Node.js would be a great way to make that project fun. However, since we are in ‘white space’ territory, choosing tried and true (albeit somewhat boring) building blocks and focusing the uncertainty on what we want to build is a good tradeoff.

And that’s where we are now – with me sitting on the proverbial fence, on a constant lookout for that special and unique problem that will finally open the Node.js floodgates for us. When that happens, you will be the first to know.

© Dejan Glozic, 2013

Beam My Model Up, Scotty!


I know, I know. Scotty is from the original series, and the picture above is from TNG. Unlike Leonard from The Big Bang Theory, I prefer TNG over the original, and also Picard over Kirk. Please refrain from hate mail.

Much real and virtual ink has been spilled over Star Trek transporter technology and its quirks. Who can forget the episode in which Scotty preserved himself as a transporter pattern in an endless loop, only to be found and freed 70 years later by the TNG crew (see, there’s your Scotty-TNG link). But this article is about honouring all the hours I spent on various Star Trek instalments by applying the transporter principles to Web development.

As the Web development collective mind matures, a consensus is forming that spaghetti is great on your dining table but bad for your code. The MVC design pattern on the server is accepted as gravity, and several JavaScript frameworks claim it is necessary on the client as well. But is it always?

Here is a case in point. Our team recently rebuilt a dashboard – a relatively complex piece of interactive technology. This is our third attempt at it (third time’s the charm, right?). Armed with all the experience of the previous two versions, plus our general desire to pull back from extreme Ajax, we were confident this would be it. Why build a dashboard? Yes, I know, everybody and his uncle has one, and there are perfectly good all singing, all dancing commercial ones to be had. Here is a counter-question: why do you have a kitchen in your house when you can eat at perfectly good restaurants? Everybody has a dashboard because it is much more convenient to have your own (Google Analytics has one, the WordPress I am writing this on has one). They can be simple because they don’t need to cater to all the possible scenarios, and they can be much more tightly integrated. Not to mention cheaper (like eating at home vs. eating out). But I digress.

In the previous version, we sent a lot of JavaScript to the client, and after transporting and parsing it, JavaScript turned around and used XHR to fetch the model from the server as JSON. Then we wired up the model and the view and every time users made a change, we would update the model to keep it in sync. Backbone.js or Angular.js would have come in handy for this, only they didn’t exist when we wrote that code. Plus we hate frameworks.

In this version we wanted the dashboard page to arrive mostly assembled from the server. That part was clear. But we wanted to avoid the MV* complexity and performance consequences if possible. Plus it is really hard to even apply the usual data binding with dashboards because you cannot just wire the template with the model and let the framework do its magic. In a dashboard, the template itself is editable – users can drag and drop widgets around, delete them and add new ones. Essentially both the template and the data are built up from the model. This is not your grandmother’s MVC, that’s for sure.

Then we remembered Star Trek transporters and figured – why don’t we disperse the model and embed the model shards into the DOM as we build the initial HTML on the server? We kept waiting to hit a brick wall at some point, but it never happened – this is actually a viable option. Here is why this approach (I will call it M/V) works for us like a charm:

  1. Obviously it is easy to do on the server – when we are building the initial response, embedding model shards into the HTML is trivial using HTML5 custom data properties (those attributes that start with ‘data-’).
  2. The model arrives ready to use on the client – no need for an extra request to fetch the model after the fact. OK, Backbone.js has a way of bootstrapping models so that they arrive as payload with the initial response. However, that approach qualifies as CrazyHacks™ at best, and you save nothing payload-wise – the sum of the shards equals one whole model.
  3. When the part of the DOM needs to change due to the user interaction, it is easy to keep the associated ‘data-*’ properties in sync.
  4. When moving entire DOM branches (say, when dragging widgets between columns or changing layouts), model shards stay with their DOM elements – this is a real winner in our use case.
  5. We use DOM custom event bubbling to notify about the change – one listener at the top of the DOM branch is all that is needed to keep track of the sharded model’s dirty state. This helps us to stay lean because there is no need for JavaScript pub/sub implementations that increase size. It is laughably trivial to fire and listen to these events using jQuery, and they are fast, being implemented natively by the browser. Bootstrap uses the same approach.
  6. When the time comes to save, we traverse the DOM, collect the shards and assemble them again into a transient JavaScript object, then send it as JSON to the server via XHR. We use PUT/POST when we have the entire model, but more often than not the view only renders a fraction of it (lazy loading), so we use PATCH instead (see the sketch right after this list).
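
None of this needs a framework. Here is a condensed sketch of the whole cycle, roughly as described above – the markup, event names and REST endpoint are invented for illustration, not lifted from our production code:

    /* The server renders the page with model shards embedded as data-* attributes:

       <div id="dashboard" data-layout="two-column">
         <div class="widget" data-widget-id="w1" data-widget-type="chart" data-refresh="60">
           ... widget markup ...
         </div>
       </div>
    */

    // Reading a shard: jQuery's .data() picks up all data-* attributes as one object
    var firstShard = $('#dashboard .widget').first().data();
    // -> { widgetId: 'w1', widgetType: 'chart', refresh: 60 }

    // Keeping a shard in sync when the user changes something, then firing a
    // bubbling custom event (jQuery keeps the data with the element, seeded from
    // the attributes; use .attr() instead if you want the literal attribute updated)
    $('#dashboard').on('change', '.widget .refresh-interval', function () {
      var widget = $(this).closest('.widget');
      widget.data('refresh', parseInt($(this).val(), 10));
      widget.trigger('dashboard:dirty');
    });

    // One listener at the top of the DOM branch tracks the dirty state
    var dirty = false;
    $('#dashboard').on('dashboard:dirty', function () { dirty = true; });

    // On save: traverse the DOM, collect the shards into a transient object, PATCH it
    function save() {
      var model = { layout: $('#dashboard').data('layout'), widgets: [] };
      $('#dashboard .widget').each(function () {
        model.widgets.push($(this).data());
      });
      $.ajax({
        url: '/api/dashboards/current', // hypothetical endpoint
        type: 'PATCH',
        contentType: 'application/json',
        data: JSON.stringify(model)
      }).done(function () { dirty = false; });
    }

The trigger/on pair is all the pub/sub we need – the browser does the bubbling for us.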

This approach bought us some unreal savings in performance and size. Right now, we have brought total JavaScript down from ~245 KB to ~70 KB minified and gzipped (this includes jQuery, Require.js and Bootstrap). The best part is that this JavaScript loads asynchronously, after the initial content has already been displayed. The page is not only objectively fast, it is subjectively even faster because the initial content (what the Twitter team calls ‘time to first tweet’) is sent in the first response.

As is often the case in life, this approach may not work for everybody. A pattern where true MVC really shines is when you have multiple views listening to the same model. Changing the model updates all the views simultaneously, without the need for pub/sub hell. But if you have exactly one view and one model, M/V becomes a real option.

I can hear the collective gasp of architects in the audience. There are many things you can say about M/V, but architecturally pure it is not. Here is the problem, though: speed and page size should be valid architectural concerns too. In fact, if you want your Web app to be usable on mobile devices, they are practically the only concerns that matter. Nobody is going to wait for your architecturally pure pig of a page to load while they are waiting for their coffee – life is too short for 1MB+ web pages, pure or not.

Say No to layer cake, embrace small, live long and prosper!

© Dejan Glozic, 2013