The Year of Blogging Dangerously


Wow, has it been a year already? I am faking surprise, of course, because WordPress has notified me well ahead of time that I need to renew my dejanglozic.com domain. So in actuality I said ‘wow, will it soon be a year of me blogging’. Nevertheless, the sentiment is genuine.

It may be worthwhile to look back at the year, if only to reaffirm how quickly things change in this industry of ours, and also to notice some about-faces, changes of direction and mind.

I started blogging with the intent to stay true to the etymological sense of the word ‘blog’ (Web log). As a weekly diary of sorts, it was supposed to chronicle the trials and tribulations of our team as it boldly goes into the tumultuous waters of writing Web apps in the cloud. I settled on a weekly delivery, which is at times doable, at other times a nightmare. I could definitely do without the onset of panic when I realize that it is Monday and I have forgotten to write a new entry.

Luckily, the issues we deal with daily in our work produce more than enough material for the blog. In that regard, we are like a person who, after his old apartment went up in flames, moved into a new condo and went to Ikea. If an eager clerk asks him ‘what do you need in particular’, his genuine answer must be ‘everything – curtains, rugs, a new mattress, a table, chairs, a sofa, a coffee table …’.

At least that’s how we felt – we were re-doing everything in our distributed system and we were able to re-use very little from our past lives, having boldly decided to jump ahead as far as possible and start clean.

Getting things out of the system

That does not mean that the blog actually started with a theme or a direction. In the inaugural post The Turtleneck and The Hoodie, I proudly declared that I care about both development AND design and refuse to choose. But that is not necessarily a direction that can sustain a blog. It was not an issue for a while, thanks to all the ideas that were bouncing around in my head waiting to be written down. Looking back, I think it sort of worked in a general-purpose, ‘good advice’ kind of way. Posts such as Pulling Back from Extreme AJAX or A Guide to Storage for ADD Types were at least very technical and based on actual research and hands-on experience.

Some of the posts were just accumulated professional experience that I felt the need to share. Don’t Get Attached to Your Code or Dumb Code Good, Smart Code Bad were crowd pleasers, at least in the ‘yeah, it happened to me too’ way. Kind of like reading that in order to lose weight you need to eat smart and go outside. Makes a lot of sense except for the execution, which is the hard part.


Old man yells at the cloud

Funnily enough, once I had used up all the accumulated wisdom worth passing on, some of my posts began to sound somewhat cranky in hindsight. I guess I disagreed with some ideas and directions I noticed, and the world ignored my disagreement and continued, unimpressed. How dare people do things I don’t approve of!

Two cranky posts worth highlighting are Swimming Against the Tide, in which I am cranky about client-side MVC frameworks, and Sitting on the Node.js Fence, in which I argue with myself about the pros and cons of Node.js. While my subsequent posts clearly demonstrate that I resolved the latter dilemma and went down the Node.js route hook, line and sinker, I am still not convinced that all the JavaScript required to write non-trivial Single Page Apps (SPAs) is a very good idea, particularly if you have any ambition to run them on mobile devices. And the former post definitely sounds funny to me now – as if I were expressing irritated disbelief that, after I published all the bad consequences of practicing extreme Ajax, people still keep doing it!

I heart Node.js

Of course, once our team went down the Node.js route (egged on and cajoled by me), you could not get me to shut up about it. In fact, the gateway drug was my focus on templating solutions, and our choice of Dust.js (the LinkedIn fork). By the way, it is becoming annoying to keep adding ‘the LinkedIn fork’ all the time – that’s the only version that is actively worked on anyway.

Articles from this period more or less set the standard for my subsequent posts: they are about 1500 words long, have a mix of outgoing links, a focused technical topic, and illustrative embedded tweets (thanks to @cra, who taught me how not to embed tweets as images like a loser). And since no story about Node.js apps is complete without Web Sockets and clustering, both were duly covered.


I know micro-services!

Of course, it was not until I attended NodeDay in February that a torrent of posts on micro-services was unleashed. The first half of 2014 was all ablaze with posts and tweets about micro-services around the world anyway, which my new Internet buddy Adrian Rossouw duly documented in his Wayfinder field guide. It was at times comical to follow the food fights about who would provide the bestest definition of them all.

If you follow the micro-services tag on my blog, the list of posts is long and getting longer every week. At some point I will stop tagging posts with it, because if everything is about them, nothing is – I need to be more specific. Nevertheless, I am grateful for the whole topic – it did, after all, allow me to write my most popular post so far: Node.js and Enterprise – Why Not?


What does the future hold?

Obviously Node.js, messaging and micro-services will continue to dominate our short-term horizon as we are wrestling with them daily. I spoke about them at the recent DevCon5 in NYC and intend to do the same at the upcoming nodeconf.eu in September.

Beyond that, I can see some possible future topics (although I can’t promise anything – it is enough to keep up as it is).

  • Reactive programming – I recently presented at the first Toronto Reactive meetup, and noticed the whole area of Scala and Akka as a completely viable alternative for implementing micro-services and scalable distributed systems that conform to the tenets of the Reactive Manifesto. I would like to probe further.
  • Go language – not only because TJ decided to go that route; having an alternative to Node.js for implementing individual micro-services is a great thing, particularly for API and back-end services (I still prefer Node.js for Web-serving apps).
  • Libchan – Docker’s new project (like Go channels over the network) currently requires Go (duh), but I am sure a Node.js version will follow.
  • Famo.us – I know, I know, I have expressed my concerns about their approach, but I did the same with Node.js and look at me now.
  • Swift – I am a registered Xcode developer and have the Swift-enabled update to it. If only I could find some time to actually create some native iOS apps. Maybe I will like Swift more than I do Objective-C.

I would like to read this post in a year and see if any of these bullets panned out (or were instead replaced with a completely different list of even newer and cooler things). In this industry, I would not be surprised.

Whatever I end up writing about, I would like to thank you for your support and attention so far, and I hope to keep holding it just a little bit longer. Now, if you will excuse me, I need to post this – I am already late this week!

© Dejan Glozic, 2014


Beam My Model Up, Scotty!

[Image: Star Trek transporter room]

I know, I know. Scotty is from the original series, and the picture above is from TNG. Unlike Leonard from The Big Bang Theory, I prefer TNG over the original, and also Picard over Kirk. Please refrain from hate mail.

Much real and virtual ink has been spilled over Star Trek transporter technology and its quirks. Who can forget the episode in which Scotty preserved himself as a transporter pattern in an endless loop, only to be found and freed 70 years later by the TNG crew (see, there’s your Scotty–TNG link)? But this article is about honouring all the hours I spent on various Star Trek instalments by applying the transporter principles to Web development.

As the Web development collective mind matures, a consensus is forming that spaghetti is great on your dining table but bad for your code. The MVC design pattern on the server is simply accepted, like gravity, and several JavaScript frameworks claim it is necessary on the client as well. But is it always?

Here is a case in point. Our team recently rebuilt a dashboard – a relatively complex piece of interactive technology. This is our third attempt at it (third time’s the charm, right?). Armed with all the experience of the previous two versions, plus our general desire to pull back from extreme Ajax, we were confident this would be it. Why build a dashboard? Yes, I know, everybody and his uncle has one, and there are perfectly good all-singing, all-dancing commercial ones to be had. Here is a counter-question: why do you have a kitchen in your house when you can eat at perfectly good restaurants? Everybody has a dashboard because it is much more convenient to have your own (Google Analytics has one; the WordPress I am writing this on has one). They can be simpler because they don’t need to cater to all possible scenarios, and they can be much more tightly integrated. Not to mention cheaper (like eating at home vs. eating out). But I digress.

In the previous version, we sent a lot of JavaScript to the client, and after transporting and parsing it, JavaScript turned around and used XHR to fetch the model from the server as JSON. Then we wired up the model and the view and every time users made a change, we would update the model to keep it in sync. Backbone.js or Angular.js would have come in handy for this, only they didn’t exist when we wrote that code. Plus we hate frameworks.

In this version we wanted the dashboard page to arrive mostly assembled from the server. That part was clear. But we wanted to avoid the MV* complexity and performance consequences if possible. Plus it is really hard to even apply the usual data binding with dashboards because you cannot just wire the template with the model and let the framework do its magic. In a dashboard, the template itself is editable – users can drag and drop widgets around, delete them and add new ones. Essentially both the template and the data are built up from the model. This is not your grandmother’s MVC, that’s for sure.

Then we remembered Star Trek transporters and figured – why don’t we disperse the model and embed the model shards into the DOM as we build the initial HTML on the server? We kept waiting to hit a brick wall, but it never happened – this is actually a viable option. Here is why this approach (I will call it M/V) works for us like a charm:

  1. Obviously it is easy to do on the server – when we are building the initial response, embedding model shards into the HTML is trivial using HTML5 custom data properties (those attributes that start with ‘data-’).
  2. The model arrives ready to use on the client – no need for an extra request to fetch the model after the fact. OK, Backbone.js has a way of bootstrapping models so that they arrive as payload with the initial response. However, that approach qualifies as CrazyHacks™ at best, and you save nothing payload-wise – the sum of the shards equals one whole model.
  3. When the part of the DOM needs to change due to the user interaction, it is easy to keep the associated ‘data-*’ properties in sync.
  4. When moving entire DOM branches (say, when dragging widgets between columns or changing layouts), model shards stay with their DOM elements – this is a real winner in our use case.
  5. We use DOM custom event bubbling to notify about the change – one listener at the top of the DOM branch is all that is needed to keep track of the sharded model’s dirty state. This helps us to stay lean because there is no need for JavaScript pub/sub implementations that increase size. It is laughably trivial to fire and listen to these events using jQuery, and they are fast, being implemented natively by the browser. Bootstrap uses the same approach.
  6. When the time comes to save, we traverse the DOM, collect the shards and assemble them again into a transient JavaScript object, then send it as JSON to the server via XHR. We use PUT/POST when we have the entire model but, more often than not, the view only renders a fraction of it (lazy loading), so we use PATCH instead.
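Stripped of the DOM plumbing, the shard round trip in steps 1–6 can be sketched as follows. This is an illustrative sketch, not our actual code: helper names like renderWidget and collectShards are made up, and in the real page jQuery’s .attr()/.data() and a bubbling custom event would replace the string scraping used here to keep the sample self-contained.

```javascript
// Server side: embed each model shard into its widget's markup as an
// HTML5 data-* attribute (step 1). Quotes are escaped so the JSON
// survives as an attribute value.
function renderWidget(shard) {
  var json = JSON.stringify(shard).replace(/"/g, '&quot;');
  return '<div class="widget" data-model="' + json + '"></div>';
}

// Client side, at save time (step 6): walk the widgets, parse each
// 'data-model' attribute back into an object and assemble the transient
// model to send as JSON (PATCH when only a fraction was rendered).
function collectShards(attrValues) {
  return attrValues.map(function (value) {
    return JSON.parse(value.replace(/&quot;/g, '"'));
  });
}

// Round trip of two shards:
var shards = [{ id: 'cpu', cols: 2 }, { id: 'mem', cols: 1 }];
var markup = shards.map(renderWidget).join('\n');

// In the browser this would be jQuery reading the attributes; here we
// scrape them back out of the markup string:
var attrs = markup.match(/data-model="([^"]*)"/g).map(function (m) {
  return m.slice('data-model="'.length, -1);
});
var assembled = collectShards(attrs);
// 'assembled' is deep-equal to 'shards' - the model survived the trip.
```

Because the shard lives in the attribute, moving the element (step 4) or updating the attribute on user interaction (step 3) keeps model and view together for free.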

This approach bought us some unreal savings in performance and size. We brought total JavaScript down from ~245 KB to ~70 KB minified and gzipped (this includes jQuery, Require.js and Bootstrap). The best part is that this JavaScript loads asynchronously, after the initial content has already been displayed. The page is not only objectively fast, it is subjectively even faster, because the initial content (what the Twitter team calls ‘time to first tweet’) is sent in the first response.

As is often the case in life, this approach may not work for everybody. A pattern where true MVC really shines is when you have multiple views listening to the same model. Changing the model updates all the views simultaneously, without pub/sub hell. But if you have exactly one view and one model, M/V becomes a real option.

I can hear the collective gasp of architects in the audience. There are many things you can say about M/V, but architecturally pure it is not. Here is the problem, though: speed and page size should be valid architectural concerns too. In fact, if you want your Web app to be usable on mobile devices, they are practically the only concerns that matter. Nobody is going to wait for your architecturally pure pig of a page to load while standing in line for coffee – life is too short for 1MB+ web pages, pure or not.

Say No to layer cake, embrace small, live long and prosper!

© Dejan Glozic, 2013

Swimming Against The Tide

True story: I visited the Ember.js web site and saw three required hipster artifacts: ironic mustaches, cute animals and Ray-Ban Wayfarer glasses (on a cute animal). A tweet was in order and, within minutes, it was favorited by a competing client-side framework by Google (Angular). Who would have guessed client-side frameworks are so catty? I can almost picture the Angular News guy clicking the ‘Favorite’ button and yelling ‘Oh, Burn!’ And it wasn’t even a burn; I actually like the Ember web site – it is so … cute.

The reason I visited Ember (and Angular, and Backbone, and Knockout) was to figure out what was going on. There is a scene in the 2002 movie Gangs of New York where Leonardo DiCaprio leads his gang of Dead Rabbits to fight the competing gang (the Natives), and he has to wade through a river of people running in the opposite direction to avoid cannons fired by the Navy from the harbor. Leonardo and his opponent, a pre-Lincoln Daniel Day-Lewis, were so enthralled by their epic fight that they missed the wider context of the New York Draft Riots happening around them. Am I like Leo (minus the looks, fame and fortune), completely missing the wider historic context around me?

Not long ago, I posted a repentant manifesto of a recovered AJAX addict. I swore off the hard stuff and pledged to only consume client-side script in moderation. The good people from Twitter and 37signals all went through the same trials and tribulations and adopted a similar approach (or I adopted theirs). Most recently, Thomas Fuchs, the author of Zepto.js, expressed a similar change of heart based on his experience getting the fledgling product Charm off the ground. Against that backdrop, the noise of the client-side MVC frameworks mentioned above is reaching deafening levels, with all these people apparently not caring about the problems that burned us so much. So what gives?

There are currently two major camps in Web development, and they mostly differ in the role they allocate to the server side. The server-side guys (e.g. Twitter, Basecamp, Thomas, yours truly) have been burned by heavy JavaScript clients and want to render the initial page on the server, subsequently using PJAX and modest amounts of JavaScript for delicious interactivity and crowd pleasers. Meanwhile, a large population of developers still wants to develop one-page Web apps that require the script to take over from the browser for long periods of time, relegating the server to the role of a REST service provider. I don’t want to repeat myself here (kindly read my previous article), but the issues of JavaScript size, parsing time, performance, memory leaks, browser history and SEO didn’t go away – they still exist. Nevertheless, judging by the interest in Angular, Backbone, Ember and other client-side JavaScript frameworks, a lot of people think the tradeoffs are worth it.

To be fair, there is a third camp, populated mostly by the LinkedIn engineering team. They are in a category of their own because they are definitely not a one-page app, yet they do use Dust.js for client-side rendering. But they also use a whole mess of in-house libraries for binding pages to services, assembling them, delaying rendering when below the fold, etc. You can read about it on their blog – suffice it to say that, similar to Facebook’s BigPipe, the chances you can repeat their architecture in your project are fairly slim, so I don’t think their camp is of practical value to this discussion.

Mind you, nobody is arguing for a return to the dark ages of Web 1.0. There is no discussion about whether JavaScript is needed, only whether all the action is on the client or there is a more balanced division of labor with the server.

I thought long and hard (that is, a couple of days, tops) about the rise of JavaScript MVC frameworks. So far, this is what I have come up with:

  1. Over the last few years, many people have written crappy, unmaintainable, messy jumbles of JavaScript. They now realize the value of architecture, structure and good engineering (client or server).
  2. A lot of people realize that the really smart people writing modern JavaScript frameworks will probably do a better job providing this structure than they would themselves.
  3. Many projects are simply not large enough to hit the general client-side scripting turning point. This puts them in a sweet spot for client-side MVC – large enough to be a mess and benefit from structure, not large enough to be a real pig that makes desktop browsers sweat and mobile browsers kill your script execution for consuming too much RAM.
  4. These projects are also not easily partitioned into smaller contexts that can be loaded as separate Web pages. As a result, they rely on MVC JavaScript frameworks to perform data binding, partitioning, routing and other management.
  5. Modern templating engines such as Mustache or Handlebars can run both on the client and on the server, opening up the option of rendering the initial page server side.
  6. The JavaScript community is following the same path that Web 1.0 server-side MVC went through: the rise of opinionated and prescriptive MVC frameworks that try to box you into good practices and increase your productivity at the price of control and freedom.
  7. The people using these frameworks don’t really have performance as their first priority.
  8. The people using these frameworks plan to write a separate or native mobile client.
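Point 5 in the list deserves a tiny illustration: the appeal of isomorphic templating is that one template function can produce the initial page on the server and later re-render fragments in the browser. The interpolator below is a toy stand-in, not the real Mustache or Handlebars API, just enough to show the server/client symmetry.

```javascript
// A toy mustache-style interpolator: replaces {{key}} with data[key].
// The same function can be exported for Node (server-side rendering of
// the initial page) and attached to window for in-browser re-rendering.
function render(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, function (match, key) {
    return data[key] == null ? '' : String(data[key]);
  });
}

// Server: prime the first response with ready-made markup...
var initial = render('<h1>{{title}}</h1>', { title: 'My Dashboard' });
// ...browser: the identical template string updates the page later
// without a full reload.
```

The real engines add escaping, sections and partials on top, but the dual-environment trick is the same.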

There could be truth in this, or I could be way off base. Either way, my team has no intention of changing course. To begin with, we are allergic to loss of control these frameworks demand – we subscribe to the camp that frameworks are bad. More importantly, we like how snappy our pages are now and want to keep them that way. We intend to keep an eye on the client MVC frameworks and maybe one day we will hit a use case where client side data binding and templates will prove useful (say, if we attempt something like Gmail or Google Docs or Google Calendar). If that happens, we will limit it to that particular use case, instead of going all in.

Meanwhile, @scottjehl perfectly describes my current state of mind thusly:

[Embedded tweet from @scottjehl]

© Dejan Glozic, 2013

Feed Your Web UI Without Choking It

Today’s topic is penguins, how they feed their young and how to apply that to Web development. Now watch my improbable feat of connecting these two topics. Come for the penguins, stay for the Web UI!

I am a father, and as all parents know, part of the job description when your kids reach a certain age includes suffering through many an animated movie (I am looking at you, Hoodwinked!). On one occasion, though, I actually enjoyed the movie, and it happened to be Happy Feet. Those of you who watched it know that young Mumble is fed by his father Memphis, who eats a big fish and then regurgitates the half-digested food into Mumble’s beak, like so:

[Image: Adélie penguin feeding its chick]

This is not from the movie but I don’t want to wrestle with Warner Bros over the copyright for the images.

As I watched the scene, I started thinking about what I like to call ‘naive Ajax’. I already gave Ajax its own blog post, so no need to repeat myself here. Suffice it to say that when Ajax was young, making an XHR call to fetch some data was fairly straightforward and powerful. At that time, there were readily available services of the monster variety (Big Bad Services, or BBS©). These services often returned horrible XML, in ‘all you can eat’ amounts. Developers didn’t want to bother with writing the services themselves, preferring to stay on the client writing JavaScript. They just called the Big Bad Service, suffered through the XML, threw most of it away and built their DOM nodes out of the useful parts.

I think you are starting to see the improbable analogy here: the unprocessed, big data service is like feeding little Mumble raw fish – too much for his little beak. Let me count the ways in which this is suboptimal:

  1. Big services normally serve verbose XML with tons of namespace information. If you are lucky and the service supports gzip, it may not be a big problem (XML is insanely compressible), but if it does not, a lot of the bytes are sent down the wire unnecessarily.
  2. XML is not the optimal format to process on the client. You can let the browser parse it into a DOM tree and traverse the DOM. However, that means your JavaScript code needs to be larger, because it contains a nontrivial amount of logic just to traverse the data and sift it out.
  3. The data may not be fully digestible (see what I did there) – you may need to derive some of it from what you have received, or make more than one XHR request and sort out the joins in-browser.
  4. Very often you get way more than you need, which is wasteful because you waited and wasted the bandwidth and traversed a big DOM only to throw it away.

It would be foolish to assume that this is a thing of the past and that modern JSON services are much better and do not suffer from this problem. You can just as easily choke your Web UI with a ton of JSON data – the only difference is that you will get huge JavaScript objects instead of an XML DOM tree. The accent is on ‘impedance mismatch’, not on the exchange format.

What is the ‘right way’ then? Use what I call ‘Happy Feet Services’. Write your own service on the presentation server (the one serving the rest of the Web app resources) in a service language of your choice. Make that service call the Big Bad Services, process the response, correlate the data, toss the unused parts away – it will do a much better job on the server. Once you finally bring the data set down to a super tasty little nugget, send it to the browser as gzipped JSON. It will arrive quickly because it is small, it will be turned into a JavaScript object by the browser, and your client JavaScript does not need to be very large because the data set is readily usable – no processing required.

If you feel lucky (punk!), you can go one step further and make your Happy Feet Service even more useful: make it dual-purpose using content negotiation. If the client asks for ‘application/json’, return JSON as described. If it asks for ‘text/html’, send the same data already rendered and ready to go as HTML markup – all you need to do is pipe it into a DIV using the innerHTML property. You will have a dynamic Ajax app with very little JavaScript.
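The dual-purpose idea can be sketched as one function keyed off the Accept header. The list markup below is just an example of ‘rendered and ready to go’; any server-side template would do.

```javascript
// Content negotiation in miniature: same data, two representations.
function respond(acceptHeader, data) {
  if (/text\/html/.test(acceptHeader)) {
    // 'text/html': pre-rendered markup, ready to pipe into innerHTML
    return data.map(function (d) {
      return '<li>' + d.name + '</li>';
    }).join('');
  }
  // default ('application/json'): the data itself
  return JSON.stringify(data);
}

// In a real server this would branch on req.headers.accept (or on a
// framework helper such as Express's req.accepts).
```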

For this realization alone, I would say I got my money’s worth. The only thing I learned from Hoodwinked! is not to give coffee to a squirrel. I knew that already.

© Dejan Glozic, 2013

Pulling Back from Extreme AJAX

I always thought that Jesse James Garrett had an awesome name. It is up there with Megan Fox and that episode of The Simpsons when Homer briefly changed his name to Max Power. How many of us have a name that prompted John Lee Hooker to compare his level of badness to us? Granted, the Jesse in question (Jesse Woodson James) was too busy robbing banks and leading gangs to be a Chief Creative Officer of anything, but the power of the name is undeniable.

JJG, of course, is the guy who coined the phrase ‘Asynchronous JavaScript and XML’ (AJAX). It put a name on the technique that turned JavaScript from a quirky scripting language for coding web site menus to a powerhouse, delivering amazingly interactive Web applications to your browser. If we wanted to be sticklers, the majority of people now use JSON, not the original XML for the X part of the technique, but would we like JJG as much if he called the technique AJAJ? I don’t think so.

AJAX works roughly like this: in the dark ages of Web 1.0, whenever you wanted to react to user input in a significant way, you had to pass the data to the server and get a new rendition of the page. A lot of effort went into hiding this fact – session state was kept, parameters were passed, resources were cached to speed up page redraw, but it was still flashy and awkward – nothing like the creamy smoothness of desktop application UX. Then Microsoft needed some hooks, first in Outlook, then in IE, to make asynchronous requests to the server without the need to reload the page. The feature remained relatively obscure until Google made a splash with Gmail and Google Maps in a way that was browser-agnostic. Then JJG called the whole thing AJAX. You can read it all on Wikipedia.

Fascination with AJAX was long-lasting. I guess we were so used to the limitations of Web 1.0 and what browsers were and were not supposed to do that for the longest time we had to pinch ourselves when looking at so much interactivity without a page reload. I remember watching the Google Wave demo at the Google I/O conference in 2009, and the presenter kept saying “and remember, there are no plug-ins here, this is all done with JavaScript in a regular browser”. Four years after the phrase was born!

There is a self-deprecating American joke that claims that “it is not done until it is overdone”. I live in Canada, but when it comes to AJAX we are equally culpable. In our implementation of AJAX in the Rational Jazz Project, we served AJAX for every meal. We had AJAX hors d’oeuvres followed by AJAX soup, AJAX BBQ and then triple-fudge AJAX for dessert. No wonder we now suffer from AJAX heartburn. Will somebody pass the Zantac?

See, the problem is that it is easy to overdo AJAX by forgetting that we are not writing desktop applications. We still need to have a healthy respect for the Web and remember that it is a fundamentally different environment from the desktop. Forgetting this fundamental tenet usually leads to a three-stage illness:

  1. Obesity. As with a real-world diet, AJAX calories add up, and before you know it your web page is ready for the fat farm. I have seen a page that needed to load more than 1MB of JavaScript (gzipped) before anything useful was shown.
  2. Separation Anxiety. Since it takes so much effort to load a page and make it useful, fat AJAX apps fear browser separation and do everything in their power to avoid a refresh. It can degenerate all the way to ‘one page apps’ that stay around like a bad penny, all the while faking ‘pages’ with DIVs that are hidden and shown via CSS.
  3. The God Complex. The final stage of this malady is that since we are not leaving the browser, and since any nontrivial app needs to manage a number of objects and resources, we come up with elaborate schemes for switching between them, lying to the browser about our navigation history, and finally fully re-implementing a good number of features that browsers already do, only much better. Pop-up menus, history management, tab management – we essentially ask the browser to load our app (painfully) and then go take a nap for a couple of hours – we will take over.

Should we lay this at the doorstep of Microsoft, Google and JJG? It would be easy, but it would not help us. Nobody asked us to lose control of ourselves and amass such JavaScript love handles and triple chins. No AJAX Hello World snippet has 1MB of JavaScript. AJAX is wonderful when used in moderation. Large, slow-to-load AJAX pages are a problem even on the desktop, and a complete non-starter in the increasingly important mobile world. Even if you ignore the problem of moving a huge JavaScript payload over mobile networks, JavaScript engines in smartphone browsers have much less room to play in and can cripple or downright terminate your app. By the way, this kind of defeats the purpose of so-called ‘hybrid apps’, where you install a web app as native using a PhoneGap/Cordova shim. That only eliminates the transport part, but all the issues with running JavaScript on the phone remain. I am not saying that hybrid apps don’t have their place, only that they are not exempt from JavaScript weight watching.

How does a page wake up one morning and realize nothing in the wardrobe closet fits any more? Writing 1MB worth of JavaScript takes a lot of effort – it does not happen overnight. One of the reasons is desktop and server-side developers writing client code while still somehow wanting to think they are in their comfort zone. A lot of Java developers think JavaScript is Java’s silly little brother. They devise reality-distortion force fields that allow them to pretend they are still writing in strongly typed languages (I am looking at you, GWT). Entire Java libraries are converted to JavaScript without a lot of scrutiny. I am strongly against it. As I always say, si fueris Romae, Romano vivito more. If your code will end up running in a browser as JavaScript, why not write in JavaScript and take advantage of all the unique language features, instead of losing things in translation and generating suboptimal code?

But it is not (just) suboptimal code generation that causes the code bloat. It is the whole mindset. Proper strongly typed OO languages encourage creating elaborate class hierarchies and overbuilding your code by using every pattern from the Gang Of Four book at least once. That way the porky code lies. Coding for the web requires acute awareness that every line of code counts and rewards brevity and leanness. As it turns out, you pay for each additional line of JavaScript three times:

  1. More code takes longer to send to the browser over moody and unpredictable networks
  2. More code takes longer for the browser to parse it before being able to use it
  3. More code will take more time to run in the browser (I know that JavaScript VMs have gotten better lately, but in real life I still need to perform an occasional act of scriptocide – the JavaScript performance problem didn’t entirely go away)

And what would an article by a middle-aged architect be without an obligatory “kids these days” lament? Many of the newly minted developers start out on the client side. They become very good at coaxing the browser into doing wonderful things, and this affinity naturally degenerates into doing ALL your work on the client. The server essentially becomes a resource delivery layer, sending JSON to JavaScript frameworks that are doing all of the interesting work in-browser. This is wrong. The proper way of looking at your application is that it consists of the following parts:

  1. Storage (Model)
  2. Server side logic (Controller)
  3. Server side rendering engine (View)
  4. Network transport layer
  5. Browser that renders HTML using CSS and reacting to user interactions via JavaScript

Notice how JavaScript is just one of the many aspects you need to care about. If you look at some implementations these days, you would think all the other aspects exist only to deliver or otherwise support JavaScript. It is the equivalent of a body builder who focused all his training on his right biceps. A balanced approach would yield much better results.

OK, Dejan, quit pontificating. What do we do to avoid dangers of irresponsible AJAXing in the future?

Well, armed with all this hard-won experience, we decided to take the opportunity of the new Jazz Platform project to wipe the slate clean and do things right. We are not alone in this – there is evidence that the pendulum has swung back from Extreme AJAX towards a healthier diet where all the Web app food groups are represented. For example, a lot of people read the blog post about Twitter’s performance-driven redesign, driven by exactly the same concerns I listed above. More recently, I stumbled upon an article by Mathias Schäfer arguing for a similarly mature and sane approach that embraces both the tried-and-true Web architectures and the interactivity that only JavaScript can provide.

You will get much better results by using the proper tool for the job. Want to access stored data fast? Pay attention to which storage solution you pick, and how to structure your data to take advantage of the particular properties of your choice. Need to show some HTML? Just serve a static document. Need to serve a dynamic document that will not change for the session? Use server-side MVC to render it (I am talking about a concept here, not frameworks – many server-side frameworks suffer from similar bloat, but it is less detrimental on the server). Use the power of HTML and CSS to do as much rendering, animation and transition effects as you can (you will be surprised what is possible today with HTML5 and CSS3 alone). Then, and only then, use JavaScript to render widgets and deliver the interactivity that wowed us so much in the recent past. But even then, don’t load JavaScript widgets cold – take advantage of server-side MVC to prime them with markup and data so that no trip back to the server is necessary to render the initial page. You can use asynchronous requests later to incrementally update parts of the page as users go about using it (instead of them aborting it halfway because “this thing takes forever to load, I don’t have that kind of time”).
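The ‘don’t load widgets cold’ advice can be sketched like this. The widget, its markup and the data-cursor attribute are invented for illustration: the server-side view emits both the widget’s first state and the data it will need for later incremental updates, so the first paint costs zero XHR calls.

```javascript
// Server-side view: render the widget's first state AND stash the data
// it needs (here, a cursor for incremental fetches) right in the markup.
function primeWidget(items) {
  var rows = items.map(function (item) {
    return '<li>' + item + '</li>';
  }).join('');
  return '<ul id="feed" data-cursor="' + items.length + '">' + rows + '</ul>';
}

var html = primeWidget(['build passed', 'deploy queued']);
// The first paint needs no XHR - the markup above IS the widget.
// Later, the client updates incrementally, e.g. with jQuery:
//   var cursor = $('#feed').attr('data-cursor');
//   $.getJSON('/feed?after=' + cursor, appendNewRows);
```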

Eight years after AJAX entered our vocabulary, the technique has matured and the early excesses are being gradually weeded out. We now know where overindulgence in JavaScript can lead us and must be constantly vigilant as we write new web applications for increasingly diverse browser and mobile clients. We understand that AJAX is a tool in our toolbox, and as long as we don’t view it as a hammer and all our problems as nails, we will be OK. We can be bad like Jesse James, guilt-free.

© Dejan Glozic, 2013