Dust.js: Such Templating


Last week I started playing with Node.js and LinkedIn’s fork of Dust.js for server side templating. I think I am beginning to see the appeal that made LinkedIn choose Dust.js over a number of alternatives. Back when LinkedIn had a templating throwdown and chose Dust.js, they classified it in the ‘logic-less’ group, in contrast to the ‘embedded JavaScript’ group that allows you to write arbitrary JavaScript in your templates. Here is the list of reasons I like Dust.js over the alternatives:

  1. Less logic instead of logic-less. Unlike Mustache, it allows you to have some sensible view logic in your templates (and add more via helpers).
  2. Single instead of double braces. For some reason Mustache and Handlebars decided that if one set of curly braces is good, two sets must be better. It is puzzling that people insisting on DRY see no irony in needing two sets of delimiters for template commands and variables. Typical Dust.js templates look cleaner as a result.
  3. DRY but not to the extreme (I am looking at you, Jade). I can fully see the HTML markup that will be rendered. Life is too short to have to constantly map the shorthand notation of Jade to what it will eventually produce (and bang your head on the table when it barfs at you or spits out wrong things).
  4. Partials and partial injection. Like Mustache or Handlebars, Dust.js has partials. Like Jade, Dust.js has injection of partials, where the partial reserves a slot for the content that the caller will define. This makes it insanely easy to create ‘skeleton’ pages with common areas, then inject payload. One thing I did was set up three injection areas: in HEAD, in BODY where the content should go and at the end of BODY for injections of scripts.
  5. Helpers. In addition to the very minimal logic of the core syntax, helpers add more view logic if needed (the ‘dustjs-helpers’ module provides some useful helpers that I took advantage of; you can write your own helpers easily).
  6. Stuff I didn’t try yet. I haven’t tried streaming and asynchronous rendering yet, or complex paths to data objects, but I intend to study how I can take advantage of them. They seem like something that will come in handy once I get to more advanced use cases. JSPs support streaming as well, so it is good to know we are not giving anything up by moving to Node/Dust.
  7. Client side templating. This also falls under ‘stuff I didn’t try yet’, but I am singling it out because it requires a different approach. So far our goal has been to replace our servlet/JSP server side with the Node/Dust pair. Having the alternative to render on the client opens up a whole new avenue of use cases. We have been burnt in the past by too much JavaScript on the client, so I want to approach this carefully, but a lot of people have made a case for client side rendering. One thing I would like to try out is adding logic in the controller to sniff the user agent and render the same template on the server or on the client (say, render on the server for lesser browsers or search engine bots); a sketch of this idea follows the list. We will definitely try this out.
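
To make (7) concrete, here is a minimal sketch of that controller logic. The user agent test and the ‘shell’ template are hypothetical stand-ins, not code we have actually written:

exports.page = function(req, res) {
  var model = { title: 'Simple', active: 'simple' };
  var ua = req.headers['user-agent'] || '';
  // Crude stand-in for real device/bot detection
  if (/MSIE [678]\.|Googlebot/.test(ua)) {
    // Lesser browsers and search engine bots get server side rendering
    res.render('simple', model);
  } else {
    // Capable browsers get a shell page plus the model as JSON;
    // the same 'simple' template is then rendered by Dust on the client
    res.render('shell', { bootstrap: JSON.stringify(model) });
  }
};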

In our current project we use a number of cooperating Java apps with plain servlets for controllers and JSPs for views. We were very disciplined about not using direct Java expressions in JSPs, only JSTL/EL (Tag Library and Expression Language). JSPs took a lot of flak in the dark days of spaghetti JSPs with a lot of Java code embedded in views, an era that essentially drove the current aversion to logic in templates to the absurd levels seen in Mustache. It is somewhat ironic that you can easily create similar spaghetti Jade templates with liberal use of embedded JavaScript, so that monster is alive and well.

Because of our discipline, porting our example app to Node.js, with Dust.js for views and Express.js for middleware and routing, was easy. Our usual client stack (jQuery, Require.js, Bootstrap) was perfectly usable – we just copied the client code over.

Here is the shared ‘layout.dust’ file that is used by each page in the example app:

<!DOCTYPE html>
<html>
  <head>
    <title>{title}</title>
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="initial-scale=1.0, maximum-scale=1.0, user-scalable=0" />
    <link rel="stylesheet" href="/stylesheets/base.css">
    {+head/}
  </head>

  <body class="jp-page">
    {>navbar/}
    <div class="jp-page-content">
      {+content/}
    </div>

    <script src="/js/base.min.js"></script>
    <script>
      requirejs.config({
        baseUrl: "/js"
      });
    </script>
    {+script/}
  </body>
</html>

Note the {+head/}, {+content/} and {+script/} sections – they are placeholders for content that will be injected from templates that include this partial. This ensures the styles, meta properties, content and script are injected in the proper places in the template. One thing to note is that you don’t have to define empty placeholders – you can place content between the opening and closing tags of the section, but we didn’t have any default content to provide here. You can view an empty tag as an ‘injection point’ (this is where the stuff will go), whereas a placeholder with some default content is more like an ‘overriding point’ (the stuff in the caller template will override it).
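
For illustration, a placeholder with default content would look something like this (a hypothetical snippet, not from our actual layout):

{+head}
  <link rel="stylesheet" href="/stylesheets/default.css">
{/head}

Pages that inject nothing into ‘head’ get the default stylesheet; pages that do inject content replace it.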

The header is a partial pulled into the shared template. It has been put together quickly – I can easily see the links for the header being passed in as an array of objects instead (it would make the partial even cleaner; a data-driven sketch follows the partial below). Note the use of the helper for controlling the selected highlight in the header. It simply compares the value of the active link to the static values and adds the CSS class ‘active’ if they match:

<div class="navbar navbar-inverse navbar-fixed-top jp-navbar" role="navigation">
   <div class="navbar-header">
     <button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-ex1-collapse">
        <span class="sr-only">Toggle Navigation</span>
        <span class="icon-bar"></span>
        <span class="icon-bar"></span>
        <span class="icon-bar"></span>
     </button>
	<a class="navbar-brand" href="/">Examples<div class="jp-jazz-logo"></div></a>
   </div>
   <div class="navbar-collapse navbar-ex1-collapse collapse">
      <ul class="nav navbar-nav">
	<li {@eq key=active value="simple"}class="active"{/eq}><a href="/simple">Simple</a></li>
	<li {@eq key=active value="i18n"}class="active"{/eq}><a href="/i18n">I18N</a></li>
	<li {@eq key=active value="paging"}class="active"{/eq}><a href="/paging">Paging</a></li>
	<li {@eq key=active value="tags"}class="active"{/eq}><a href="/tags">Tags</a></li>
	<li {@eq key=active value="widgets"}class="active"{/eq}><a href="/widgets">Widgets</a></li>
	<li {@eq key=active value="opensocial"}class="active"{/eq}><a href="/opensocial">OpenSocial</a></li>
      </ul>
      <div class="navbar-right">
	<ul class="nav navbar-nav">
	   <li><a href="#">Not logged in</a></li>
	</ul>
      </div>
   </div>
</div>
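
As an aside, here is roughly what the data-driven version mentioned above could look like. The ‘links’ array and its ‘id’, ‘url’ and ‘label’ properties are made up for illustration, and I am assuming @eq resolves the unquoted value=id parameter against the context, the same way it resolves key=active:

<ul class="nav navbar-nav">
  {#links}
  <li {@eq key=active value=id}class="active"{/eq}><a href="{url}">{label}</a></li>
  {/links}
</ul>

The controller would then pass links: [{ id: 'simple', url: '/simple', label: 'Simple' }, …] along with the rest of the model.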

Finally, here is the sample page using these partials:

{>layout/}
{<content}
  <h2>Simple Page</h2>
    <p>
      This is a simple HTML page generated using Node.js and Dust.js, which loads CSS and JavaScript.<br/>
      <a id="myLink" href="#">Click here to say hello</a> 
    </p>
{/content}
{<script}
   <script src="/js/simple/simple-page.js"></script>
{/script}

In this page, we include the shared partial ‘layout.dust’, then inject the content into the ‘content’ placeholder and some script into the ‘script’ placeholder.

The Express router for this page is very short – all it does is render the template. Note how we are passing the title of the page and also the value of the ‘active’ property to ensure the proper link is highlighted in the header partial:

exports.simple = function(req, res){
  res.render('simple', { title: 'Simple', active: 'simple' });
};
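
The route itself is hooked up to Express in app.js in the usual way (the exact path here is my assumption, based on the links in the header partial):

app.get('/simple', routes.simple);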

Running the Node app gives you the following in the browser:

[screenshot: the simple page rendered in the browser]

Since we are using Bootstrap, we also get responsive design thrown in for free:

[screenshot: the same page at a narrow viewport, with the responsive layout kicking in]

There you go. Sometimes it pays to follow in the footsteps of those who did the hard work before you – LinkedIn’s fork of Dust.js is definitely a very comfortable and capable templating engine and a great companion to Node.js and Express.js. We feel confident using it in our own projects. In fact, we have decided to write one of the apps in our project using this exact stack. As usual, you will be the first to know what we learned as we are going through it.

© Dejan Glozic, 2014


On the LinkedIn’s Dusty Trail


There is an old New Yorker cartoon (from 1993) where a dog working on the computer tells another dog: “On the Internet, nobody knows you are a dog”. That’s how I occasionally feel about GitHub projects – one could be a solid, multi-contributor powerhouse churning out release after release, and another a great idea stalled when its single contributor got distracted by something new and shiny. The thing is that you need to inspect all the project artifacts carefully to tell which is which – single contributors can mount a powerful and amplified presence on GitHub (until they leave, that is).

This issue comes with the territory in Open Source (considering how much you pay for all the hard work of others), but nowhere is it more acute than in LinkedIn’s choice of templating library in 2011. In an often quoted blog post, the LinkedIn engineering team embarked on a quest to standardize around one templating library that could run both on the server and on the client, and also check a number of other boxes they put forward. Out of several contenders they finally settled on Dust.js. This library uses curly braces for pattern substitution, which puts it in a hipster-friendly category alongside Mustache and Handlebars. But here is the rub: while the library ticks most of LinkedIn’s self-imposed requirements, its community support leaves much to be desired. The sole contributor seemed unresponsive.

Now, if it were me, I would have moved on, but LinkedIn’s team decided they would not be deterred. In fact, it looks like they rather liked the fact that they could evolve the library as they learned by doing. The problem was that committer rights work by bootstrapping – only the original committer could accept LinkedIn’s changes, which apparently didn’t happen with sufficient snap. Long story short, behold the ‘LinkedIn Fork of Dust.js’, or ‘dustjs-linkedin’ as it is known to NPM.

I followed this story with mild amusement, shaking my head, until the end of 2013, when PayPal saw the Node.js light. As part of their Node.js conversion, they picked LinkedIn’s fork of Dust.js for their templating needs. This reminded me of how penguins jump into the water – they all wait until one of them jumps, then they all follow in quick succession. Channeling my own inner penguin, I decided the water was fine and started playing with dustjs-linkedin myself.

This is not my first foray into the world of Node.js, but in my first attempt I used Jade, which is just too DRY for my taste. Being a long-time Eclipse user, I just could not revert to the command line, so I resorted to a collection of Web development tools, then added Nodeclipse, mostly for the project creation wizard and the launcher. Eclipse is very important to me because it answers one of the key issues plaguing Node.js developers who go beyond ‘Hello, World’ – how do I control and structure all the JavaScript files (incidentally one of the hard questions that Zef Hemel posed in his blog post on blank canvas projects).

Then again, Nodeclipse is not perfect, and dustjs-linkedin is not one of the rendering engines they cover in the project wizard. I had to create an Express project configured for Jade, turn around and delete Jade from the project, and use NPM to install dustjs-linkedin locally (i.e. in the project tree under ‘node_modules’), like so:

[screenshot: the Nodeclipse project structure, with dustjs-linkedin under node_modules]
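
Outside of the IDE, the local install amounts to running NPM at the project root:

npm install dustjs-linkedin
npm install consolidate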

After working with Nodeclipse for a while, and not being able to use its Node launcher (I had to configure my own external tool launcher), I am now questioning its value, but at least it got the initial structure set up for me. Now that I have a good handle on the overall structure, I can create new Node projects myself, so general purpose Web tooling with HTML, JSON and JavaScript editors and helpers may be all you need (of course, you also need to install Node.js and NPM, but you would need to do that in all scenarios).

Hooking up dustjs-linkedin also requires consolidate.js in a way that is a bit puzzling to me, but it seems to work well, so I didn’t question it (the author is TJ of Express fame, and exploring the code I noticed that dustjs-linkedin is actually listed as one of the recognized engines). The change required was to pull in dustjs-linkedin and consolidate, declare dust as a new engine and map it as the view engine for the app:

var express = require('express')
, routes = require('./routes')
, dust = require('dustjs-linkedin')
, cons = require('consolidate')
, user = require('./routes/user')
, http = require('http')
, path = require('path');

var app = express();

// all environments
app.set('port', process.env.PORT || 3000);
app.set('views', __dirname + '/views');
app.engine('dust', cons.dust);
app.set('view engine', 'dust');

That was pretty painless so far. We have configured our views to be in the /views directory, so Dust files placed there can be directly found by Express. Since Express is an MVC framework (although much lighter than what you are normally used to in the JEE world), the C part is handled by routers, placed in the /routes directory. Our small example will have just a landing page rendered by /views/index.dust and the controller will be /routes/index.js. However, to add a bit of interest, I will toss in block partials, showing how to create a base template, then override the ‘payload’ in the child template.

We will start by defining the base template in ‘layout.dust’:

<!DOCTYPE html>
<html>
  <head>
    <title>{title}</title>
    <link rel='stylesheet' href='/stylesheets/style.css' />
  </head>
  <body>
    <h1>{title}</h1>
    {+content}
    This is the base content.
    {/content}
  </body>
</html>

We can now include this template in ‘index.dust’ and define the content section:

{>layout/}
{<content}
<p>
This is loaded from a partial.
</p>
<p>
Another paragraph.
</p>
{/content}

We now need to define the index.js controller in /routes, because the controller invokes the view for rendering:

/*
 * GET home page.
 */
exports.index = function(req, res) {
  res.render('index',
    { title: '30% Turtleneck, 70% Hoodie' });
};

In the code above we are sending the result of rendering the view using Dust to the response. We specify the collection of key/value pairs that will be used by Dust for variable substitution. The only part left is to hook up our controller to the site path (our root) in app.js:

app.get('/', routes.index);
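
For completeness, the tail end of the generated app.js starts the server (this is the stock Express 3 scaffold, which is why ‘http’ was pulled in at the top):

http.createServer(app).listen(app.get('port'), function() {
  console.log('Express server listening on port ' + app.get('port'));
});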

And that is it. Running our Node.js app using the Nodeclipse launcher will make it start listening on localhost port 3000:

[screenshot: the app’s landing page at localhost:3000]

So far so good, and it is probably best to stop while we are ahead. We have a Node.js project in Eclipse configured to use Express and LinkedIn’s fork of Dust.js for templating, and everything is working. In my next installment, I will dive into Dust.js in earnest. This is going to be fun!

© Dejan Glozic, 2014

Beam My Model Up, Scotty!

[image: Star Trek transporter room scene from TNG]

I know, I know. Scotty is from the original series, and the picture above is from TNG. Unlike Leonard from The Big Bang Theory, I prefer TNG over the original, and also Picard over Kirk. Please refrain from hate mail.

Much of both real and virtual ink has been spilled over Star Trek transporter technology and its quirks. Who can forget the episode in which Scotty preserved himself as a transporter pattern in an endless loop, only to be found and freed 75 years later by the TNG crew (see, there’s your Scotty-TNG link). But this article is about honouring all the hours I spent on various Star Trek instalments by applying the transporter principles to Web development.

As the Web development collective mind matures, a consensus is forming that spaghetti is great on your dining table but bad for your code. The MVC design pattern on the server is simply accepted, like gravity, and several JavaScript frameworks claim it is necessary on the client as well. But is it always?

Here is a case in point. Our team recently rebuilt a dashboard – a relatively complex piece of interactive technology. This is our third attempt at it (third time’s the charm, right?). Armed with all the experience of the previous two versions, plus our general desire to pull back from the extreme Ajax, we were confident this would be it. Why build a dashboard? Yes, I know, everybody and his uncle has one, and there are perfectly good all singing, all dancing commercial ones to be had. Here is a counter-question: why do you have a kitchen in your house when you can eat at perfectly good restaurants? Everybody has a dashboard because it is much more convenient to have your own (Google Analytics has one, the WordPress I am writing this on has one). They can be simple because they don’t need to cater to all the possible scenarios, and they can be much more tightly integrated. Not to mention cheaper (like eating at home vs. eating out). But I digress.

In the previous version, we sent a lot of JavaScript to the client, and after transporting and parsing it, JavaScript turned around and used XHR to fetch the model from the server as JSON. Then we wired up the model and the view and every time users made a change, we would update the model to keep it in sync. Backbone.js or Angular.js would have come in handy for this, only they didn’t exist when we wrote that code. Plus we hate frameworks.

In this version we wanted the dashboard page to arrive mostly assembled from the server. That part was clear. But we wanted to avoid the MV* complexity and performance consequences if possible. Plus it is really hard to even apply the usual data binding with dashboards because you cannot just wire the template with the model and let the framework do its magic. In a dashboard, the template itself is editable – users can drag and drop widgets around, delete them and add new ones. Essentially both the template and the data are built up from the model. This is not your grandmother’s MVC, that’s for sure.

Then we remembered Star Trek transporters and figured – why don’t we disperse the model and embed the model shards into the DOM as we are building the initial HTML on the server? We were waiting to hit a brick wall at some point, but it never happened – this is actually a viable option. Here is why this approach (I will call it M/V) works for us like a charm:

  1. Obviously it is easy to do on the server – when we are building the initial response, embedding model shards into the HTML is trivial using HTML5 custom data properties (the attributes that start with ‘data-’).
  2. The model arrives ready to use on the client – no need for an extra request to fetch the model after the fact. OK, Backbone.js has a way of bootstrapping models so that they arrive as payload with the initial response. However, that approach qualifies as CrazyHacks™ at best, and you save nothing payload-wise – the sum of the shards equals one whole model.
  3. When the part of the DOM needs to change due to the user interaction, it is easy to keep the associated ‘data-*’ properties in sync.
  4. When moving entire DOM branches (say, when dragging widgets between columns or changing layouts), model shards stay with their DOM elements – this is a real winner in our use case.
  5. We use DOM custom event bubbling to notify about the change – one listener at the top of the DOM branch is all that is needed to keep track of the sharded model’s dirty state. This helps us to stay lean because there is no need for JavaScript pub/sub implementations that increase size. It is laughably trivial to fire and listen to these events using jQuery, and they are fast, being implemented natively by the browser. Bootstrap uses the same approach.
  6. When the time comes to save, we traverse the DOM, collect the shards and assemble them again into a transient JavaScript object, then send it as JSON to the server via XHR. We use PUT/POST when we have the entire model, but more often than not, the view only renders a fraction of it (lazy loading), so we use PATCH instead (a sketch of the full cycle follows this list).
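
Here is a minimal sketch of that cycle in browser JavaScript with jQuery. All the names in it (the ‘data-model’ attribute, the ‘dashboard:change’ event, the ‘/dashboard’ endpoint, the period select) are made up for illustration:

// Server-rendered HTML carries a model shard in an HTML5 data property:
// <div class="widget" data-model='{"id":"w1","type":"chart","period":"7d"}'>

// Keep the shard in sync when the user changes something in its widget
$(document).on('change', '.widget select.period', function() {
  var $widget = $(this).closest('.widget');
  var model = $widget.data('model');   // jQuery parses the JSON attribute
  model.period = $(this).val();
  $widget.data('model', model);
  $widget.trigger('dashboard:change'); // custom event, bubbles up the DOM
});

// One listener at the top of the DOM branch tracks the dirty state
var dirty = false;
$('#dashboard').on('dashboard:change', function() { dirty = true; });

// On save, traverse the DOM, reassemble the shards, PATCH them back
function save() {
  if (!dirty) return;
  var shards = $('#dashboard .widget').map(function() {
    return $(this).data('model');
  }).get();
  $.ajax({
    type: 'PATCH',
    url: '/dashboard',
    contentType: 'application/json',
    data: JSON.stringify({ widgets: shards })
  });
}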

This approach bought us some unreal savings in performance and size. Right now, we have brought total JavaScript down from ~245 kB to ~70 kB minified and gzipped (this includes jQuery, Require.js and Bootstrap). The best part is that this JavaScript loads asynchronously, after the initial content has already been displayed. The page is not only objectively fast, it is subjectively even faster, because the initial content (what the Twitter team calls ‘time to first tweet’) is sent in the first response.

As is often the case in life, this approach may not work for everybody. A pattern where true MVC really shines is when you have multiple views listening to the same model. Changing the model updates all the views simultaneously, without the need for pub/sub hell. But if you have exactly one view and one model, using M/V becomes a real option.

I can hear the collective gasp of architects in the audience. There are many things you can say about M/V, but architecturally pure it is not. But here is the problem: speed and page size should be valid architectural concerns too. In fact, if you want your Web app to be usable on mobile devices, they are practically the only concerns that matter. Nobody is going to wait for your architecturally pure pig of a page to load, even while waiting for their coffee – life is too short for 1 MB+ web pages, pure or not.

Say No to layer cake, embrace small, live long and prosper!

© Dejan Glozic, 2013

Swimming Against The Tide

True story: I visited the Ember.js web site and saw three required hipster artifacts: ironic mustaches, cute animals and Ray-Ban Wayfarer glasses (on a cute animal). A tweet was in order, and within minutes it was favorited by a competing client side framework by Google (Angular). Who would have guessed client side frameworks are so catty? I can almost picture the Angular News guy clicking the ‘Favorite’ button and yelling ‘Oh, Burn!!’ And it wasn’t even a burn, I actually like the Ember web site – it is so… cute.

The reason I visited Ember (and Angular, and Backbone, and Knockout) was to figure out what was going on. There is a scene in the 2002 movie Gangs of New York where Leonardo DiCaprio leads his gang of Dead Rabbits to fight the competing gang (the Natives), and he has to wade through a river of people running in the opposite direction to avoid cannons fired by the Navy from the harbor. Leonardo and his opponent, a pre-Lincoln Daniel Day-Lewis, were so enthralled by their epic fight that they missed the wider context of the New York Draft Riots happening around them. Am I like Leo (minus the looks, fame and fortune), completely missing the wider historic context around me?

Not long ago, I posted a repentant manifesto of a recovered AJAX addict. I swore off the hard stuff and pledged to only consume client-side script in moderation. The good people from Twitter and 37signals all went through the same trials and tribulations and adopted a similar approach (or I adopted theirs). Most recently, Thomas Fuchs, the author of Zepto.js, expressed a similar change of heart based on his experiences getting the fledgling product Charm off the ground. Against that backdrop, the noise around the client-side MVC frameworks mentioned above is reaching deafening levels, with people apparently not caring about the problems that burned us so badly. So what gives?

There are two major camps in Web development right now, and they mostly differ in the role they allocate to the server side. The server side camp (e.g. Twitter, Basecamp, Thomas, yours truly) has been burned by heavy JavaScript clients and wants to render the initial page on the server, subsequently using PJAX and modest amounts of JavaScript for delicious interactivity and crowd pleasers. Meanwhile, a large population of developers still wants to build one page Web apps, where the script takes over from the browser for long periods of time, relegating the server to the role of a REST service provider. I don’t want to repeat myself here (kindly read my previous article), but the issues of JavaScript size, parsing time, performance, memory leaks, browser history and SEO didn’t go away – they still exist. Nevertheless, judging by the interest in Angular, Backbone, Ember and other client side JavaScript frameworks, a lot of people think the tradeoffs are worth it.

To be accurate, there is a third camp, populated mostly by the LinkedIn engineering team. They are in a category of their own because they are definitely not a one page app, yet they do use Dust.js for client side rendering. But they also use a whole mess of in-house libraries for binding pages to services, assembling them, delaying rendering when below the fold, etc. You can read about it on their blog – suffice it to say that, similar to Facebook’s Big Pipe, the chances you can repeat their architecture in your project are fairly slim, so I don’t think their camp is of practical value to this discussion.

Mind you, nobody is arguing for a return to the dark ages of Web 1.0. There is no discussion of whether JavaScript is needed, only whether all the action is on the client or there is a more balanced division of labor with the server.

I thought long and hard (that is, a couple of days tops) about the rise of JavaScript MVC frameworks. So far, this is what I have come up with:

  1. Over the last few years, many people have written a lot of crappy, unmaintainable, messy JavaScript. They now realize the value of architecture, structure and good engineering (client or server).
  2. A lot of people realize that the really smart people writing modern JavaScript frameworks will probably do a better job providing this structure than they would themselves.
  3. Many projects are simply not large enough to hit the general client side scripting turning point. This puts them in a sweet spot for client side MVC – large enough to be a mess and benefit from structure, not large enough to be a real pig that makes desktop browsers sweat and mobile browsers kill your script execution due to consuming too much RAM.
  4. These projects are also not easily partitioned into smaller contexts that can be loaded as separate Web pages. As a result, they rely on MVC JavaScript frameworks to perform data binding, partitioning, routing and other management.
  5. Modern templating engines such as Mustache or Handlebars can run both on the client and on the server, opening up the option of rendering the initial page server side.
  6. The JavaScript community is following the same path that Web 1.0 server side MVC went through: the rise of opinionated and prescriptive MVC frameworks that try to box you into good practices and increase your productivity at the price of control and freedom.
  7. The people using these frameworks don’t really have performance as their first priority.
  8. The people using these frameworks plan to write a separate or native mobile client.

There could be truth in this, or I could be way off base. Either way, my team has no intention of changing course. To begin with, we are allergic to the loss of control these frameworks demand – we subscribe to the camp that holds that frameworks are bad. More importantly, we like how snappy our pages are now and want to keep them that way. We intend to keep an eye on the client MVC frameworks, and maybe one day we will hit a use case where client side data binding and templates will prove useful (say, if we attempt something like Gmail or Google Docs or Google Calendar). If that happens, we will limit it to that particular use case, instead of going all in.

Meanwhile, @scottjehl perfectly describes my current state of mind thusly:

[tweet screenshot from @scottjehl]

© Dejan Glozic, 2013

RESS to the Rescue

It seems that every couple of years we feel a collective urge to give a technique a catchy acronym in order to speed up conversations about UI design. For the last couple of years, we have grown accustomed to throwing around the term Responsive Design casually, probably because it rolls off the tongue easier than “we need to make the UI, like, re-jiggle itself on each thingy”. Although I like saying ‘re-jiggle’. Re-swizzle is a close second.

As we were working on the tech preview of the Jazz Platform Home app, we naturally wanted it to behave on phones and tablets. And it did. The header in particular is a breathing, living thing, going through a number of metamorphoses until a desktop caterpillar turns into a mobile butterfly:

[screenshot: the header morphing through its responsive states]

It all works very well, but a (butter)fly in the ointment is that we need to send the markup for both the caterpillar and the butterfly on each request. We do CSS media query transformations to make them do what we want, but we are clearly suboptimal here. I am sure that with some CSS shenanigans we could pull it off with one set of HTML tags, but at some point it just becomes too hard and error-prone. Clearly it is one thing to make the desktop/tablet header drop elements and adjust with screen size, and another to switch to a completely new element (off-screen navigation) while keeping the former rotting in the display:none hell.

This is not the only example where we had to use this technique. The next example has the section switcher in the Project Details page go from vertical set of tabs to a horizontal icon bar:

[screenshot: the section switcher as vertical tabs and as a horizontal icon bar]

Again, both markups are sent to the client, and switched in and out by CSS as needed. This may seem minor, but in a more complex page you may end up sending too much stuff to all clients. We need to involve the server using some kind of hybrid technique.

Like JJG before him, Luke Wroblewski did not actually invent this technique, he simply put a name on it. RESS stands for REsponsive design + Server Side and first appeared in Luke’s article in 2011. The technique calls for involving the server in responsive design so that only what is needed is sent to the client. It also goes by the name ‘adaptive design’. The server is not meant to replace but rather complement responsive design. You can view the server as the more coarse-grained player in this duo, sending ‘mobile’ or ‘desktop’ stuff to the client, while responsive design takes care of adapting to the device in more steps or decision points. It needs to, because what does ‘mobile’ even mean today? There is a continuum of screen sizes now, from phones to phablets to mini-tablets to regular tablets to small laptop screens to desktops – what is ‘mobile’ in that context?

It seems like RESS would help us in our two examples above by allowing us to not send the desktop header to the phone, or the off-screen navigation to the desktop. However, the server has limited means to perform device detection. At the very bottom, there is the HTTP header ‘user-agent’. Agent sniffing is a never-ending task because of the new devices that keep coming online. There is a lot of virtual ink spilled on algorithms that will correctly detect what you are running. This is an error-prone activity, relying on pattern matching against lists that go stale unless updated constantly. There have been attempts to subcontract this job, but the solutions are either stack-specific or fee-based (such as WURFL or detectmobilebrowsers.mobi), or a combination of these approaches. I don’t see how any of these solutions make sense for most of the web sites that want to go the RESS route. Simply judging by the number of available solutions, you can see that this is not a solved problem, analogous to the number of hair loss treatments.

Another approach we can use (or combine with user agent detection) is to send the viewport size to the server in a cookie. This arms the server with additional means to make the correct decision: while, say, an iPad is reported as ‘mobile’, it has enough resolution to be sent the full desktop version of the page, and the viewport size would reveal that. The problem with this technique is that your site needs to be visited at least once to give JavaScript a chance to set the cookie, which means the optimization only kicks in on repeat visits.
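
Here is a minimal sketch of the cookie technique. The cookie name and the Express middleware are made up for illustration:

// Client: record the viewport size for subsequent visits
document.cookie = 'viewport=' + window.innerWidth + 'x' + window.innerHeight + '; path=/';

// Server (Express middleware): parse the cookie if present; when it is
// absent, leave the decision entirely to client side responsive design
app.use(function(req, res, next) {
  var match = /(?:^|;\s*)viewport=(\d+)x(\d+)/.exec(req.headers.cookie || '');
  req.viewport = match ? { width: +match[1], height: +match[2] } : null;
  next();
});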

Considering the imprecise nature of device detection, you should consider RESS a one-two approach – the server tries its best to detect the device and send the appropriate content, then responsive design on the client kicks in. A prudent approach is to err on the safe side on the server and give up on device detection at the first sign of trouble, letting the client do its thing (when, say, a user-agent has never been seen before, or cookies are disabled, preventing viewport size transmission). With this approach, we should ensure that client-side responsive design works well on its own, and consider the server side an optimization – a bonus feature that kicks in for the most straightforward cases (i.e. the most well known phones and tablets) and/or on repeat visits.

To use a vision analogy, the most you should expect your server to do in a RESS-enabled site is to tell you that the vehicle on the road is a car and not a truck, and that it is black(ish). As long as you don’t expect it to tell you that it is a 2012 BMW i3 with leather seats and the technology package, you will be OK.

© Dejan Glozic, 2013

The Gryphon Dilemma


In my introductory post The Turtleneck and the Hoodie I kind of lied a bit when I said I stopped doing everything I did in my youth. In fact, I am playing music, recording and producing more than I have in a while. I realized I can do things in the comfort of my home that I could have only dreamed of in my youth. My gateway drug was Apple GarageBand, but I recently graduated to the real deal – Logic Pro X. As I was happily mixing a song in its beautifully redesigned user interface, I needed some elaborate delay, so I reached for the Delay Designer plug-in. What popped up required some time to get used to:

[screenshot: the Delay Designer plug-in]

This plug-in (and a few more in Logic Pro) clearly marches to a different drummer. My hunch is that it is a carry-over from the previous version, probably sub-contracted, and the authors of the plug-in didn’t get around to updating it to the latest L&F. Nevertheless, they shipped it this way because it is very powerful and does its primary task well, albeit in its own quirky way.

This experience reminded me of a dilemma we face today in any distributed system composed of a number of moving parts (let’s call them ‘apps’). A number of apps running in a cloud platform can serve Web pages, and you may want to hook them up together in a distributed ‘site of sites’. Clearly the loose nature of this system is great from the point of view of flexibility. You can evolve each app individually as long as the contract that glues them together is upheld. One app can be stable and move slowly, while you can rev the other one like mad. This whole architecture works great for non-visual services. The problem is when you try to pull any kind of coherent end user experience out of this unruly bunch.

A ‘web site’ is an illusion, inasmuch as ‘movies’ are ‘moving’ – they are really a collection of still images switched at 24fps or faster. Web browsers are always showing one page at a time. If a user clicks on an external link, the browser will unceremoniously dump all of the page’s belongings on the curb and load a new one. If it weren’t for the user session and content caching, browsers would be like Alzheimer’s patients, having a first impression of the same page over and over. What binds pages together are the common areas that they share with their kin, making them part of a whole.

In order to maintain this illusion, web pages have always shared common areas that navigationally bind them to other pages. For the illusion to be complete, these common areas need to be identical from page to page (modulo selection highlights). Browsers have become so good at detecting shared content on the pages they are switching in and out that you can only spot a flash or flicker if the page is slow or there is some other anomaly. Normally the common areas are included using the View part of the MVC framework – including page fragments is view templating 101. Most of the time it appears as if only the unique part of the page is actually changing.

Now, imagine what happens when you attempt to build a distributed system of apps where some of the apps provide pages and others supply common areas. When all the apps are version 1.0, all is well – everything fits together, and it is impossible to tell your pages are really put together like words on ransom notes. After a while, the nature of independently moving parts takes over. We have two situations to contend with:

  1. An app that supplies common areas is upgraded to v2.0 while the other ones stay at v1.0
  2. An app that provides some of the pages is upgraded to v2.0 while common areas stay at 1.0
[diagram: evolution of composite pages with parts evolving at a different pace]

These are just two sides of the same coin – in both cases, you have a potential for end-results that turn into what I call ‘a Gryphon UX’ – a user experience where it is obvious different parts of the page have diverged.

Of course, this is not a new situation. Operating system UIs go through these changes all the time with more or less controversy (hello, Windows 8 and iOS7). When that happens, all the clients using their services get the free face lift, willy-nilly. However, since native apps (either desktop or mobile) normally use native widgets, there are times when even an unassisted upgrade turns out without a glitch (your app just looks more current), and in real world cases, you only need to do some minor tweaking to make it fit the new L&F.

On the Web however, site design runs much deeper, affecting everything on each page. A full-scale site redesign is a sweeping undertaking that is seldom attempted without full coordination of components. Evolving only parts of a page is plainly visible, and results in the page not only being put together like a ransom note but actually looking like one.

There is a way out of this conundrum (sort of). In a situation where a common area can change on you at any time, app developers can sacrifice inter-page consistency for intra-page consistency. There is no discussion that a common set of links is what makes a site, but these links can be shared as data, not finished page fragments. If apps agree on a navigational data interchange format, they can render the common areas themselves and ensure gryphons do not visit them. This is like reducing your embassy status to a consulate – clearly a downturn in relationships.
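
As an illustration, the navigational data interchange could be as simple as a JSON document along these lines (a made-up format, not a real spec):

{
  "version": "2.0",
  "links": [
    { "id": "home", "label": "Home", "url": "https://example.com/" },
    { "id": "projects", "label": "Projects", "url": "https://example.com/projects" }
  ]
}

Each app would fetch this data and render the links in its own current style, keeping every page internally consistent.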

Let’s apply this to the scenario above. With version evolution, individual pages will maintain their consistency, but as users navigate between them, the common areas will change – the illusion will be broken, and it will be very obvious (and jarring) that each page is on its own. In effect, what was before a federation of pages is now more like a confederation – a looser union bound by common navigation but not a common user experience (a ‘page ring’ of sorts).

[diagram: evolution of composite pages where each page is fully controlled by its provider]

It appears that this is not so much a way out of the problem as a ‘pick your poison’ situation. I already warned you that the Gryphon dilemma is more of a philosophical problem than a technical one. I would say that in all likelihood, apps that coordinate and work closely together (possibly written by the same team) will opt to share common areas fully. Apps that are more remote will prefer to maintain their own consistency at the expense of the inter-page experience.

I also think it all depends on how active the app authors are. In a world of continuous development, perhaps short periods of Gryphon UX can be tolerated knowing that a new stable state is only a few deploys away. Apps that have not been visited for a while may prefer to not be turned into mythological creatures without their own consent.

And to think that Sheldon Cooper wanted to actually clone a gryphon – he could have just written a distributed Web site with his three friends and let nature take its course.

© Dejan Glozic, 2013

The Web UI Integration Continuum


One of the key roles of the Jazz Platform I am working on right now is that of integration. From the early days of the Eclipse Platform, I was fascinated with composable systems, and how this composition manifests itself in the user interface. Integration on the Web is simultaneously natural and frustrating. It is natural because the entirety of the Web is a gargantuan aggregate of parts, but it is frustrating when this aggregation is attempted inside one of those parts. As far as the browser is concerned, you are giving it a page – the whole thing, HEAD, BODY, the works. It is not concerned with how you source the content of this page, and will in fact fight you bitterly if you want to pull off a buffet-style assembly of the page, sourcing parts from different domains. For the browser, the buck stops at your page, and if you disagree, you will be consigned to iframe hell (and yes, I know about Web Components, HTML Imports and Project Polymer, but let’s talk again when there is less need for all the polyfills in modern browsers).

As the architect responsible for the Web UI integration of the Platform, I tried to get a better perspective on this topic, pulling back from the details as far as I could (until the letters on my monitor became too small to read). Everywhere I looked on the Web, I could see all kinds of integration examples, and as looking for patterns is one of my hobbies, I tried to group them into the smallest number of buckets. I have noticed that we as a species are obsessed with trying to fit anything we can lay our hands on into four quadrants, probably owing to our reliance on cardinal directions since the dawn of civilization. Try as I might, I could not come up with a four quadrants analogy of my own. The picture that emerged in my head turned out instead to be four distinct points on a continuum in the upper right quadrant:

[diagram: the Web UI integration continuum]

The coordinates of the space are semantic (how much the elements being integrated depend on a particular vocabulary) and compositional (how much the elements have an expectation of composition in some kind of a container). The four distinct points or modes on the continuum are:

  1. Data – in this mode, participants are asked to contribute data via services following an agreed upon vocabulary. The meaning of the data is fully known to the platform and it is responsible for rendering the data in various ways depending on the context. The platform serves as a data or status integrator. The value is in being able to see the data from various places put together and rendered by the platform, or made available to other participants to render the aggregation.
  2. Links – in this mode, participants are asked to contribute hyperlinks to various places (again, via services but this time we are returning navigation elements, not status data). The platform serves as a navigation integrator. The value is in having a one-stop shop for navigating to participants’ pages, but in a way that is fully dynamic (otherwise why not just use any of the gazillion bookmarks management tools out there).
  3. Activities – in this mode, participants are providing a timeline of activities. For the sake of generalization, we will assume that these activities are in the Activity Streams format, and we will promote an RSS feed into an activity stream where each activity’s verb is ‘post’. The platform serves as the activity integrator, curating those that are tailored to the authenticated user and producing a merged stream that can be filtered and rendered in many different ways. Incidentally, this is what various feed aggregators have been doing for eons (say, Feedly, which I currently use after my messy divorce from Google Reader).
  4. Widgets – in this mode, participants are providing Web widgets (a general term of a building block that does something useful, e.g. OpenSocial gadget) that the platform will arrange in some manner. The platform has little knowledge of the content of the widgets – it serves as a widget integrator. The value is in composition – putting together a page where building blocks are sourced from a number of different participants, and the platform manages the screen real estate (and may offer drag and drop customization as a bonus).

Note how the modes of integration are arranged in a continuum that goes from semantic to compositional. This gradation indicates how the emphasis is moving from pure data with little or no instructions on how the integration is to be presented, to expecting all the integration to be purely a screen real-estate transaction with little knowledge of what is in the box. The reason this is a continuum is that few real world use cases fall neatly into just one category. Links may contain a title, a description, an indication whether to navigate in place or open a new window. Activities in activity streams contain a lot of presentation content, including strings, images and snippets of HTML that will be rendered in the activity bounding box (the activity feed renderer views these snippets as black(ish) boxes, modulo markup sanitation for security reasons). Widgets that expect to be physically placed on a page may still provide some metadata allowing the container to have some basic knowledge of what they are, even though the gist of the contract is still to carve out an area of the page and let them do their thing.

What this all means in practice becomes clear when you want to ask a participant in the integration platform to contribute information. Let’s say that we want to hook up a server into the platform UI. We could:

  1. Provide a status REST service that, when asked, returns JSON with the current state of the server (“I am OK”, “I am kinda bummed”), with additional information such as memory, CPU utilization, etc., according to an agreed upon server status vocabulary (a hypothetical example follows this list). The platform can render a happy face for a good server and a sad face for a sad one, and a hover over the server icon can show the basic stats, all sourced from the service. This is data or status integration. We can easily mix a whole mess of servers on the same page (tons of happy and sad faces, as they go on living their server lives).
  2. Servers can post an activity stream, letting us know when things change (memory low, too many processes, restarting, online again). We can merge these activities into admin’s activity feed, making him forever haunted by the needy servers and their DOS attack phobias.
  3. Servers can contribute links to their home and admin pages we can hook up into our menus and directories (alternatively, links can be sneaked in via the REST service in (1), combining status data and URLs to probe further).
  4. Servers can provide unique performance widgets providing detailed rendering of the status, CPU and memory utilization in a way we don’t need to understand other than allocate some real estate for them. Maybe we just cannot do justice to the happy or sad server faces, and only the server itself can render all the depth and complexity of its inner state.
  5. We can combine 1 through 4 in one awesome server card, containing current status, navigation for further reading, most recent server activities, and some advanced status widgets provided by the server itself. All four modes of integration folded in one neat package.
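
For example, the status service from (1) could return something along these lines (a hypothetical vocabulary, for illustration only):

{
  "status": "ok",
  "memory": { "used": 512, "total": 2048, "unit": "MB" },
  "cpu": 0.42,
  "links": {
    "home": "https://server1.example.com/",
    "admin": "https://server1.example.com/admin"
  }
}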

Since I formulated this model, I have tried to back-test it with all the scenarios of Web UI integration I came across. Here are some examples:

  1. Activities – Facebook, Twitter, LinkedIn, all activity feeds basically. An RSS feed variety – every feed integrator known to man. My favorite is activities that contain object types for which there are formal or informal preview widgets (say, posting a link to a YouTube video that ends up being rendered with the video widget embedded in place). A great example of activity integration that brings widget integration along for the ride.
  2. Status integration – a gazillion services currently in use that you can call to fetch some interesting data and render it on your page (either fetching on the server and inserting content in place, or making a proxied XHR call to the service and rendering on the client).
  3. Links integration – various federated directories, distributed site maps, any type of contribution into extensible menus, launch pads etc. Any time your web page computes the links to put in menus by collecting the links from participants (say, Jazz Home menu).
  4. Widgets integration – any web site that supports widgets – all dashboards/portals, any page where widgets can be added to the side bar, and just about any page that supports ads, Facebook ‘Like’ button, WordPress I am using to write this blog, upcoming Web Components etc.

So far I haven’t found an example of Web UI integration that cannot be nicely categorized as one of these, or a combination thereof. Try it yourself – if you find a use case that defies this categorization, I will post a rendition of a big eyed sad puppy face in a follow-up blog, declaring you the winner of the dare.

© Dejan Glozic, 2013