The Genius of Bootstrap (OK, and Foundation)

Credit: Carlos Paes, 2005, Wikimedia Commons

This week we spent a lot of time sifting through the available options for the client side Web component model. We were doing it in the context of figuring out what to use for the next generation of Bluemix, so we were really trying to think hard and strategically. It is a strange time to do this. Web Components are so close you can touch them (on Chrome at least), but the days when you can code against the entire standard and not bat an eyelash are further into the future than we would have liked (the same can be said for ES6 – the future is going to be great, just wait a little longer).

You must believe

At its core, the Web is based on linked documents. That hasn't changed over the years, no matter how much exciting interactive stuff we managed to cram on top. In fact, when people fond of the founding principles cry 'don't break the Web', they mostly rail against approaches that create black holes in the Web universe – domains where the rules of the Web, such as the ability to crawl the DOM, follow links and use browser history, stop applying.

By and large, a Web document is consumed as a whole by the browser. There is no native HTML component model, at least not in a way similar to CSS and JavaScript. It is possible to include any number of modular CSS files, and any number of individual JavaScript libraries (not that it is particularly healthy for your performance). Not so for your markup – in fact, browsers are positively hostile to content coming from other places (I don't blame them, because security).

In that climate, every component model to date has been mounted on top of a library or framework. Before you can use jQuery widgets, you need jQuery to provide the plug-in component model. All the solutions so far were necessarily two-part: first you buy into a particular buffet table (a.k.a. proprietary component model), then you can fill up your plate from said buffet. This is nerve-racking – you must pick the particular model that you think will work for you, will stay with your project long enough, and will be maintained in the future. Rolling a complete set of building blocks on your own is very expensive, but so is being locked into the wrong library or framework.

Client side only

Another problem that most of the usual offerings share is that they are unapologetically client side. What this means is that a typical component will provide some dummy content such as 'Please wait…' if it shows content, or nothing if it is a 'building block' widget of some kind. Only after JavaScript loads will it spring to life – which is to say, show anything useful. Widgets that are shown on user input (the calendar picker being the quintessential example) suffer no ill consequences from this approach, but if you put client-side-only widgets on the main page, your SEO, performance and user experience will suffer.

Whether this is of importance to you depends on where you stand in the 'server vs client side' religious war. In their seminal 2012 blog post, Twitter made it very clear that loading JavaScript, making an XHR request back to the mother ship for data, and then rendering the data on the client was not working for them. I am against it as well, having been bitten hard by the poor initial performance of large JavaScript SPAs. YMMV.

Hidden DOM

Web Components as a standard bring another thing to the table: hidden DOM. When you add a custom component to your page, the buck stops at the component boundary – parent styles will not leak into the component, and DOM queries will not include elements inside the custom component. This yields vital encapsulation currently possible only with iframes, with all the nastiness they entail. However, it also makes it hard to style the components and provide their initial state while rendering the page on the server.

In theory, Node.js may allow us to run JavaScript on the server and construct the initial content (again, in theory – I am not sure it is actually possible without ugly hacks). Even if possible, it would not work for other server stacks. Essentially, Web Components want you to just drop the component into your markup, set a few properties and let it do its stuff, which in most cases means 'nothing' until the JavaScript for the component loads.

Model transfiguration

One of the perennial problems of starting your rendering on the server and resuming on the client is model transfer. You have already done some work on the server to curate the data required to render the component's initial state. It would be a waste to discard this data and let the JavaScript for the component go through the same process again when loaded. There are two different approaches to this (both sketched in code below):

  1. Model embedding – during server side rendering, nuggets of data are embedded in the markup using HTML5 data-* attributes. Client side JavaScript uses these nuggets to reconstruct the model without the need to make network requests.
  2. Model bootstrapping – this approach is used by some MV* frameworks (e.g. BackboneJS). You construct your component's model, use it to render on the server, then inline the model as text in HTML to be evaluated on the client. The result is the same – the model is ready and does not need to be synced with the server, which would necessitate a network request.
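
A minimal sketch of both techniques (the element IDs, attribute names and model shape are made up for illustration):

<!-- 1. Model embedding: the data rides along as data-* attributes -->
<ul id="accounts" data-user-id="42" data-page-size="10">…</ul>
<script>
  // client side: reconstruct the model from the embedded nuggets
  var list = document.getElementById('accounts');
  var model = {
    userId: parseInt(list.dataset.userId, 10),
    pageSize: parseInt(list.dataset.pageSize, 10)
  };
</script>

<!-- 2. Model bootstrapping: the server inlines the whole model -->
<script>
  // already in the page – no network request needed to sync it
  var bootstrappedModel = { "userId": 42, "accounts": [] };
</script>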

Enter Bootstrap

Our experience with proprietary web components was mostly with Dojo/Dijit, since IBM made a sizeable investment in this open source library and a ton of products were written using it. It has all the characteristics of a walled garden – before you can sample from its buffet (Dijit), you need to buy into the widget component model that Dojo Core provides. Once you do, you cannot mix and match with Prototype, YUI, or jQuery UI. This is not an exclusive fault of Dojo – all JavaScript component models are like this.

Remember when I told you how Twitter wanted to be able to send something from the server ready for the browser to consume? When we first discovered Bootstrap, we were smitten by its approach. We were looking for a proprietary widget system to which we had to sell our souls but failed to find one (in fact, the Bootstrap creator Mark Otto expressed open distaste for components that require extensive JavaScript).

Consider:

  1. There is no hidden DOM. There is just plain HTML that is styled by Bootstrap CSS.
  2. This HTML can arrive from the server, or can be dynamically created by JavaScript – no distinction.
  3. Behavior is added via jQuery plug-ins.
  4. Plug-ins look for Bootstrap components in the DOM, attach event listeners, and start the dynamic behavior (e.g. Carousel).
  5. The data needed by JavaScript is extracted from data-* attributes in HTML, and can be programmatically modified once JavaScript loads (model embedding, remember?).

Considering Twitter's blog post on server side rendering, it is no wonder Bootstrap is incredibly easy to put to use in such a context. You don't pass a list of entries to a 'menu' component, only to have it turned into a menu when JavaScript loads. Your menu is simply a 'ul' element, with menu items being 'li' elements that are styled to look like a menu. Thanks to CSS3, a lot of animation and special effects are provided natively by the browser, without the need for custom JavaScript to slow down your page. As a result, Bootstrap is really mostly CSS with a sprinkling of JavaScript for behavior (no surprise, since it grew out of Twitter's style guide document).

<div class="dropdown">
   <button class="btn btn-default dropdown-toggle" type="button" id="dropdownMenu1" data-toggle="dropdown" aria-expanded="true">
      Dropdown
      <span class="caret"></span>
   </button>
   <ul class="dropdown-menu" role="menu" aria-labelledby="dropdownMenu1">
      <li role="presentation"><a role="menuitem" tabindex="-1" href="#">Action</a></li>
      <li role="presentation"><a role="menuitem" tabindex="-1" href="#">Another action</a></li>
      <li role="presentation"><a role="menuitem" tabindex="-1" href="#">Something else here</a></li>
      <li role="presentation"><a role="menuitem" tabindex="-1" href="#">Separated link</a></li>
   </ul>
</div>
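
The data-toggle="dropdown" attribute above is all the bundled plug-in needs to find the component and wire up its behavior once JavaScript arrives. The same plug-in can also be invoked programmatically – a small sketch using the Bootstrap 3 jQuery API:

<script>
  // explicitly attach dropdown behavior instead of relying on the data-api
  $(function () {
    $('#dropdownMenu1').dropdown();
  });
</script>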

How important this is for your use case depends on the components. Building block components such as menus, nav bars, tabs, containers and carousels really benefit from server-side construction because they can be immediately rendered by the browser, making your page feel snappy and useful right away. The rest of the page can be progressively enhanced as JavaScript arrives and client-side-only components are added to the mix.

If server side rendering is not important to you, the Web Components custom element approach seems more elegant:

<fancy-dropdown></fancy-dropdown>

The rest of the markup visible in the Bootstrap example is all in the hidden DOM. Neat, except if you want something rendered on the server as well.
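
As a rough sketch of what such an element could look like (using today's standard Custom Elements and Shadow DOM APIs, which differ from the drafts available at the time, and with made-up internals):

<script>
  class FancyDropdown extends HTMLElement {
    constructor() {
      super();
      // the internals live in the shadow DOM: page styles do not leak in,
      // and document.querySelector() will not see these nodes
      const shadow = this.attachShadow({ mode: 'open' });
      shadow.innerHTML =
        '<button>Dropdown</button>' +
        '<ul><li><a href="#">Action</a></li></ul>';
    }
  }
  customElements.define('fancy-dropdown', FancyDropdown);
</script>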

Truth be told, it seems possible to create Web Components that act similarly to Bootstrap components. In fact, there is a demo showing a selection of Bootstrap components re-imagined as custom elements. I don't know how real or 'correct' this is, just adding it to the mix for completeness. What is not clear is whether this is merely possible or actually encouraged for all custom element creators.

Haters gonna hate

Bootstrap is currently in its third major version and has been immensely popular, but for somewhat different reasons than the ones I listed here. It comes with a very usable, fresh and modern looking theme that many developers use as-is, never bothering to customize it. As a result, there are many cookie-cutter web sites out there, particularly those put together by individuals rather than brand-sensitive corporations and startups.

This has created a massive wave of hate from designers. In the pre-Bootstrap days, developers normally could not design if their lives depended on it, putting designers on the critical path for every single UI. Now, most internal, prototype and throwaway apps and sites can look 'good enough', freeing up designers to focus on big, long-running projects and clients looking to impart their own 'design language' on their properties.

I would claim that while Bootstrap as-is may not be suitable for a real professional product, the Bootstrap approach is something that should not be thrown out with the bathwater. I know that 'theming Bootstrap' sounds like 'Cuba Libre without the rum' (note for teetotalers – it's just Coke). If a toolkit is mostly CSS, and you replace it, what is left? Well, what is left are the class names, the documentation, the jQuery plug-ins and the general approach. A small team of designers and developers can create a unique product or company theme, and the army of developers can continue to use all of the Bootstrap documentation without any change.

I know many a company designer is tempted to 'start fresh' and build a custom system, but it is a much bigger job than it looks, and is not much different from just theming Bootstrap – with the difference that you are now on the hook to provide the JavaScript for behavior and extensively document it all. You can create themes that transform Bootstrap beyond recognition, as demonstrated in the Bootstrap Expo. And it is a massive challenge to match the open source network effect (599 contributors, 10,495 commits).

Devil’s Advocate

In the past, there were complaints that Bootstrap is bloated (which can be addressed to a degree by cherry-picking Less/Sass files and building a custom CSS), not accessible (this is getting better over time), and has too many accessor rules (no change here). Another complaint is that when a component doesn't quite do what is desired, modifications eventually cost more than if the component had been written from scratch.

I have no problem buying any and all of these complaints, but I still claim that the approach is more important than the actual design system. In fact, I put Zurb's Foundation in the title to indicate a competitor that uses an identical approach (styled HTML with jQuery for behavior). I could use either (indeed, I have a growing appreciation for Foundation's clean and understated look, which is less immediately recognizable than Bootstrap's). And the community numbers are nothing to sneeze at (603 contributors, 7,919 commits).

So your point is…

My point is that before thinking about reusable Web components for your project, settle on a design system, be it customized Bootstrap, Foundation or your own. This will ensure a design language fit for your product, and will leave a lot of options open for the actual implementation of user interfaces. Only then should you think of client-side-only components, and you should only use them for building blocks that you can afford to load lazily.

© Dejan Glozic, 2014


Design for the T-Rex Vision

Credit: T-Rex by Lilyu

Note: In case you missed it, there is currently a chance to get a free eBook of RESS Essentials from Packt Publishing. All you need to do is check the book out at the Packt Publishing web site, drop a line about the book back in the comments section of my article with your email address, and enter a draw for one of the three free books. And now, back to regular programming.

As a proud owner of a rocking home theater system, I think I have played Jurassic Park about a gazillion times (sorry, neighbors). It taught me some essential life lessons, for example that a T-Rex's vision is based on movement – it does not see you if you don't move. While I am still waiting to put that knowledge to actual use, it recently got me thinking about design, and the need to update and change the visuals of our Web applications.

Our microwave recently died, and we had to buy a new one. Once it was installed, the difference was large enough to make me very conscious of its presence in the kitchen. But a funny thing happened – after a few days of getting used to it, I stopped seeing it. I mean, I did see it, otherwise where did I warm up my cereal in the morning? But I don't see it – it just blends into the overall kitchen-ness around me. There's that T-Rex vision in practice.

Recently an article by Addison Duvall – Is Cool Design Really Uncool? – caught my attention because it posits that new design trends get old much too soon thanks to their quick spreading via the Internet. This puts pressure on designers to constantly chase the next trend, making them perennial followers. Addison offers an alternative (the 'true blue' approach) in which a designer 'sticks to his/her guns' as a personal mark or style, but concedes that only a select few can actually be trend setters instead of followers – probably due to the rest turning into purveyors of a 'niche' product and their annoying need to eat and clothe themselves, nudging them back into the follower crowd.

According to Mashable, Apple reported that 74% of their devices are already running iOS 7. This means that 74% of iPhone and iPad users look at flat, Retina-optimized and non-skeuomorphic designs every day (yours truly included). When I updated to iOS 7 and showed it around the office, I got a mixed reaction, with some lovers of the old school Apple design truly shocked, and in a bad way. Not me though – as a true Steven Spielberg disciple, I knew it would be just a matter of time until this became 'the new normal' (whether there will be tears on my cheeks and the realization that 'I love Big Brother' is still moot). My vision is based on movement, and after a while, iOS 7 stopped moving and I don't see it any more. You know what I did see? When the Retina iPad arrived with iOS 6 installed and I looked at the horrible, horrible 3D everywhere (yes, I missed the iPad Air by a month – on the upside, it is company-provided, so the price is right). I could not update to iOS 7 soon enough.

I guess we have established that for many people (most people, in fact), the only way around T-Rex vision is movement. That's why we refresh our designs ever so often. In the words of Don Draper and his 'Greek mentor', the most exciting word in advertising is 'new'. Not 'better', 'nicer', 'more powerful', just 'new' – different, not the same old thing you have been staring at forever (well, for a year anyway, which seems like an eternity in hindsight). Designs do not spoil like (Greek) yogurt in your fridge, they just become old to us. This is not very different from my teenage daughter staring at a full closet and declaring 'I have nothing to wear'. It is not factually true, but in her teenage T-Rex eyes, the closet is empty.

OK then, so we need to keep refreshing the look of our Web apps. What does that mean in practice? Nothing good, it turns out. You would think that in the true spirit of separation of style and semantics, all you need to do is update a few CSS files and off you go.

Not so fast. You will almost certainly hit the following bummers along the way:

  1. The look is not only 'skin deep'. In the olden days when we used Dojo/Dijit, custom widgets were made out of DOM elements (DIVs, SPANs) and then styled. This was when these widgets needed to run on IE6 (shudder). It means that updating the look requires changing the DOM structure of the widgets, and the accompanying JavaScript.
  2. If the widget is reused, there is a high likelihood that upstream clients have reached into the DOM and made references to its DOM nodes. Why? Because they can (there is no 'private' visibility in JavaScript). I really, really hate that and cannot wait for shadow DOM to become a reality. Until then, try updating one of those reused widgets and wait for the horrified upstream shrieks. Every time we moved to a new MINOR version of Dojo/Dijit, there was blood everywhere. This is no way to live.
  3. Aha, you say, but the newer way is so much better. Look at Bootstrap – it is mostly CSS, with only 8kB of minified, gzipped JavaScript. Yes, I agree – the new world we live in is nicer, with native HTML elements directly styleable. Nevertheless, as we now use Bootstrap 3.0, we wonder what would happen if we modified the vanilla CSS and then tried to move to 4.0 when it arrives – how much work will that be? And please don't start plugging Zurb Foundation now (as in "ooh, it has more subtle styles, it is easier to skin than Bootstrap"). I know it is nice, but in my experience, nothing is ever easy – it would just be difficult in a somewhat different way.
  4. You cannot move to the new look only partially. That's the wonder of the new app-based, Cloud-based world that never ceases to amaze me. Yes, you can build a system out of loosely coupled services running in the Cloud, and it will work great as long as these services are giving you JSON or some other data. But if you try to assemble a composite UI, the browser is where the metaphorical rubber meets the road – woe unto you if your components are not all moved to the new look in lockstep, or you will have a case of the Gryphon UI on your hands.

I wish I had some reassuring words for you, but I don't. This is a hard problem, and it is depressing that the federated world of the Cloudy future didn't reduce that complexity even a bit. In the end, everything ends up in a single browser window, and needs to match. Unless all your apps adopt something like Bootstrap, don't change the default styles, and move to each new version diligently, you will suffer the periodic pain of re-skinning so that the T-Rexes that use your UIs can see them again (for a short while). It also helps to have sufficient control over all the moving parts so that coordinated moves can actually be made and followed through, in the way LinkedIn moved from several stacks and server side rendering to shared Dust.js templates on the client. Taking advantage of HTML5, CSS3 and shadow DOM will lessen the pain in the future by increasing the separation between style and semantics, but it will never entirely eliminate it.

If you think this is limited to the desktop, think again. Yes, I know that in theory you only need to be visually consistent within the confines of your mobile app, which is a much smaller world, but you also need to not look out of place after a major mobile OS update. Some apps on my iPhone have caught up, some still look awkward. The Facebook app has been flat for a while already. Twitter is mostly content anyway, so they didn't need to do much. Recently the Dropbox app refreshed its look and is now all airy with lightweight icons and hairlines everywhere, while the slate.com app is still chunky with 3D gradients (I guess they didn't budget for frequent app refreshes). Oh, well – I guess that's the price of doing pixel business – you need to budget for new virtual clothes regularly.

Oh, look – my WordPress updated the Dashboard – looks so pretty, so – new!

© Dejan Glozic, 2013

RESS to the Rescue

It seems that every couple of years we feel a collective urge to give a technique a catchy acronym in order to speed up conversations about UI design. These last couple of years, we have grown accustomed to throwing around the term Responsive Design casually, probably because it rolls off the tongue more easily than "we need to make the UI, like, re-jiggle itself on each thingy". Although I like saying 're-jiggle'. Re-swizzle is my close second.

As we were working on the tech preview of the Jazz Platform Home app, we naturally wanted it to behave on phones and tablets. And it did. The header in particular is a breathing, living thing, going through a number of metamorphoses until the desktop caterpillar turns into a mobile butterfly.


It all works very well, but the (butter)fly in the ointment is that we need to send the markup for both the caterpillar and the butterfly on each request. We do CSS media query transformations to make them do what we want, but we are clearly suboptimal here. I am sure that with some CSS shenanigans we could pull it off with one set of HTML tags, but at some point it just becomes too hard and error-prone. Clearly it is one thing to make the desktop/tablet header drop elements and adjust with screen size, and another to switch to a completely new element (off-screen navigation) while keeping the former rotting in display:none hell.
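
To make the technique concrete, here is a minimal sketch (the class names and the breakpoint are made up): both variants travel in the markup, and media queries decide which one the user sees.

<style>
  /* desktop header shown by default, mobile navigation hidden */
  .header-desktop { display: block; }
  .nav-offscreen { display: none; }

  /* below an assumed 768px breakpoint, swap the two */
  @media (max-width: 768px) {
    .header-desktop { display: none; }
    .nav-offscreen { display: block; }
  }
</style>
<header class="header-desktop">…full desktop header…</header>
<nav class="nav-offscreen">…off-screen mobile navigation…</nav>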

This is not the only example where we had to use this technique. In the next example, the section switcher in the Project Details page goes from a vertical set of tabs to a horizontal icon bar.


Again, both markups are sent to the client, and switched in and out by CSS as needed. This may seem minor, but in a more complex page you may end up sending too much stuff to all clients. We need to involve the server in this, using some kind of hybrid technique.

Like JJG with Ajax before him, Luke Wroblewski did not actually invent this technique but simply put a name on it. RESS stands for REsponsive design + Server Side and first appeared in Luke's article in 2011. The technique calls for involving the server in responsive design so that only what is needed is sent to the client. It also goes by the name 'adaptive design'. The server is not meant to replace responsive design but rather complement it. You can view the server as the more coarse-grained player in this duo, sending 'mobile' or 'desktop' stuff to the client, while responsive design takes care of adapting to the device in more steps or decision points. It needs to, because what does 'mobile' even mean today? There is a continuum of screen sizes now, from phones to phablets to mini-tablets to regular tablets to small laptop screens to desktops – what is 'mobile' in that context?

It seems like RESS would help us in the two examples above by allowing us not to send the desktop header to the phone, or the off-screen navigation to the desktop. However, the server has limited means to perform device detection. At the very bottom, there is the HTTP 'user-agent' header. Agent sniffing is a never-ending task because of the new devices that keep coming online. There is a lot of virtual ink spilled on algorithms that will correctly detect what you are running. This is an error-prone activity relying on pattern matching against lists that go stale unless updated constantly. There have been attempts to subcontract this job, but many are either stack-specific or fee-based (such as WURFL or detectmobilebrowsers.mobi), or a combination of all these approaches. I don't see how any of these solutions make sense to most of the web sites that want to go the RESS route. Simply judging by the number of available solutions, you can see that this is not a solved problem, analogous to the number of hair loss treatments.

Another approach we can use (or combine with user agent detection) is to send the viewport size to the server in a cookie. This arms the server with additional means to make a correct decision because while, say, an iPad is reported as 'mobile', it has enough resolution to be sent the full desktop version of the page, and the viewport size would reveal that. The problem with this technique is that your site needs to be visited at least once to give JavaScript a chance to set the cookie, which means the optimization only kicks in on repeat visits.
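
A minimal sketch of the cookie half of this technique, assuming a Node.js/Express server with cookie-parser and a configured view engine (the route, template and cookie names are made up):

<script>
  // client side: record the viewport width for the next request
  document.cookie = 'vw=' + window.innerWidth + '; path=/';
</script>

// server side: make only the coarse-grained call, let the client refine it
var express = require('express');
var cookieParser = require('cookie-parser');
var app = express();
app.use(cookieParser());

app.get('/', function (req, res) {
  var vw = parseInt(req.cookies.vw, 10);
  // err on the safe side: no cookie yet means send the full desktop page
  var mobile = !isNaN(vw) && vw < 768;
  res.render(mobile ? 'home-mobile' : 'home');
});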

Considering the imprecise nature of device detection, you should consider RESS a one-two approach – the server tries its best to detect the device and send the appropriate content, then responsive design on the client kicks in. A prudent approach would be to err on the safe side on the server and give up on device detection at the first sign of trouble, letting the client do its thing (when, say, the user-agent has never been seen before, or cookies are disabled, preventing viewport size transmission). With this approach, we should ensure that client-side responsive design works well on its own, and consider the server side an optimization, a bonus feature that kicks in for the most straightforward cases (i.e. the most well known phones and tablets) and/or on repeat visits.

To use the vision analogy, the most you should expect your server to do in a RESS-enabled site is to tell you that the vehicle on the road is a car and not a truck, and that it is black(ish). As long as you don't expect it to tell you that it is a 2012 BMW i3 with leather seats and a technology package, you will be OK.

© Dejan Glozic, 2013

The Gryphon Dilemma


In my introductory post The Turtleneck and the Hoodie, I kind of lied when I said that I stopped doing everything I did in my youth. In fact, I am playing music, recording and producing more than I have in a while. I realized I can do things in the comfort of my home that I could have only dreamed of in my youth. My gateway drug was Apple GarageBand, but I recently graduated to the real deal – Logic Pro X. As I was happily mixing a song in its beautifully redesigned user interface, I needed some elaborate delay, so I reached for the Delay Designer plug-in. What popped up required some time to get used to.


This plug-in (and a few more in Logic Pro) clearly marches to a different drummer. My hunch is that it is a carry-over from the previous version, probably sub-contracted, and the authors of the plug-in didn't get around to updating it to the latest L&F. Nevertheless, they shipped it this way because it is very powerful and does its primary task well, albeit in its own quirky way.

This experience reminded me of a dilemma we face today in any distributed system composed of a number of moving parts (let's call them 'apps'). A number of apps running in a cloud platform can serve Web pages, and you may want to hook them up together in a distributed 'site of sites'. Clearly the loose nature of this system is great from the point of view of flexibility. You can evolve each app individually as long as the contract that glues them together is upheld. One app can be stable and move slowly, while you rev the other one like mad. This whole architecture works great for non-visual services. The problem arises when you try to pull any kind of coherent end user experience out of this unruly bunch.

A 'web site' is an illusion, inasmuch as 'movies' are 'moving' – they are really a collection of still images switched at 24fps or faster. Web browsers are always showing one page at a time. If a user clicks on an external link, the browser will unceremoniously dump all of the page's belongings on the curb and load a new one. If it wasn't for the user session and content caching, browsers would be like Alzheimer's patients, having a first impression of the same page over and over. What binds pages together are the common areas that these pages share with their kin, making them part of the whole.

In order to maintain this illusion, web pages have always shared common areas that navigationally bind them to other pages. For the illusion to be complete, these common areas need to be identical from page to page (modulo selection highlights). Browsers have become so good at detecting shared content on the pages they are switching in and out that you can only spot a flash or flicker if the page is slow or there is some other kind of anomaly. Normally the common areas are included using the View part of an MVC framework – including page fragments is lesson 101 of view templates. Most of the time it appears as if only the unique part of the page is actually changing.
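
In a typical view template, the shared areas are pulled in with one-line includes – for example, as Dust.js partials (the template names here are made up):

{! page template: unique content wrapped in shared common areas !}
{>"common/header"/}
<main>
  {! page-specific content goes here !}
</main>
{>"common/footer"/}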

Now, imagine what happens when you attempt to build a distributed system of apps where some of the apps provide pages and others supply common areas. When all the apps are version 1.0, all is well – everything fits together and it is impossible to tell that your pages are really put together like words on ransom notes. After a while, the nature of independently moving parts takes over. We have two situations to contend with:

  1. An app that supplies common areas is upgraded to v2.0 while the other ones stay at v1.0
  2. An app that provides some of the pages is upgraded to v2.0 while common areas stay at v1.0

Evolution of composite pages with parts evolving at a different pace.

These are just two sides of the same coin – in both cases, you have the potential for end results that turn into what I call 'a Gryphon UX' – a user experience where it is obvious that different parts of the page have diverged.

Of course, this is not a new situation. Operating system UIs go through these changes all the time, with more or less controversy (hello, Windows 8 and iOS 7). When that happens, all the clients using their services get the free face lift, willy-nilly. However, since native apps (either desktop or mobile) normally use native widgets, there are times when even an unassisted upgrade turns out without a glitch (your app just looks more current), and in real-world cases you only need to do some minor tweaking to make it fit the new L&F.

On the Web, however, site design runs much deeper, affecting everything on each page. A full-scale site redesign is a sweeping undertaking that is seldom attempted without full coordination of components. Evolving only parts of a page is plainly obvious, and results in the page not only being put together like a ransom note but actually looking like one.

There is a way out of this conundrum (sort of). In a situation where a common area can change on you at any time, app developers can sacrifice inter-page consistency for intra-page consistency. There is no argument that a common set of links is what makes a site, but these links can be shared as data, not as finished page fragments. If apps agree on a navigational data interchange format, they can render the common areas themselves and ensure gryphons do not visit them. This is like reducing your embassy status to a consulate – clearly a downturn in relationships.
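
As a sketch, the navigational contract could be nothing more than shared JSON that each app renders in its own way (the format and field names are made up for illustration):

{
  "links": [
    { "label": "Home", "href": "/home" },
    { "label": "Projects", "href": "/projects" },
    { "label": "Admin", "href": "/admin" }
  ]
}

// each app renders the common area itself from the shared data
function renderNav(nav) {
  return '<ul>' + nav.links.map(function (link) {
    return '<li><a href="' + link.href + '">' + link.label + '</a></li>';
  }).join('') + '</ul>';
}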

Let's apply this to the scenario above. With version evolution, individual pages will maintain their consistency, but as users navigate between them, the common areas will change – the illusion will be broken, and it will be very obvious (and jarring) that each page is on its own. In effect, what was before a federation of pages is now more like a confederation – a looser union bound by common navigation but not a common user experience (a 'page ring' of sorts).

Evolution of composite pages where each page is fully controlled by its provider.

It appears that this is not so much a way out of the problem as a 'pick your poison' situation. I already warned you that the Gryphon dilemma is more of a philosophical problem than a technical one. I would say that in all likelihood, apps that coordinate and work closely together (possibly written by the same team) will opt to share common areas fully. Apps that are more remote will prefer to maintain their own consistency at the expense of the inter-page experience.

I also think it all depends on how active the app authors are. In a world of continuous development, perhaps short periods of Gryphon UX can be tolerated, knowing that a new stable state is only a few deploys away. Apps that have not been visited for a while may prefer not to be turned into mythological creatures without their own consent.

And to think that Sheldon Cooper wanted to actually clone a gryphon – he could have just written a distributed Web site with his three friends and let nature take its course.

© Dejan Glozic, 2013

Progressive Disclosure and Progressive Reduction


‘Make things as simple as possible, but not simpler’ is the user-friendly paraphrase of a quote from a lecture Albert Einstein delivered at Oxford in 1933. It is a wonderful irony that Einstein was proven wrong (by the composer Roger Sessions) in that he didn’t make the quote itself ‘as simple as possible’, necessitating subsequent improvements.

He also said other things. For example, when some kids were playing on his patio where the concrete had not fully set, he allegedly yelled at them, later justifying the yelling with "I love children in ze abstract, not in ze concrete". We will leave the application of Einstein's Gulf to his personal life for another time and focus instead on the problem of simplicity.

My designer colleague Kim Peter shared a blog post recently about LayerVault's approach to this balancing act, called progressive reduction. In a nutshell, LayerVault proposes an initial user experience that is helpful and descriptive, only to be made 'tighter' and simpler as the user gains proficiency. This progression is supposed to be automatic (by tracking activity) – no explicit declaration from the user required ('I am so an expert now, honestly'). The concept also provides for users being downgraded due to a lack of recent activity.

Progressive reduction example (LayerVault).

The concept is not new, but pairing it with usage monitoring caught my attention. I discussed it with my colleagues and got a mixed reaction. As a team, we have become allergic to solutions that seem overly 'magical', and a UI that changes between two visits without an explicit gesture definitely falls into that class (I am looking at you, Microsoft Word 2003 menus). However, designers are inexplicably drawn to the siren call of 'the perfect interface' where things cannot be 'made simpler'. Look at all the gushing over the evolution of the Starbucks logo, culminating in the removal of the company name (Starbucks reckons we are all expert coffee drinkers by now and need no hand holding in locating the nearest source of developer's nectar).

Progressive reduction reminded me of another user experience technique – progressive disclosure. When I was on the Eclipse team, we were tasked with solving a somewhat different problem IBM was experiencing with some of its large desktop applications (you can read the presentation about it by a former Eclipse colleague, Kim Horne; favorite part: user.brain.OutOfMemoryError). They had so many features that the out-of-the-box experience had some of that '747 cockpit' feel to it. The fix for that bug was to hide most of the features and slowly reveal them as the user goes about his daily tasks ('only if you answer these questions three, the other side of the UI you'll see').

We spent a lot of time discussing whether the true fix was to ruthlessly edit the UI, or to declare that all the dials and switches are required in order to actually fly the 747. On the topic of feature editing, I remember that Kevin Haaland, my manager and the Eclipse Platform project manager at the time, made a distinction between true (or systemic) and fake (or accidental) complexity, of which only the former actually matters, the latter being a by-product of things being cobbled together without enough thought about the end result. I also developed a distaste for our lack of backbone in making design decisions, which resulted in delegating too many choices to the users via a massive preference dialog (300 individual preference panels in one instance). Even the 747 cockpit has evolved to become much more streamlined and pilot-friendly – I guess we need to look for another metaphor. Oh, and progressive disclosure was 'magical' too – the features in the UI appeared on their own, not as a result of an explicit action.

It is no accident that progressive disclosure is not mentioned as frequently nowadays with Web or mobile applications. For the former, people experience your site through the peephole of one page at a time, so the site can be enormous and you only need to worry about the complexity of a single page (unless you include links to 300 other pages). And on mobile devices, it is really hard to create an app with too many features due to the natural restrictions of the medium.

Perhaps another reason it is not spelled out is because it graduated into the ‘duh’ status – of course you will provide a clean initial interface and hide secondary and tertiary functionality behind menus, slots, drawers and other UI elements for stashing things away. It is just good design, nothing to see here, moving on.

Progressive reduction is a different beast altogether and is concerned with reconfiguring the experience with the level of expertise as it changes.

I think the thought process that led to progressive reduction was as follows:

  1. We worked hard with our target audience to ensure all the features in the product are actually wanted and sought after
  2. We like clean UI so most of the features are stashed away so that the initial experience is as sparse as a Scandinavian living room
  3. We now have an opposite problem – users unfamiliar with the interface not quickly finding the features they want
  4. We had better put some in-context hand holding – training wheels – in place until the user is confident with the UI
  5. Once the user is confident with the UI, he will start resenting the hand holding
  6. Users are too busy to click on the ‘dismiss the training wheels’ button, so the UI should ‘sense’ when to reconfigure itself

I am fine with most of this progression, except claim number 6. In-context guidance is a welcome concept as long as it sits right beside the UI it relates to and can be easily dismissed. But I have an issue with the claim that we need to take the act of explicit dismissal away from users and base it instead on usage. A UI that flexes from descriptive to streamlined can just as easily be toggled between these modes with an explicit gesture, in a more predictable fashion and with minimal workload overhead (and no overhead of tracking and storing usage data). For example, task guides in the Jazz Platform prototype can be explicitly dismissed, and they stay closed until summoned again:

An example of a Jazz Platform prototype task guide
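
A minimal sketch of this kind of explicit, persistent dismissal (the element IDs and storage key are made up):

<script>
  // hide the task guide when explicitly dismissed, and remember the choice
  var guide = document.getElementById('task-guide');
  if (localStorage.getItem('taskGuideDismissed') === 'yes') {
    guide.style.display = 'none';
  }
  document.getElementById('dismiss-guide').onclick = function () {
    guide.style.display = 'none';
    localStorage.setItem('taskGuideDismissed', 'yes');
  };
</script>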

My misgivings about the automated part notwithstanding, I liked the article, and I think the takeaway message is to separate the modes of presentation (beginner, expert) from how the UI switches between them (explicit action or some kind of magic). As you can guess, I am biased towards a 'no surprises' UI, and an explicit gesture to make the UI progressively simpler (beginner > normal > expert modes) is perfectly fine with me. However, I don't want the UI itself deciding what level of simplicity I deserve at any point in time, or whether I was a bad boy and should be downgraded to training wheels for failing to visit the page often enough.

© Dejan Glozic, 2013