Components Are (Still) Hard on the Web

Here’s Johnny! I know, I know. It’s been a while since I posted something. In my defence, I was busy gathering real-world experience – the kind that allows me to forget the imposter syndrome for a while. We have been busy shipping a complex microservice system, allowing me to test a number of my blog posts in real life. Most are holding up pretty well, thank you very much. However, one thing that continues to bother me is that writing reusable components is still maddeningly hard on the Web in 2016.

Framing the problem

First, let’s define the problem. In a Web app of sufficient complexity, there will be a number of components you will want to reuse. You already know we are a large Node.js shop – reusing modules via NPM is second nature to us. It would be so nice to be able to npm install a Web component and just put it in your app. In fact, we do exactly that with React components. Alas, it is much harder once you leave the React world.

First of all, let’s define what ‘reusing a Web component’ would mean:

  1. You can put a component written by somebody else somewhere on your page.
  2. The component will work well and will not do something nasty.
  3. The component will not look out of place in your particular design.
  4. You will know what is happening inside the component, and will be able to react to its lifecycle.

First pick a buffet

Component (or widget) reuse was for the longest time a staple of desktop UI development. You bought into the component model by using the particular widget toolkit. That is not a big deal on Windows or macOS – you have no choice if you want to make native applications. The same applies to native mobile development. However, on the Web there is no single component model. Beyond components that are an intrinsic part of HTML, in order to create custom components you need to buy into one of the popular Web frameworks. You need to pick the proverbial buffet before you can sample from it.

In 2016, the key battle is between abstracting out and embracing the Web Platform. You can abstract the platform (HTML, CSS, DOM) using JavaScript. This is the approach used by React (and my team by extension). Alternatively, you can embrace the platform and use HTML as your base (what Web Components, Polymer and perhaps Angular 2 propose). You cannot mix and match – you need to pick your approach first.

OK, I lied. You CAN mix and match, but it becomes awkward and heavy. React abstracts out HTML, but if you use a custom component instead of the built-in HTML elements, React will still work fine. All the React traits (diff-ing the two incremental iterations of the virtual DOM, then applying the difference to the actual DOM) work for custom components as well. Therefore, it is fine to slip a Web Component into a React app.
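
To make this concrete, here is a minimal sketch of slipping a custom element into a React render. The ‘fancy-timer’ element is made up for illustration, and is assumed to be registered on the page before React mounts:

// Sketch only: 'fancy-timer' is a hypothetical custom element,
// assumed to be registered elsewhere via document.registerElement().
var Panel = React.createClass({
  render: function() {
    // React diffs the custom element like any other virtual DOM node
    // and only touches the real DOM when something changes.
    return (
      <div className="panel">
        <fancy-timer duration="60"></fancy-timer>
      </div>
    );
  }
});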

The opposite does not really work well – consuming a React component in Angular or Polymer is awkward and not really worth it. Not that the original direction is necessarily worth it either – you need to load the Web Components JavaScript AND the React JavaScript.

Don’t eat the poison mushroom

One of the ways people loaded components into their pages in the past was by using good old iframes. Think what you will about them, but you could really lock components down that way. If you load a component into your own DOM, you need to really trust it. Same-origin policy and CORS are supposed to help you prevent a component from leaking data from your page to the mother ship. Nevertheless, particularly when it comes to more complex components, it pays to know what they are doing, go through the source code etc. This is where open source really helps – don’t load a black-box component into your DOM.

The shoes don’t match the belt

One of the most complex problems to deal with when consuming a Web component of any type is the design. When you are working in a native SDK, the look and feel of the component is defined by the underlying toolkit. All iOS components have the ‘right’ look and feel out of the box when you consume them. However, Web apps have their own themes, which creates a combinatorial explosion. A reusable component needs to do one of the following things:

  1. Be configurable and themeable so that you can either set a few parameters to better blend it into your style guide, or provide an entire template to really dial it in
  2. Be generic and inoffensive enough to be equidistant from any parent theme
  3. Be instantly recognizable (think YouTube player) in a way that makes it OK that it has its own look and feel.

A very complex reusable component with a number of elements can be very hard for consumers to dial in visually. In corporations, this may reduce the number of themes you want to support. A large component may take it upon itself to support two or three widely used design style guides. Then all you need to do is provide a single parameter (the style guide name) to make the component use the right styles across the board.
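
A sketch of what that single parameter could look like (all names here are hypothetical): a React component mapping a style guide name to one of the class sets it ships styles for.

// Hypothetical sketch: one 'styleGuide' property selects between the
// 2-3 design systems the component supports.
var THEME_CLASSES = {
  'corp-2016': 'big-widget big-widget--corp',
  'startup-fresh': 'big-widget big-widget--startup'
};

var BigWidget = React.createClass({
  render: function() {
    // unknown style guides fall back to a generic, inoffensive look
    var cls = THEME_CLASSES[this.props.styleGuide] || 'big-widget';
    return <div className={cls}>{this.props.children}</div>;
  }
});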

What is going on inside?

Adding a component to your page is not only a matter of placement and visual style. Virtually all reusable components are interactive. A component can be self-contained (for example, all activity in a YouTube player is confined to its bounding box), or expected to interact with the parent. If the component must interact with the parent, you need to consider the abstraction chain. Consider a simple countdown timer as a reusable component. Here is how the abstraction chain works:

The timer itself uses two low-level components – ‘Start’ and ‘Stop’ buttons. Inside the timer, the code will add click listeners for both buttons. The listeners will add semantic meaning to the buttons by doing things according to their role – starting and stopping the timer.

Finally, when this component is consumed by your page, only one listener is available – ‘onTimerCountdown()’. Users will interact with the timer, and when the timer counts down to 0, the listener you registered will be notified. You should be able to expect events at the right semantic level from all reusable components, from the simplest calendars to large complex components.
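
A plain-DOM sketch of this chain (all names are made up): the ‘Start’ and ‘Stop’ click listeners stay inside the component, and the page only ever registers the semantic listener.

// Sketch only - low-level click events acquire semantic meaning inside
// the timer; the consumer registers just onTimerCountdown.
function createCountdownTimer(container, seconds, onTimerCountdown) {
  var remaining = seconds;
  var interval = null;
  container.innerHTML = '<span>' + remaining + '</span>' +
    '<button class="start">Start</button>' +
    '<button class="stop">Stop</button>';
  var display = container.querySelector('span');

  container.querySelector('.start').addEventListener('click', function() {
    interval = setInterval(function() {
      display.textContent = --remaining;
      if (remaining === 0) {
        clearInterval(interval);
        onTimerCountdown(); // the only event the consumer sees
      }
    }, 1000);
  });
  container.querySelector('.stop').addEventListener('click', function() {
    clearInterval(interval);
  });
}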

If a component can be made part of a larger document, the two things you will care about most are serialization and dirty state. When users interact with the component and make a modification, you want to be told that the component has changed. This should trigger the dirty state of the parent. When the user clicks ‘Save’, you should be able to serialize the component and store this state in the larger document. Inversely, on bootstrap you should be able to pass the serialized state to the component so it can initialize itself.

Note that the actual technology used does not matter here – even the components embedded using iframes can use window.postMessage to send events up to the parent (and accept messages from the parent). While components living in your DOM will resize automatically, iframe-ed components will need to also send resizing events via window.postMessage to allow the parent to set the new size of the iframe.
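
For illustration, a minimal postMessage exchange could look like this (the origins, element id and message names are all made up):

// Inside the iframe-ed component (sketch):
function tellParent(type, payload) {
  window.parent.postMessage({ type: type, payload: payload },
    'https://parent.example.com');
}
tellParent('dirty', {});                                      // dirty state
tellParent('resize', { height: document.body.scrollHeight }); // new size

// In the parent page:
window.addEventListener('message', function(event) {
  if (event.origin !== 'https://component.example.com') return;
  if (event.data.type === 'resize') {
    document.getElementById('component-frame').style.height =
      event.data.payload.height + 'px';
  } else if (event.data.type === 'dirty') {
    markDocumentDirty(); // hypothetical function in the parent page
  }
});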

The long tail

More complex reusable components don’t only have a client-side presence. They need to call back to the server and fetch the data they require. You can configure such a component in three ways:

  1. You can fetch the data the component requires yourself. In that case, the component is fully dependent on the container, and it is the container’s responsibility to perform all the XHR calls to fetch the data and pass it to the component (see the sketch after this list). This approach may be best for pages that want full control of the network calls. As an added bonus, you can fit such a component into a data flow such as Flux, where some of the data may be coming from Web Socket driven server-side push, not just XHR requests.
  2. You can proxy the requests that the component is performing. This approach is also acceptable because it allows the proxy to control which third-party servers are going to be whitelisted.
  3. You can configure CORS so that the component can make direct calls on its own. This needs to be done carefully to avoid the component siphoning data from the page to servers you don’t approve.
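
Here is a sketch of the first option (the endpoint and component names are hypothetical): the container owns the XHR call, and the reusable component never touches the network.

// Sketch of option 1: the container fetches, the component stays passive.
var Container = React.createClass({
  getInitialState: function() {
    return { entries: [] };
  },
  componentDidMount: function() {
    var self = this;
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/api/entries'); // hypothetical endpoint
    xhr.onload = function() {
      self.setState({ entries: JSON.parse(xhr.responseText) });
    };
    xhr.send();
  },
  render: function() {
    // EntryList is the hypothetical reusable component
    return <EntryList entries={this.state.entries} />;
  }
});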

In all of these cases you may still want to be told about the events inside the component, using the component events discussed above.

Frameworks are just the beginning

So there you go – all the problems you need to wrestle with when trying to reuse components in a larger project. Chances are the component is written in the ‘wrong’ framework, but trying to make the component load in your page is only the beginning. Fitting the component into the page visually, figuring out what is happening in it events-wise, and feeding it data from the server is the real battle. Unless you are trying to load a calendar widget, this is where you will spend most of your time.

© Dejan Glozic, 2016

Pessimism as a Service

As far as I can remember, I was forgetful (ironic, I know). I could be driving for ten minutes, wondering why I don’t see the phone Bluetooth symbol on my car’s dash, then realizing I forgot my cellphone at home. Lately, when I reach the door, I ask myself: “OK, what have you forgotten”? Not “have you forgotten anything” but “what”, assuming that an affirmative answer is a foregone conclusion. Such a negative, “guilty until proven innocent” approach has saved me many times, but taxed my soul. Am I really that predictable? Is cynicism the only way?

As our super cool, micro-service packed, React supercharged project is picking up steam, I am looking at everything we have done and counting the ways we have deployed ‘Pessimism as a Service’ to production. These examples may seem disconnected to you, but I assure you, there is a cold, calculated thread binding them. Hey, it’s a totally accepted artistic form – my own omnibus, as it were.

Micro services and human nature

I said it before, and I will say it again – micro services are more about people and process than about technology. In his strained attempt to disguise his distaste for micro services, Martin Fowler still mustered faint praise for the way micro services tend to enforce code modularity.

The trouble is that, with a monolithic system, it’s usually pretty easy to sneak around the barrier. Doing this can be a useful tactical shortcut to getting features built quickly, but done widely they undermine the modular structure and trash the team’s productivity. Putting the modules into separate services makes the boundaries firmer, making it much harder to find these cancerous workarounds.

Martin Fowler on Strong Module Boundaries

If this does not inject you with a healthy dose of Weltschmerz, nothing will. What he is saying is that reaching directly into modules instead of using proper interfaces is a tech version of the cookie jar, and instead of counting on your maturity and discipline, micro services simply hide the cookie jar or put it on a top shelf, where you can’t reach it because you skipped the gym too many times.

Large systems are built by real-world organizations, and people are messy, petty, complicated, full of hidden agendas and desires. Engineers who try to look at micro services as a rational system fail to grasp the potent property that requires high emotional intelligence to understand. And it is nothing new – in fact I posit that the first micro service architecture was practiced by the Nipmuk Indians, living near the lake in today’s Massachusetts with the impossible name Chargoggagoggmanchauggagoggchaubunagungamaugg. Translated, it is really a module boundary protocol:

You fish on your side [of the lake], I fish on mine, nobody fishes in the middle.


– Full Indian name for the lake Manchaug, shortened by locals not familiar with micro-service architecture

So, yeah. Ideally, a monolithic system could be highly modular and clean if implemented by highly disciplined, rational people impervious to human foibles. When you manage to hire a teamful of such people, do let me know. In the meantime, the jaded micro service system we are using is humming in production.

AKKA is not a true micro service system

True story – I went to present at the first Toronto Reactive meetup because: (a) I mixed up Reactive with React and (b) I wanted to learn what the whole Reactive Manifesto was about by presenting on it. Hey, learning by doing!

As such, I was exposed to the AKKA framework. You can read all about Reactive in one of my previous blogs, but suffice to say that AKKA is a framework based on the ‘actor’ pattern and designed specifically to foster an asynchronous, dynamic and flexible architecture that can be deployed to a single server, and then spread out across any number of clusters as the needs grow.

There is a lot to like in AKKA, but I must sadly posit here that it is not a true representative of a micro service system. It is a system inspired by micro services, implementing many of their tenets and with some really nice properties. And yet it betrays one of the key aspects of micro services in that it is not pessimistic. In order to get the benefits of it, you need to lock yourself into a Scala/AKKA stack, paraphrasing the famous Ford Model T joke (you could order it in any color as long as it was black). You lose the ability to choose your stack per micro service.

This property is often misunderstood as a licence for anarchy – a recipe for disaster, cobbling together a concoction of languages, platforms, stacks and runtimes that nobody will be able to keep running and maintain. Of course that unchecked freedom has its price: a real world microservice system will most likely be using only 2-3 stacks (in our case, they ended up being Node.js and Java) and a small number of client side frameworks (for our extended team, React and AngularJS). But there is an ocean of separation between one and two platforms – the former representing lock-in, the latter being freedom.

As I always assume I forgot something, we should always assume that something better is just around the corner, and we don’t want to be hopelessly locked in when it arrives. But we also don’t want to bet our farm on it just yet. This is where the ability to start small is vital: we can try out new approaches in a single micro service without the obligation for a wholesale switch. AKKA requires that we profess our undying love to it and its Scala/JVM stack. Your mileage may vary, but I cannot put all my money in that or any other single basket.

React is smart so you can be dumb

On to the client side of the full stack. My readers know I have expressed my reservations about AngularJS before. I always found its syntax weird and its barrier of entry too high for a practical working system, and that’s before we even mention the version 2.0 schism. However, I always feared I would be viewed as an ‘old man that yells at cloud‘ for not recognizing Angular’s genius, until React arrived.

You see, I got React instantly. I didn’t have to scratch my head and re-read its examples. When you read React code, you know exactly what is happening. Of course, that’s because it does less – just the View part. You need to implement Flux for coordinating actions, data stores and views, but Flux is even simpler, and consists of a single dispatcher module you fetch from NPM. You also need something like react-router in order to handle client side page switching. Then you need something like react-engine if you want isomorphic apps (I was told the new term is ‘universal’; I will use both for fun).
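
As a taste of how little there is to it, here is a minimal Flux sketch using the dispatcher module from NPM (the store and action are made up):

// Minimal Flux sketch - one dispatcher, a made-up store and action.
var Dispatcher = require('flux').Dispatcher;
var dispatcher = new Dispatcher();

var todos = []; // the 'store'

// the store registers with the dispatcher and reacts to actions
dispatcher.register(function(action) {
  if (action.actionType === 'todo-add') {
    todos.push(action.text);
    // ...emit a change event here so views can re-render
  }
});

// an action creator is just a function that dispatches a payload
function addTodo(text) {
  dispatcher.dispatch({ actionType: 'todo-add', text: text });
}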

You may not fathom the difference in approaches between AngularJS and React until you watch the video explaining React’s design philosophy. You can tell that Facebook deploys React to production. In my opinion, Angular suffers from being designed by rock stars for other rock stars. Once you start getting real and deploying non-trivial apps to production, you need to scale, and that means increasing the number of people that can be productive with your framework of choice. React was designed with the assumption that if the framework is predictable and relatively simple, the velocity can be increased without the proportional increase in the bug rate. Otherwise, what’s the point?

React designers took human nature into account, assumed that we are all dumb at various times of day or week, and ensured that even in those unhappy moments, we can still read our React code and understand what it is doing with relative ease. It feels like a rotten compromise, but it is pure genius.

Web Components just around the corner

Ah, Web Components. The ultimate native component model that will solve Everything. Three years ago there was a lot of excitement, with people jumping on the polyfills to ‘temporarily’ shim the browsers until everybody implements them natively. Fast-forward to November 2015, and today you still cannot bet your project on them in production. Yes, they are natively implemented in Chrome, but if you didn’t want to use IE-only browser extensions 15 years ago, why would you do it now that Google, and not Microsoft, is the vendor trying to sell its agenda as a standard?

Yes, there has been some movement on cross-browser support for Web Components, at least where Shadow DOM is concerned. Nevertheless, nothing stands still, and now some aspects of ES6 module loading are at odds with HTML Imports (an important part of the Web Components spec).

And of course, what has also happened in the last three years is that we got React. It has a very strong component model (albeit one that you can only peruse if you lock yourself into React), and more importantly, it extends to the server and even native rendering. This makes React attractive in ways that Web Components will never be able to match.

A year ago, we seriously toyed with the idea of just using shims until Web Components, clearly the future of the component models, arrive. I am glad I allowed my jaded self to prevail and instead used React – it helped us ship to production, with no performance compromises coming from shims, and looking back, we would be nowhere close to the promised glorious future if we allowed exuberance to sway our better judgement.

I am not saying ‘No’ to Web Components forever – they are actually not incompatible with React, and in fact a low-level Web Component can be used just like a native component in a React application, reaping the benefits of the DOM diffing. However, we don’t intend to write Web Components ourselves – we are fully isomorphic and server-side rendering gives us benefits that a comparable Web Component would not.

I predict that Web Components will be the way for incompatible frameworks to co-exist, the way to ‘fish in the middle’ of the Nipmuk lake mentioned above.

Optimism dreams, pessimism ships

These four examples show why enthusiasm and optimism rule the prototypes, meetups and articles, but pessimism takes over in production. Taking human nature into account, rolling with the imperfections of reality, expecting and preparing for the worst pays off tenfold once the projects get serious.

Now, if I can only remember if I turned the stove off before leaving home.

© Dejan Glozic, 2015

ReactJS: The Day After

A man with an excruciating headache, Wikimedia Commons

The other day I stumbled upon a funny Onion fake news report about a local man whose one-beer plan went terribly awry. Knowing how I professed undying love to ReactJS in the previous article, and extrapolating from life that after every night on the town comes the morning of reckoning, it is time to revisit my latest infatuation.

Alas, those expecting me to declare my foolishness and heartbreak with ReactJS are hoping in vain. Instead, what you will get here is a sober (ha) account of the problems, gotchas and head scratchers we encountered running ReactJS in production. We continue to use it and plan to build our next set of micro services using it, but we have a more realistic view of it now. So let’s dive in.

  1. Code Splitting – First off, my example didn’t just use ReactJS, but also react-router and react-engine. This amazing trio together allowed us to realize the dream of isomorphic apps, where you start rendering on the server, let the browser quickly render the initial content, load JavaScript, mount React components and continue with the same code on the client.
    Nevertheless, when we got past the small example, we realized that we needed to split the code we initially bundled together using browserify. At the time of this writing, code splitting is not entirely painless. React-router in its version 0.13 has examples that all presume the use of Webpack to build your JavaScript. We are using browserify and must suffer until React-router 1.0 arrives. In the meantime, we can use react-router-proxy-loader, which allows us to asynchronously load code from a bundle without requiring Webpack.

  2. React-engine growing pains – As with any new library, react-engine has some rough edges. We are happy to report that one of the issues we had with it (the inability to control how react-router is being instantiated) has already been resolved. We are hoping to be able to make react-engine omit some of the data it sends to the client, because it is only ever used for server-side rendering.
  3. ReactJS id properties – React attaches ‘reactid’ data property to almost all DOM elements, using ids that are sometimes very long, resulting in situations like:
    <span data-reactid=".ejv9lnvzeo.1.2.3.0.0.$7c87c148-e1a4-4cb8-81f8-c5e74be7684b.0.1.0.0">Hello</span>
    

    If you are using gzip for the markup (as you should), these strings compress very well, but you still end up with very messy and hard to read HTML when you view source. The React team is debating back and forth the need for these properties, and they may disappear at some point in the future. I for one will not miss them.

  4. Fussy with the whitespace – While you may think when working with JSX that you are coding in HTML, you are not, and nowhere is it more apparent than when you try to add some free text in the body of HTML elements, or to mix free text and elements. JSX converts snippets of text into spans at will, resulting in HTML that bears little resemblance to the initial JSX.
    I wish there was a better way to do this. I know all the virtues of React and how JSX is most decidedly not HTML, but some things like free-form text with some embedded tags should not result in a flurry of spans (and the hated data-reactid properties). See the illustration after this list.
  5. Fussy with JavaScript tags – Inserting JavaScript tags in JSX is easy if you are referencing external JS files, but if you try to inline some JavaScript right there, JSX can throw you curveball after curveball until you give up and extract that code into a file. This is not a show stopper, but it is annoying when you want to inline a couple of lines. From the maintainability point of view, it is probably better to keep JavaScript in its own file, so I am not going to protest too loudly.
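
To illustrate the whitespace point from item 4, here is roughly what a line of mixed text and elements turns into (the exact output varies by React version; this reflects the 0.13-era behavior):

JSX input:

<p>Hello {name}, welcome to <a href="/home">the app</a>!</p>

Approximate rendered markup:

<p data-reactid=".0">
  <span data-reactid=".0.0">Hello </span>
  <span data-reactid=".0.1">Dejan</span>
  <span data-reactid=".0.2">, welcome to </span>
  <a href="/home" data-reactid=".0.3">the app</a>
  <span data-reactid=".0.4">!</span>
</p>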

ReactJS and Web Components

As with any JS framework, making a choice is normally followed by a little nagging voice in your head concerned that you chose wrong. When it comes to religious choices (AngularJS vs ReactJS vs EmberJS etc.), there is little you can do – you just need to make a leap of faith, make sure the framework works for your particular use case and jump.

However, Web Components are something else – they promise to be ‘the native Web’ at some point, so choosing between Web Components and ReactJS is not a religious debate. Even today, with the shims it is possible to run Web Components in browsers not supporting them natively, and natively in Chrome. A growing body of reusable Web components is something you don’t want to be left out of if you are Reactified to the max.

Luckily, Andrew Rota helped out with his presentation on the complementarity of ReactJS and Web Components at the recent ReactJS Conf 2015. It is worth the watch, and the skinny is that since about October 2014, custom components are fair game in JSX. This means that you can place HTML imports in the head element, and then freely use custom components in JSX the same way you would native HTML elements.

In fact, you are not losing out on the promise of the ReactJS virtual DOM. React treats custom components the same way as native HTML components – it will compare your new render to the current DOM state and only change what needs changing (adding, removing, or changing elements and properties that are not the same). This means that you can extend the power of ReactJS to Web Components.

Of course, there are some caveats, but it turns out that the things you need to care about when writing Web Components for ReactJS consumption are generally applicable: write small, extremely well encapsulated components that do not leak, do not make assumptions about the page they are running in, and do not try to insert stuff outside their own boundary.

No turning back

So this turned out to be click bait of sorts, for we are not turning back from ReactJS, just learning how to use it efficiently and how to be better at it. Stay tuned for the new cool stuff we were able to do with it.

© Dejan Glozic, 2015

The Genius of Bootstrap (OK, and Foundation)

Credit: Carlos Paes, 2005, Wikimedia Commons

This week we spent a lot of time sifting through the available options for the client-side Web component model. We were doing it in the context of figuring out what to use for the next generation of Bluemix, so we were really trying to think hard and strategically. It is a strange time to do this. Web Components are so close you can touch them (on Chrome at least), but the days when you can code against the entire standard and not bat an eyelash are still further into the future than we would have liked (the same can be said for ES6 – the future is going to be great, just wait a little longer).

You must believe

At its core, the Web is based on linked documents. That didn’t change over all these years, no matter how much exciting interactive stuff we managed to cram on top. In fact, when people fond of the founding principles cry ‘don’t break the Web’, they mostly rail against approaches that create black holes in the Web universe – domains where rules of the Web such as the ability to crawl the DOM, follow the links and browser history stop applying.

By and large, a Web document is consumed as a whole by the browser. There is no native HTML component model, at least not in a way similar to CSS and JavaScript. It is possible to include any number of modular CSS files, and any number of individual JavaScript libraries (not that it is particularly healthy for your performance). Not so for your markup – in fact browsers are positively hostile to content coming from other places (I don’t blame them because security).

In that climate, any component model so far was mounted on top of a library or framework. Before you can use jQuery widgets, you need jQuery to provide the plug-in component model. All the solutions to date were necessarily two-part: first you buy into a particular buffet table aka proprietary component model, then you can fill up your plate from the said buffet. This is nerve-racking – you must pick the particular model that you think will work for you, will stay with your project long enough, and will be maintained in the future. Rolling a complete set of building blocks on your own is very expensive, but so is being locked into a wrong library or framework.

Client side only

Another problem that most of the usual offerings share is that they are unapologetically client side. What it means is that a typical component will provide some dummy content such as ‘Please wait…’ if it shows content, or nothing if it is a ‘building block’ widget of some kind. Only after JavaScript loads will it spring to life, which is to say, show anything useful. Widgets that are shown on user input (the calendar picker being the quintessential example) suffer no ill consequences from this approach, but if you put client-side-only widgets on the main page, your SEO, performance and user experience will suffer.

Whether this is of importance to you depends on where you stand in the ‘server vs client side’ religious war. In their seminal 2012 blog post, Twitter made it very clear that loading JavaScript, making an XHR request back to the mother ship for data, and then rendering the data on the client was not working for them. I am against it as well, as we were bitten hard by the bad initial performance of large JavaScript SPAs. YMMV.

Hidden DOM

Web Components as a standard bring another thing to the table: hidden DOM. When you add a custom component to your page, the buck stops at the component boundary – parent styles will not leak into the component, and DOM queries will not include elements inside the custom component. This yields vital encapsulation, currently possible only using iframes, with all the nastiness they bring to the table. However, it also makes it hard to style the components and provide their initial state while rendering the page on the server.

In theory, Node.js may allow us to run JavaScript on the server and construct the initial content (again, a theory, I am not sure it is actually possible without ugly hacks). Even if possible, it would not work for other server stacks. Essentially Web Components want you to just drop the component in your markup, set a few properties and let it do its stuff, which in most cases means ‘nothing’ until JavaScript for the component loads.

Model transfiguration

One of the perennial problems of starting your rendering on the server and resuming on the client is model transfer. You had to do some work on the server to curate data required to render the component’s initial state. It would be a waste to discard this data and let the JavaScript for the component go through the same process again when loaded. There are two different approaches to this:

  1. Model embedding – during server side rendering, nuggets of data are embedded in the markup using HTML5 data-* properties. Client side JavaScript uses these nuggets to reconstruct the model without the need to make network requests.
  2. Model bootstrapping – this approach is used by some MV* frameworks (e.g. BackboneJS). You can construct your component’s model, use it to render on the server, then inline the model as text in HTML to be eval-ed on the client. The result is the same – the model is ready and does not need to be fetched from the server with a network request.
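
Sketches of both approaches (the markup and names are hypothetical):

<!-- 1. Model embedding: data nuggets in HTML5 data-* properties -->
<div id="stock-widget" data-symbol="IBM" data-price="165.3"></div>

<!-- 2. Model bootstrapping: the model inlined as text -->
<script>
  var __MODEL__ = { "symbol": "IBM", "price": 165.3 };
</script>

<script>
  // Either way, the model is ready without a network request.
  var widget = document.getElementById('stock-widget');
  var embedded = {
    symbol: widget.getAttribute('data-symbol'),
    price: parseFloat(widget.getAttribute('data-price'))
  };
  var bootstrapped = window.__MODEL__;
</script>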

Enter Bootstrap

Our experience with proprietary web components was mostly with Dojo/Dijit, since IBM made a sizeable investment in this open source library and a ton of products were written using it. It has all the characteristics of a walled garden – before you sample from its buffet (Dijit), you need to buy into the widget component model that Dojo Core provides. Once you do it, you cannot mix and match with Prototype, YUI, or jQuery UI. This is not an exclusive fault of Dojo – all JavaScript component models are like this.

Remember when I told you how Twitter wanted to be able to send something from the server ready for the browser to consume? When we first discovered Bootstrap, we were smitten by its approach. We were looking for the proprietary widget system to which we would have to sell our souls, but failed to find it (in fact, the Bootstrap creator Mark Otto expressed open distaste for components that require extensive JavaScript).

Consider:

  1. There is no hidden DOM. There is just plain HTML that is styled by Bootstrap CSS.
  2. This HTML can arrive from the server, or can be dynamically created by JavaScript – no distinction.
  3. Behaviour is added via jQuery plug-ins.
  4. Plug-ins look for Bootstrap components in the DOM and attach event listeners, and start the dynamic behaviour (e.g. Carousel).
  5. The data needed by JavaScript is extracted from ‘data-*’ properties in HTML, and can be programmatically modified once JavaScript loads (model embedding, remember?).

Considering Twitter’s blog post on server side rendering, it is no wonder Bootstrap is incredibly easy to put to use in such a context. You don’t pass a list of entries to the ‘menu’ component, only to be turned into a menu when JavaScript loads. Your menu is simply an ‘ul’ element, with menu items being ‘li’ elements that are just styled to look like a menu. Thanks to CSS3, a lot of animation and special effects are provided natively by the browser, without the need for custom JavaScript to slow down your page. As a result, Bootstrap is really mostly CSS with a sprinkling of JavaScript for behavior (no surprise because it grew out of Twitter’s style guide document).

<div class="dropdown">
   <button class="btn btn-default dropdown-toggle" type="button" id="dropdownMenu1" data-toggle="dropdown" aria-expanded="true">
      Dropdown
      <span class="caret"></span>
   </button>
   <ul class="dropdown-menu" role="menu" aria-labelledby="dropdownMenu1">
      <li role="presentation"><a role="menuitem" tabindex="-1" href="#">Action</a></li>
      <li role="presentation"><a role="menuitem" tabindex="-1" href="#">Another action</a></li>
      <li role="presentation"><a role="menuitem" tabindex="-1" href="#">Something else here</a></li>
      <li role="presentation"><a role="menuitem" tabindex="-1" href="#">Separated link</a></li>
   </ul>
</div>
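
The jQuery plug-in side of the contract can be sketched like this (heavily simplified, not Bootstrap’s actual code): a delegated listener finds declaratively marked elements and attaches the behavior.

// Simplified sketch of the data-api pattern - one delegated listener
// activates any dropdown marked in the server-rendered markup.
$(document).on('click', '[data-toggle="dropdown"]', function(e) {
  e.preventDefault();
  $(this).closest('.dropdown').toggleClass('open');
});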

How important this is for your use case depends on the components. Building block components such as menus, nav bars, tabs, containers, carousels etc. really benefit from server-side construction because they can be immediately rendered by the browser, making your page feel very snappy and immediately useful. The rest of the page can be progressively enhanced as JavaScript arrives and client-side-only components are added to the mix.

If server side is not important to you, Web Components custom element approach seems more elegant:

<fancy-dropdown></fancy-dropdown>

The rest of the markup visible in Bootstrap example is all in the hidden DOM. Neat, except if you want something rendered on the server as well.

Truth be told, it seems to be possible to create Web Components that act similarly to Bootstrap components. In fact, there is a demo showing a selection of Bootstrap components re-imagined as custom elements. I don’t know how real or ‘correct’ this is, just adding it to the mix for completeness. What is not clear is whether this is merely possible or actually encouraged for all custom element creators.

Haters gonna hate

Bootstrap is currently in its third major version and has been immensely popular, but for somewhat different reasons than I listed here. It comes with a very usable, fresh and modern looking theme that many developers use as-is, never bothering to customize. As a result, there are many cookie-cutter web sites out there, particularly if put together by individuals rather than brand-sensitive corporations and startups.

This has created a massive wave of hate from designers. In the pre-Bootstrap days, developers normally could not design if their life depended on it, putting designers on the critical path for every single UI. Now, most internal, prototype and throwaway apps and sites can look ‘good enough’, freeing up designers to focus on big, long running projects and clients looking to impart their own ‘design language’ on their properties.

I would claim that while Bootstrap as-is may not be suitable for a real professional product, the Bootstrap approach is something that should not be thrown away with the bathwater. I know that ‘theming Bootstrap’ sounds like ‘Cuba Libre without the rum’ (note for teetotalers – it’s just Coke). If a toolkit is mostly CSS, and you replace it, what is left? Well, what is left are the class names, documentation, jQuery plug-ins and the general approach. A small team of designers and developers can create a unique product or company theme, and the army of developers can continue to use all of the Bootstrap documentation without any change.

I know many a company designer is tempted to ‘start fresh’ and build a custom system, but it is a much bigger job than it looks, and is not much different from just theming Bootstrap – the difference being that you are now on the hook to provide JavaScript for behavior and extensively document it. You can create themes that transform Bootstrap beyond recognition, as demonstrated in the Bootstrap Expo. And it is a massive challenge to match the open source network effect (599 contributors, 10,495 commits).

Devil’s Advocate

In the past, there were complaints that Bootstrap is bloated (which can be addressed to a degree by cherry-picking Less/Sass files and building a custom CSS), not accessible (this is getting better over time), and has too many accessor rules (no change here). Another complaint is that when a component doesn’t quite do what is desired, modifications eventually cost more than if the component was written from scratch.

I have no problem buying any and all of these complaints, but still claim that the approach is more important than the actual design system. In fact, I put Zurb’s Foundation in the title to indicate a competitor that uses an identical approach (styling HTML with jQuery for behaviour). I could use either (in fact, I have a growing appreciation for Foundation’s clean and understated look that is less immediately recognizable compared to Bootstrap). And the community numbers are nothing to sneeze at (603 contributors, 7,919 commits).

So your point is…

My point is that before thinking about reusable Web components for your project, settle on a design system, be it customized Bootstrap, Foundation or your own. This will ensure a design language fit for your product, and will leave a lot of options open for the actual implementation of user interfaces. Only then should you think of client-side-only components, and you should only use them for building blocks that you can afford to load lazily.

© Dejan Glozic, 2014

Micro-Services and Page Composition Problem

Despite many desirable properties, micro-services carry two serious penalties to be contended with: authentication (which we covered in the previous post) and Web page composition, which I intend to address now.

Imagine you are writing a Node.js app and use Dust.js for the V of the MVC, as we are doing. Imagine also that several pages have shared content you want to inject. It is really easy to do using partials, and practically every templating library has a variation of that (and not just for Node.js).

However, if you build a micro-service system and your logical site is spread out between several micro-services, you have a complicated problem on your hands. Now partial inclusion needs to happen across the network, and another service needs to serve the shared content. Welcome to the wonderful world of distributed composition.

This topic came into sharp focus during Nodeconf.eu 2014. Clifton Cunningham presented the work of his team in this particular area, and the resulting project Compoxure they have open-sourced and shared with us. Clifton has written about it in his blog and it is a very interesting read.

Why bother?

At this point I would like to step back and look at the general problem of document component model. For all their sophistication and fantastic feature set, browsers are stubbornly single document-oriented. They fight with us all the time when it comes to where the actual content on the page comes from. It is trivially easy to link to a number of stylesheets and JavaScript files in the HEAD section of the document, but you cannot point at a page fragment and later use it in your document (until Web Components become a reality, that is – including page fragments that contain custom element templates and associated styles and scripts is the whole point of this standard).

Large monolithic server-side applications were mostly spared from this problem because it was fairly easy to include shared partials within the same application. More recently, single page apps (SPAs) have dealt with this problem using client side composition. If everything is a widget/plug-in/addon, your shared area can be similarly included into your page from the client. Some people are fine with this, but I see several flaws in this approach:

  1. Since there is no framework-agnostic client side component model, you end up stuck with the framework you picked (e.g. Angular.js headers, footers or navigation areas cannot be consumed in Backbone micro-services)
  2. The pause until the page is assembled in SPAs due to JavaScript downloading and parsing can range from a short blip to a seriously annoying blank page stare. I understand that very dynamic content may need some time to be assembled but shared areas such as headers, footers, sidebars etc. should arrive quickly, and so should the initial content (yeah, I don’t like large SPAs, why do you ask?)

The approach we have taken can be called ‘isomorphic’ – we like to initially render on the server for SEO and fast first content, and later progressively enhance using JavaScript ‘on the fly’, and dynamically load with Require.js. If you use Node.js and JavaScript templating engine such as Dust.js, the same partials can be reused on the client (something Airbnb has demonstrated as a viable option). The problem is – we need to render a complete initial page on the server, and we would like the shared areas such as headers, sidebars and footers to arrive as part of that first page. With a micro-service system, we need a solution for distributed document model on the server.

Alternatives

Clifton and myself talked about options at length and he has a nice breakdown of alternatives at the Compoxure GitHub home page. For your convenience, I will briefly call out some of these alternatives:

  1. Ajax – this is a client-side MVC approach. I already mentioned why I don’t like it – it is bad for SEO, and you need to stare at the blank page while JavaScript is being downloaded and/or parsed. We prefer to use JavaScript after the initial hit.
  2. iFrames – you can fake document component models by using seamless iframes. Bad for SEO again, there is no opportunity for caching (therefore, performance problems due to latency), content in iFrames is clipped at the edges, and there are problems with cross-frame communication (although there are window.postMessage workarounds). They do, however, solve the single-domain restriction browsers impose on Ajax. Nevertheless, they have all the cool factor of re-implementing framesets from the 90s.
  3. Server Side Includes (SSIs) – you can inject content using this approach if you use a proxy such as Nginx. It can work and even provide for some level of caching, but not the programmatic and fine grain control that is desirable when different shared areas need different TTL (time to live) values.
  4. Edge Side Includes (ESIs) – a more complete implementation that unfortunately locks you into Varnish or Akamai.

Obviously for Clifton’s team (and ourselves), none of these approaches quite delivers, which is why services like Compoxure exist in the first place.

Direct composition approach

Before I had an opportunity to play with Compoxure, we spent a lot of time wrestling with this problem in our own project. Our current thinking is illustrated in the following diagram:

The key aspects of this approach are:

  1. Common areas are served by individual composition services.
  2. Common area service(s) are proxied by Nginx so that they can later be called by Ajax calls. This allows the same partials to be reused after the initial page has rendered (hence ‘isomorphic’).
  3. Common area service can also serve CSS and JavaScript. Unlike the hoops we need to go through to stitch HTML together, CSS and JavaScript can simply be linked in HEAD of the micro-service page. Nginx helps making the URLs nice, for example ‘/common/header/style.css’ and ‘/common/header/header.js’.
  4. Each micro-service is responsible for making a server-side call, fetching the common area response and passing it into the view for inlining (see the sketch after this list).
  5. Each micro-service takes advantage of shared Redis to cache the responses from each common service. Common services that require authentication and can deliver personalized response are stored in Redis on a per-user basis.
  6. Common areas are responsible for publishing messages to the message broker when something changes. Any dynamic content injected into the response is monitored and if changed, a message is fired to ensure cached values are invalidated. At the minimum, common areas should publish a general ‘drop cache’ message on restart (to ensure new service deployments that contain changes are picked up right away).
  7. Micro-services listen to invalidation messages and drop the cached values when they arrive.
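
A sketch of steps 4 and 5 in Node.js (the URL and cache key are made up; circuit breaker and exponential back-off are omitted):

// Sketch: fetch the common header, preferring the Redis-cached copy.
var request = require('request');
var redis = require('redis').createClient();

function getCommonHeader(userId, callback) {
  var key = 'common:header:' + userId; // per-user cache key
  redis.get(key, function(err, cached) {
    if (cached) return callback(null, cached);
    request('http://common-areas/header', function(err, res, body) {
      if (err) return callback(err); // resilience handling omitted
      redis.setex(key, 60, body);    // invalidation messages drop this key
      callback(null, body);          // passed into the view for inlining
    });
  });
}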

This approach has several things going for it. It uses caching, allowing micro-services to have something to render even when common area services are down. There are no intermediaries – the service is directly responding to the page request, so the performance should be good.

The downside is that each service is responsible for making the network calls and doing it in a resilient manner (circuit breaker, exponential back-off and such). If all services are using Node.js, a module that encapsulates Redis communication, circuit breaker etc. would help abstract out this complexity (and reduce bugs). However, if micro-services are in Java or Go, we would have to duplicate this using language-specific approaches. It is not exactly rocket science, but it is not DRY either.

The Compoxure approach

Clifton and his team have taken a route that mimics ESI/SSI, while addressing their shortcomings. They have their own diagrams, but I put together another one to better illustrate the difference from the direct composition diagram above:

In this approach, composition is actually performed in the Compoxure proxy that is inserted between Nginx and the micro-services. Instead of making its own network calls, each micro-service adds special attributes to the DIV where the common area fragment should be injected. These attributes control parameters such as what to include, what cache TTLs to employ, which cache key to use etc. There is a lot of detail in the way these properties are set (RTFM), but suffice to say that Compoxure proxy will serve as an HTML filter that injects the content from the common areas into these DIVs as instructed.

<div cx-url='{{server:local}}/application/widget/{{cookie:userId}}'
     cx-cache-ttl='10s' cx-cache-key='widget:user:{{cookie:userId}}'
     cx-timeout='1s' cx-statsd-key="widget_user">
This content will be replaced on the way through
</div>

This approach has many advantages:

  1. The whole business of calling the common area service(s), caching the response according to TTLs, dealing with network failure etc. is handled by the proxy, not by micro-services.
  2. Content injection is stack-agnostic – it does not matter how the micro-service that serves the HTML is written (in Node.js, Java, Go etc.) as long as the response contains the expected tags
  3. Even in a system written entirely in Node.js, writing micro-services is easier – no special code to add to each controller
  4. Compoxure is used only to render the initial page. After that, Ajax takes over and composition service is hit with Ajax calls directly.

Contrasting the approach with direct composition, we identified the following areas of concern:

  1. Compoxure parses HTML in order to locate DIVs with special tags. This adds a performance hit, although practical results imply it is fairly small
  2. Special tags are not HTML5 compliant (‘data-‘ prefix would work). If this bothers you, you can configure Compoxure to completely replace the DIV with these tags with the injected content, so this is likely a non-issue.
  3. Obviously Compoxure inserts itself in front of the micro-services and must not go down. It goes without saying that you need to run multiple instances and practice ZDD (Zero-Downtime Deployment).
  4. Caching is static i.e. content is cached based on TTLs. This makes picking the TTL values tricky – our approach that involves pub/sub allows us to use higher TTL values because we will be told when to drop the cached value.
  5. When you develop, direct composition approach requires that you have your own micro-service up, as well as common area services. Compoxure adds another process to start and configure locally in order to be able to see your page with all the common areas rendered. If you hit your micro-service directly, all the DIVs with the ‘cx-‘ properties will be empty (or contain the placeholder content).

Discussion

Direct composition and Compoxure proxy are two valid approaches to the server-side document component model problem. They both work well, with different tradeoffs. Compoxure is more comfortable for developers – they just configure a special placeholder div and magic happens on the way to the browser. Direct composition relies on fewer moving parts, but makes each controller repeat the same code (unless that code is encapsulated in a shared Node.js module).

An approach that bridges both worlds and something we are seriously thinking of doing is to write a Dust.js helper that further simplifies inclusion of the common areas. Instead of importing a module, you would import a helper and then just use it in your markup:

<div>
{@import url="{headerUrl}" cache-ttl="10s"
cache-key="widget:user:{userid}" timeout="1s"}
</div>
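
A first cut of such a helper could look like this (a simplified sketch; the Redis caching, timeout and cache key handling implied by the markup above are omitted):

// Sketch of the imagined Dust.js helper - it maps the chunk
// asynchronously and writes the fetched common area when it arrives.
var dust = require('dustjs-linkedin');
var request = require('request');

dust.helpers = dust.helpers || {};
dust.helpers.import = function(chunk, context, bodies, params) {
  return chunk.map(function(chunk) {
    request(params.url, function(err, res, body) {
      if (err) return chunk.end(''); // degrade to empty content
      chunk.end(body);               // inline the fetched fragment
    });
  });
};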

Of course, Compoxure has some great properties that are not easy to replicate with this approach. For example, it does not pass TTL values to Redis directly because that would cause the cached content to disappear after the countdown, and Compoxure prefers to keep the last content past TTL in case the service is down (better to serve slightly stale content than no content at all). This is a great feature and would need to be replicated here. I am sure I am missing other great features and Clifton will probably remind me about them.

Conclusion

In the end, I like both approaches for different reasons, and I can see a team use both successfully. In fact, I could see a solution where both are available – a Dust.js helper for Node.js/Dust.js micro-services, and Compoxure for everybody else (as a fallback for services that cannot or do not want to fetch common areas programmatically). Either way, the result is superior to the alternatives – I strongly encourage you to try it in your next micro-service project.

You don’t even have to give up your beloved client-side MVCs – we have examples where direct composition is used in a page with Angular.js apps and another with a Backbone app. These days, we are spoiled for choice.

© Dejan Glozic, 2014

Should We Fight or Embrace the DOM?

Dom Tower, Utrecht, 2013, Anitha Mani (Wikimedia Commons)

Now, my story is not as interesting as it is long.

Abe Simpson

One of the privileges (or curses) of experience is that you amass a growing number of cautionary tales with which you can bore your younger audience to death. On the other hand, knowing the history of slavery came in handy for Captain Picard to recognize that what the Federation was trying to do by studying Data was create an army of Datas, not necessarily benefit humanity. So experience can come in handy once in a while.

So let’s see if my past experience can inform a topic du jour.

AWT, Swing and SWT

The first Java windowing system (AWT) was based on whatever the underlying OS had to offer. The original decision was to ensure the same Java program runs anywhere, necessitating a ‘least common denominator’ approach. This translated to UIs that sucked equally on all platforms, not exactly something to get excited about. Nevertheless, AWT embraced the OS, inheriting both its shortcomings and its automatic improvements.

The subsequent Swing library took a radically different approach, essentially taking on the responsibility of rendering everything. It was ‘fighting the OS’, or at least side-stepping it by creating and controlling its own reality. In the process, it also became responsible for keeping up with the OS. The Eclipse project learned that fighting the OS is trench warfare that is never really ‘won’. Using an alternative system (SWT) that accepted the windowing system of the underlying OS turned out to be a good strategic decision, both in terms of the elusive ‘look and feel’, and for riding the OS version waves as they sweep in.

The 80/20 of custom widgets

When I was working on the Eclipse project, I had my own moment of ‘sidestepping’ the OS by implementing Eclipse Forms. Since browsers were not ready yet, I wrote a rudimentary engine that gave me text reflows, hyperlinks and images. This widget was very useful when mixed with other normal OS widgets inside the Eclipse UI. As you could predict, I got the basic behavior fairly quickly (the ’80’ part). Then I spent a couple of years (with help from younger colleagues) doing the ‘last mile’ (the ’20’) – keyboard, accessibility, BIDI. It was never ‘finished’, it was never quite as good as the ‘real’ browser, and it was not nearly as powerful.

One of the elements of that particular custom widget was managing the layout of its components. In essence the container was managing a collection of components but the layout of the components was delegated to the layout manager that could be set on the container. This is an important characteristic that will come in handy later in the article. I remember the layout class as one of the trickiest and hardest to get done right and fully debug. After it was ‘sort of’ working correctly, everybody dreaded touching it, and consequently forgot how it worked.

DOM is awesome

I gave you this Abe Simpson moment of reflection to set the stage for a battle that is raging today between people who want to work with the browser’s DOM, and people who think it is the root of all evil and should be worked around. As is often the case these days, both points of view came across my twitter feed from different directions.

In the ’embrace the DOM’ corner, we have the Web Components crowd, who are thinking that DOM is just fine. In fact, they want us to expand it to turn it into a universal component model (instead of buying into the ‘bolt on’ component models of widget libraries). I cannot wait for it: I always hated the barrier of entry for Web libraries. In order to start reusing components today, you first need to buy into the bolt-on component model (not unlike needing to buy another set top box in order to start enjoying programming from a new content provider).

‘Embracing the DOM’ means a lot of things, and in a widely retweeted article about React.js, Reto Schläpfer argued that the current MV* client-side frameworks treat the DOM as the view, managing data event flow ‘outside the DOM’. Reto highlights the example of the React.js library as an alternative, where the DOM that already manages the layout of your view can be pressed into double duty, serving as the ‘nervous system’.

This is not entirely new, and has been used successfully elsewhere. I wrote previously about the DOM event bubbling used in Bootstrap, which we used successfully in our own code. Our realization that with it we didn’t feel the need for MVC is now echoed by React.js. In both cases, layout and application events (as opposed to data events) are fused – the layout hierarchy is used as scaffolding for the event paths to flow, using the built-in DOM behavior.
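
For illustration, the pattern boils down to a single delegated listener that lets events bubble up the layout hierarchy (the selectors and the application action are made up):

// Sketch: one listener on the document handles clicks for any current
// or future '.task-row' descendant - the DOM carries the event to us.
$(document).on('click', '.task-row', function() {
  selectTask($(this).data('task-id')); // hypothetical application action
});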

For completeness, not all people would go as far as claim that React.js obviates the need for client-side MVC – for example, Backbone.js has been shown to play nicely with React.js.

DOM is awful

In the other corner are those who believe that the DOM (or at least the layout part of it) is broken beyond repair and should be sidestepped. My micro-service camarade de guerre Adrian Rossouw seems to be quite smitten with the Famo.us framework. This being Adrian, he approached this in the usual comprehensive way, collecting all the relevant articles using wayfinder.co (I am becoming increasingly spoiled/addicted to this way of capturing Internet wisdom on a particular topic).

Studying Famo.us is an archetype of a red herring – while its goal is to allow you to build beautiful apps using JavaScript, transformations and animation, the element relevant to this discussion is that it sidesteps the DOM as the layout engine. You create trees and use transforms, which Famo.us uses to manage the DOM as an implementation detail, mostly as a flat list of nodes. Now recall my Abe Simpson story about SWT containers and components – doesn’t it sound similar? A flat list of components and a layout manager on top of it controlling the layout, as a manifestation of the strategy pattern.

Here is what Famo.us has to say about their approach to the DOM for layouts and events:

If you inspect a website running Famo.us, you’ll notice the DOM is very flat: most elements are siblings of one another. Inspect any other website, and you’ll see the DOM is highly nested. Famo.us takes a radically different approach to HTML from a conventional website. We keep the structure of HTML in JavaScript, and to us, HTML is more like a list of things to draw to the screen than the source of truth of a website.

Developers are used to nesting HTML elements because that’s the way to get relative positioning, event bubbling, and semantic structure. However, there is a cost to each of these: relative positioning causes slow page reflows on animating content; event bubbling is expensive when event propagation is not carefully managed; and semantic structure is not well separated from visual rendering in HTML.

They are not the only one with ‘the DOM is broken’ message. Steven Wittens in his Shadow DOM blog post argues a similar position:

Unfortunately HTML is crufty, CSS is annoying and the DOM’s unwieldy. Hence we now have libraries like React. It creates its own virtual DOM just to be able to manipulate the real one—the Agile Bureaucracy design pattern.

The more we can avoid the DOM, the better. But why? And can we fix it?

……

CSS should be limited to style and typography. We can define a real layout system next to it rather than on top of it. The two can combine in something that still includes semantic HTML fragments, but wraps layout as a first class citizen. We shouldn’t be afraid to embrace a modular web page made of isolated sections, connected by reference instead of hierarchy.

Beware what you are signing up for

I would have liked to have a verdict for you by the end of the article, but I don’t. I feel the pain of both camps, and can see the merits of both approaches. I am sure the ‘sidestep the DOM’ camp can make their libraries work today, and demonstrate how they are successfully addressing the problems plaguing the DOM in the current browser implementations.

But based on my prior experience with the sidestepping approach, I call for caution. I will also draw on my experience as a father of two. When a young couple goes through the first pregnancy, they focus on the first 9 months, culminating in the delivery. This focus is so sharp and short-sighted that many couples are genuinely bewildered when the hospital hands them their baby and kicks them to the hospital entrance, with the newly purchased car seat safely secured in the back. It only dawns on them at that point that a baby is forever – that the bundle of joy is now their responsibility for life.

With that metaphor in mind, I worry about taking over the DOM’s responsibility for layout. Not necessarily because of what it means today, but because of what it will mean a couple of years down the road, when both the standards and the browser implementations inevitably evolve. Will it turn into trench warfare that cannot be won, a war of attrition that drains resources and results in abandoned libraries and frameworks?

Maybe I can figure that one out after a nap.

© Dejan Glozic, 2014