PayPal, You Got Me At ‘Isomorphic ReactJS’


I love PayPal’s engineering department. There, I’ve said it. I have followed what Jeff Harrell and the team have been doing ever since I started reading about their wholesale jump into Node.js/Dust.js waters. In fact, their blogs and presentations finally convinced me to push for Node.js in my IBM team as well. I had the pleasure of talking to them at conferences multiple times and I continue to like their overall approach.

PayPal is at its core a no-nonsense enterprise that moved from Java to Node.js. Everything I have seen coming from them had the same pragmatic approach, with concerns you can expect from running Node.js in production – security, i18n, converting a large contingent of Java engineers to Node.js.

More recently, I kept tabs on PayPal’s apparent move from Dust.js to ReactJS. Of course, this time around we learned faster and were already playing with React ourselves (using Dust.js for simpler, content-heavy pages and reserving ReactJS for more dynamic use cases). However, we hadn’t really started pushing on ReactJS because I was still looking at how to take advantage of React’s ability to render on the server.

Well, the wait is over. PayPal has won my heart again by releasing a React engine that connects the dots in a way so compatible with what we needed that it made me jump with joy. Unlike the version I used for my previous blog post, this one allows server side components to be client-mountable, offering true isomorphic goodness. Finally, a fat-free yogurt that does not taste like paper glue.

Curate and connect

The key contribution of PayPal’s engine is in what it brings together. One of the reasons React has attracted so much attention lately is its ability to render into a string on the server, then render to the real DOM on the client using the same code. This is made possible by Node.js, which is by now our standard stack (I haven’t written a line of Java code for more than a year, on my honour).
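
Here is a minimal sketch of that dual-rendering capability using the 0.12-era React API (a made-up component for illustration, not code from the engine itself):

var React = require('react');

var Greeting = React.createClass({
  render: function render() {
    return <div>Hello, {this.props.name}</div>;
  }
});

// on the server (Node.js): serialize the component into an HTML string
var html = React.renderToString(<Greeting name='world' />);

// on the client: mounting the same component over the server-sent markup
// reuses the existing DOM and merely attaches listeners, e.g.:
// React.render(<Greeting name='world' />, document.getElementById('mount'));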

But that is not enough – in order to carry over into the client with the same code, you need ‘soft’ page switching – showing boxes inside other boxes and updating the browser history as these boxes are swapped. This has now been brought to us by another great library – react-router. This module, inspired by Ember’s amazing router, is quickly becoming ‘the’ router for React applications.

What PayPal did in their engine was connect all these great libraries, then write important glue code to put it all together. It is now possible to mix normal server side templates with pages that start their life on the server, then continue on the client with the state preserved and ready to go as soon as JavaScript is loaded.

Needless to say, this was the solution we were looking for. As far as we are concerned, this will put an end to needless ‘server vs client’ wars, and allow us to have our cake and eat it too. Mmm, cake.

Show us some sample code

OK, let’s get our hands dirty. What many people need when writing applications is a hybrid between a site and an app – the ability to put together a web site that has one or more single-page apps embedded in it. We will build an example site that has two plain ReactJS pages rendered on the server, while the third page is really an SPA taking advantage of react-engine and the ability to go full isomorphic.

We will start by creating a shared layout JSX component to be consumed by all other pages:

var React = require('react');
var Header = require('./header.jsx');

module.exports = React.createClass({

  render: function render() {
    var bundle;

    // include the client-side bundle script only when the page asks for it
    if (this.props.addBundle) {
      bundle = <script src='/bundle.js'/>;
    }

    return (
      <html>
        <head>
          <meta charSet='utf-8' />
          <title>
            {this.props.title}
          </title>
          <link rel="stylesheet" href="/css/styles.css"/>
        </head>
        <body>
          <Header {...this.props}></Header>
          <div className="main-content">
             {this.props.children}
          </div>
          {bundle}
        </body>
      </html>
    );
  }
});

We extracted the common header into a separate component, which we require and inline:

var React = require('react');

module.exports = React.createClass({

  displayName: 'header',

  render: function render() {
    var linkClass = 'header-link';
    var linkClassSelected = 'header-link header-selected';

    return (
      <section className='header' id='header'>
        <div className='header-title'>{this.props.title}</div>
        <nav className='header-links'>
          <ul>
            <li className={this.props.selection === 'header-home' ? linkClassSelected : linkClass} id='header-home'>
              <a href='/'>Home</a>
            </li>
            <li className={this.props.selection === 'header-page2' ? linkClassSelected : linkClass} id='header-page2'>
              <a href='/page2'>Page 2</a>
            </li>
            <li className={this.props.selection === 'header-spa' ? linkClassSelected : linkClass} id='header-spa'>
              <a href='/spa/section1'>React SPA</a>
            </li>
          </ul>
        </nav>
      </section>
    );
  }
});

The header shows three main pages – ‘Home’, ‘Page 2’ and ‘React SPA’. The first two are plain server side pages that are rendered by express and sent to the client as HTML:

var Layout = require('./layout.jsx');
var React = require('react');

module.exports = React.createClass({

  render: function render() {

    return (
      <Layout {...this.props}>
        <h2>Home</h2>
        <p>An example of a plain server-side ReactJS page.</p>
      </Layout>
    );
  }
});

On to the main course

The third page (‘React SPA’) is where all the fun is. Here, we want to create a single-page app so that when we navigate to it by clicking on its link in the header, all subsequent navigations inside it are client-side. However, true to our isomorphic requirement, we want the initial content of the ‘React SPA’ page to be rendered on the server, after which react-router and React components will take over.

To show the potential of this approach, we will build a very useful layout – a page with a left nav containing three links (Section 1, 2 and 3), each showing different content in the content area of the page. If you have seen such a page once, you have seen it a million times – this layout is the internet’s bread and butter.

We start building our SPA top-down. Our top-level ReactJS component will reuse the Layout component:

var Layout = require('./layout.jsx');
var React = require('react');
var Nav = require('./nav.jsx');
var Router = require('react-router');

module.exports = React.createClass({

  render: function render() {

    return (
      <Layout {...this.props} addBundle={true}>
        <Nav {...this.props}/>
        <Router.RouteHandler {...this.props}/>
      </Layout>
    );
  }
});

We have extracted the left nav into a Nav component:

var React = require('react');
var Link = require('react-router').Link;

module.exports = React.createClass({

  displayName: 'nav',

  render: function render() {
    var activeClass = 'left-nav-selected';

    return (
      <section className='left-nav' id='left-nav'>
        <div className='left-nav-title'>{this.props.name}</div>
        <nav className='left-nav-links'>
          <ul>
            <li className='left-nav-link' id='nav-section1'>
              <Link to='section1' activeClassName={activeClass}>Section 1</Link>
            </li>
            <li className='left-nav-link' id='nav-section2'>
              <Link to='section2' activeClassName={activeClass}>Section 2</Link>
            </li>
            <li className='left-nav-link' id='nav-section3'>
              <Link to='section3' activeClassName={activeClass}>Section 3</Link>
            </li>       
          </ul>
        </nav>
      </section>
    );
  }
});

This looks fairly simple, except for one crucial difference: instead of adding plain ‘a’ tags for links, we used Link components coming from the react-router module. They are the key to the magic here – on the server, they render normal links, but with ‘breadcrumbs’ that allow the React router to mount click listeners on them and cancel the normal navigation behaviour. Instead, the React components registered as handlers for these links will be shown. In addition, browser history is maintained so that the back button and address bar work as expected for these ‘soft’ navigations.

The RouteHandler component is responsible for rendering whatever our route definition designates for the current route:

var React = require('react');
var Router = require('react-router');
var Route = Router.Route;

var SPA = require('./views/spa.jsx');
var Section1 = require('./views/section1.jsx');
var Section2 = require('./views/section2.jsx');
var Section3 = require('./views/section3.jsx');

var routes = module.exports = (
  <Route path='/spa' handler={SPA}>
    <Route name='section1' handler={Section1} />
    <Route name='section2' handler={Section2} />
    <Route name='section3' handler={Section3} />
    <Router.DefaultRoute handler={Section1} />
  </Route>
);

As you can infer, we are not declaring all the routes for our site, just the section for the single-page app (under the ‘/spa’ path). There we have built three subpaths and designated React components as handlers for these routes. When a Link component whose ‘to’ property is equal to the route name is activated, the component designated as handler will be shown.

Server needs to cooperate

In order to get our HTML5 pushState-enabled router to work, we need server-side cooperation. In the olden days, when SPAs used hashes to ensure client-side navigation did not cause a page reload, we didn’t need to care about the server because hashes stayed on the client. Those days are over: we want true deep URLs on the client, and we can have them using HTML5 pushState support.

However, once we start using true links for everything, we need to tell the server to not try to render pages that belong to the client. We can do this in express like this:

app.get('/', function(req, res) {
  res.render('home', {
    title: 'React Engine Demo',
    name: 'Home',
    selection: 'header-home'
  });
});

app.get('/page2', function(req, res) {
  res.render('page2', {
    title: 'React Engine Demo',
    name: 'Page 2',
    selection: 'header-page2'
  });
});

// let react-engine and react-router handle everything under /spa:
// note that we pass the request URL instead of a view name
app.get('/spa*', function(req, res) {
  res.render(req.url, {
    title: 'SPA - React Engine Demo',
    name: 'React SPA',
    selection: 'header-spa'
  });
});

Notice that we have defined controllers for two routes using the normal ‘res.render’ approach, but the third one is special. First off, we have instructed express to hand everything under /spa to the React router instead of trying to match individual pages. Notice also that instead of passing a normal view name to res.render, we are passing the entire URL coming from the request. This particular detail is what makes react-engine ingenious – it mixes react-router and normal views by checking for a leading ‘/’ in the view name.
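
One piece not shown above is the registration of react-engine as the express view engine in app.js. Here is a sketch along the lines of the module’s documented setup (exact option names may vary between react-engine versions):

var path = require('path');
var express = require('express');
var renderer = require('react-engine');

var app = express();

// create the engine, pointing it at the same routes file the client uses
var engine = renderer.server.create({
  routes: require('./public/routes.jsx')
});

// register it for .jsx views and install react-engine's custom express View
app.engine('.jsx', engine);
app.set('views', path.join(__dirname, 'public/views'));
app.set('view engine', 'jsx');
app.set('view', require('react-engine/lib/expressView'));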

A bit more boilerplate

Now that we have all these pieces, what else do we need to get this to work? First off, we need a JS file that configures the React router on the client and starts the client-side mounting:

var Routes = require('./routes.jsx');
var Client = require('react-engine/lib/client');

// Include all view files. Browserify doesn't do
// this automatically as it can only operate on
// static require statements.
require('./views/**/*.jsx', {glob: true});

// boot options
var options = {
  routes: Routes,

  // supply a function that can be called
  // to resolve the file that was rendered.
  viewResolver: function(viewName) {
    return require('./views/' + viewName);
  }
};

document.addEventListener('DOMContentLoaded', function onLoad() {
  Client.boot(options);
});

And to round it all off, we need to deliver all this JavaScript, and the JSX templates, to the client somehow. There are several ways to approach JavaScript modularization on the client, but since we are using Node.js and singing the isomorphic song, what can be more apt than using Browserify to carry CommonJS over into the client? The following command line will gather the entire dependency tree for index.js into one tidy bundle:


browserify -t reactify -t require-globify public/index.js -o public/bundle.js

If you circle back all the way to layout.jsx, you will notice that we are including a sole script tag for /bundle.js.

The complete source code for this app is available on GitHub. When we run ‘npm install’ to bring in all the dependencies, then ‘npm start’ to run browserify and start express, we get the following:

[Screenshot: react-engine-demo]

When we click on the header links, they cause a full page reload, rendered by the express server. However, clicks on the left nav links change the content of the SPA page without a page reload. Meanwhile, the address bar and browser history are dutifully updated, and deep links are available for sharing.

Discussion

You can probably tell that I am very excited about this approach because it finally brings together fast initial rendering and SEO-friendly server-side pages with the full dynamic ability of client-side apps. All excitement aside, we need to remember that this is just the view layer – we would need to write more code to add an action dispatcher and data stores in order to implement the full Flux architecture.

Performance-wise, the combined app renders very quickly, but one element sticks out. Bundle.js in its full form is about 800KB of JavaScript, which is a lot. Minification trims it down to 279KB, and when compression is enabled in express, it goes further down to 62.8KB to send down the wire. We should bear in mind that this is ALL the JavaScript we need – ReactJS, as well as our own components. It should also be noted that this JavaScript is loaded asynchronously and that we are already sending content from the server – we will not see a white page while the script is being downloaded and parsed.
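
For reference, enabling that compression in express is a one-liner, assuming the standard compression middleware is installed:

// hypothetical addition to app.js: gzip responses (bundle.js included)
// before they go down the wire
var compression = require('compression');
app.use(compression());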

In a more complex application, we would probably want to segregate JavaScript into more than one bundle so that we can load code in chunks as needed. Luckily, react-router already addressed this.

The app is deployed on Bluemix – you can access it at http://react-engine-demo.mybluemix.net. Try it out, play with the source code and let me know what you think.

Great job, PayPal! As for me, I am completely sold on ReactJS for most real world applications. We will be using this approach in our current project from now on.

© Dejan Glozic, 2015


Isomorphic Apps Part 2: Node, React.js, and Socket.io

Two Heads, 1930, Wikimedia Commons

When I was a kid, I went to the movies to watch Mel Brooks’ “History of The World, Part I”. I had a great time and could not wait for the sequel (which was to feature, among other things, Hitler on ice, a Viking funeral and laser-shooting rabbis in the ‘Jews in Space’ teaser). Alas, ‘Part II’ never came. Determined not to subject my faithful readers to such a disappointment, here comes the promised part II of my ‘Isomorphic Apps’ trilogy.

In the first part of this story, we created an isomorphic app by taking advantage of the fact that we can use Dust.js as an Express view engine, and then compile partials into JavaScript and re-use them on the client as needed. In order to compare approaches with only one variable changed, we will switch to React.js for the view.

What’s the deal with React.js

React.js is attracting a lot of attention these days due to the novel approach it has taken to building dynamic Web apps. At the heart of the approach is the notion of a virtual DOM. React.js components manipulate an abstraction of the DOM that is then transformed into the physical DOM in a highly optimized fashion. Even more ingeniously, the browser’s DOM is only one of the possible transformations: the virtual DOM can also be serialized into plain HTML, which makes it possible to use it on the server. More recently still, it can be rendered into native UI components to address mobile (and even desktop) platforms.

I am old enough to remember Java’s “Write once, run anywhere” slogan, and this looks like a new generation’s attempt to make a run at this chimera. But even putting React Native aside for a moment, the fact that you can render on the server makes React supremely suitable for isomorphic apps, something Angular.js is lacking.

React.js is also refreshingly simple to figure out. Angular.js has this famous adoption roller coaster, and sometimes when you don’t get an Angular peculiarity, you feel the fault is with you, not Angular. React.js took the approach that life is short, and we can do better things with our time than figure out the maddening quirks of a complex framework. There is no two-way binding (because it has been shown to be a double-edged sword – see what I did there). When the model changes, you just naively rebuild the view (sometimes referred to as ‘writing pages like it’s the 90s’). This seems massively suboptimal, but remember that you are only rebuilding the virtual DOM – React.js figures out the actual delta and applies only that delta to the real DOM. And since most of the performance cost lies in the physical DOM, React.js promises fast apps without writing a lot of code for smart and surgical updates on model changes.

Configuring React.js as an Express view engine

Alright, I hope this whetted your appetite for some coding. We will start by cloning the page from part I and adding another view engine in app.js (because I am cheap/lazy and don’t want to run another app for this). For this we need to install React on the server, as well as the express view adapter.

We will start by installing ‘react’ and ‘express-react-views’ and configuring the jsx view engine:


var react = require('express-react-views');

...

app.engine('jsx', react.createEngine());
app.set('view engine', 'jsx');

The last line above should only be included if you will use JSX as the only view engine for Express. In my case, I had to omit that line because I am already serving some Dust pages, and you can only set one default engine. The only thing I lost this way was the ability to reference JSX templates without the extension – they can still be rendered when the extension is included.

The controller for our React.js page is almost identical to the one we wrote for Dust.js:


var model = require('../models/todos');

module.exports.get = function(req, res) {
   model.list(req.user, function(err, todos) {
      res.render('isomorphic_react.jsx',
         { title: 'React - Isomorphic', user: req.user, todos: todos });
   });
};

Most of the fun happens in the view, as expected. React.js requires some getting used to. For starters, JSX syntax is actually XML (and not even XHTML), so all elements require termination. Many attribute names require camel case, which is very annoying (I always hated Jade for this mental transformation, and now JSX is doing the same to me). At least the JSX transformer yells at you in the console about possible errors you made, so fixing up your JSX is not too hard:

var React = require('react');
var DefaultLayout = require('./rlayout');
var RTodo = require('./rtodo');

var Todos = React.createClass({
  render: function() {
    return (
      <DefaultLayout { ...this.props} selection="react">
        <h1>Using React.js for View</h1>
        <h2>Todos</h2>
        <div className="new">
           <textarea id="new-todo-text" placeholder="New todo"/>
        </div>
        <div className="delete">
           <button type="button" id="delete-all"
              className="btn btn-primary">Delete All</button>
        </div>
        <div id="todos" className="todos">
           {this.props.todos.map(function(todo) {
              return <RTodo key={todo.id} {...todo} />;
           })}
        </div>
        <script src="/js/prettyDate.js"></script>
        <script src="/js/rtodo.js"></script>
        <script src="/js/rtodos.js"></script>
      </DefaultLayout>
    );
  }
});

module.exports = Todos;

The code above requires some explanation. Unlike with Dust.js, both inclusion into a common layout template and instantiation of partials are done through the React.js component model. Notice that we imported the DefaultLayout component, which is our standard page boilerplate. The payload of the page is simply specified as content of the instantiated component in the ‘render’ method above.

Another important point is that unlike Dust.js, properties are not automatically passed down the component hierarchy – we need to do it explicitly (notice the strange “{...this.props}” expression in the DefaultLayout declaration – what it says is ‘pass all the properties down to the child component’). We can also define new properties, which I am doing by passing ‘selection’ that will be used by the header component (to highlight the ‘React’ link).

Another important section of the template is where I am instantiating the RTodo component (a single Todo card). Flow control can be tricky in JSX because the entire template is one giant return statement, so everything needs to evaluate to an expression. Notice the trick of using the array map to iterate over the list of todos and render each child todo component.

This code will produce a page very similar to the one with Dust.js, with identical results. In fact, it is possible to go back and forth because both pages are using the same REST service for the model.

JSX compiler

So far we have taken care of the server side. As with Dust.js, we can compile the components we need on the client side, this time using the jsx compiler that comes with ‘react-tools’:


#!/bin/bash
node_modules/react-tools/bin/jsx --extension jsx views/ public/js/ rtodo

We can compile any number of components and place them into the JS directory under the /public folder so that Express can serve them to the browser.

The client side script is very similar to the one used by the Dust.js page. The only difference is in the ‘Add’ action handler:

var socket = io.connect('/');
socket.on('todos', function (message) {
  if (message.type=='add') {
    var newTodo = document.createElement('div');
    React.render(React.createElement(RTodo, message.state),
              newTodo);
    $(".todos").prepend(newTodo);
  }
  ...

The code is remarkably similar – instead of calling ‘dust.render’ to render the partial using the state we received via the Socket.io message, we ask React to render the compiled component into a new DOM element we created on the fly. We then prepend this element into the parent DIV.

Commentary and comparisons

First off, I would say that this second attempt at writing an isomorphic app was a success, because I was able to replicate the Dust.js example from part I with identical behaviour. However, the example is not as good a fit for React.js. A better example would see us modifying a model and asking React.js to re-render an existing DOM branch. Now that I feel reasonably comfortable around React.js, I think I will create something more dynamic for it in the near future. A true React-y way of doing the list of todos would be to simply re-render the entire list on each Socket.io message, and let React.js figure out that all it needs to do is insert a new Todo DIV into the parent node. This way we would not need to create DOM elements ourselves, as in the code above.
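
A minimal sketch of that approach (RTodos here is a hypothetical component that renders the entire list from its props):

// keep the model as a plain array; re-render everything on each message
var todos = [];

var socket = io.connect('/');
socket.on('todos', function (message) {
  if (message.type === 'add') {
    todos.unshift(message.state);
  }
  // naively rebuild the whole list; React diffs the virtual DOM and
  // only inserts the new Todo DIV into the real one
  React.render(React.createElement(RTodos, { todos: todos }),
               document.getElementById('todos'));
});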

After spending a year working with Dust.js, JSX took some getting used to. As I said, I hated Jade because it created an additional layer of abstraction between me and HTML, and I never quite knew what final HTML it would produce. JSX evokes the same feelings in me, but the error/correction loop has shortened as I learned it better. In addition, the value I get in return is much higher with JSX than with Jade.

Nevertheless, certain things will not get better with time. JSX is awkward to work with when it comes to logic. Remember, the entire template is an expression, so statements are really hard to fit in. Ternary conditionals work, and as you saw, it is possible to use tricks with maps to iterate over children in a collection. I still prefer Dust.js for straightforward pages, but I can see how React.js can work with components in a very dynamic app.
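
To illustrate, here is a minimal made-up component showing both tricks – a ternary and a map – living inside one JSX expression:

var React = require('react');

module.exports = React.createClass({
  render: function render() {
    return (
      <div className='todos'>
        {this.props.todos.length > 0
          ? this.props.todos.map(function(todo) {
              return <div className='todo' key={todo.id}>{todo.text}</div>;
            })
          : <p>No todos yet</p>}
      </div>
    );
  }
});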

I like the React.js component model, as well as the fact that code and markup live close to each other – for a developer this is very useful. I also like the fact that, JSX quirks aside, there is much less magic compared to Angular.js. Of course, React.js is really just the View of MVC, so it is not a fair comparison. On the other hand, I am now dying to hook it up to Backbone as a view – it feels like a great combination (and of course, there are already articles exploring this exact combination). The more I think and read about it, the more Backbone models/collections/router plus React.js views look like my favorite stack for writing highly dynamic apps, with a server-side bonus for SEO and initial experience.
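
Since I cannot resist, here is a rough sketch of what that pairing might look like (entirely hypothetical, not code from the sample app):

var React = require('react');

// a Backbone collection serves as the model; React is just the view
var TodoList = React.createClass({
  componentDidMount: function() {
    var self = this;
    // naively re-render on any collection change; React computes the delta
    this.props.collection.on('add remove change', function() {
      self.forceUpdate();
    }, this);
  },
  componentWillUnmount: function() {
    this.props.collection.off(null, null, this);
  },
  render: function() {
    return (
      <ul>
        {this.props.collection.map(function(todo) {
          return <li key={todo.cid}>{todo.get('text')}</li>;
        })}
      </ul>
    );
  }
});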

A word of caution

If your system has elements of both a site and an app, use micro-services and implement the site portions with a more straightforward templating solution (as I already covered in the previous blog post). This will make content authoring easier and increase the number of content providers with cursory knowledge of HTML who will feel confident authoring and/or modifying something like Dust.js templates. Leave React.js for the 10x developers working on the highly dynamic ‘app’ portions of the system. By the way, this is an area where micro-services shine – this kind of partitioning is one of their key selling points. You can easily have micro-services using Dust.js and micro-services using React.js (and, as I have already shown, even mixed in the same Node app).

One of the downsides of a mixed system (one using both Dust.js and React.js) is that content pages sometimes have dynamic components sprinkled in them. The challenge is invoking those components without making your casual developers afraid to touch such pages. Invoking a React.js component in a Dust.js page requires inserting JavaScript tags, which is less than ideal. This is where Web Components are much easier to reason about, and there are already attempts to bridge the two worlds – invoking React.js components as custom Web Components.

And that’s a wrap

As before, you can browse the source code as an IBM DevOps Services project, and the latest version of the app is running in Bluemix. In the final instalment of this trilogy, I will make our example a bit more dynamic (to let React.js show its true potential), and add some structure using Backbone.js. Until then, React away!

© Dejan Glozic, 2015

Should We Fight or Embrace the DOM?

Dom Tower, Utrecht, 2013, Anitha Mani (Wikimedia Commons)

Now, my story is not as interesting as it is long.

Abe Simpson

One of the privileges (or curses) of experience is that you amass a growing number of cautionary tales with which you can bore your younger audience to death. On the other hand, knowing the history of slavery came in handy for Captain Picard to recognize that what the Federation was trying to do by studying Data was to create an army of Datas, not necessarily to benefit humanity. So experience can come in handy once in a while.

So let’s see if my past experience can inform a topic du jour.

AWT, Swing and SWT

The first Java windowing system (AWT) was based on whatever the underlying OS had to offer. The original decision was to ensure the same Java program runs anywhere, necessitating a ‘least common denominator’ approach. This translated into UIs that sucked equally on all platforms, not exactly something to get excited about. Nevertheless, AWT embraced the OS, inheriting its shortcomings but also receiving its automatic improvements.

The subsequent Swing library took a radically different approach, essentially taking on the responsibility of rendering everything. It was ‘fighting the OS’, or at least side-stepping it by creating and controlling its own reality. In the process, it also became responsible for keeping up with the OS. The Eclipse project learned that fighting the OS is trench warfare that is never really ‘won’. Using an alternative system (SWT) that accepted the windowing system of the underlying OS turned out to be a good strategic decision, both in terms of the elusive ‘look and feel’, and for riding the OS version waves as they sweep in.

The 80/20 of custom widgets

When I was working on the Eclipse project, I had my own moment of ‘sidestepping’ the OS by implementing Eclipse Forms. Since browsers were not ready yet, I wrote a rudimentary engine that gave me text reflows, hyperlinks and images. This widget was very useful when mixed with other normal OS widgets inside the Eclipse UI. As you could predict, I got the basic behavior working fairly quickly (the ‘80’ part). Then I spent a couple of years (with help from younger colleagues) doing the ‘last mile’ (the ‘20’) – keyboard support, accessibility, BIDI. It was never ‘finished’, never quite as good as a ‘real’ browser, and not nearly as powerful.

One of the elements of that particular custom widget was managing the layout of its components. In essence the container was managing a collection of components but the layout of the components was delegated to the layout manager that could be set on the container. This is an important characteristic that will come in handy later in the article. I remember the layout class as one of the trickiest and hardest to get done right and fully debug. After it was ‘sort of’ working correctly, everybody dreaded touching it, and consequently forgot how it worked.

DOM is awesome

I gave you this Abe Simpson moment of reflection to set the stage for a battle that is raging today between people who want to work with the browser’s DOM, and people who think it is the root of all evil and should be worked around. As often happens these days, both points of view came across my twitter feed from different directions.

In the ‘embrace the DOM’ corner, we have the Web Components crowd, who think the DOM is just fine. In fact, they want us to expand it into a universal component model (instead of buying into the ‘bolt-on’ component models of widget libraries). I cannot wait for it: I always hated the barrier to entry for Web libraries. In order to start reusing components today, you first need to buy into the bolt-on component model (not unlike needing to buy another set-top box in order to start enjoying programming from a new content provider).

‘Embracing the DOM’ means a lot of things, and in a widely retweeted article about React.js, Reto Schläpfer argued that the current MV* client-side frameworks treat the DOM as the view, managing the data event flow ‘outside the DOM’. Reto highlights the React.js library as an alternative, where the DOM that already manages the layout of your view can be pressed into double duty as its ‘nervous system’.

This is not entirely new, and has been used successfully elsewhere. I wrote previously about the DOM event bubbling used in Bootstrap, which we used successfully in our own code. Our realization that with it we didn’t feel the need for MVC is now echoed by React.js. In both cases, layout and application events (as opposed to data events) are fused – the layout hierarchy is used as scaffolding for the event paths to flow along, using built-in DOM behavior.
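
For illustration, the pattern boils down to something like this (a hypothetical sketch, not the actual Bootstrap code):

// one listener on the container; clicks on any card bubble up to it,
// so the layout hierarchy doubles as the event path
document.getElementById('todos').addEventListener('click', function (e) {
  var node = e.target;
  while (node && node !== this) {
    if (node.classList && node.classList.contains('todo')) {
      console.log('todo selected:', node.id);
      return;
    }
    node = node.parentNode;
  }
});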

For completeness, not all people would go so far as to claim that React.js obviates the need for client-side MVC – for example, Backbone.js has been shown to play nicely with React.js.

DOM is awful

In the other corner are those who believe that the DOM (or at least the layout part of it) is broken beyond repair and should be sidestepped. My micro-service camarade de guerre Adrian Rossouw seems to be quite smitten with the Famo.us framework. This being Adrian, he approached it in the usual comprehensive way, collecting all the relevant articles using wayfinder.co (I am becoming increasingly spoiled/addicted to this way of capturing Internet wisdom on a particular topic).

Studying Famo.us is a bit of a red herring – while its goal is to let you build beautiful apps using JavaScript, transformations and animation, the element relevant to this discussion is that it sidesteps the DOM as the layout engine. You create trees and use transforms, which Famo.us uses to manage the DOM as an implementation detail, mostly as a flat list of nodes. Now recall my Abe Simpson story about SWT containers and components – doesn’t it sound familiar? A flat list of components with a layout manager on top of it, controlling the layout as a manifestation of the strategy pattern.

Here is what Famo.us has to say about their approach to the DOM for layouts and events:

If you inspect a website running Famo.us, you’ll notice the DOM is very flat: most elements are siblings of one another. Inspect any other website, and you’ll see the DOM is highly nested. Famo.us takes a radically different approach to HTML from a conventional website. We keep the structure of HTML in JavaScript, and to us, HTML is more like a list of things to draw to the screen than the source of truth of a website.

Developers are used to nesting HTML elements because that’s the way to get relative positioning, event bubbling, and semantic structure. However, there is a cost to each of these: relative positioning causes slow page reflows on animating content; event bubbling is expensive when event propagation is not carefully managed; and semantic structure is not well separated from visual rendering in HTML.

They are not the only one with ‘the DOM is broken’ message. Steven Wittens in his Shadow DOM blog post argues a similar position:

Unfortunately HTML is crufty, CSS is annoying and the DOM’s unwieldy. Hence we now have libraries like React. It creates its own virtual DOM just to be able to manipulate the real one—the Agile Bureaucracy design pattern.

The more we can avoid the DOM, the better. But why? And can we fix it?

……

CSS should be limited to style and typography. We can define a real layout system next to it rather than on top of it. The two can combine in something that still includes semantic HTML fragments, but wraps layout as a first class citizen. We shouldn’t be afraid to embrace a modular web page made of isolated sections, connected by reference instead of hierarchy.

Beware what you are signing up for

I would have liked to have a verdict for you by the end of the article, but I don’t. I feel the pain of both camps, and can see the merits of both approaches. I am sure the ‘sidestep the DOM’ camp can make their libraries work today, and demonstrate how they are successfully addressing the problems plaguing the DOM in the current browser implementations.

But based on my prior experience with the sidestepping approach, I call for caution. I will also draw on my experience as a father of two. When a young couple goes through the first pregnancy, they focus on the first 9 months, culminating in the delivery. This focus is so sharp and short-sighted that many couples are genuinely bewildered when the hospital hands them their baby and kicks them out to the hospital entrance, with the newly purchased car seat safely secured in the back. It only dawns on them at that point that a baby is forever – that the bundle of joy is now their responsibility for life.

With that metaphor in mind, I worry about taking over the DOM’s responsibility for layout. Not necessarily because of what it means today, but a couple of years down the road, when both the standards and the browser implementations inevitably evolve. Will it turn into trench warfare that cannot be won, a war of attrition that drains resources and results in abandoned libraries and frameworks?

Maybe I can figure that one out after a nap.

© Dejan Glozic, 2014