With React, I Don’t Need to Be a Ninja Rock Star Unicorn

San Diego Comic-Con 2011 - Lego Ninja, The Conmunity - Pop Culture Geek from Los Angeles, CA, USA

Followers of my blog may remember my microservice period. There was a time I could not shut up about them. Now, several blog posts later, not a peep. Am I over microservices? Not by a long stretch.

For us, microservices are gravity now. I remember an interview with Billy Corgan of The Smashing Pumpkins where, when pressed about his choice of guitar strings, he answered: “I use them”. That’s how I feel about microservices – now that we live and breathe them every day, they are not exciting, they are air. The only people who get excited about air are SCUBA divers, I suppose, particularly if they are running low.

ReactJS, on the other hand, is interesting to us because we are still figuring it out. For years we were trying to have our cake and eat it too – merge the benefits of server and client side rendering. I guess my answer to ‘vanilla or chocolate ice cream’ is ‘yes please’, and with React, I can have my chocolate sundae for breakfast, lunch and dinner.

The problem with ninja rock star unicorns

The magical creature from the title is of course the sought-after 10x developer. He/she knows all the modern frameworks, sometimes reads about the not-so-modern ones just for laughs, and thrives where others are banging their heads against their desks repeatedly.

Rock star developers not only do not shy away from frameworks with a high barrier to entry such as Angular, they often write their own, even more sophisticated and intricate ones. Nothing wrong with that, except that you cannot hire a full team of them, and even if you could, I doubt team dynamics would be particularly great. The reality of today’s developer job market is that you will likely staff a team with great, competent and potentially passionate developers. I say potentially because in many cases their passion will depend on you as a leader and your ability to instill it.

The React connection

This is where React comes into play. Careful readers of this blog may remember my aversion to JavaScript frameworks in general. For modestly interactive sites, you can go a long way with just Node.js, Express, a templating library such as Dust.js and a sprinkle of jQuery for good measure. However, a highly dynamic app driven by REST APIs is too much of a challenge for jQuery or vanilla JS alone. I am not saying it cannot be done, but by the same token, while you can cut your grass with box cutters, it is massively less efficient than a lawn mower. At the end of the day, you need the right tools for the right job, and that means some kind of a JavaScript library or a framework.

What kept me away from Angular for the longest time was the opinionated nature of it, and the extent to which it seeks to define your entire world. Angular is a cult – you cannot be in it part time.

“Angular is a cult – you cannot be only a part time member.”

Not an Angular cult member

I have already written about why I got attracted to React from a technical point of view. There are great things you can do with isomorphic apps when you combine great React libraries. But these are all technical reasons. The main reason I am attracted to React is its philosophy.

We are all idiots at times

Angular used to pride itself on being a ‘super-heroic JavaScript framework’. Last time I checked, they removed it from the home page (although it still appears in Google searches – ironic, I know). I presume they meant that the framework itself gives you super-hero powers, not that you need to be a super-hero developer in order to use it, but sometimes I felt that way.

I am singling out Angular somewhat unfairly – most MVC JavaScript frameworks approach the problem by giving you tools to carefully wire up elements on the page with events, react to watched variables, surgically change styles, properties, collections and so on. Sounds great in the beginning, until you scale to a real-world application, and things become really complex.

This complexity may not be a big deal while you are in the thick of it, coding like a beast. You may be a great developer. The problem is that the moment you turn your head away from that code, you start the ‘idiot’ clock – until that time when you no longer remember how everything fits together.

Now, if you are looking at your own code and cannot figure out how it works, what are the chances another team member will? I long ago proclaimed that dumb code is good and smart code is bad. Not bad code, just straightforward, easy to understand, ‘no software patent here’ code. Your future self will be grateful, future maintainers doubly so.

React allows us to write boring code

Let me summarize the key React philosophy in one sentence:

“Something changed in my application’s state. Better re-render it.”

React in a nutshell

I cannot emphasize enough the importance of this approach. It does not say “watch 50 variables and surgically change DOM elements and properties when something happens in one component, then cascade those surgical changes to other components watching those variables”. Behind this, of course, is React’s ingenious approach of using a virtual DOM and only updating the real DOM with the actual changes between the two. You can read about it on React’s web page.

After many years of ‘surgical’ JavaScript DOM manipulations, there is something completely counter-intuitive about the ‘just re-render’ approach. It feels like it should not work. It feels wasteful to keep creating all those JavaScript objects, until you realize that they are really cheap, and that the true cost is in the actual DOM manipulations.

In fact, you can use this approach with any JavaScript rendering engine – Mustache, Handlebars, Dust. The only problem is – if you try the ‘something changed, re-render the component’ approach there, templates will re-render into innerHTML, and that is wasteful. It is also potentially disruptive if users are interacting with form elements you just recycled under their feet. React, on the other hand, will not do it – it will carefully update the DOM elements and properties around the form controls.
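Expressed in code, the philosophy amounts to something like this (a minimal, hypothetical component – any state change simply triggers another render):

var React = require('react');

// A hypothetical counter illustrating 'something changed, better
// re-render': we never touch the DOM directly, we only update state
// and let React diff the virtual DOM against the real one.
var Counter = React.createClass({
  getInitialState: function() {
    return { count: 0 };
  },

  increment: function() {
    // No surgical DOM updates - just a state change.
    this.setState({ count: this.state.count + 1 });
  },

  render: function() {
    return (
      <button onClick={this.increment}>
        Clicked {this.state.count} times
      </button>
    );
  }
});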

Increase velocity without increasing bug rate

The key design goal of React was to help real-world projects running code in production. Developers of modern cloud applications are under constant pressure from product management to increase velocity. The same product management of course expects you to maintain the quality of your apps, which is very hard. It is not hard to imagine that shortening the cycles will increase the bug rate, unless the code we write is simplified. Writing ‘surgical’, intricate code in less time is asking for trouble, but React is easy to understand. There is a uniformity in its approach, a repeatability that is reassuring and easy for people who didn’t write the code originally to understand when they pick it up.

Developers take great pride in their work, and sometimes they get carried away by thinking that code is their deliverable. They are wrong. What you are delivering are user experiences, and code is just a means to an end. In the future, we will just explain what we want to some more powerful Siri or Cortana and our app will come into existence. Until then, we should use whatever allows us to deliver it with high velocity but without the bugs that would normally come with it.

For my team, React is just the ticket. As is often the case in life, YMMV.

© Dejan Glozic, 2015


ReactJS: The Day After

A man with an excruciating headache, Wikimedia Commons

The other day I stumbled upon a funny Onion fake news report about a local man whose one-beer plan went terribly awry. Knowing how I professed undying love for ReactJS in the previous article, and extrapolating from life that after every night on the town comes the morning of reckoning, it is time to revisit my latest infatuation.

Alas, those expecting me to declare my foolishness and heartbreak with ReactJS are hoping in vain. Instead, what you will get here is a sober (ha) account of the problems, gotchas and head scratchers we encountered running ReactJS in production. We continue to use it and plan to build our next set of micro services using it, but we have a more realistic view of it now. So let’s dive in.

  1. Code Splitting – First off, my example didn’t just use ReactJS, but also react-router and react-engine. This amazing trio together allowed us to realize the dream of isomorphic apps, where you start rendering on the server, let the browser quickly render the initial content, load JavaScript, mount React components and continue with the same code on the client.
    Nevertheless, when we got past the small example, we realized that we need to split the code we initially bundled together using browserify. At the time of this writing, code splitting is not entirely painless. React-router in its version 0.13 has examples that all presume the use of Webpack to build your JavaScript. We are using browserify and must suffer until React-router 1.0 arrives. In the meantime, we can use react-router-proxy-loader, which allows us to asynchronously load code from a bundle that does not expect Webpack.

  2. React-engine growing pain – As with any new library, react-engine has some rough edges. We are happy to report that one of the issues we had with it (the inability to control how react-router is being instantiated) has already been resolved. We are hoping to be able to make react-engine omit some of the data it sends to the client because it is only ever used for server-side rendering.
  3. ReactJS id properties – React attaches ‘reactid’ data property to almost all DOM elements, using ids that are sometimes very long, resulting in situations like:
    <span data-reactid=".ejv9lnvzeo.1.2.3.0.0.$7c87c148-e1a4-4cb8-81f8-c5e74be7684b.0.1.0.0">Hello</span>
    

    If you are using gzip for the markup (as you should), these strings compress very well, but you still end up with very messy and hard to read HTML when you view source. The React team is debating back and forth on the need for these properties and they may disappear at some point in the future. I for one will not miss them.

  4. Fussy with the whitespace – While you may think when working with JSX that you are coding in HTML, you are not, and nowhere is it more apparent than when you try to add some free text in the body of HTML elements, or to mix free text and elements. JSX converts snippets of text into spans at will, resulting in HTML that bears little resemblance to the initial JSX (see the sketch after this list).
    I wish there were a better way to do this. I know all the virtues of React and how JSX is most decidedly not HTML, but some things like free-form text with some embedded tags should not result in a flurry of spans (and the hated data-reactid properties).
  5. Fussy with JavaScript tags – Inserting JavaScript tags in JSX is easy if you are referencing external JS files, but if you try to inline some JavaScript right there, JSX can throw you curveball after curveball until you give up and extract that code into a file (also illustrated below). This is not a show stopper but it is annoying when you want to inline a couple of lines. From the maintainability point of view, it is probably better to keep JavaScript in its own file, so I am not going to protest too loudly.
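To make points 4 and 5 concrete, here is a small hedged sketch (the component, the text and the inlined script are all made up):

var React = require('react');

module.exports = React.createClass({
  render: function() {
    return (
      <div>
        {/* Point 4: mixing free text and elements - each text segment
            below can come out wrapped in its own <span> (with its own
            data-reactid), not as plain text nodes. */}
        Hello, <strong>{this.props.name}</strong>, nice to see you again!

        {/* Point 5: inlining JavaScript - one workaround is to push the
            code through dangerouslySetInnerHTML, although extracting it
            into its own file is cleaner. */}
        <script dangerouslySetInnerHTML={{__html: "console.log('inline');"}} />
      </div>
    );
  }
});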

ReactJS and Web Components

As with any JS framework, making a choice is normally followed by a little nagging voice in your head concerned that you chose wrong. When it comes to religious choices (AngularJS vs ReactJS vs EmberJS etc.), there is little you can do – you just need to make a leap of faith, make sure the framework works for your particular use case and jump.

However, Web Components are something else – they promise to be ‘the native Web’ at some point, so choosing between Web Components and ReactJS is not a religious debate. Even today, with the shims it is possible to run Web Components in browsers not supporting them natively, and natively in Chrome. A growing body of reusable Web components is something you don’t want to be left out of if you are Reactified to the max.

Luckily, Andrew Rota helped out with his presentation on complementarity of ReactJS and Web Components at the recent ReactJS Conf 2015. It is worth the watch, and the skinny is that since about October 2014, custom components are fair game in JSX. This means that you can place HTML imports in the head element, and then freely use custom components in JSX the same way you would native HTML elements.
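A hedged sketch of what this looks like (the custom element name is made up; the HTML import would go into the page head):

var React = require('react');

// Assumes <link rel="import" href="/components/x-calendar.html">
// has been placed in the head element of the page.
module.exports = React.createClass({
  render: function() {
    // JSX treats tags with a dash as custom elements and passes
    // their attributes through, so we can mix them with native tags.
    return (
      <div className='booking'>
        <h2>Pick a date</h2>
        <x-calendar date={this.props.date}></x-calendar>
      </div>
    );
  }
});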

In fact, you are not losing out on the promise of ReactJS virtual DOM. React treats custom components the same way as native HTML components – it will compare your new render to the current DOM state and only change what needs changing (adding, removing, or changing elements and properties that are not the same). This means that you can extend the power of ReactJS to Web Components.

Of course, there are some caveats, but it turns out that the things you need to care about when writing Web Components for ReactJS consumption are generally applicable: write small, extremely well encapsulated components that do not leak, do not make assumptions about the page they are running in, and do not try to insert stuff outside their own boundary.

No turning back

So this turned out to be click bait of sorts, for we are not turning back from ReactJS, just learning how to use it efficiently and how to be better at it. Stay tuned for the new cool stuff we were able to do with it.

© Dejan Glozic, 2015

PayPal, You Got Me At ‘Isomorphic ReactJS’

Juvet mirror

I love PayPal’s engineering department. There, I’ve said it. I have followed what Jeff Harrell and the team have been doing ever since I started reading about their wholesale jump into Node.js/Dust.js waters. In fact, their blogs and presentations finally convinced me to push for Node.js in my IBM team as well. I had the pleasure of talking to them at conferences multiple times and I continue to like their overall approach.

PayPal is at its core a no-nonsense enterprise that moved from Java to Node.js. Everything I have seen coming from them had the same pragmatic approach, with concerns you can expect from running Node.js in production – security, i18n, converting a large contingent of Java engineers to Node.js.

More recently, I kept tabs on PayPal’s apparent move from Dust.js to ReactJS. Of course, this time around we learned faster and were already playing with React ourselves (using Dust.js for simpler, content-heavy pages and reserving ReactJS for more dynamic use cases). However, we haven’t really started pushing on ReactJS because I was still looking at how to take advantage of React’s ability to render on the server.

Well, the wait is over. PayPal has won my heart again by releasing a React engine that connects the dots in a way so compatible with what we needed that made me jump with joy. Unlike the version I used for my previous blog post, this one allows server side components to be client-mountable, offering true isomorphic goodness. Finally, a fat-free yogurt that does not taste like paper glue.

Curate and connect

The key importance of PayPal’s engine is in what it brings together. One of the reasons React has attracted so much attention lately is its ability to render into a string on the server, then render to the real DOM on the client using the same code. This is made possible by using NodeJS, which is by now our standard stack (I haven’t written a line of Java code for more than a year, on my honour).
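In a nutshell, the idea looks something like this (a sketch using React 0.13-era APIs; ‘Page’ and the props are made up):

var React = require('react');
var Page = require('./views/page.jsx');
var props = { title: 'Home' }; // illustrative

// On the server: serialize the component tree into an HTML string
// and send it down as part of the response body.
var html = React.renderToString(React.createElement(Page, props));

// On the client: mount the same component over the server-produced
// markup - React recognizes the existing markup and just wires up
// event listeners instead of re-creating the DOM.
React.render(React.createElement(Page, props), document.getElementById('content'));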

But that is not enough – in order to carry over into the client with the same code, you need ‘soft’ page switching – showing boxes inside other boxes and updating the browser history as these boxes are swapped. This has been brought to us by another great library – react-router. This module, inspired by Ember’s amazing router, is quickly becoming ‘the’ router for React applications.

What PayPal did in their engine was connect all these great libraries, then write important glue code to put it all together. It is now possible to mix normal server side templates with pages that start their life on the server, then continue on the client with the state preserved and ready to go as soon as JavaScript is loaded.

Needless to say, this was the solution we were looking for. As far as we are concerned, this will put an end to needless ‘server vs client’ wars, and allow us to have our cake and eat it too. Mmm, cake.

Show us some sample code

OK, let’s get our hands dirty. What many people need when writing applications is a hybrid between a site and an app – the ability to put together a web site that has one or more single-page apps embedded in it. We will build an example site that has two plain ReactJS pages rendered on the server, while the third page is really an SPA taking advantage of react-engine and the ability to go full isomorphic.
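Before the views, here is roughly how react-engine gets wired into Express (a sketch following react-engine’s documented setup; file names are illustrative):

var express = require('express');
var renderer = require('react-engine');

var app = express();

// Create the view engine and hand it our react-router route definition,
// so that render calls that look like URLs go through the router.
var engine = renderer.server.create({
  routes: require('./routes.jsx')
});

app.engine('.jsx', engine);
app.set('views', __dirname + '/views');
app.set('view engine', 'jsx');

// Use react-engine's view class so Express can handle both plain
// view names and URL paths (like '/spa/section1').
app.set('view', renderer.expressView);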

We will start by creating a shared layout JSX component to be consumed by all other pages:

var React = require('react');
var Header = require('./header.jsx');

module.exports = React.createClass({

  render: function render() {
    var bundle;

    if (this.props.addBundle)
      bundle = <script src='/bundle.js'/>;

    return (
      <html>
        <head>
          <meta charSet='utf-8' />
          <title>
            {this.props.title}
          </title>
          <link rel="stylesheet" href="/css/styles.css"/>
        </head>
        <body>
          <Header {...this.props}></Header>
          <div className="main-content">
             {this.props.children}
          </div>
          {bundle}
        </body>
      </html>
    );
  }
});

We extracted common header as a separate component that we required and inlined:

var React = require('react');

module.exports = React.createClass({

  displayName: 'header',

  render: function render() {
    var linkClass = 'header-link';
    var linkClassSelected = 'header-link header-selected';

    return (
      <section className='header' id='header'>
        <div className='header-title'>{this.props.title}</div>
        <nav className='header-links'>
          <ul>
            <li className={this.props.selection=='header-home'?linkClassSelected:linkClass} id='header-home'>
              <a href='/'>Home</a>
            </li>
            <li className={this.props.selection=='header-page2'?linkClassSelected:linkClass} id='header-page2'>
              <a href='/page2'>Page 2</a>
            </li>
            <li className={this.props.selection=='header-spa'?linkClassSelected:linkClass} id='header-spa'>
              <a href='/spa/section1'>React SPA</a>
            </li>
          </ul>
        </nav>
      </section>
    );
  }
});

The header shows three main pages – ‘Home’, ‘Page2’ and ‘React SPA’. The first two are plain server side pages that are rendered by express and sent to the client as HTML:

var Layout = require('./layout.jsx');
var React = require('react');

module.exports = React.createClass({

  render: function render() {

    return (
      <Layout {...this.props}>
        <h2>Home</h2>
        <p>An example of a plain server-side ReactJS page.</p>
      </Layout>
    );
  }
});

On to the main course

The third page (‘React SPA’) is where all the fun is. Here, we want to create a single-page app so that when we navigate to it by clicking on its link in the header, all subsequent navigations inside it are client-side. However, true to our isomorphic requirement, we want the initial content of ‘React SPA’ page to be rendered on the server, after which react-router and React component will take over.

To show the potential of this approach, we will build a very useful layout – a page with a left nav containing three links (Section 1, 2 and 3), each showing different content in the content area of the page. If you have seen such a page once, you saw it a million times – this layout is internet’s bread and butter.

We start building our SPA top-down. Our top level ReactJS component will reuse Layout component:

var Layout = require('./layout.jsx');
var React = require('react');
var Nav = require('./nav.jsx');
var Router = require('react-router');

module.exports = React.createClass({

  render: function render() {

    return (
      <Layout {...this.props} addBundle='true'>
        <Nav {...this.props}/>
        <Router.RouteHandler {...this.props}/>
      </Layout>
    );
  }
});

We have loaded left nav as a Nav component:

var React = require('react');
var Link = require('react-router').Link;

module.exports = React.createClass({

  displayName: 'nav',

  render: function render() {
    var activeClass = 'left-nav-selected';

    return (
      <section className='left-nav' id='left-nav'>
        <div className='left-nav-title'>{this.props.name}</div>
        <nav className='left-nav-links'>
          <ul>
            <li className='left-nav-link' id='nav-section1'>
              <Link to='section1' activeClassName={activeClass}>Section 1</Link>
            </li>
            <li className='left-nav-link' id='nav-section2'>
              <Link to='section2' activeClassName={activeClass}>Section 2</Link>
            </li>
            <li className='left-nav-link' id='nav-section3'>
              <Link to='section3' activeClassName={activeClass}>Section 3</Link>
            </li>       
          </ul>
        </nav>
      </section>
    );
  }
});

This looks fairly simple, except for one crucial difference: instead of adding plain ‘a’ tags for links, we used Link components coming from the react-router module. They are the key to the magic here – on the server, they will render normal links, but with ‘breadcrumbs’ allowing React router to mount click listeners on them, and cancel normal navigation behaviour. Instead, they will cause React components registered as handlers for these links to be shown. In addition, browser history will be maintained so that the back button and address bar work as expected for these ‘soft’ navigations.

Component RouteHandler is responsible for executing the information specified in our route definition:

var Router = require('react-router');
var Route = Router.Route;

var SPA = require('./views/spa.jsx');
var Section1 = require('./views/section1.jsx');
var Section2 = require('./views/section2.jsx');
var Section3 = require('./views/section3.jsx');

var routes = module.exports = (
  <Route path='/spa' handler={SPA}>
    <Route name='section1' handler={Section1} />
    <Route name='section2' handler={Section2} />
    <Route name='section3' handler={Section3} />
    <Router.DefaultRoute handler={Section1} />
  </Route>
);

As you can infer, we are not declaring all the routes for our site, just the section for the single-page app (under the ‘/spa’ path). There we have built three subpaths and designated React components as handlers for these routes. When a Link component whose ‘to’ property is equal to the route name is activated, the component designated as handler will be shown.

Server needs to cooperate

In order to get our HTML5 push state enabled router to work, we need server side cooperation. In the olden days when SPAs were using hashes to ensure client side navigation is not causing page reloading, we didn’t need to care about the server because hashes stayed on the client. Those days are over and we want true deep URLs on the client, and we can have them using HTML5 push state support.

However, once we start using true links for everything, we need to tell the server to not try to render pages that belong to the client. We can do this in express like this:

app.get('/', function(req, res) {
  res.render('home', {
    title: 'React Engine Demo',
    name: 'Home',
    selection: 'header-home'
  });
});

app.get('/page2', function(req, res) {
  res.render('page2', {
    title: 'React Engine Demo',
    name: 'Page 2',
    selection: 'header-page2'
  });
});

app.get('/spa*', function(req, res) {
  res.render(req.url, {
    title: 'SPA - React Engine Demo',
    name: 'React SPA',
    selection: 'header-spa'
  });
});

Notice that we have defined controllers for two routes using the normal ‘res.render’ approach, but the third one is special. First off, we have instructed express to not try to render any pages under /spa by sending them all to the React router. Notice also that instead of sending normal view names in res.render, we are passing the entire URL coming from the request. This particular detail is what makes ‘react-engine’ ingenious – the ability to mix react-router and normal views by looking for the presence of the leading ‘/’ sign.

A bit more boilerplate

Now that we have all these pieces, what else do we need to get this to work? First off, we need the JS file to configure the react router on the client, and start the client side mounting:

var Routes = require('./routes.jsx');
var Client = require('react-engine/lib/client');

// Include all view files. Browserify doesn't do
// this automatically as it can only operate on
// static require statements.
require('./views/**/*.jsx', {glob: true});

// boot options
var options = {
  routes: Routes,

  // supply a function that can be called
  // to resolve the file that was rendered.
  viewResolver: function(viewName) {
    return require('./views/' + viewName);
  }
};

document.addEventListener('DOMContentLoaded', function onLoad() {
  Client.boot(options);
});

And to round it all off, we need to deliver all this JavaScript, and the JSX templates, to the client somehow. There are several ways to approach JavaScript modularization on the client, but since we are using Node.js and singing the isomorphic song, what can be more apt than using Browserify to carry CommonJS over into the client? The following command line will gather the entire dependency tree for index.js into one tidy bundle:


browserify -t reactify -t require-globify public/index.js -o public/bundle.js

If you circle back all the way to Layout.jsx, you will notice that we are including a sole script tag for /bundle.js.
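To avoid running browserify by hand, the command can be wired into npm scripts – a sketch (the server file name is assumed):

"scripts": {
  "build": "browserify -t reactify -t require-globify public/index.js -o public/bundle.js",
  "start": "npm run build && node app.js"
}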

The complete source code for this app is available on GitHub. When we run ‘npm install’ to bring in all the dependencies, then ‘npm start’ to run browserify and start express, we get the following:

(Screenshot: the react-engine-demo app)

When we click on header links, they cause a full page reload, rendered by the express server. However, clicks on the left nav links cause the content of the SPA page to change without a page reload. Meanwhile, the address bar and browser history are dutifully updated, and deep links are available for sharing.

Discussion

You can probably tell that I am very excited about this approach because it finally brings together fast initial rendering and SEO-friendly server-side pages, with the full dynamic ability of client side apps. All excitement aside, we need to remember that this is just views – we would need to write more code to add an action dispatcher and data stores in order to implement the full Flux architecture.

Performance-wise, the combined app renders very quickly, but one element sticks out. Bundle.js in its full form is about 800KB of JavaScript, which is a lot. When running the command to minify it, it is trimmed down to 279KB, and when compression is enabled in express, it goes further down to 62.8KB sent down the wire. We should bear in mind that this is ALL the JavaScript we need – ReactJS, as well as our own components. It should also be noted that this JavaScript is loaded asynchronously and that we are sending content from the server already – we will not see a white page while the script is being downloaded and parsed.
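For reference, the minification step could look something like this (assuming UglifyJS; the exact flags are illustrative):

browserify -t reactify -t require-globify public/index.js | uglifyjs -c -m -o public/bundle.js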

In a more complex application, we would probably want to segregate JavaScript into more than one bundle so that we can load code in chunks as needed. Luckily, react-router already addressed this.

The app is deployed on Bluemix – you can access it at http://react-engine-demo.mybluemix.net. Try it out, play with the source code and let me know what you think.

Great job, PayPal! As for me, I am completely sold on ReactJS for most real world applications. We will be using this approach in our current project from now on.

© Dejan Glozic, 2015

Don’t Take Micro-Services Off-Road

Fred Bauder, 2009, Wikimedia Commons

I own an Acura TL 2006. It’s a great car. Every day I derive great pleasure driving it to work. It has a tight sporty suspension, precise steering, comfortable leather seats and an awesome audio system.

At the same time, I know better than to take it off-road. Its high performance tires are optimized for asphalt traction and low rolling resistance, not gravel or soil. It does not have enough clearance for rocks, or 4×4 drive required for rough terrain. If I did take it off-road, I could erroneously conclude that it is an awful car, which I know not to be true. I would have simply used it for something it was never designed to do.

I used this example to explain the concern I have seeing the evolution of the industry’s relationship with the micro-service architecture. It was just a matter of time until people started taking their micro-service Acuras off-road and then writing how they are awful cars.

Original success stories

Architectures and approaches normally turn into trends because enough use cases exist to corroborate their genuine usefulness when solving a particular problem or a class of problems. Otherwise, only architecture astronauts would care. In the case of micro-services before they were trendy, enough companies built monoliths beyond their manageability. They had a real problem on their hands – a large application that fundamentally clashed with the modern ways of scaling, managing and evolving large systems in the cloud. Through some trial and error, they reinvented their properties as loose collections of micro-services with independent scalability, life cycle and data concerns. Netflix, Groupon, PayPal and SoundCloud are just a small sample of companies running micro-services in production with success.

It is important to remember this because the trendiness of micro-services threatens to compel developers to try them out in contexts where they are not meant to be used, resulting in projects overturned in the mud. This is bad news for all of us who derive genuine benefits from such an architecture.

Things to avoid

It is therefore good to try to arrive at a useful list of use cases where micro-services are not a good choice. It will keep us more honest, keep the micro-service hype at bay and prevent some failures that would sour people to an otherwise sound technical approach:

  1. Don’t start with micro-services – this one is a no-brainer. Micro-services attempt to solve problems of scale. When you start, your app is tiny. Even if it is not, it is just you or maybe you and a couple more developers. You know it intimately and can rewrite it over a weekend. The app is small enough that you can easily reason about it. There is a reason why we use the word ‘monolith’ – it implies a rock big enough that it can kill you if it falls on you. When you start, your app is more like a pebble. It takes a certain amount of time and effort by a growing number of developers to even approach monolith (and therefore micro-service) territory.
  2. Don’t even think about micro-services without DevOps – micro-services cause an explosion of moving parts. It is insane to attempt it without serious deployment and monitoring automation. You should be able to push a button and get your app deployed. In fact, you should not even do anything – committing code should get your app deployed through the commit hooks that trigger the delivery pipelines (at least in development – you still need some manual checks and balances for deploying into production).
  3. Try not to manage your own infrastructure – micro-services often introduce multiple databases, message brokers, data caches and similar services that all need to be maintained, clustered and kept in top shape. It really helps if your first attempt at micro-services is free from such concerns. A PaaS such as Cloud Foundry or Heroku will allow you to be functional faster and with less headache than with an IaaS, providing that your micro-services are PaaS-friendly.
  4. Don’t create too many micro-services – each new micro-service adds overhead. Cumulative overhead may outstrip the benefits of the architecture if you go crazy. It is better to err on the side of larger services and only split when they end up containing parts with conflicting demands for scaling, life cycle and/or data. Making them too small will simply transfer complexity away from the micro-services and into the service integration task.
  5. Don’t share micro-services between systems – I listed this final point here for completeness, but it is so important that it needs to be broken out into its own section.

On micro-service sharing

I have seen many a fiery debate about the difference between micro-services and SOA. There are many similarities (it is hard to deny that micro-service architecture, or MSA, is revisiting SOA principles). More recently I have formed a fairly strong opinion that a key differentiation between MSA and SOA is that of ambition.

When you go back and read about the lofty goals of SOA proponents, it is easy to notice that the aim was much higher. MSA success stories didn’t attempt to reinvent the world around catalogs of reusable services, systems that are discovering those services through registries, etc. At the beginning of every MSA success story is a team that grew their simple application too fast without refactoring along the way and hit the maintainability wall.

If you carefully read ‘monolith to micro-services’ blog posts, you will notice that the end result is the same thing. The Groupon team did not create a ‘catalog of social coupon services to be assembled into coupon applications’ – they rebuilt the Groupon Web site. They broke the monolith into small pieces and rebuilt it again. As far as their end users are concerned, the monolith is still there – the site was rebuilt in mid-air.

Since I think that micro-services are a pragmatic and sane revisiting of SOA, it is apt to assume that creating reusable micro-services is low on the list of priorities. Yes, a micro-service needs to be individually deployable and be flexible enough that it can be bound to other services dynamically (minimally through some kind of configuration on startup). You need to be able to deploy each service to multiple logical ‘spaces’ (DEV, QA, STAGING, PROD). But each logical micro-service instance is part of a single distributed monolith, re-imagined in a cloud-friendly way.

From a monolith to a – distributed monolith?

Where am I going with all this? I am a bit concerned that the industry noise will ruin micro-services by taking them outside their comfort zone. Too many people are taking them to the areas where they shouldn’t, and I don’t want the inevitable backlash to overshoot. Micro-services are a solution for the Big Ball of Mud architecture, but the alternative micro-service system is still a big ball. This ball, made up of many small balls, is cleaner and easier to manage, deploy, scale and evolve, and can be inflated bigger than the old ball without exploding, but it is fundamentally the same thing.

Any attempts at nano-services, trying to deploy micro-services manually, using them because they are trendy without real need, or re-using them between multiple systems will result in a disappointment we don’t really need at the moment.

Are micro-services SOA? No, and please let’s keep it that way.

© Dejan Glozic, 2015

Isomorphic Apps Part 2: Node, React.js, and Socket.io

Two Heads, 1930, Wikimedia Commons

When I was a kid, I went to the movies to watch Mel Brooks’ “History of The World, Part I”. I had a great time and could not wait for the sequel (that featured, among other things, Hitler on ice, a Viking funeral and laser-shooting rabbis in ‘Jews in Space’ teaser). Alas, ‘Part II’ never came. Determined to not subject my faithful readers to such a disappointment, here comes the promised part II of my ‘Isomorphic Apps’ trilogy.

In the first part of this story, we created an isomorphic app by taking advantage of the fact that we can use Dust.js as an Express view engine, and then compile partials into JavaScript and re-use them on the client as needed. In order to compare approaches with only one variable changed, we will switch to React.js for the view.

What’s the deal with React.js

React.js is attracting a lot of attention these days due to the novel approach it has taken to building dynamic Web apps. At the heart of the approach is the notion of a virtual DOM. React.js components manipulate an abstraction of a DOM that is then transformed into the physical DOM in a highly optimized fashion. Even more ingeniously, the browser’s DOM is only one of the possible transformations: virtual DOM can also be serialized into plain HTML, which makes it possible to use it on the server. Even more recently, it can be serialized into native code to address mobile (and even desktop) UI components.

I am old enough to remember Java’s “Write once, run anywhere” slogan, and this looks like a new generation’s attempt to make a run for this chimera. But even putting React Native aside for a moment, the fact that you can render on the server makes React supremely suitable for isomorphic apps, something Angular.js is lacking.

React.js is also refreshingly simple to figure out. Angular.js has this famous adoption roller coaster, and sometimes when you don’t get an Angular peculiarity, you feel the fault is with you, not Angular. React.js took the approach that life is short, and we can do better things with our time than figure out the maddening quirks of a complex framework. There is no two-way binding (because it has been shown to be a double-edged sword – see what I did here). When the model changes, you just naively rebuild the view (sometimes referred to as ‘write pages like it’s the 90s’). Seems massively suboptimal, but remember that you are only rebuilding the virtual DOM – React.js figures out the actual delta and only applies the delta against the real DOM. And since most of the performance (or lack thereof) lies in the physical DOM, React.js promises fast apps without writing a lot of code for smart and surgical updating on model changes.

Configuring React.js as an Express view engine

Alright, I hope this whetted your appetite for some coding. We will start by cloning the page from part I and adding another view engine in app.js (because I am cheap/lazy and don’t want to run another app for this). For this we need to install react on the server, as well as the express view adapter.

We will start by installing ‘react’ and ‘express-react-views’ and configuring the jsx view engine:


var react = require('express-react-views');

...

app.engine('jsx', react.createEngine());
app.set('view engine', 'jsx');

The last line above should only be set if you will use JSX as the only view engine for Express. In my case, I had to omit that line because I am already serving some Dust pages, and you can only set one default engine. The only thing I lost this way was the ability to find JSX templates without the extension – they can still be rendered when the extension is included.

The controller for our React.js page is almost identical to the one we wrote for Dust.js:


var model = require('../models/todos');

module.exports.get = function(req, res) {
   model.list(req.user, function(err, todos) {
      res.render('isomorphic_react.jsx',
         { title: 'React - Isomorphic', user: req.user, todos: todos });
   });
};

Most of the fun happens in the view, as expected. React.js requires some getting used to. For starters, JSX syntax is actually XML (and not even XHTML), so all elements require termination. Many attribute names require camel case, which is very annoying (I always hated Jade for this mental transformation, and now JSX is doing the same for me). At least the JSX transformer is yelling at you in the console about possible errors you made, so fixing up your JSX is not too hard:

var React = require('react');
var DefaultLayout = require('./rlayout');
var RTodo = require('./rtodo');

var Todos = React.createClass({
  render: function() {
    return (
      <DefaultLayout { ...this.props} selection="react">
        <h1>Using React.js for View</h1>
        <h2>Todos</h2>
        <div className="new">
           <textarea id="new-todo-text" placeholder="New todo"/>
        </div>
        <div className="delete">
           <button type="button" id="delete-all"
              className="btn btn-primary">Delete All</button>
        </div>
        <div id="todos" className="todos">
           {this.props.todos.map(function(todo) {
              return <RTodo key={todo.id} {...todo} />;
           })}
        </div>
        <script src="/js/prettyDate.js"></script>
        <script src="/js/rtodo.js"></script>
        <script src="/js/rtodos.js"></script>
      </DefaultLayout>
    );
  }
});

module.exports = Todos;

The code above requires some explanation. Unlike with Dust.js, both inclusion into a common layout template and instantiation of partials is done through React.js component model. Notice that we imported DefaultLayout component that is our standard page boilerplate. The payload of the page is simply specified as content of the instantiated component in the ‘render’ method above.

Another important point is that unlike Dust.js, properties are not automatically passed down the component hierarchy – we need to explicitly do it (notice the strange “{ ...this.props }” expression in the DefaultLayout declaration – what I am saying is ‘pass all the properties down to the child component’). We can also define new properties, which I am doing by passing ‘selection’ that will be used by the header component (to highlight the ‘React’ link).

Another important section of the template is where I am instantiating RTodo component (a single Todo card). Flow control can be tricky in JSX because the entire template is one giant return statement, so everything needs to evaluate to an expression. Notice the trick with using the array map to iterate over the list of todos and render each child todo component.

This code will produce a page very similar to the one with Dust.js, with identical results. In fact, it is possible to go back and forth because both pages are using the same REST service for the model.

JSX compiler

So far we took care of the server side. As with Dust.js, we can compile the components we need on the client side, this time using the jsx compiler that comes with ‘react-tools’:


#!/bin/bash
node_modules/react-tools/bin/jsx --extension jsx views/ public/js/ rtodo

We can compile any number of components and place them into the JS directory under /public folder so that Express can serve them to the browser.

The client side script is very similar to the one used by the Dust.js page. The only difference is in the ‘Add’ action handler:

var socket = io.connect('/');
socket.on('todos', function (message) {
  if (message.type=='add') {
    var newTodo = document.createElement('div');
    React.render(React.createElement(RTodo, message.state),
              newTodo);
    $(".todos").prepend(newTodo);
  }
  ...

The code is remarkably similar – instead of calling ‘dust.render’ to render the partial using the element we received via the Socket.io message, we ask React to render the compiled element into a new DOM element we created on the fly. We then prepend this element into the parent DIV.

Commentary and comparisons

First off, I would say that this second attempt at writing an isomorphic app was a success because I was able to replicate Dust.js example from part I with identical behaviour. However, it is not as good a fit for React.js. A better example would see us modifying a model and asking React.js to re-render an existing DOM branch. Now that I feel reasonably comfortable around React.js, I think I will create something more dynamic for it in the near future. A true React-y way of doing the list of todos would be to simply re-render the entire list on each Socket.io message. We would let React.js figure out that all it needs to do is insert a new Todo DIV into the parent node. This way we would not need to create DOM elements ourselves, as in the code above.
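A hedged sketch of that React-y alternative (names assumed from the example above):

var React = require('react');
var RTodo = require('./rtodo');
var socket = io.connect('/');

// Keep the list in component state and re-render the whole list on
// every Socket.io message - React works out the minimal DOM change.
var TodoList = React.createClass({
  getInitialState: function() {
    return { todos: this.props.todos };
  },

  componentDidMount: function() {
    var self = this;
    socket.on('todos', function(message) {
      if (message.type == 'add') {
        // Just update state; no manual DOM element creation needed.
        self.setState({ todos: [message.state].concat(self.state.todos) });
      }
    });
  },

  render: function() {
    return (
      <div id="todos" className="todos">
        {this.state.todos.map(function(todo) {
          return <RTodo key={todo.id} {...todo} />;
        })}
      </div>
    );
  }
});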

After spending a year working with Dust.js, JSX took some getting used to. As I said, I hated Jade because it created an additional layer of abstraction between me and HTML, and I never quite knew what final HTML it would produce. JSX evokes the same feelings in me, but the error/correction loop has shortened as I learned it more. In addition, the value I get in return is much higher with JSX than with Jade.

Nevertheless, certain things will not get better with time. JSX is awkward to work with when it comes to logic. Remember, the entire template is an expression, so statements are really hard to fit in. Ternary conditionals work, and as you saw, it is possible to use tricks with maps to iterate over children in a collection. I still prefer Dust.js for straightforward pages, but I can see how React.js can work with components in a very dynamic app.
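For illustration, a hedged snippet of what passes for flow control in JSX (everything must evaluate to an expression):

render: function() {
  return (
    <div className='todos-summary'>
      {this.props.todos.length > 0
        ? <span>{this.props.todos.length} todos left</span>
        : <span>Nothing to do!</span>}
    </div>
  );
}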

I like the React.js component model, as well as the fact that code and markup are close to each other – for a developer this is very useful. I also like the fact that, JSX quirks aside, there is much less magic compared to Angular.js. Of course, React.js is really just the View of MVC, so it is not a fair comparison. On the other hand, I am now dying to hook it up into Backbone as a view – it feels like a great combination (and of course, there are already articles on exploiting this exact combination). The more I think and read about it, the more it seems that Backbone models/collections/router and React.js views may just end up being my favorite stack for writing highly dynamic apps with a server side bonus for SEO and initial experience.

A word of caution

If your system has elements of both a site and an app, use micro-services and implement the site portions with a more straightforward templating solution (as I already covered in the previous blog post). This is going to make content authoring easier and increase the number of content providers with cursory knowledge of HTML who will feel confident authoring and/or modifying something like Dust.js templates. Leave React.js for 10x developers working on the highly dynamic ‘app’ portions of the system. By the way, this is an area where micro-services shine – this kind of partitioning is one of their key selling points. You can easily have micro-services using Dust.js and micro-services using React.js (and as I have already shown, even mixed in the same Node app).

One of the downsides of a mixed system (one using both Dust.js and React.js) is that sometimes content pages have dynamic components sprinkled in them. The challenge is invoking those components without requiring your casual developers to be afraid to touch such pages. Invoking a React.js component in a Dust.js page would require inserting JavaScript tags, which is less than ideal. This is where Web Components are much easier to reason about, and there are already attempts to bridge the two worlds – invoking React.js components as custom Web Components.

And that’s a wrap

As before, you can browse the source code as an IBM DevOps Services project, and the latest version of the app is running in Bluemix. In the final instalment of this trilogy, I will make our example a bit more dynamic (to let React.js show its true potential), and add some structure using Backbone.js. Until then, React away!

© Dejan Glozic, 2015

Micro-Services for Dysfunctional Teams

Jan Steen, Argument over a Card Game, Wikimedia Commons.

Update: I have received a ton of feedback on this post, and some of the well-meaning criticism is concerned with the term ‘dysfunctional’, considering it a bit ‘judgy’ from somebody who is supposed to help these same teams. Apart from yielding a catchy title, a Hacker News reader was spot on when he declared my use of the word a ‘term of endearment’ more than anything else. Not unlike a smart person calling herself ‘stupid’, or a workaholic calling himself ‘lazy’ for sleeping in one morning. In the article that follows, ‘dysfunctional’ describes most teams made from real people, and the opposite is the ideal we are all striving towards, always just beyond our reach.

I am back from Las Vegas and IBM InterConnect 2015, and fully recovered from the onslaught on the senses. Man, does that city ever shut up? Time to return to regular programming. Today’s topic is my surprising realization about the main backers of micro-services in large enterprises. As they say in click baits, it’s not who you think.

For the last year or so I was a vocal evangelist for both Node.js and micro-services in IBM and elsewhere (using the former as the platform of choice for the latter). Or as a dear former colleague of mine kindly put it, ‘evangelist, coach, and referee’. That role put me in contact with a number of teams finding themselves on the verge of the now familiar ‘from monolith to micro-services’ journey.

What I find over and over again is that micro-services appeal to leadership more than the developers. This is a somewhat confusing revelation considering micro-services are considered an architectural approach, and project managers are not supposed to fall in love with an architecture (at best, they are wary of it because ‘architecture’ is typically a code word for more boxes and increased cost and time to delivery). And yet.

Micro-services are not (only) about technology

When I am asked to do an elevator pitch about advantages of micro-services, this list typically comes to mind:

  1. Individually deployable pieces of running software each responsible for a small number of tasks
  2. Each micro-service can be implemented using a different stack
  3. Horizontal scalability decisions can be made at a micro-service level

When you analyze this list, none of these points is really making your system better from a purely technical point of view. In fact, a monolithic system is definitely easier to work with when you are alone or have a small, ‘war room’ kind of a team. When a monolith is relatively small, deploying it is not a big deal, and cookie cutter scaling does not seem too wasteful (assuming the monolith does not depend on in-memory state that is hard to distribute).

Each of the points actually promises to fix long-standing systemic problems of very large teams responsible for equally large monoliths that are at the bursting point.

Breaking the logjam

The promise of individually deployable pieces seems to always light a fire in project managers’ eyes. I don’t blame them – most large monolithic systems are a bitch to deploy. If they use compiled languages such as Java, the build times are nontrivial. With every new line of code, deploy times keep growing, and it increasingly feels that there must be a better way to do this.

Monoliths are the first thing we build in the cloud because that’s what we used to do for on-premise deployment. Turns out, the price we pay to get the monolith built and deployed is too steep given the high bar set by ‘born in the cloud’ unicorns. Therefore, breaking up the monolith into smaller, more manageable parts seems as natural as mitosis is for single-cell organisms.

Beyond solving the sheer size problem, micro-services promise to solve the ‘different rate of change’ problem. As I have blogged recently, a typical system today has elements of Web sites as well as Web apps rolled into one. Elements acting as a site have a tendency to change more often than the app part. Site sections tend to have a lot of marketing material that is time sensitive, while app sections are trickier and need to be changed more carefully (and may require data migration every once in a while). I often joke that these types of systems feel like a donkey and a horse strapped to the same harness – they just cannot find the right rhythm. One of them is either too fast or too slow. In fact, a lot of systems feel like we have a donkey, a horse, a cow and a goat all trying to pull the carriage together – not a pretty picture (funny though).

In these kinds of situations, micro-services offer an organizational, or governance solution, not a technical one. They often result in more moving parts and more complexity, but the relief of letting the metaphorical donkey and the horse run at their own pace is too hard to resist, overhead be damned. The alternative is having a complex process executed with utmost precision, and so far I know only one team (Facebook) that can pull it off with any regularity. Micro-services offer a more realistic alternative for the rest of us (the ‘dysfunctional teams’ from the title, which is really most of the teams).

No more intergalactic technology consensus

Anybody who has tried to get a number of teams in a large organization to agree on a common technology can sympathize with this. We are all human, and tend to have passionate and strong opinions on technologies we like and hate. Put enough of these strong opinions together, and they tend to cancel each other out, leaving no common ground. This is bad news for the poor architect who needs to pick an approach for a large project. I once heard a saying learned through hard-won experience: “Even if we agree on a common technology or approach on Monday, we will slide back into disagreement by Thursday”.

In this context, micro-services offer not as much of a solution as “let’s just agree to disagree”. The focus is moved from common technology to common interfaces, integration techniques, protocols for passing data around. There is enough understanding about the advantages of stable protocols and APIs, so this part is much easier to close with a solid and lasting agreement.

A word of caution: I personally don’t think that, just because we could write each micro-service in a different technology, we should. There is much to be said about code reuse, and micro-services quickly minted by Yeoman generators tend to yield more productive teams than ‘let’s write the same authentication library in 6 different languages’. We found that by limiting our choices to Node.js and Java, we can move faster.

Nevertheless, it is just a matter of time until a new platform is touted as revolutionary or trending. When the time comes, we can risk one micro-service without betting the farm on it. Just in case Go does not turn out to be the giant killer it is touted to be, for example.

Cookie cutter is no fun with giant cookies

Finally, making clustering decisions at a micro-service level is more of a bean-counter issue than an architectural one. Clustering a small monolith is very simple – put a load-balancer in front of the monolith copies and you are done (again, assuming the monolith nodes do not critically depend on in-memory data that needs to be kept in sync).

As the monolith grows, it needs more CPU and RAM to operate properly, times the number of nodes. As it normally happens, ‘heat points’ are not distributed evenly across the monolith – there are sections that are working very hard, and sections that are barely moving. Cookie-cutter clustering becomes more and more expensive, with an increased percentage of unused and therefore wasted capacity.

Micro-services promise to be more efficient at using resources because we can make individual clustering decisions. We can beef up busy nodes and run a relatively small number of instances of rarely used micro-services. This is a purely economic (and ecological) issue – if we didn’t care about waste, we could just continue to run multiple monolith instances.
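For example, on a PaaS such as Cloud Foundry, those individual decisions boil down to one-liners (the service names here are made up):

# Run eight instances of the busy catalog service...
cf scale catalog-service -i 8

# ...but keep the rarely used admin service at a single small instance.
cf scale admin-service -i 1 -m 256M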

Of course, this is all assuming our monolith is clusterable to begin with. If it is not, micro-services become a way out for a system that has hit a limit of its ability to scale.

Keep the excitement to yourself

Next time you are in position to pitch micro-services to a worried project manager or product owner, don’t forget that technology is really not what you are selling – you are selling a solution for process, governance, cost of operation and scalability issues, not a technology. You are selling the ability to fix a typo on a prominent page of your large system within minutes without touching the rest of the system. You are promising the ability to maneuver an oil tanker as if it was a canoe, in a world full of oil tankers.

You can still be in love with the technology, just make it our little secret. I’ll never tell.

© Dejan Glozic, 2015

Should I Build a Site or an App? Yes!

Minnesota State Capitol Woodworkers Toolbox, circa 1900, Wikimedia Commons.

Yes, I know. I stopped blogging to take a desperately needed break. Then I returned, only to be hit with a mountain of fresh, ‘hit the ground running’, honest-to-God January work that knocked the air out of my lungs and pinned me down for a while. Then an IBM colleague tried to ask me a Dust.js question, found my door closed due to a meeting, and discovered his answer in one of my blog posts instead.

So my blog is actually semi-useful, but it will stop being so without new content, so here is the first 2015 instalment. It is about one of my favorite hobbies – being annoyed with people being Wrong on the Internet. Judging by various discussion threads, developers are mostly preoccupied by these topics:

  1. All the reasons why AngularJS is awesome/sucks and will be the next jQuery/die in agony when 2.0 ships (if it ever ships/it will be awesome/cannot wait).
  2. Picking the right client side MVC framework (lots of people out there frozen into inaction while looking at the subtle differences of TODO app implementations in 16 different incarnations)
  3. Declaring client side single-page apps ‘the cool way’ and server side rendering ‘the old way’ of Web development

These topics are all connected, because if you subscribe to the point of view in (3), you either pray at the church of AngularJS (1) or you didn’t drink the Kool-Aid and subsequently need to pick an alternative framework (2).

Dear fellow full-stack developers and architects, that’s pure nonsense. I didn’t put an image of a toolbox at the top because @rands thinks it would fit nicely into a Restoration Hardware catalog. It is a metaphor for all the things we learn along the way and stash in our proverbial toolbox.

Sites and apps

The boring and misleading ‘server or client side apps’ discussion has its origin in the evolution of Web development. The Web started as a collection of linked documents, with a strong emphasis on indexing, search and content. Meanwhile, desktop applications were all about programming – actions, events, widgets, panes. Managing content in desktop apps was not as easy as on the Web; on the flip side, application-like behaviour on the Web was hard to achieve at first.

When Ajax burst onto the scene, this seemed possible at last, but many Ajax apps were horrible – they broke the Back button, didn’t respect the Web, were slow to load due to tons of JavaScript (the dreaded blank page), and the less I say about hashes and hash bangs in URLs, the better.

It is 2015 now and the situation is much better (and at least one IBM Fellow concurs). Modern Ajax apps have a more predictable structure, thanks to client side MV* frameworks such as BackboneJS, AngularJS and EmberJS, and HTML5 pushState lets us return to deep linking. That still does not mean you should use a hammer to drill a hole in the wall. Right tool for the right job.

And please don’t look at native mobile apps in envy (‘they talk to the server using JSON APIs only, I should do that too’). They are physically installed on the devices, while your imposter SPA needs to be sent over mobile networks before anything shows up on the screen (every bit of your overbuilt, 1MB+ worth of JavaScript fatness). Yes, I know about caching. No, your 1MB+ of JavaScript still needs to be parsed every time by the underpowered JavaScript engine of a mobile browser.

But I digress.

So, when do you take out site tools instead of Web app tools? There are a few easy questions to ask:

  1. Can people reach pages of your app without authenticating?
  2. Do you care about search engine optimization of those pages? (I am curious to find people who answer ‘No’ to this question)
  3. Are your pages mostly linked content with a little bit of interactivity?

If this describes your project, you would be better off writing a server-side Web app (say, using NodeJS, express and a rendering engine like Handlebars or Dust.js), with a bit of jQuery and Bootstrap with a custom theme to round things out.
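
A minimal sketch of such a setup could look like this (assuming the adaro module to plug Dust.js into express; file names and port are illustrative):

```javascript
// Server-side rendered app: express + Dust.js (via the adaro view engine).
var express = require('express');
var adaro = require('adaro');

var app = express();

// Templates live in ./views/*.dust and are rendered on the server.
app.engine('dust', adaro.dust());
app.set('view engine', 'dust');

// A plain content page - fast to load, friendly to search engine crawlers.
app.get('/', function (req, res) {
  res.render('index', { title: 'Home' });
});

app.listen(3000);
```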

Conversely, these may be the questions to ask if you think you need a single-page app:

  1. Do people need to log in in order to use my site?
  2. Do I need a lot of complex interactive behaviour with smooth transition similar to native apps?
  3. Do I expect users to spend a lot of time in my app doing something creative and/or collaborative?

What if I need both?

Most people actually need both. Your site must have a landing page, some marketing content, documentation, support – all mostly content based, open to search engine crawlers and must be quick to download (i.e. no large JS libraries please).

Then there is the walled up section where you need to log in, and then interact with stuff you created. This part is an app.

The thing is, people tend to think they need to pick an approach first, then do everything using that single approach. When site people and app people argue on the Internet, they sound to me like Abbott and Costello’s ‘Who’s on First?’ routine. Site people want the home page to be fast, and don’t want to wait for AngularJS to download. They also don’t want content people to have to learn Angular to produce new pages. App people shudder at the thought of implementing all the complex interactions by constantly redrawing the entire page (sooner or later, Web 1.0 is mentioned).

The thing is, they are both right and wrong at the same time. It may appear they want to have their cake and eat it too, but that is fairly easy to achieve. All it takes is some care in how your site is structured, and giving up on ideological prejudice. Once you view server and client side techniques as mere tools in the toolbox, all kinds of opportunities open up.

Mixing and matching

The key to mixing sites and apps is your navigational structure. Where SPA people typically lose it is when they assume EVERYTHING in their app must be written in their framework of choice. This is not necessary, and most frameworks are embeddable. If you construct your site navigation using normal deep links, you can build your navigational areas (for example, your site header) on the server and use these links as usual. Your ‘glue’ navigational areas should not be locked into the client side MV* component model, because it will not work on the server for the content pages.

What this means is that you should not write your header as an Angular directive or a jQuery plug-in. Send it as plain HTML from the server, with some vanilla JavaScript for dynamic effects. Keep your options wide open.

For this to work well, the single-page apps that are folded into this structure need to enable HTML5 mode in their routers, so that you can transparently mix and match server and client side content.
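
In AngularJS 1.x terms, enabling HTML5 mode is a one-liner in the router configuration (a sketch; the module name and routes are illustrative, and the page also needs a <base> tag for this to work):

```javascript
// Switch the AngularJS router from #/ hashes to real pushState URLs.
angular.module('seedApp', ['ngRoute'])
  .config(['$locationProvider', '$routeProvider',
    function ($locationProvider, $routeProvider) {
      $locationProvider.html5Mode(true); // requires <base href="/angular-seed/">

      $routeProvider
        .when('/view1', { templateUrl: 'partials/view1.html' })
        .when('/view2', { templateUrl: 'partials/view2.html' })
        .otherwise({ redirectTo: '/view1' });
    }]);
```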

Now add micro-services and stir for 10 minutes

To make things even more fun, these links can be proxied to different apps altogether if your site is constructed using micro-services. In fact, you can create a complex site that mixes server-side content with several SPAs (handled by separate micro-services). This is the ultimate in flexibility, and if you are careful, you can still maintain a single site experience for the user.
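
The proxy rules themselves are simple prefix matches. The demo described next uses Nginx for this, but the idea can be sketched in Node.js with node-http-proxy as well (ports are illustrative assumptions):

```javascript
// Route by URL prefix to different micro-services behind one site.
var http = require('http');
var httpProxy = require('http-proxy');

var proxy = httpProxy.createProxyServer({});

http.createServer(function (req, res) {
  if (req.url.indexOf('/angular-seed') === 0) {
    // The folded-in AngularJS single-page app
    proxy.web(req, res, { target: 'http://localhost:3001' });
  } else {
    // Server-rendered pages (home, About, docs...)
    proxy.web(req, res, { target: 'http://localhost:3000' });
  }
}).listen(80);
```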

To illustrate the point, take a look at the demo I created for the Full Stack Toronto conference last year. It is still running on Bluemix, and the source code is on GitHub. If you look at the header, it has several sections listed, powered by multiple micro-services (Node apps with an Nginx proxy in front). It uses the UI composition technique described in one of the previous posts. The site looks like this when you click on the ‘AngularJS’ link:

[Screenshot: the demo site header with the ‘AngularJS’ section selected (fsto-angular)]

The thing is, this page is really a single-page app folded in: a NodeJS micro-service sends the AngularJS content to the browser, where it takes over. Within the page, there are two Angular ‘pages’ selectable with two tabs. Clicking on the tabs activates the Angular router with HTML5 mode enabled. As a result, these ‘pages’ have normal URLs (‘/angular-seed/view1’ and ‘/angular-seed/view2’).

Of course, when you click on the links in the browser, the Angular router handles them transparently, but if you bookmark a deep URL and paste it into the browser address bar, the browser will hit the server first. The NodeJS service is designed to handle all links under /angular-seed/* and will simply serve the app, allowing the Angular router to take over once loaded.
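
In express terms, this is just a catch-all route for the prefix (a sketch; the demo’s actual file layout may differ):

```javascript
// Serve the single-page app shell for any deep link under /angular-seed/,
// then let the Angular router take over in the browser.
var express = require('express');
var path = require('path');

var app = express();

// Static assets (scripts, partials) are served normally.
app.use('/angular-seed', express.static(path.join(__dirname, 'public')));

// Any other URL under the prefix gets the same app shell.
app.get('/angular-seed/*', function (req, res) {
  res.sendFile(path.join(__dirname, 'public', 'index.html'));
});

app.listen(3001);
```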

The really nice thing is that Angular SPA links can sit next to links such as ‘About’, which is a plain server-side page rendered using express and Dust.js. Why wrestle with Angular when a straightforward HTML page will do?

Floor wax and dessert topping

There you go – move along, nothing to see here. There is no point in wasting time on Reddit food fights. A modern Web project needs elements of both server and client side approaches, because most projects have heterogeneous needs. Once you accept that, the real fun begins: you can share code between the server and the client using a technique called ‘isomorphic apps’. We will explore these techniques in one of the future posts.

© Dejan Glozic, 2015