With React, I Don’t Need to Be a Ninja Rock Star Unicorn

San Diego Comic-Con 2011 - Lego Ninja, The Conmunity - Pop Culture Geek from Los Angeles, CA, USA

Followers of my blog may remember my microservice period. There was a time I could not shut up about them. Now, several posts later, not a peep. Am I over microservices? Not by a long stretch.

For us, microservices are gravity now. I remember an interview with Billy Corgan of The Smashing Pumpkins where, when pressed about his choice of guitar strings, he answered: “I use them”. That’s how I feel about microservices – now that we live and breathe them every day, they are not exciting, they are air. The only people who get excited about air are SCUBA divers, I suppose, particularly if they are running low.

ReactJS, on the other hand, is interesting to us because we are still figuring it out. For years we were trying to have our cake and eat it too – merge the benefits of server and client side rendering. I guess my answer to ‘vanilla or chocolate ice cream’ is ‘yes please’, and with React, I can have my chocolate sundae for breakfast, lunch and dinner.

The problem with ninja rock star unicorns

The magical creature from the title is of course the sought-after 10x developer. He/she knows all the modern frameworks, sometimes reads about the not-so-modern ones just for laughs, and thrives where others are banging their heads against their desks repeatedly.

Rock star developers not only do not shy away from frameworks with a high barrier to entry such as Angular, they often write their own, even more sophisticated and intricate ones. Nothing wrong with that, except that you cannot hire a full team of them, and even if you could, I doubt the team dynamics would be particularly great. The reality of today’s developer job market is that you will likely staff a team with great, competent and potentially passionate developers. I say potentially because in many cases their passion will depend on you as a leader and your ability to instill it.

The React connection

This is where React comes into play. Careful readers of this blog may remember my aversion to JavaScript frameworks in general. For modestly interactive sites, you can go a long way with just Node.js, Express, a templating library such as Dust.js and a sprinkle of jQuery for good measure. However, a highly dynamic app driven by REST APIs is too much of a challenge for jQuery or vanilla JS alone. I am not saying it cannot be done, but while you can cut your grass with box cutters, it is massively less efficient than using a lawn mower. At the end of the day, you need the right tool for the right job, and that means some kind of JavaScript library or framework.

What kept me away from Angular for the longest time was the opinionated nature of it, and the extent to which it seeks to define your entire world. Angular is a cult – you cannot be in it part time.

“Angular is a cult – you cannot be only a part time member.”

Not an Angular cult member

I have already written about why I got attracted to React from a technical point of view. There are great things you can do with isomorphic apps when you combine great React libraries. But these are all technical reasons. The main reason I am attracted to React is its philosophy.

We are all idiots at times

Angular used to pride itself as a ‘super-heroic JavaScript framework’. Last time I checked, they removed it from the home page (although it still appears in Google searches – ironic, I know). I presume they meant that the framework itself gives you super-hero powers, not that you need to be a super-hero developer in order to use it, but sometimes I felt that way.

I am singling out Angular somewhat unfairly – most MVC JavaScript frameworks approach the problem by giving you tools to carefully wire up elements on the page with events, react to watched variables, surgically change styles, properties, collections and so on. It sounds great in the beginning, until you scale up to a real-world application and things become really complex.

This complexity may not be a big deal while you are in the thick of it, coding like a beast. You may be a great developer. The problem is that the moment you turn your head away from that code, you start the ‘idiot’ clock – until that time when you no longer remember how everything fits together.

Now, if you are looking at your own code and cannot figure out how it works, what are the chances another team member will? I long ago proclaimed that dumb code is good and smart code is bad. Not bad code, just straightforward, easy to understand, ‘no software patent here’ code. Your future self will be grateful, future maintainers doubly so.

React allows us to write boring code

Let me summarize the key React philosophy in one sentence:

“Something changed in my application’s state. Better re-render it.”

React in a nutshell

I cannot emphasize enough the importance of this approach. It does not say “watch 50 variables and surgically change DOM elements and properties when something happens in one component, then cascade those surgical changes to other components watching those variables”. Behind this, of course, is React’s ingenious approach of using a virtual DOM and only updating the real DOM with the actual changes between the two. You can read about it on React’s web page.
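
To make this concrete, here is a minimal sketch of a component written in this style (a hypothetical example, not taken from any real app):

var React = require('react');

// A counter that never touches the DOM directly: on every change we
// simply update the state and let React re-render the whole component.
module.exports = React.createClass({

  getInitialState: function() {
    return { count: 0 };
  },

  handleClick: function() {
    // Something changed in the state. Better re-render it.
    this.setState({ count: this.state.count + 1 });
  },

  render: function render() {
    return (
      <button onClick={this.handleClick}>
        Clicked {this.state.count} times
      </button>
    );
  }
});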

After many years of ‘surgical’ JavaScript DOM manipulations, there is something completely counter-intuitive about the ‘just re-render’ approach. It feels like it should not work. It feels wasteful to keep creating all those JavaScript objects, until you realize that they are really cheap, and that the true cost lies in the actual DOM manipulations.

In fact, you can use this approach with any JavaScript rendering engine – Mustache, Handlebars, Dust. The only problem is – if you try the ‘something changed, re-render the component’ approach there, templates will re-render into inner HTML, and that is wasteful. It is also potentially disruptive if users are interacting with form elements you just recycled under their feet. React, on the other hand, will not do it – it will carefully update the DOM elements and properties around the form controls.
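
For illustration, here is a sketch of that wasteful approach with a string templating engine (hypothetical code, using Mustache):

var Mustache = require('mustache');

var template = '<label>Name: <input value="{{name}}"></label>';

// The naive 'something changed, re-render' with a templating engine:
// every call recycles the component's entire inner HTML, destroying
// focus and anything the user has typed into the input.
function render(state) {
  document.getElementById('form').innerHTML = Mustache.render(template, state);
}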

Increase velocity without increasing bug rate

The key design goal of React was to help real-world projects running code in production. Developers of modern cloud applications are under constant pressure from product management to increase velocity. The same product management of course expects you to maintain the quality of your apps, which is very hard. It is not hard to imagine that shortening the cycles will increase the bug rate, unless the code we write is simplified. Writing ‘surgical’, intricate code in less time is asking for trouble, but React is easy to understand. There is a uniformity in its approach, a repeatability that is reassuring and makes the code easy to pick up for people who did not write it originally.

Developers take great pride in their work, and sometimes they get carried away by thinking that code is their deliverable. They are wrong. What you are delivering are user experiences, and code is just a means to an end. In the future, we will just explain what we want to some more powerful Siri or Cortana and our app will come into existence. Until then, we should use whatever allows us to deliver with high velocity but without the bugs that would normally come with it.

For my team, React is just the ticket. As is often in life, YMMV.

© Dejan Glozic, 2015

ReactJS: The Day After

A man with an excruciating headache, Wikimedia Commons

The other day I stumbled upon a funny Onion fake news report of the local man whose one-beer plan went terribly awry. Knowing how I professed undying love to ReactJS in the previous article, and extrapolating from life that after every night on the town comes the morning of reckoning, it is time to revisit my latest infatuation.

Alas, those expecting me to declare my foolishness and heartbreak with ReactJS are hoping in vain. Instead, what you will get here is a sober (ha) account of the problems, gotchas and head scratchers we encountered running ReactJS in production. We continue to use it and plan to build our next set of micro services using it, but we have a more realistic view of it now. So let’s dive in.

  1. Code Splitting – First off, my example didn’t just use ReactJS, but also react-router and react-engine. This amazing trio together allowed us to realize the dream of isomorphic apps, where you start rendering on the server, let the browser quickly render the initial content, load JavaScript, mount React components and continue with the same code on the client.
    Nevertheless, when we got past the small example, we realized that we need to split the code we initially bundled together using browserify. At the time of this writing, code splitting is not entirely painless. React-router in its version 0.13 has examples that all presume the use of Webpack to build your JavaScript. We are using browserify and must suffer until React-router 1.0 arrives. In the meantime, we can use react-router-proxy-loader, which allows us to asynchronously load code from a bundle that does not expect Webpack.

  2. React-engine growing pains – As with any new library, react-engine has some rough edges. We are happy to report that one of the issues we had with it (the inability to control how react-router is being instantiated) has already been resolved. We are hoping to be able to make react-engine omit some of the data it sends to the client, because it is only ever used for server-side rendering.
  3. ReactJS id properties – React attaches a ‘reactid’ data property to almost all DOM elements, using ids that are sometimes very long, resulting in situations like:
    <span data-reactid=".ejv9lnvzeo.1.2.3.0.0.$7c87c148-e1a4-4cb8-81f8-c5e74be7684b.0.1.0.0">Hello</span>
    

    If you are using gzip for the markup (as you should), these strings compress very well, but you still end up with a very messy and hard to read HTML when you view source. React team is debating back and forth on the need of these properties and they may disappear at some point in the future. I for one will not miss them.

  4. Fussy with the whitespace – While you may think when working with JSX that you are coding in HTML, you are not, and nowhere is it more apparent than when you try to add some free text in the body of HTML elements, or to mix free text and elements. JSX converts snippets of text into spans at will, resulting in HTML that bears little resemblance to the initial JSX.
    I wish there were a better way to do this. I know all the virtues of React and how JSX is most decidedly not HTML, but some things, like free-form text with a few embedded tags, should not result in a flurry of spans (and the hated data-reactid properties) – see the snippet after this list.
  5. Fussy with JavaScript tags – Inserting JavaScript tags in JSX is easy if you are referencing external JS files, but if you try to inline some JavaScript right there, JSX can throw you curveball after curveball until you give up and extract that code into a file. This is not a show stopper, but it is annoying when you want to inline a couple of lines. From the maintainability point of view, it is probably better to keep JavaScript in its own file, so I am not going to protest too loudly.
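
To illustrate the whitespace fussiness from point 4, consider this hypothetical JSX snippet:

<div>
  Hello, <b>world</b>! Nice to see you.
</div>

React 0.13 renders it as something like this (reactids shortened for readability):

<div data-reactid=".0">
  <span data-reactid=".0.0">Hello, </span>
  <b data-reactid=".0.1">world</b>
  <span data-reactid=".0.2">! Nice to see you.</span>
</div>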

ReactJS and Web Components

As with any JS framework, making a choice is normally followed by a little nagging voice in your head concerned that you chose wrong. When it comes to religious choices (AngularJS vs ReactJS vs EmberJS etc.), there is little you can do – you just need to make a leap of faith, make sure the framework works for your particular use case and jump.

However, Web Components are something else – they promise to be ‘the native Web’ at some point, so choosing between Web Components and ReactJS is not a religious debate. Even today, with the shims it is possible to run Web Components in browsers not supporting them natively, and natively in Chrome. A growing body of reusable Web components is something you don’t want to be left out of if you are Reactified to the max.

Luckily, Andrew Rota helped out with his presentation on the complementarity of ReactJS and Web Components at the recent ReactJS Conf 2015. It is worth the watch, and the skinny is that since about October 2014, custom components are fair game in JSX. This means that you can place HTML imports in the head element, and then freely use custom components in JSX the same way you would native HTML elements.
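
For example, assuming a hypothetical <x-calendar> custom element has been registered via an HTML import in the head, a JSX render method could use it directly:

var React = require('react');

module.exports = React.createClass({

  render: function render() {
    // JSX treats the dashed tag as a custom element rather than a
    // React component class (attribute handling for custom elements
    // may vary between React versions).
    return (
      <div className='booking'>
        <x-calendar date={this.props.date}></x-calendar>
      </div>
    );
  }
});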

In fact, you are not losing out on the promise of ReactJS virtual DOM. React treats custom components the same way as native HTML components – it will compare your new render to the current DOM state and only change what needs changing (adding, removing, or changing elements and properties that are not the same). This means that you can extend the power of ReactJS to Web Components.

Of course, there are some caveats, but it turns out that the things you need to care about when writing Web Components for ReactJS consumption are generally applicable: write small, extremely well encapsulated components that do not leak, do not make assumptions about the page they are running in, and do not try to insert stuff outside their own boundary.

No turning back

So this turned out to be click bait of sorts, for we are not turning back from ReactJS, just learning how to use it efficiently and how to be better at it. Stay tuned for the new cool stuff we were able to do with it.

© Dejan Glozic, 2015

PayPal, You Got Me At ‘Isomorphic ReactJS’


I love PayPal’s engineering department. There, I’ve said it. I have followed what Jeff Harrell and the team have been doing ever since I started reading about their wholesale jump into Node.js/Dust.js waters. In fact, their blogs and presentations finally convinced me to push for Node.js in my IBM team as well. I had the pleasure of talking to them at conferences multiple times and I continue to like their overall approach.

PayPal is at its core a no-nonsense enterprise that moved from Java to Node.js. Everything I have seen coming from them had the same pragmatic approach, with concerns you can expect from running Node.js in production – security, i18n, converting a large contingent of Java engineers to Node.js.

More recently, I kept tabs on PayPal’s apparent move from Dust.js to ReactJS. Of course, this time around we learned faster and were already playing with React ourselves (using Dust.js for simpler, content-heavy pages and reserving ReactJS for more dynamic use cases). However, we hadn’t really started pushing on ReactJS because I was still looking at how to take advantage of React’s ability to render on the server.

Well, the wait is over. PayPal has won my heart again by releasing a React engine that connects the dots in a way so compatible with what we needed that made me jump with joy. Unlike the version I used for my previous blog post, this one allows server side components to be client-mountable, offering true isomorphic goodness. Finally, a fat-free yogurt that does not taste like paper glue.

Curate and connect

The key importance of PayPal’s engine is in what it brings together. One of the reasons React has attracted so much attention lately is its ability to render into a string on the server, then render to the real DOM on the client using the same code. This is made possible by using NodeJS, which is by now our standard stack (I haven’t written a line of Java code for more than a year, on my honour).

But that is not enough – in order to carry over into the client with the same code, you need ‘soft’ page switching – showing boxes inside other boxes and updating the browser history as these boxes are swapped. This has been brought to us by another great library – react-router. This module, inspired by Ember’s amazing router, is quickly becoming ‘the’ router for React applications.

What PayPal did in their engine was connect all these great libraries, then write important glue code to put it all together. It is now possible to mix normal server side templates with pages that start their life on the server, then continue on the client with the state preserved and ready to go as soon as JavaScript is loaded.

Needless to say, this was the solution we were looking for. As far as we are concerned, this will put an end to needless ‘server vs client’ wars, and allow us to have our cake and eat it too. Mmm, cake.

Show us some sample code

OK, let’s get our hands dirty. What many people need when writing applications is a hybrid between a site and an app – the ability to put together a web site that has one or more single-page apps embedded in it. We will build an example site that has two plain ReactJS pages rendered on the server, while the third page is really an SPA taking advantage of react-engine and the ability to go full isomorphic.

We will start by creating a shared layout JSX component to be consumed by all other pages:

var React = require('react');
var Header = require('./header.jsx');

module.exports = React.createClass({

  render: function render() {
    var bundle;

    if (this.props.addBundle)
      bundle = <script src='/bundle.js'/>;

    return (
      <html>
        <head>
          <meta charSet='utf-8' />
          <title>
            {this.props.title}
          </title>
          <link rel="stylesheet" href="/css/styles.css"/>
        </head>
        <body>
          <Header {...this.props}></Header>
          <div className="main-content">
             {this.props.children}
          </div>
        </body>
        {bundle}
      </html>
    );
  }
});

We extracted the common header into a separate component that we require and inline:

var React = require('react');

module.exports = React.createClass({

  displayName: 'header',

  render: function render() {
    var linkClass = 'header-link';
    var linkClassSelected = 'header-link header-selected';

    return (
      <section className='header' id='header'>
        <div className='header-title'>{this.props.title}</div>
        <nav className='header-links'>
          <ul>
            <li className={this.props.selection=='header-home'?linkClassSelected:linkClass} id='header-home'>
              <a href='/'>Home</a>
            </li>
            <li className={this.props.selection=='header-page2'?linkClassSelected:linkClass} id='header-page2'>
              <a href='/page2'>Page 2</a>
            </li>
            <li className={this.props.selection=='header-spa'?linkClassSelected:linkClass} id='header-spa'>
              <a href='/spa/section1'>React SPA</a>
            </li>
          </ul>
        </nav>
      </section>
    );
  }
});

The header shows three main pages – ‘Home’, ‘Page2’ and ‘React SPA’. The first two are plain server side pages that are rendered by express and sent to the client as HTML:

var Layout = require('./layout.jsx');
var React = require('react');

module.exports = React.createClass({

  render: function render() {

    return (
      <Layout {...this.props}>
        <h2>Home</h2>
        <p>An example of a plain server-side ReactJS page.</p>
      </Layout>
    );
  }
});

On to the main course

The third page (‘React SPA’) is where all the fun is. Here, we want to create a single-page app so that when we navigate to it by clicking on its link in the header, all subsequent navigations inside it are client-side. However, true to our isomorphic requirement, we want the initial content of ‘React SPA’ page to be rendered on the server, after which react-router and React component will take over.

To show the potential of this approach, we will build a very useful layout – a page with a left nav containing three links (Section 1, 2 and 3), each showing different content in the content area of the page. If you have seen such a page once, you have seen it a million times – this layout is the internet’s bread and butter.

We start building our SPA top-down. Our top level ReactJS component will reuse Layout component:

var Layout = require('./layout.jsx');
var React = require('react');
var Nav = require('./nav.jsx');
var Router = require('react-router');

module.exports = React.createClass({

  render: function render() {

    return (
      <Layout {...this.props} addBundle='true'>
        <Nav {...this.props}/>
        <Router.RouteHandler {...this.props}/>
      </Layout>
    );
  }
});

We have loaded left nav as a Nav component:

var React = require('react');
var Link = require('react-router').Link;

module.exports = React.createClass({

  displayName: 'nav',

  render: function render() {
    var activeClass = 'left-nav-selected';

    return (
      <section className='left-nav' id='left-nav'>
        <div className='left-nav-title'>{this.props.name}</div>
        <nav className='left-nav-links'>
          <ul>
            <li className='left-nav-link' id='nav-section1'>
              <Link to='section1' activeClassName={activeClass}>Section 1</Link>
            </li>
            <li className='left-nav-link' id='nav-section2'>
              <Link to='section2' activeClassName={activeClass}>Section 2</Link>
            </li>
            <li className='left-nav-link' id='nav-section3'>
              <Link to='section3' activeClassName={activeClass}>Section 3</Link>
            </li>       
          </ul>
        </nav>
      </section>
    );
  }
});

This looks fairly simple, except for one crucial difference: instead of adding plain ‘a’ tags for links, we used Link components coming from the react-router module. They are the key to the magic here – on the server, they will render normal links, but with ‘breadcrumbs’ allowing the React router to mount click listeners on them and cancel normal navigation behaviour. Instead, they will cause React components registered as handlers for these links to be shown. In addition, browser history will be maintained so that the back button and address bar work as expected for these ‘soft’ navigations.

Component RouteHandler is responsible for executing the information specified in our route definition:

var Router = require('react-router');
var Route = Router.Route;

var SPA = require('./views/spa.jsx');
var Section1 = require('./views/section1.jsx');
var Section2 = require('./views/section2.jsx');
var Section3 = require('./views/section3.jsx');

var routes = module.exports = (
  <Route path='/spa' handler={SPA}>
    <Route name='section1' handler={Section1} />
    <Route name='section2' handler={Section2} />
    <Route name='section3' handler={Section3} />
    <Router.DefaultRoute handler={Section1} />
  </Route>
);

As you can infer, we are not declaring all the routes for our site, just the section for the single-page app (under the ‘/spa’ path). There we have built three subpaths and designated React components as handlers for these routes. When a Link component whose ‘to’ property is equal to the route name is activated, the component designated as handler will be shown.

Server needs to cooperate

In order to get our HTML5 push state enabled router to work, we need server side cooperation. In the olden days when SPAs were using hashes to ensure client side navigation is not causing page reloading, we didn’t need to care about the server because hashes stayed on the client. Those days are over and we want true deep URLs on the client, and we can have them using HTML5 push state support.

However, once we start using true links for everything, we need to tell the server to not try to render pages that belong to the client. We can do this in express like this:

app.get('/', function(req, res) {
  res.render('home', {
    title: 'React Engine Demo',
    name: 'Home',
    selection: 'header-home'
  });
});

app.get('/page2', function(req, res) {
  res.render('page2', {
    title: 'React Engine Demo',
    name: 'Page 2',
    selection: 'header-page2'
  });
});

app.get('/spa*', function(req, res) {
  res.render(req.url, {
    title: 'SPA - React Engine Demo',
    name: 'React SPA',
    selection: 'header-spa'
  });
});

Notice that we have defined controllers for two routes using the normal ‘res.render’ approach, but the third one is special. First off, we have instructed express not to try to render any pages under /spa itself, sending them all to the React router. Notice also that instead of sending normal view names in res.render, we are passing the entire URL coming from the request. This particular detail is what makes ‘react-engine’ ingenious – the ability to mix react-router and normal views by looking for the presence of the leading ‘/’ sign.
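
For completeness, here is roughly how the engine itself is registered with express – a sketch based on react-engine’s README at the time, so treat the exact option names as assumptions and consult the project documentation:

var express = require('express');
var renderer = require('react-engine');

var app = express();

// Create the view engine, handing it our react-router routes so that
// res.render can accept both view names and URLs starting with '/'.
var engine = renderer.server.create({
  routes: require('./public/routes.jsx')
});

app.engine('.jsx', engine);
app.set('views', __dirname + '/public/views');
app.set('view engine', 'jsx');

// Replace express' default view class with react-engine's.
app.set('view', renderer.expressView);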

A bit more boilerplate

Now that we have all these pieces, what else do we need to get this to work? First off, we need the JS file to configure the react router on the client, and start the client side mounting:

var Routes = require('./routes.jsx');
var Client = require('react-engine/lib/client');

// Include all view files. Browserify doesn't do
// this automatically as it can only operate on
// static require statements.
require('./views/**/*.jsx', {glob: true});

// boot options
var options = {
  routes: Routes,

  // supply a function that can be called
  // to resolve the file that was rendered.
  viewResolver: function(viewName) {
    return require('./views/' + viewName);
  }
};

document.addEventListener('DOMContentLoaded', function onLoad() {
  Client.boot(options);
});

And to round it all off, we need to deliver all this JavaScript, and the JSX templates, to the client somehow. There are several ways to approach JavaScript modularization on the client, but since we are using Node.js and singing the isomorphic song, what can be more apt than using Browserify to carry CommonJS over into the client? The following command line will gather the entire dependency tree for index.js into one tidy bundle:


browserify -t reactify -t require-globify public/index.js -o public/bundle.js

If you circle back all the way to Layout.jsx, you will notice that we are including a sole script tag for /bundle.js.
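
To wire the bundling into ‘npm start’, a package.json script section could look something like this (the script layout and the server.js name are illustrative, not lifted from the demo):

"scripts": {
  "build": "browserify -t reactify -t require-globify public/index.js -o public/bundle.js",
  "start": "npm run build && node server.js"
}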

The complete source code for this app is available on GitHub. When we run ‘npm install’ to bring in all the dependencies, then ‘npm start’ to run browserify and start express, we get the following:

react-engine-demo

When we click on the header links, they cause a full page reload, rendered by the express server. However, clicks on the left nav links cause the content of the SPA page to change without a page reload. Meanwhile, the address bar and browser history are dutifully updated, and deep links are available for sharing.

Discussion

You can probably tell that I am very excited about this approach because it finally brings together fast initial rendering and SEO-friendly server-side pages with the full dynamic ability of client-side apps. All excitement aside, we need to remember that this is just the view layer – we would need to write more code to add an action dispatcher and data stores in order to implement the full Flux architecture.

Performance wise, the combined app renders very quickly, but one element sticks out. Bundle.js in its full form is about 800KB of JavaScript, which is a lot. When running the command to minify it, it is trimmed down to 279KB, and when compression is enabled in express, it goes further down to 62.8KB to send down the wire. We should bear in mind that this is ALL JavaScript we need – ReactJS, as well as our own components. It should also be noted that this JavaScript is loaded asynchronously and that we are sending content from the server already – we will not see a white page while script is being downloaded and parsed.
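
The minification pass itself can be as simple as piping the bundle through a minifier (a sketch assuming UglifyJS):

browserify -t reactify -t require-globify public/index.js | uglifyjs -c -m -o public/bundle.js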

In a more complex application, we would probably want to segregate JavaScript into more than one bundle so that we can load code in chunks as needed. Luckily, react-router already addressed this.

The app is deployed on Bluemix – you can access it at http://react-engine-demo.mybluemix.net. Try it out, play with the source code and let me know what you think.

Great job, PayPal! As for me, I am completely sold on ReactJS for most real world applications. We will be using this approach in our current project from now on.

© Dejan Glozic, 2015

The Art of Who Does What

Jacob Duck, Dividing the Spoils, 1635, Wikimedia Commons

Job posting: If you like my blog and would like to work on the stuff I write about, come and join my Toronto team.

Novelist Erich Maria Remarque claimed in one of his novels that civilization is a thin veneer, covering the primal urges of savages ready to grab each other’s throats at a moment’s notice. You can attest to that at times of temporary breakdowns such as power outages, elevator malfunctions, or heated sports events. In the relatively rational and orderly corridors of corporate life, nothing brings the primitive reptilian brain to the forefront like the activity of divvying up the work.

There are perfectly rational explanations for why this happens. Work is a big part of our identity. It is now spilling out into our personal brand as well – your LinkedIn and Twitter profiles say what you do for a living. You will go to great lengths and exert a lot of effort in order to move that needle from ‘Something’ to ‘Architect Something’ to ‘Senior Architect Something’. You want these changes to reflect on your LinkedIn profile as an upward progression, a well managed and deliberate career narrative.

Titles aside, most people want to do meaningful work, and want to grow their career by kicking ass, not by marking the time. A precondition for this is to be assigned a great and meaningful task that you can crush, being amazing as you are.

Divvy up

In corporations large and small, there comes a time where in one part of the room, you have a few nervous people, and in another, a pile of stuff to be done. Call it a work pie – it has to be cut and divided among them. Now remember the ‘civilization as a thin veneer’ and you can imagine the emotional undercurrents of such a situation.

Divvying up the work is tightly connected to hiring, and those two activities may act like the chicken and the egg in many situations. The amount of work is also very important. If this is the only pie to divide, tensions will be much higher than if the pies keep coming – people who didn’t get their piece will be much more relaxed if a new one will arrive in 5 minutes.

In my professional life, I have observed several situations I will try to enumerate here.

Grow with work

What is more fitting to start with than, well, a startup? When you have two starry-eyed founders and a lofty vision (which, according to HBO’s Silicon Valley, must include a desire to ‘make the world a better place’), there is this infinite pie that never gets smaller no matter how much you cut it. Founders work day and night, and the ‘who does what’ discussion is very short and efficient. Nothing will matter if work is not done before the VC money runs out, so getting it done is the primary consideration.

According to the guys at 37signals, most startups are careful with hiring and really only hire when they reach practical limits.

The right time to hire is when there’s more work than you can handle for a sustained period of time.

Jason Fried & David Heinemeier Hansson, ‘Rework’

In this mode, dividing the work is a low ceremony affair – everybody wears many hats, and there is a sense of ‘we are all in this together’. This is a good phase.

Lions eat first

When companies are growing quickly and they are having a lot of impact, careers take care of themselves. And when companies aren’t growing quickly or their missions don’t matter as much, that’s when stagnation and politics come in.

Google’s Eric Schmidt to Sheryl Sandberg

When Facebook’s Sheryl Sandberg got this advice from Google CEO Eric Schmidt, it highlighted the moment when the pendulum swings and there are more people than work. In most real-world organizations, the amount of things to do grows and shrinks faster than the teams do. After all, you cannot be hiring and firing people all the time – it is expensive and bad for morale.

In those situations, politics goes into full swing, and managers closer to the executives in control of the pie get the choicest pieces, leaving the scraps for the less connected and less savvy. This was and still is a reality in many big corporations to one degree or another, but things are changing there as well. New organizational trends already brought us flatter structures. Matrixed organizations with feature teams are more often the norm, with models like the one used by Spotify being all the rage recently.

There are varying reasons why certain teams tend to run away with key pieces of the mission. They can have a track record of delivery, which is fair. They can also have a critical mass required to take on an important mission. They can also be tapped to own the technical area (say, compilers) and have the expertise. But sometimes there is the inertia that crosses over into politics. A team that traditionally owned an area may be a bad choice to take that technology to the cloud because of the skills mismatch. In many of these discussions, the results will not seem fair to the outside observer not in on the subtle political undercurrents of the situation.

Second wind

A comical side-effect of the ‘lions eat first’ model is that eventually the lions get stuffed silly and cannot take another bite. In my time, I was often far from the main power centres, so I developed the art of the second wind to perfection. Here is how it is played:

  1. The first pie arrives at the table.
  2. Lions eat first, stuffing themselves until they cannot breathe.
  3. After a while, an unexpected pie arrives. Lions watch it with sad eyes, unable to do anything.
  4. You jump in and run away with the whole new pie.

As I said before, this all depends on the availability of the new pie. But even in the olden days it was possible. The first division of ‘who does what’ is normally based on the very imprecise ideas of what the new project is all about. After a while, reality sets in, holes are identified, new requirements emerge, and this is where you can jump in and get that work.

Or you can invent a whole new pie. You can actively look for gaps in the vision, notice the opportunity, prototype something and demo it to the executives. When there is not enough pie on the table, create more by innovating.

Note that this model is becoming more of a norm lately. Everything is speeding up, cycles are getting shorter, feature teams (or should I say ‘squads’) are formed and dissolved at a faster rate. This is good news, because there is nothing like office politics to sap enthusiasm and energy from bright and starry-eyed new hires. I am happy we are slowly moving away from politics-laden job partition, if only out of necessity brought on by the tectonic shifts in the industry.

Getting ahead of the HR curve

Alas, where there are people, there will always be some amount of politics. The corporation does not even have to be that big to get into the bizarre game of req tickets:

Reqs vanish randomly, often without notice, without reason, and at the least convenient time.

Rands in Repose

First of all, if you have hiring tickets, congratulations – it means you are on a project that is growing (assuming these are not backfills). You may even be in the coveted ‘startup in a large organization’ situation, where you are trying to grow a 1.0 project and are staffing like crazy.

This is an often comical situation because you are trying to be two things at the same time. You are trying to move fast, build a team and be nimble, while at the same time dealing with a corporate machine that is not designed for that. You are growing against fluid plans and visions that change daily (or should I say ‘pivot’). And you never know when the executives championing the new startup culture will succumb to the bean counters’ nagging and rein in the mad hiring spree.

You could say that true startups grow with concrete work, and that this is not a very startup-like behaviour, and you would be right. However, there is logic to it:

  1. You are growing a team with a general skill set, kind of like a local competency centre, or to use the Spotify parlance, a ‘chapter’. You can form guilds as needed, but your chapter will be more stable and build an enviable track record that will attract new work in the future.
  2. You are building a centre of gravity that will assist you in the upcoming ‘who does what’ discussions. You want to be the path of least resistance for tasks that look related and up your team’s skill alley.
  3. And of course, a req ticket unused is a req ticket lost.

Be the baker

Based on everything I said so far, it appears that in order to change the conversation, it is better to ensure that pies keep coming than to get entangled in the ugly politics of wrestling over a scarce resource. Build a team of great, skilled developers, preferably able to do full-stack development and do many things with aplomb, and then unleash the innovation that creates new pies out of thin air. It is better to be the one creating the new work than the one fighting over it.

Now if you excuse me, all this talk about pie made me hungry. Mmmm, pie.

© Dejan Glozic, 2015

Same Company, New Job

Verrazano-Narrows Bridge: The Beginning, Metropolitan Transportation Authority of the State of New York, Wikimedia Commons

If you read my bio, you can find out that I come from Europe. In that part of the world, if you hop in a car and drive in any direction, you will enter another country that same day. In contrast, you can drive thousands of miles, cross three time zones and many different geographic and climate regions and still be in the USA. Many people there had rich lives doing very different things while never really needing their passport.

I feel the same about career change. In Silicon Valley, people normally switch companies when they feel like doing something different, because companies are small, young and focused. But when you work for a company as large and multi-faceted as IBM, you have the other option – to change what you are working on without having to go through the new employee orientation again and learn where coffee is. And that is exactly what I am doing.

I am moving on to a new job, from DevOps to Data Analytics. I don’t need to spend a lot of time making a case why cloud data and analytics is currently exploding as an area. The amount of data we are amassing is unprecedented and relentlessly growing. With the new data that IoT devices will bring to the table in the coming years, we have a dire need to collect, store and most importantly, make some sense of it all, otherwise what’s the point?

Of course, I was not looking for an entirely clean start. I spent more than a year blogging about Node.js, micro-services, message brokers, authentication, UI composition. I simply intend to employ all the great stuff I have carefully curated for presenting and visualizing the data and the results of data analytics in the cloud. All the lessons of clustering, high availability, caching and DevOps automation will also come in handy in the new job. In other words, what is changing is ‘What’ but not the ‘How’ part of the equation.

Come join me

In the book ‘Being Geek’ by Michael Lopp (aka Rands in Repose), the following section in the chapter ‘The Deliberate Career’ best describes my next adventure:

A start-up is more likely to be in a state where it’s hiring lots of people, aggressively attacking new problems, and having a sense of urgency. Still, you can find the same attributes in a large company in a specific group that has been tasked with the new and sexy. This hybrid might be the best of both worlds – the urgency of a start-up supported by the stability of an established company.

As far as large companies go, there is rarely something as thrilling and filled with opportunities as being at the beginning of a 1.0 version of a product or service. And to further underline the similarity, we would not be a start-up (even in an established company) if we were not aggressively hiring.

If you live in Toronto, have enjoyed my blog so far (or are intrigued by this post and binge-read it backwards), and like the following areas:

  • Node.js micro-services
  • Dust.js
  • HTML5/CSS3
  • Angular, Backbone, React, Web Components
  • Message brokers
  • REST APIs
  • Web sockets

come and join the team I am building. Drop me an email, tweet me a message, send me a carrier pigeon – whichever way you choose to reach me, but remember: we have a sense of urgency, so don’t take too long. This stuff will not get built on its own.

© Dejan Glozic, 2015

Oy With the Gamification Already!

Lady Katrana Prestor ~ Human Onyxia, World of Warcraft, 2014, Stephan Shubert via Wikimedia Commons

“Hi, everybody. My name is Dejan and, … well…, I don’t play games (gasp). There, I’ve said it. This feels so liberating, I am a bit lightheaded. I think I’m going to sit down now.”

I cheated a bit in my pretend-address to the local Non-Gamers Anonymous chapter. I did play Microsoft Flight Simulator obsessively for years, but that is more of a gateway drug to real flight lessons than a multi-person shooter. Most people who tried it lost interest when they realized they cannot fire at other planes, promptly crashed their Boeing 737 and moved on to fight trolls and soldiers in a futuristic dystopia.

All these gaming-intolerant impulses kicked into high gear when I read the Spotify engineering model white paper. In case you missed it, Spotify is creating a stir with their new way of work organization. In a nutshell, they organize people into co-located feature teams called squads that have all they need to deliver a feature relatively independently. Several squads are organized into tribes that tend to be limited to about 100 people to prevent social connections breakdown.

Since squads are self-reliant, it is easy to envision a situation where the same problem is solved multiple times by squads that don’t communicate. To avoid this massive waste, like-minded squad members organize into chapters that share the same general knowledge (Web UI, iOS/Android, Design, Test) and a line manager. This provides organizational glue and prevents duplication. Finally, chapters are connected into guilds in a looser way, ensuring sharing of ideas and best practices.

The gamers are coming!

One of the first concerns that people have voiced was ‘how is this different from matrixed organizations’. I find guilty pleasure in observing these kinds of questions because they remind me of another debate closer to home, this one on how micro-services are nothing more than SOA.

But listen – oy with the gamification already! I explained the basic premise to my 18 year old son (an avid gamer) and even he was rolling his eyes (calling the lingo ‘juvenile’). The World of Nerddom is spilling into the rest of the reality with a vengeance that sometimes verges on bullying (yes, I get the irony). Case in point: a presenter at a recent NodeSummit suffered ironic remarks by the MC for daring to bring a Windows laptop to the stage, and not the all-beloved Mac (and I am typing this on a sweet new MacBook Pro; I just don’t like bullies, male or female). And now there is a growing chance another outgrowth of that world will become your everyday working reality.

Spotify is a young, rapidly growing company, and the main source of music for my teenage daughter. I am sure that the game-playing millennials that I can see in the company photos feel very comfortable with guilds, tribes and squads. Their model is irresistible in that it addresses so many pain points that feed Dilbert cartoons. Their two-part video is smart, wonderfully animated and easy to follow, and many of the messages will ring true and soothe your pain if you have spent any amount of time in an old enterprise work process.

What I find problematic is when those same enterprises latch on it and try to apply it in their own (very different) context. One of the reasons they would do it is the assumption that a successful implementation in a fast moving company gives it a seal of approval. Some of it is sheer survival instinct – everybody needs to move fast these days, and if your traditional org chart is slowing you down, you need to change if you want to be around in five years. Finally, and to be fair to large enterprises, it is really hard to find a true command-and-control organization these days – some variation of Scrum or Kanban is a norm virtually everywhere. Spotify provides a simplifying refinement that attempts to address the observed shortcomings.

It is not a religion

I see two problems with adopting Spotify model as-is:

  • It is a moving target. White paper authors themselves pointed out that it is entirely possible that by the time you implemented the squad/tribe/chapter/guild model, Spotify will have moved on to the next refinement of it. A kitschy version: you can’t capture the wind or the waterfall – you end up with dead air and stale water, respectively (rim shot).
  • It uses gamer-friendly terms. It assumes that everybody in the industry is a gamer and is instantly familiar and reacts positively to the images these names evoke. I cannot help but giggle imagining a bank IT shop where executives arrive and declare: “all right people, all of you on this floor are now the Stonehoof tribe. Stay tuned for the org chart to find out which squad and chapter you belong to. Guild masters are currently working on their corresponding chapter lists”. It is not even a generational thing – believe it or not, there are young people who have better things to do than kill hours working on their WoW reputation (and virtual gold). And yes, there are middle-aged clan leaders. Sadly.

Test out carefully

There are many worthy ideas in the Spotify engineering model. Some of them are a refinement of the matrixed models from the past. Most can be used without all the gaming jargon that goes with them. Discussions I had so far point at exactly that – savvy organizations will filter out the startup exuberance and latch on the more lasting nuggets. All of them should be treated as an experiment in the event they end up working only for Spotify (or in the event Spotify has already outgrown them).

And finally, the goal is to enable teams (squads?) to be agile and deliver results with the speed of the cloud. If that does not pan out, you just spent a lot of money re-arranging chairs on the Titanic. And called yourselves silly names that should be left behind once you reached your twenties.

Pardon the grumpiness. Hey, I may end up liking it after I live it for a while. Now if you excuse me, I have to go work on my LARP uniform. War is in the air.

© Dejan Glozic, 2015

Don’t Take Micro-Services Off-Road

Fred Bauder, 2009, Wikimedia Commons

I own an Acura TL 2006. It’s a great car. Every day I derive great pleasure driving it to work. It has a tight sporty suspension, precise steering, comfortable leather seats and an awesome audio system.

At the same time, I know better than to take it off-road. Its high performance tires are optimized for asphalt traction and low rolling resistance, not gravel or soil. It does not have enough clearance for rocks, or 4×4 drive required for rough terrain. If I did take it off-road, I could erroneously conclude that it is an awful car, which I know not to be true. I would have simply used it for something it was never designed to do.

I used this example to explain the concern I have watching the evolution of the industry’s relationship with the micro-service architecture. It was just a matter of time until people started taking their micro-service Acuras off-road and then writing about how they are awful cars.

Original success stories

Architectures and approaches normally turn into trends because enough use cases exist to corroborate their genuine usefulness in solving a particular problem or class of problems. Otherwise, only architecture astronauts would care. In the case of micro-services before they were trendy, enough companies had built monoliths beyond their manageability. They had a real problem on their hands – a large application that fundamentally clashed with the modern ways of scaling, managing and evolving large systems in the cloud. Through some trial and error, they reinvented their properties as loose collections of micro-services with independent scalability, life cycle and data concerns. Netflix, Groupon, PayPal, SoundCloud are just a small sample of companies running micro-services in production with success.

It is important to remember this because the trendiness of micro-services threatens to compel developers to try them out in contexts where they are not meant to be used, resulting in the projects overturned in the mud. This is bad news for all of us who derive genuine benefits from such an architecture.

Things to avoid

It is therefore good to try to arrive at a useful list of use cases where micro-services are not a good choice. It will keep us more honest, keep the micro-service hype at bay and prevent some failures that would sour people to an otherwise sound technical approach:

  1. Don’t start with micro-services – this one is a no-brainer. Micro-services attempt to solve problems of scale. When you start, your app is tiny. Even if it is not, it is just you, or maybe you and a couple more developers. You know it intimately and can rewrite it over a weekend. The app is small enough that you can easily reason about it. There is a reason why we use the word ‘monolith’ – it implies a rock big enough that it can kill you if it falls on you. When you start, your app is more like a pebble. It takes a certain amount of time and effort by a growing number of developers to even approach monolith (and therefore micro-service) territory.
  2. Don’t even think about micro-services without DevOps – micro-services cause an explosion of moving parts. It is insane to attempt it without serious deployment and monitoring automation. You should be able to push a button and get your app deployed. In fact, you should not even do anything – committing code should get your app deployed through the commit hooks that trigger the delivery pipelines (at least in development – you still need some manual checks and balances for deploying into production).
  3. Try not to manage your own infrastructure – micro-services often introduce multiple databases, message brokers, data caches and similar services that all need to be maintained, clustered and kept in top shape. It really helps if your first attempt at micro-services is free from such concerns. A PaaS such as Cloud Foundry or Heroku will allow you to be functional faster and with less headache than with an IaaS, providing that your micro-services are PaaS-friendly.
  4. Don’t create too many micro-services – each new micro-service adds overhead. Cumulative overhead may outstrip the benefits of the architecture if you go crazy. It is better to err on the side of larger services and only split when they end up containing parts with conflicting demands for scaling, life cycle and/or data. Making them too small will simply transfer complexity away from the micro-services and into the service integration task.
  5. Don’t share micro-services between systems – I listed this final point here for completeness, but it is so important that it deserves to be broken out into its own section.

On micro-service sharing

I have seen many a fiery debate about the difference between micro-services and SOA. There are many similarities (it is hard to deny that micro-service architecture, or MSA, is revisiting SOA principles). More recently I have formed a fairly strong opinion that a key differentiation between MSA and SOA is that of ambition.

When you go back and read about the lofty goals of SOA proponents, it is easy to notice that the aim was much higher. MSA success stories didn’t attempt to reinvent the world around catalogs of reusable services, systems that are discovering those services through registries, etc. At the beginning of every MSA success story is a team that grew their simple application too fast without refactoring along the way and hit the maintainability wall.

If you carefully read ‘monolith to micro-services’ blog posts, you will notice that the end result is the same thing. The Groupon team has not created a ‘catalog of social coupon services to be assembled into coupon applications’ – they rebuilt the Groupon web site. They broke the monolith into small pieces and built it up again. As far as their end users are concerned, the monolith is still there – the site was rebuilt in mid-air.

Since I think that micro-services are a pragmatic and sane revisiting of SOA, it is apt to assume that creating reusable micro-services is low on the list of priorities. Yes, a micro-service needs to be individually deployable and be flexible enough that it can be bound to other services dynamically (minimally through some kind of configuration on startup). You need to be able to deploy each service to multiple logical ‘spaces’ (DEV, QA, STAGING, PROD). But each logical micro-service instance is part of a single distributed monolith, re-imagined in a cloud-friendly way.

From a monolith to a – distributed monolith?

Where am I going with all this? I am a bit concerned that the industry noise will ruin micro-services by taking them outside their comfort zone. Too many people are taking them to areas where they shouldn’t, and I don’t want the inevitable backlash to overshoot. Micro-services are a solution for the Big Ball of Mud architecture, but the alternative micro-service system is still a big ball. This ball, made up of many small balls, is cleaner and easier to manage, deploy, scale and evolve, and it can be inflated bigger than the old ball without exploding, but it is fundamentally the same thing.

Any attempts at nano-services, trying to deploy micro-services manually, using them because they are trendy without real need, or re-using them between multiple systems will result in a disappointment we don’t really need at the moment.

Are micro-services SOA? No, and please let’s keep it that way.

© Dejan Glozic, 2015

Vive la Révolution App

Le Barbier, Déclaration des Droits de l’Homme et du Citoyen
Source: Wikimedia Commons

This post is based on a presentation I made on a dare – something a former colleague proposed with only a title and a description, and it was up to me as the replacement to provide the actual content. It sort of reminds me of a debate club, where you are told that the topic is ‘App Revolution’, and you have 20 minutes to argue the ‘Pro’ position. What follows is my attempt to do it justice. Have fun (and mercy).

When we are confronted with the topic of revolutions, most of my North American friends immediately conjure up the sound of Yankee Doodle and the picture of George Washington crossing the Delaware River (I saw it last year in The Met – boy, is that painting big!). Being of European descent, my thoughts give preference to the French Revolution. It has essentially given us modern European society, with milestone documents such as the ‘Declaration of the Rights of Man and of the Citizen’ shown above. It has also given us the guillotine, which is sad, but as Jacques Mallet du Pan famously quipped, all revolutions devour their children. What can you do – it’s a revolution, so 16,594 people are bound to lose their heads, give or take.

One of the indispensable aspects of revolutions is slogans, something you can easily chant at large group gatherings. Something catchy, such as ‘Freedom, Equality and Fraternity’ in the case of the French Revolution. Or as Blackadder interpreted it, ‘Freedom, Equality and fewer fat bastards eating all the pie’.

As you correctly noticed, these slogans often call for three things. If that is true, and we are indeed witnessing an App Revolution, what would our slogan be? What three things would we want from our oppressors?

We are fighting for the freedom and abundance of data, infrastructure and architecture.

– Oppressed developers everywhere.

Note that when I say ‘freedom’, I don’t necessarily mean ‘completely free’. We know we will need to pay for some of it, hence the word ‘abundance’. While food in Western society is not exactly free, it is definitely abundant. You can go into any supermarket and leave with a whole rotisserie chicken for a few dollars. During the French Revolution, only the aforementioned fat bastards could afford it. That’s progress.

Hence, let me try to explain why we are fighting for these three things.

Freedom of Data

You have probably heard the phrase that we are living in the age of the ‘API Economy’. What does that actually mean? In the past, data was a by-product of people using your application. Over time, your app’s database would fill up with data. The thinking was that the app is the product, and data is just an internal by-product, a consequence of app usage. More recently, data started to take off as something that can be as important as, or in some cases the only, product you provide.

While in the past tacking an API onto your app was an afterthought, something you might consider for partner or customer integrations, most modern systems are now built by first building the API for the data, then building up various clients that consume it. Your own clients are just a ‘reference implementation’ for the hopefully many other consumers of your APIs that will follow.

Source: IBM

Even music is going API these days. Sound engineers are now expected to provide stems of mastered music (drums, bass, guitars, keyboards, vox) so that remixers can easily provide derivative value without the hassle of sampling fully mixed songs (the audio equivalent of screen-scraping). What are stems but audio APIs?

Why is this important to us? Because when you open up your APIs, you become a platform, and platforms foster app ecosystems, with apps creating new value in many unexpected ways. Today, the most coveted place for any company is not to create a consumer product, but to create a platform that offers data and APIs, and fosters a flourishing ecosystem of apps built to take advantage of it. API discovery is now in vogue, catalogs are sprouting, and all you need to do is subscribe, obtain an authentication key, and start building your innovative abstraction on top of the data, or combining multiple data sources in an innovative way. You can be data mining, providing innovative interfaces, analytics, or integrations with other systems.
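
In practice, the ‘subscribe, obtain a key, start building’ loop can be as short as a few lines of Node.js. A minimal sketch – the host, path and ‘api_key’ parameter below are made up for illustration:

var https = require('https');

// Hypothetical API - the endpoint and parameter names are placeholders
var url = 'https://api.example.com/v1/weather?city=Toronto&api_key=' +
    process.env.API_KEY;

https.get(url, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    // Your innovative abstraction over somebody else's data starts here
    console.log(JSON.parse(body));
  });
});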

If you are building a mobile app, all you need is a laptop and a phone to test your app. However, if you need anything in the back end you need to build a companion server-side app, which leads us to…

Freedom of Infrastructure

When I was a child, my parents bought me a Meccano kit. In those days, giving a child a box full of tiny sharp metal objects was considered totally cool. I quickly built all the possible toys based on the accompanying booklet, but the sneaky bastards from Meccano also put a picture of a crane on the box that would require something like 10 sets to build. Since then, I developed this realization that I need to find a discipline in which I will not be limited by a box with a finite number of parts.

Meccano Beam Engine (Source: Liskeard Museum)

That’s why I chose software engineering – it is rare you will run out of files, or classes, or functions or variables the way you can run out of Meccano panels or tiny nuts and bolts.

However, once you venture into Web development, you hit the infrastructure version of Meccano. Your database, your server and your front-end proxy all need to be hosted on physical boxes, and Mordac the Preventer from Information Services can make your life miserable in a hurry.

This is why the Cloud is so important for our revolution. Regardless of where you fall on the ‘as a Service’ comfort scale, you can use IaaS, PaaS or SaaS to stand up your apps in minutes. Assuming you have found a free or abundant source of data, your app can now be up and stay up without the need to worry about messy sysadmin details or melted boards.

It does not end with just seeing your app running, either – you can jump into the third freedom that is the final cornerstone of our revolution.

Freedom of Architecture

In the dark ages of IT, architecture was for the rich, and The Big Ball of Mud was for the rest of us. While you instinctively know that you should not be caching those objects in process memory, who is going to stand up, maintain and cluster Redis for you? You know that a message broker would be the real answer for your particular problem, but you don’t have the stomach to stand up and administer RabbitMQ, or any of the popular alternatives. It is no accident that Martin Fowler’s famous book from 2002 is called Patterns of Enterprise Application Architecture. At that time, only an enterprise could afford to provision and maintain all the boxes that such an architecture requires.

Source: Dejan Glozic

That same Martin Fowler now talks about Polyglot Persistence – the approach where apps in a distributed system choose different types of databases that perfectly suit their diverse needs, instead of an underpowered MySQL for everything. And he is not using the word ‘enterprise’ this time, fully aware that a nerd hacking away on his Mac in a Starbucks can provision such a system in minutes. App revolution indeed.

All together now

When we put our three demands together, great things can happen. To illustrate how far we have come, consider the system that I made the attendees of an IBM Interconnect 2015 lab build over the course of 2 hours:

Source: Dejan Glozic

This system is just a toy, designed to teach modern micro-service architecture, and yet not so long ago it would have required us to stand up several servers, install and configure a ton of software, and build our own user management system. Instead:

  1. It uses Facebook for delegated authentication and to tap into Facebook’s data. No need to stand up anything, just register as a Facebook developer, obtain your client ID and secret and off you go.
  2. It deploys complex infrastructure (two Node.js app servers, a proxy, a data cache) to Bluemix PaaS within a matter of minutes, all using just a Web browser. In a pinch you could do it on a bus using your iPad, while also debating someone totally wrong on the Internet.
  3. It uses serious architecture (OAuth2 provider, Nginx proxy, Node.js micro-services, session sharing via a Redis store – see the sketch below) that was unheard of for non-institutional developers in the past.
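
To make the session sharing part of point 3 less abstract, here is a minimal sketch of how Node.js micro-services can share sessions through Redis, assuming the ‘express-session’ and ‘connect-redis’ modules (the secret and the Redis coordinates are placeholders):

var express = require('express');
var session = require('express-session');
var RedisStore = require('connect-redis')(session);

var app = express();

// Every micro-service behind the proxy configures the same store and
// secret, so a session created by one service is visible to all others.
app.use(session({
  store: new RedisStore({ host: 'localhost', port: 6379 }),
  secret: 'keyboard cat', // placeholder - use a real shared secret
  resave: false,
  saveUninitialized: false
}));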

Platforms everywhere

Of course, the notion of a platform is not limited to the Web. In fact, some of you may have initially thought this article was about mobile apps. Phones are huge app ecosystems, and so are the upcoming wearable platforms, of which iWatch is just the latest example.

Venturing further away from the classic Web apps, cars are now becoming rife with platforms unleashing an app revolution of sorts. Consider Apple’s CarPlay, which Scott Rich wrote about in O’Reilly Radar – a platform for apps in your car, tapping into the latent and closed data world and opening it up as a new app ecosystem. The context is different but the model seems to be the same: create a platform, open up the data through APIs, and unleash the inventions of app revolutionaries hunched over their laptops around the world.

Means of production

In the past, control of data, infrastructure and architecture was a limiting factor for the masses of developers around the world. Creativity and ideas are dispersed far more equitably than the control over resources would make you believe. At least in the area of software development, the true app revolution is in removing these control points and allowing platforms and ecosystems to let the best ideas bubble up.

Whether you are a guy at a reclaimed wood desk overlooking San Francisco’s Mission district, or a girl in Africa at a reclaimed computer in a school built by a humanitarian mission, we are approaching the time when we will only be limited by our creativity, and by our ability to dream and build great apps. And that, my fellow developers, is worth fighting for.

© Dejan Glozic, 2015

Isomorphic Apps Part 2: Node, React.js, and Socket.io

Two Heads, 1930, Wikimedia Commons

When I was a kid, I went to the movies to watch Mel Brooks’ “History of The World, Part I”. I had a great time and could not wait for the sequel (that featured, among other things, Hitler on ice, a Viking funeral and laser-shooting rabbis in ‘Jews in Space’ teaser). Alas, ‘Part II’ never came. Determined to not subject my faithful readers to such a disappointment, here comes the promised part II of my ‘Isomorphic Apps’ trilogy.

In the first part of this story, we created an isomorphic app by taking advantage of the fact that we can use Dust.js as an Express view engine, and then compile partials into JavaScript and re-use them on the client as needed. In order to compare approaches with only one variable changed, we will switch to React.js for the view.

What’s the deal with React.js

React.js is attracting a lot of attention these days due to the novel approach it has taken to building dynamic Web apps. At the heart of the approach is the notion of a virtual DOM. React.js components manipulate an abstraction of the DOM that is then transformed into the physical DOM in a highly optimized fashion. Even more ingeniously, the browser’s DOM is only one of the possible transformations: the virtual DOM can also be serialized into plain HTML, which makes it possible to use it on the server. Even more recently, it can be rendered into native code to address mobile (and even desktop) UI components.
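
The server-side half of that claim is essentially a one-liner. A minimal sketch, with a hypothetical component (the ‘express-react-views’ adapter we will use later does this for us under the covers):

var React = require('react');
var Hello = require('./views/hello'); // hypothetical component

// Serialize the virtual DOM into plain HTML, ready to send to the browser
var html = React.renderToString(React.createElement(Hello, { name: 'World' }));
console.log(html);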

I am old enough to remember Java’s “Write once, run anywhere” slogan, and this looks like the new generation’s attempt to make a run for this chimera. But even putting React Native aside for a moment, the fact that you can render on the server makes React supremely suitable for isomorphic apps, something Angular.js is lacking.

React.js is also refreshingly simple to figure out. Angular.js has this famous adoption roller coaster, and sometimes when you don’t get an Angular peculiarity, you feel the fault is with you, not Angular. React.js took the approach that life is short, and we can do better things with our time than figure out the maddening quirks of a complex framework. There is no two-way binding (because it has been shown to be a double-edged sword – see what I did there). When the model changes, you just naively rebuild the view (sometimes referred to as ‘write pages like it’s the 90s’). This seems massively suboptimal, but remember that you are only rebuilding the virtual DOM – React.js figures out the actual delta and only applies the delta against the real DOM. And since most of the performance cost lies in the physical DOM, React.js promises fast apps without writing a lot of code for smart and surgical updating on model changes.
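
A throwaway example (not part of our app) shows the entire mental model – change the state, render everything again, and let React sort out the delta:

var Counter = React.createClass({
  getInitialState: function() {
    return { count: 0 };
  },
  increment: function() {
    this.setState({ count: this.state.count + 1 });
  },
  render: function() {
    // The whole view is naively 'rebuilt' on every click, but only the
    // changed text node is touched in the physical DOM.
    return (
      <button onClick={this.increment}>
        Clicked {this.state.count} times
      </button>
    );
  }
});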

Configuring React.js as an Express view engine

Alright, I hope this whetted your appetite for some coding. We will start by cloning the page from part I and adding another view engine in app.js (because I am cheap/lazy and don’t want to run another app for this). For this we need to install React on the server, as well as the Express view adapter.

That means installing ‘react’ and ‘express-react-views’, and configuring the jsx view engine:


var express = require('express');
var react = require('express-react-views');

var app = express();

...

// Register the adapter for the 'jsx' extension and make it the default
app.engine('jsx', react.createEngine());
app.set('view engine', 'jsx');

The last line above should only be present if you will use JSX as the only view engine for Express. In my case, I had to omit it because I am already serving some Dust pages, and you can only set one default engine. The only thing I lost this way was the ability to reference JSX templates without the extension – they can still be rendered when the extension is included.

The controller for our React.js page is almost identical to the one we wrote for Dust.js:


var model = require('../models/todos');

module.exports.get = function(req, res) {
   model.list(req.user, function(err, todos) {
      res.render('isomorphic_react.jsx',
         { title: 'React - Isomorphic', user: req.user, todos: todos });
   });
};

Most of the fun happens in the view, as expected. React.js requires some getting used to. For starters, JSX syntax is actually XML (and not even XHTML), so all elements require termination. Many attribute names require camel case, which is very annoying (I always hated Jade for this mental transformation, and now JSX is doing the same to me). At least the JSX transformer yells at you in the console about possible errors you made, so fixing up your JSX is not too hard:

var React = require('react');
var DefaultLayout = require('./rlayout');
var RTodo = require('./rtodo');

var Todos = React.createClass({
  render: function() {
    return (
      <DefaultLayout {...this.props} selection="react">
        <h1>Using React.js for View</h1>
        <h2>Todos</h2>
        <div className="new">
           <textarea id="new-todo-text" placeholder="New todo"/>
        </div>
        <div className="delete">
           <button type="button" id="delete-all"
              className="btn btn-primary">Delete All</button>
        </div>
        <div id="todos" className="todos">
           {this.props.todos.map(function(todo) {
              return <RTodo key={todo.id} {...todo} />;
           })}
        </div>
        <script src="/js/prettyDate.js"></script>
        <script src="/js/rtodo.js"></script>
        <script src="/js/rtodos.js"></script>
      </DefaultLayout>
    );
  }
});

module.exports = Todos;

The code above requires some explanation. Unlike with Dust.js, both inclusion into a common layout template and instantiation of partials are done through the React.js component model. Notice that we imported the DefaultLayout component, which is our standard page boilerplate. The payload of the page is simply specified as the content of the instantiated component in the ‘render’ method above.

Another important point is that unlike Dust.js, properties are not automatically passed down the component hierarchy – we need to do it explicitly (notice the strange "{...this.props}" expression in the DefaultLayout declaration – what I am saying is ‘pass all the properties down to the child component’). We can also define new properties, which I am doing by passing ‘selection’ that will be used by the header component (to highlight the ‘React’ link).

The other important section of the template is where I instantiate the RTodo component (a single Todo card). Flow control can be tricky in JSX because the entire template is one giant return statement, so everything needs to evaluate to an expression. Notice the trick of using the array ‘map’ to iterate over the list of todos and render each child todo component.

This code will produce a page very similar to the one with Dust.js, with identical results. In fact, it is possible to go back and forth because both pages are using the same REST service for the model.

JSX compiler

So far we have taken care of the server side. As with Dust.js, we can compile the components we need on the client side, this time using the jsx compiler that comes with the ‘react-tools’ module:


#!/bin/bash
node_modules/react-tools/bin/jsx --extension jsx views/ public/js/ rtodo

We can compile any number of components and place them into the JS directory under /public folder so that Express can serve them to the browser.

The client side script is very similar to the one used by the Dust.js page. The only difference is in the ‘Add’ action handler:

var socket = io.connect('/');
socket.on('todos', function (message) {
  if (message.type == 'add') {
    var newTodo = document.createElement('div');
    // Render the compiled RTodo component into a detached DOM element
    React.render(React.createElement(RTodo, message.state), newTodo);
    $(".todos").prepend(newTodo);
  }
  ...

The code is remarkably similar – instead of calling ‘dust.render’ to render the partial using the data we received via the Socket.io message, we ask React to render the compiled component into a new DOM element we created on the fly. We then prepend this element into the parent DIV.

Commentary and comparisons

First off, I would say that this second attempt at writing an isomorphic app was a success, because I was able to replicate the Dust.js example from part I with identical behaviour. However, the example is not as good a fit for React.js. A better example would see us modifying a model and asking React.js to re-render an existing DOM branch. Now that I feel reasonably comfortable around React.js, I think I will create something more dynamic for it in the near future. A true React-y way of doing the list of todos would be to simply re-render the entire list on each Socket.io message, and let React.js figure out that all it needs to do is insert a new Todo DIV into the parent node. This way we would not need to create DOM elements ourselves, as in the code above.
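
For the record, here is roughly what that would look like – the ‘RTodos’ list component and the bootstrapped ‘initialTodos’ array are assumed:

var socket = io.connect('/');
var todos = initialTodos; // assumed to be bootstrapped into the page

socket.on('todos', function (message) {
  if (message.type == 'add') {
    todos.unshift(message.state);
  }
  // Naively re-render the whole list; the virtual DOM diff reduces this
  // to a single DIV insertion in the physical DOM.
  React.render(React.createElement(RTodos, { todos: todos }),
    document.getElementById('todos'));
});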

After spending a year working with Dust.js, JSX took some getting used to. As I said, I hated Jade because it created an additional layer of abstraction between me and HTML, and I never quite knew what final HTML it would produce. JSX evokes the same feelings in me, but the error/correction loop has shortened as I have learned it better. In addition, the value I get in return is much higher with JSX than with Jade.

Nevertheless, certain things will not get better with time. JSX is awkward to work with when it comes to logic. Remember, the entire template is an expression, so statements are really hard to fit in. Ternary conditionals work, and as you saw, it is possible to use tricks with maps to iterate over children in a collection. I still prefer Dust.js for straightforward pages, but I can see how React.js can work with components in a very dynamic app.

I like the React.js component model, as well as the fact that code and markup are close to each other – for a developer this is very useful. I also like the fact that, JSX quirks aside, there is much less magic compared to Angular.js. Of course, React.js is really just the View of MVC, so it is not a fair comparison. On the other hand, I am now dying to hook it up to Backbone as a view – it feels like a great combination (and of course, there are already articles exploring this exact combination). The more I think and read about it, the more Backbone models/collections/router and React.js views look like my favorite stack for writing highly dynamic apps, with a server-side bonus for SEO and initial experience.

A word of caution

If your system has elements of both a site and an app, use micro-services and implement the site portions with a more straightforward templating solution (as I covered in the previous blog post). This will make content authoring easier and increase the number of content providers with cursory knowledge of HTML who will feel confident authoring and/or modifying something like Dust.js templates. Leave React.js for the 10x developers working on the highly dynamic ‘app’ portions of the system. By the way, this is an area where micro-services shine – this kind of partitioning is one of their key selling points. You can easily have micro-services using Dust.js and micro-services using React.js (and, as I have already shown, even mixed in the same Node app).

One of the downsides of a mixed system (one using both Dust.js and React.js) is that content pages sometimes have dynamic components sprinkled into them. The challenge is invoking those components without making your casual developers afraid to touch such pages. Invoking a React.js component in a Dust.js page requires inserting JavaScript tags, which is less than ideal. This is where Web Components are much easier to reason about, and there are already attempts to bridge the two worlds by invoking React.js components as custom Web Components.
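
For illustration, this is roughly what such an invocation looks like in an otherwise static page – the ‘PromoWidget’ component and its property are hypothetical:

<!-- Somewhere in a mostly static Dust.js page -->
<div id="promo-widget"></div>
<script src="/js/promo-widget.js"></script>
<script>
  // Mount the pre-compiled component into the placeholder DIV
  React.render(React.createElement(PromoWidget, { category: 'news' }),
    document.getElementById('promo-widget'));
</script>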

And that’s a wrap

As before, you can browse the source code as an IBM DevOps Services project, and the latest version of the app is running in Bluemix. In the final instalment of this trilogy, I will make our example a bit more dynamic (to let React.js show its true potential), and add some structure using Backbone.js. Until then, React away!

© Dejan Glozic, 2015

Micro-Services for Dysfunctional Teams

Jan Steen, Argument over a Card Game, Wikimedia Commons.

Update: I have received a ton of feedback on this post, and some of the well-meaning criticism is concerned with the term ‘dysfunctional’, considering it a bit judgy coming from somebody who is supposed to help these same teams. Apart from yielding a catchy title, a Hacker News reader was spot on when he declared my use of the word a ‘term of endearment’ more than anything else. Not unlike a smart person calling herself ‘stupid’, or a workaholic calling himself ‘lazy’ for sleeping in one morning. In the article that follows, ‘dysfunctional’ describes most teams made of real people, while the opposite is the ideal we are all striving towards, always just beyond our reach.

I am back from Las Vegas and IBM Interconnect 2015, and fully recovered from the onslaught on the senses. Man, does that city ever shut up. Time to return to regular programming. Today’s topic is my surprising realization about who the main backers of micro-services in large enterprises really are. As they say in click baits, it’s not who you think.

For the last year or so I have been a vocal evangelist for both Node.js and micro-services in IBM and elsewhere (using the former as the platform of choice for the latter). Or, as a dear former colleague of mine kindly put it, ‘evangelist, coach, and referee’. That role put me in contact with a number of teams finding themselves on the verge of the now familiar ‘from monolith to micro-services’ journey.

What I find over and over again is that micro-services appeal to leadership more than to developers. This is a somewhat confusing revelation, considering micro-services are an architectural approach, and project managers are not supposed to fall in love with an architecture (at best, they are wary of it, because ‘architecture’ is typically a code word for more boxes, increased cost, and longer time to delivery). And yet.

Micro-services are not (only) about technology

When I am asked to do an elevator pitch about advantages of micro-services, this list typically comes to mind:

  1. Individually deployable pieces of running software each responsible for a small number of tasks
  2. Each micro-service can be implemented using a different stack
  3. Horizontal scalability decisions can be made at a micro-service level

When you analyze this list, none of the points really makes your system better from a purely technical point of view. In fact, a monolithic system is definitely easier to work with when you are alone or have a small, ‘war room’ kind of team. When a monolith is relatively small, deploying it is not a big deal, and cookie-cutter scaling does not seem too wasteful (assuming the monolith does not depend on in-memory state that is hard to distribute).

Each of the points actually promises to fix long-standing systemic problems of very large teams responsible for equally large monoliths that are at the bursting point.

Breaking the logjam

The promise of individually deployable pieces seems to always light a fire in project managers’ eyes. I don’t blame them – most large monolithic systems are a bitch to deploy. If they use compiled languages such as Java, the build times are nontrivial. With every new line of code, deploy times keep growing, and it increasingly feels that there must be a better way to do this.

Monoliths are the first thing we build in the cloud because that’s what we used to do for on-premise deployment. Turns out, the price we pay to get the monolith built and deployed is too steep given the high bar set by ‘born in the cloud’ unicorns. Therefore, breaking up the monolith into smaller, more manageable parts seems as natural as mitosis is for single-cell organisms.

Beyond solving the sheer size problem, micro-services promise to solve the ‘different rate of change’ problem. As I have blogged recently, a typical system today has elements of Web sites as well as Web apps rolled into one. Elements acting as a site have a tendency to change more often than the app part. Site sections tend to have a lot of marketing material that is time sensitive, while app sections are trickier and need to be changed more carefully (and may require data migration every once in a while). I often joke that these types of systems feel like a donkey and a horse strapped to the same harness – they just cannot find the right rhythm; one of them is always either too fast or too slow. In fact, a lot of systems feel like a donkey, a horse, a cow and a goat all trying to pull the carriage together – not a pretty picture (funny though).

In these kinds of situations, micro-services offer an organizational, or governance, solution, not a technical one. They often result in more moving parts and more complexity, but the relief of letting the metaphorical donkey and horse run at their own pace is too hard to resist, overhead be damned. The alternative is having a complex process executed with utmost precision, and so far I know of only one team (Facebook) that can pull that off with any regularity. Micro-services offer a more realistic alternative for the rest of us (the ‘dysfunctional teams’ from the title, which is really most teams).

No more intergalactic technology consensus

Anybody who has tried to get a number of teams in a large organization to agree on a common technology can sympathize with this. We are all human, and tend to have passionate and strong opinions on technologies we love and hate. Put enough of these strong opinions together and they tend to cancel each other out, leaving no common ground. This is bad news for the poor architect who needs to pick an approach for a large project. I once heard a saying learned through hard-won experience: “Even if we agree on a common technology or approach on Monday, we will slide back into disagreement by Thursday”.

In this context, micro-services offer not so much a solution as a ‘let’s just agree to disagree’. The focus moves from common technology to common interfaces, integration techniques, and protocols for passing data around. There is enough understanding of the advantages of stable protocols and APIs that this part is much easier to close with a solid and lasting agreement.

A word of caution: I personally don’t think that, just because we could write each micro-service in a different technology, we should. There is much to be said for code reuse, and micro-services quickly minted by Yeoman generators tend to yield more productive teams than ‘let’s write the same authentication library in 6 different languages’. We found that by limiting our choices to Node.js and Java, we can move faster.

Nevertheless, it is just a matter of time until a new platform is touted as revolutionary or trending. When that time comes, we can risk one micro-service without betting the farm on it. Just in case Go does not turn out to be the giant killer it is made out to be, for example.

Cookie cutter is no fun with giant cookies

Finally, making clustering decisions at the micro-service level is more of a bean-counter issue than an architectural one. Clustering a small monolith is very simple – put a load balancer in front of the monolith copies and you are done (again, assuming the monolith nodes do not critically depend on in-memory data that needs to be kept in sync).

As the monolith grows, it needs more CPU and RAM to operate properly, times the number of nodes. And as it usually happens, ‘heat points’ are not distributed evenly across the monolith – there are sections that work very hard, and sections that barely move. Cookie-cutter clustering becomes more and more expensive, with an increasing percentage of unused and therefore wasted capacity.

Micro-services promise to be more efficient at using resources because we can make individual clustering decisions. We can beef up busy nodes and run a relatively small number of instances of rarely used micro-services. This is a purely economic (and ecological) issue – if we didn’t care about waste, we could just continue to run multiple monolith instances.

Of course, this is all assuming our monolith is clusterable to begin with. If it is not, micro-services become a way out for a system that has hit a limit of its ability to scale.

Keep the excitement to yourself

Next time you are in a position to pitch micro-services to a worried project manager or product owner, don’t forget that technology is not really what you are selling – you are selling a solution for process, governance, cost of operation and scalability issues. You are selling the ability to fix a typo on a prominent page of your large system within minutes, without touching the rest of the system. You are promising the ability to maneuver an oil tanker as if it were a canoe, in a world full of oil tankers.

You can still be in love with the technology, just make it our little secret. I’ll never tell.

© Dejan Glozic, 2015