Oy With the Gamification Already!

Lady Katrana Prestor ~ Human Onyxia, World of Warcraft, 2014, Stephan Shubert via Wikimedia Commons

“Hi, everybody. My name is Dejan and, … well…, I don’t play games (gasp). There, I’ve said it. This feels so liberating, I am a bit lightheaded. I think I’m going to sit down now.”

I cheated a bit in my pretend-address to the local Non-Gamers Anonymous chapter. I did play Microsoft Flight Simulator obsessively for years, but that is more of a gateway drug to real flight lessons than a multiplayer shooter. Most people who tried it lost interest when they realized they could not fire at other planes, promptly crashed their Boeing 737 and moved on to fight trolls and soldiers in a futuristic dystopia.

All these gaming-intolerant impulses kicked into high gear when I read the Spotify engineering model white paper. In case you missed it, Spotify is creating a stir with its new way of organizing work. In a nutshell, they organize people into co-located feature teams called squads that have all they need to deliver a feature relatively independently. Several squads are organized into tribes, which tend to be limited to about 100 people to prevent the breakdown of social connections.

Since squads are self-reliant, it is easy to envision a situation where the same problem is solved multiple times by squads that don’t communicate. To avoid this massive waste, like-minded squad members organize into chapters that share the same general knowledge (Web UI, iOS/Android, Design, Test) and a line manager. This provides organizational glue and prevents duplication. Finally, chapters are connected into guilds in a looser way, ensuring sharing of ideas and best practices.

The gamers are coming!

One of the first concerns that people have voiced was ‘how is this different from matrixed organizations’. I take guilty pleasure in observing these kinds of questions because they remind me of another debate closer to home, this one on how micro-services are nothing more than SOA.

But listen – oy with the gamification already! I explained the basic premise to my 18-year-old son (an avid gamer) and even he was rolling his eyes (calling the lingo ‘juvenile’). The World of Nerddom is spilling into the rest of reality with a vengeance that sometimes verges on bullying (yes, I get the irony). Case in point: a presenter at a recent NodeSummit endured snide remarks from the MC for daring to bring a Windows laptop to the stage instead of the all-beloved Mac (and I am typing this on a sweet new MacBook Pro; I just don’t like bullies, male or female). And now there is a growing chance another outgrowth of that world will become your everyday working reality.

Spotify is a young, rapidly growing company, and the main source of music for my teenage daughter. I am sure that the game-playing millennials I can see in the company photos feel very comfortable with guilds, tribes and squads. Their model is irresistible in that it addresses so many pain points that feed Dilbert cartoons. Their two-part video is smart, wonderfully animated and easy to follow, and many of its messages will ring true and soothe your pain if you have spent any amount of time in an old enterprise work process.

What I find problematic is when those same enterprises latch onto it and try to apply it in their own (very different) context. One of the reasons they would do it is the assumption that a successful implementation in a fast-moving company gives it a seal of approval. Some of it is sheer survival instinct – everybody needs to move fast these days, and if your traditional org chart is slowing you down, you need to change if you want to be around in five years. Finally, and to be fair to large enterprises, it is really hard to find a true command-and-control organization these days – some variation of Scrum or Kanban is the norm virtually everywhere. Spotify provides a simplifying refinement that attempts to address the observed shortcomings.

It is not a religion

I see two problems with adopting the Spotify model as-is:

  • It is a moving target. The white paper’s authors themselves pointed out that it is entirely possible that by the time you have implemented the squad/tribe/chapter/guild model, Spotify will have moved on to its next refinement. A kitschy version: you can’t capture the wind or the waterfall – you end up with dead air and stale water, respectively (rim shot).
  • It uses gamer-friendly terms. It assumes that everybody in the industry is a gamer and is instantly familiar with and reacts positively to the images these names evoke. I cannot help but giggle imagining a bank IT shop where executives arrive and declare: “all right people, all of you on this floor are now the Stonehoof tribe. Stay tuned for the org chart to find out which squad and chapter you belong to. Guild masters are currently working on their corresponding chapter lists”. It is not even a generational thing – believe it or not, there are young people who have better things to do than kill hours working on their WoW reputation (and virtual gold). And yes, there are middle-aged clan leaders. Sadly.

Test out carefully

There are many worthy ideas in the Spotify engineering model. Some of them are a refinement of the matrixed models of the past. Most can be used without the gaming jargon that goes with them. Discussions I have had so far point at exactly that – savvy organizations will filter out the startup exuberance and latch onto the more lasting nuggets. All of them should be treated as an experiment, in the event they end up working only for Spotify (or Spotify has already outgrown them).

And finally, the goal is to enable teams (squads?) to be agile and deliver results with the speed of the cloud. If that does not pan out, you have just spent a lot of money re-arranging chairs on the Titanic. And called yourselves silly names that should be left behind once you reach your twenties.

Pardon the grumpiness. Hey, I may end up liking it after I live it for a while. Now if you’ll excuse me, I have to go work on my LARP uniform. War is in the air.

© Dejan Glozic, 2015


Don’t Take Micro-Services Off-Road

Fred Bauder, 2009, Wikimedia Commons

I own a 2006 Acura TL. It’s a great car. Every day I derive great pleasure driving it to work. It has a tight sporty suspension, precise steering, comfortable leather seats and an awesome audio system.

At the same time, I know better than to take it off-road. Its high performance tires are optimized for asphalt traction and low rolling resistance, not gravel or soil. It does not have enough clearance for rocks, or 4×4 drive required for rough terrain. If I did take it off-road, I could erroneously conclude that it is an awful car, which I know not to be true. I would have simply used it for something it was never designed to do.

I used this example to explain the concern I have watching the evolution of the industry’s relationship with the micro-service architecture. It was just a matter of time until people started taking their micro-service Acuras off-road and then writing about what awful cars they are.

Original success stories

Architectures and approaches normally turn into trends because enough use cases exist to corroborate their genuine usefulness in solving a particular problem or class of problems. Otherwise, only architecture astronauts would care. In the case of micro-services, before they were trendy enough companies had built monoliths beyond their manageability. They had a real problem on their hands – a large application that fundamentally clashed with the modern ways of scaling, managing and evolving large systems in the cloud. Through some trial and error, they reinvented their properties as loose collections of micro-services with independent scalability, life cycle and data concerns. Netflix, Groupon, PayPal and SoundCloud are just a small sample of companies running micro-services in production with success.

It is important to remember this because the trendiness of micro-services threatens to compel developers to try them out in contexts where they are not meant to be used, resulting in the projects overturned in the mud. This is bad news for all of us who derive genuine benefits from such an architecture.

Things to avoid

It is therefore good to try to arrive at a useful list of use cases where micro-services are not a good choice. It will keep us more honest, keep the micro-service hype at bay and prevent some failures that would sour people to an otherwise sound technical approach:

  1. Don’t start with micro-services – this one is a no-brainer. Micro-services attempt to solve problems of scale. When you start, your app is tiny. Even if it is not, it is just you, or maybe you and a couple more developers. You know it intimately and can rewrite it over a weekend. The app is small enough that you can easily reason about it. There is a reason why we use the word ‘monolith’ – it implies a rock big enough that it can kill you if it falls on you. When you start, your app is more like a pebble. It takes a certain amount of time and effort by a growing number of developers to even approach monolith (and therefore micro-service) territory.
  2. Don’t even think about micro-services without DevOps – micro-services cause an explosion of moving parts. It is insane to attempt it without serious deployment and monitoring automation. You should be able to push a button and get your app deployed. In fact, you should not even do anything – committing code should get your app deployed through the commit hooks that trigger the delivery pipelines (at least in development – you still need some manual checks and balances for deploying into production).
  3. Try not to manage your own infrastructure – micro-services often introduce multiple databases, message brokers, data caches and similar services that all need to be maintained, clustered and kept in top shape. It really helps if your first attempt at micro-services is free from such concerns. A PaaS such as Cloud Foundry or Heroku will allow you to be functional faster and with less headache than an IaaS, provided that your micro-services are PaaS-friendly.
  4. Don’t create too many micro-services – each new micro-service adds overhead. Cumulative overhead may outstrip the benefits of the architecture if you go crazy. It is better to err on the side of larger services and only split when they end up containing parts with conflicting demands for scaling, life cycle and/or data. Making them too small will simply transfer complexity away from the micro-services and into the service integration task.
  5. Don’t share micro-services between systems – I listed this final point here for completeness, but it is so important that it deserves a section of its own.

On micro-service sharing

I have seen many a fiery debate about the difference between micro-services and SOA. There are many similarities (it is hard to deny that micro-service architecture, or MSA, is revisiting SOA principles). More recently I have formed a fairly strong opinion that the key differentiator between MSA and SOA is ambition.

When you go back and read about the lofty goals of SOA proponents, it is easy to notice that the aim was much higher. MSA success stories didn’t attempt to reinvent the world around catalogs of reusable services, systems that are discovering those services through registries, etc. At the beginning of every MSA success story is a team that grew their simple application too fast without refactoring along the way and hit the maintainability wall.

If you carefully read ‘monolith to micro-services’ blog posts, you will notice that the end result is the same thing. The Groupon team has not created a ‘catalog of social coupon services to be assembled into coupon applications’ – they rebuilt the Groupon Web site. They broke the monolith into small pieces and rebuilt it again. As far as their end users are concerned, the monolith is still there – the site was rebuilt in mid-air.

Since I think that micro-services are a pragmatic and sane revisiting of SOA, it is apt to assume that creating reusable micro-services is low on the list of priorities. Yes, a micro-service needs to be individually deployable and flexible enough that it can be bound to other services dynamically (minimally through some kind of configuration on startup). You need to be able to deploy each service to multiple logical ‘spaces’ (DEV, QA, STAGING, PROD). But each logical micro-service instance is part of a single distributed monolith, re-imagined in a cloud-friendly way.
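To make the ‘bound dynamically through configuration on startup’ point concrete, here is a minimal sketch of what that binding could look like in a Node micro-service. The variable names and defaults are hypothetical, made up for illustration – the idea is simply that the same build runs unchanged in DEV, QA, STAGING or PROD, with the environment supplying the wiring:

```javascript
// config.js - resolve peer service locations at startup from the
// environment, falling back to local defaults for development.
// TODO_SERVICE_URL and REDIS_URL are illustrative names, not a standard.
var config = {
  port: process.env.PORT || 3000,
  todoServiceUrl: process.env.TODO_SERVICE_URL || 'http://localhost:3001',
  redisUrl: process.env.REDIS_URL || 'redis://localhost:6379'
};

module.exports = config;
```

Each deployment space then sets these variables differently, and the service itself never hard-codes who its neighbors are.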

From a monolith to a – distributed monolith?

Where am I going with all this? I am a bit concerned that the industry noise will ruin micro-services by taking them outside their comfort zone. Too many people are taking them to areas where they shouldn’t, and I don’t want the inevitable backlash to overshoot. Micro-services are a solution for the Big Ball of Mud architecture, but the alternative micro-service system is still a big ball. This ball, made up of many small balls, is cleaner and easier to manage, deploy, scale and evolve, and can be inflated bigger than the old ball without exploding, but it is fundamentally the same thing.

Any attempts at nano-services, trying to deploy micro-services manually, using them because they are trendy without real need, or re-using them between multiple systems will result in a disappointment we don’t really need at the moment.

Are micro-services SOA? No, and please let’s keep it that way.

© Dejan Glozic, 2015

Vive la Révolution App

Source: Wikimedia Commons

This post is based on a presentation I made on a dare – something a former colleague proposed with only a title and a description, and it was up to me as the replacement to provide the actual content. It sort of reminds me of a debate club, where you are told that the topic is ‘App Revolution’, and you have 20 minutes to argue the ‘Pro’ position. What follows is my attempt to do it justice. Have fun (and mercy).

When we are confronted with the topic of revolutions, most of my North American friends immediately conjure up the sound of Yankee Doodle and the picture of George Washington crossing the Delaware River (I saw it last year in The Met – boy, is that painting big!). Being of European descent, my thoughts give preference to the French Revolution. It has essentially given us the modern European society, with milestone documents such as the ‘Declaration of the Rights of Man and of the Citizen’ shown above. It has also given us the guillotine, which is sad, but as Jacques Mallet du Pan famously quipped, all revolutions devour their children. What can you do – it’s a revolution, so 16,594 people are bound to lose their heads, give or take.

One of the indispensable aspects of revolutions is the slogan, something you can easily chant at large group gatherings. Something catchy, such as ‘Freedom, Equality and Fraternity’ in the case of the French Revolution. Or as Blackadder interpreted it, ‘Freedom, Equality and fewer fat bastards eating all the pie’.

As you correctly noticed, these slogans often call for three things. If that is true, and we are indeed witnessing an App Revolution, what would our slogan be? What three things would we want from our oppressors?

We are fighting for the freedom and abundance of data, infrastructure and architecture.

– Oppressed developers everywhere.

Note that when I say ‘freedom’, I don’t necessarily mean ‘completely free’. We know we will need to pay for some of it, hence the word ‘abundance’. While food in Western society is not exactly free, it is definitely abundant. You can go into any supermarket and leave with a whole rotisserie chicken for a few dollars. During the French Revolution, only the aforementioned fat bastards could afford it. That’s progress.

Hence, let me try to explain why we are fighting for these three things.

Freedom of Data

You have probably heard the phrase that we are living in the age of the ‘API Economy’. What does that actually mean? In the past, data was a by-product of people using your application. Over time, your app’s database would fill up with data. The thinking was that the app is the product, and data is just an internal by-product, a consequence of app usage. More recently, data started to take off as something that can be as important as, or in some cases the only, product you provide.

While in the past tacking on an API to your app would be an afterthought, something you may consider for partner or customer integrations, most modern systems are now built by first building the API for the data, then building up various clients that consume it. Your own clients are just ‘reference implementation’ of hopefully many other consumers of your APIs that will follow.

Source: IBM

Even music is going API these days. Sound engineers are now expected to provide stems of mastered music (drums, bass, guitars, keyboards, vox) so that remixers can easily provide derivative value without the hassle of sampling fully mixed songs (the audio equivalent of screen-scraping). What are stems but audio APIs?

Why is this important to us? Because when you open up your APIs, you become a platform, and platforms foster app eco-systems, with apps creating new value in many unexpected ways. Today, the most coveted place for any company is not to create a consumer product, but to create a platform that offers data and APIs, and fosters a flourishing eco-system of apps built to take advantage of it. API discovery is now in vogue, catalogs are sprouting, and all you need is to subscribe, obtain an authentication key and start building your innovative abstraction on top of it, or combine multiple data sources in an innovative way. You can be data mining, providing innovative interfaces, analytics, or integrations with other systems.

If you are building a mobile app, all you need is a laptop and a phone to test your app. However, if you need anything in the back end you need to build a companion server-side app, which leads us to…

Freedom of Infrastructure

When I was a child, my parents bought me a Meccano kit. In those days, giving a child a box full of tiny sharp metal objects was considered totally cool. I quickly built all the possible toys based on the accompanying booklet, but the sneaky bastards from Meccano also put a picture of a crane on the box that would require something like 10 sets to build. Since then, I have carried the realization that I need to find a discipline in which I will not be limited by a box with a finite number of parts.

Source: Meccano Beam Engine, Liskeard Museum

That’s why I chose software engineering – it is rare you will run out of files, or classes, or functions or variables the way you can run out of Meccano panels or tiny nuts and bolts.

However, once you venture into Web development, you hit the infrastructure version of Meccano. Your database, your server, your front end proxy all need to be hosted on physical boxes, and Mordac The Preventer from Information Services can make your life miserable in a hurry.

This is why Cloud is so important for our revolution. Regardless of where you fall on your ‘as a Service’ comfort level, you can use an IaaS or PaaS or SaaS to stand up your apps in minutes. Assuming you have found free or abundant source of data, your app can now be running and stay running without the need to worry about the messy sysadmin details or melted boards.

It does not end with just seeing your app running, either – you can jump into the third freedom that is the final cornerstone of our revolution.

Freedom of Architecture

In the dark ages of IT, it used to be that architecture was for the rich, and The Big Ball of Mud was for the rest of us. While you instinctively know that you should not be caching those objects in memory, who is going to stand up, maintain and cluster Redis for it? You know that a message broker would be a real answer for your particular problem, but don’t have the stomach to stand up and administer RabbitMQ, or any of the popular alternatives. It is no accident that Martin Fowler’s famous book from 2002 is called Patterns of Enterprise Application Architecture. At that time, only an enterprise could afford to provision and maintain all the boxes that such an architecture requires.

Source: Dejan Glozic

That same Martin Fowler now talks about Polyglot Persistence – the approach where apps in a distributed system choose different types of databases that perfectly suit their diverse needs, instead of an underpowered MySQL for everything. And he is not using the word ‘enterprise’ this time, fully aware that a nerd hacking away on his Mac in Starbucks can provision such a system in minutes. App revolution indeed.

All together now

When we put our three demands together, great things can happen. To illustrate how far we have come, consider the system that I made the attendees of an IBM Interconnect 2015 lab build over the course of 2 hours:

Source: Dejan Glozic

This system is just a toy, designed to teach modern micro-service architecture, and yet not long ago it would have required that we stand up several servers, install and configure a ton of software, and build our own user management system:

  1. It uses Facebook for delegated authentication and to tap into Facebook’s data. No need to stand up anything, just register as a Facebook developer, obtain your client ID and secret and off you go.
  2. It deploys complex infrastructure (two Node.js app servers, a proxy, a data cache) to Bluemix PaaS within a matter of minutes, all using just a Web browser. In a pinch you could do it on a bus using your iPad, while also debating someone totally wrong on the Internet.
  3. It uses serious architecture (OAuth2 provider, Nginx proxy, Node.js micro-services, session sharing via Redis store) that was unheard of for non-institutional developers in the past.
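To give a flavor of how little code the ‘serious architecture’ item actually demands today, here is a sketch of the session sharing piece. It assumes the express-session and connect-redis modules in their 2015-era form, an existing Express `app`, and placeholder connection values for a provisioned Redis service – treat it as an illustration, not the exact lab code:

```javascript
// Sketch: sharing sessions between Node.js micro-services via Redis,
// using express-session with the connect-redis store (2015-era API).
var session = require('express-session');
var RedisStore = require('connect-redis')(session);

app.use(session({
  // Host/port are placeholders; a PaaS would inject real credentials.
  store: new RedisStore({ host: '127.0.0.1', port: 6379 }),
  secret: 'keyboard cat',   // use a real secret in production
  resave: false,
  saveUninitialized: false
}));
```

Because the session lives in Redis rather than in any one app server’s memory, both Node.js micro-services behind the Nginx proxy see the same logged-in user.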

Platforms everywhere

Of course, the notion of a platform is not limited to the Web. In fact, some of you may have initially thought this article is about mobile apps. Phones are huge app eco-systems, and so are the upcoming wearable platforms, of which the Apple Watch is just the latest example.

Venturing further away from classic Web apps, cars are now becoming rife with platforms unleashing an app revolution of sorts. Consider Apple’s CarPlay that Scott Rich wrote about in O’Reilly Radar – a platform for apps in your car, tapping into the latent and closed data world and opening it up as a new app eco-system. It is a different context, but the model seems to be the same: create a platform, open up the data through APIs, and unleash the inventions of app revolutionaries hunched over their laptops around the world.

Means of production

In the past, the control of data, infrastructure and architecture was a limiting factor for the masses of developers around the world. Creativity and ideas are dispersed far more equitably than the control over resources would make you believe. At least in the area of software development, the true app revolution lies in removing these control points and allowing platforms and eco-systems to let the best ideas bubble up.

Whether you are a guy at a reclaimed wood desk overlooking San Francisco’s Mission district, or a girl in Africa at a reclaimed computer in a school built by a humanitarian mission, we are approaching the time when we will only be limited by our creativity, and by our ability to dream and build great apps. And that, my fellow developers, is worth fighting for.

© Dejan Glozic, 2015

Isomorphic Apps Part 2: Node, React.js, and Socket.io

Two Heads, 1930, Wikimedia Commons

When I was a kid, I went to the movies to watch Mel Brooks’ “History of The World, Part I”. I had a great time and could not wait for the sequel (that featured, among other things, Hitler on ice, a Viking funeral and laser-shooting rabbis in ‘Jews in Space’ teaser). Alas, ‘Part II’ never came. Determined to not subject my faithful readers to such a disappointment, here comes the promised part II of my ‘Isomorphic Apps’ trilogy.

In the first part of this story, we created an isomorphic app by taking advantage of the fact that we can use Dust.js as an Express view engine, and then compile partials into JavaScript and re-use them on the client as needed. In order to compare approaches with only one variable changed, we will switch to React.js for the view.

What’s the deal with React.js

React.js is attracting a lot of attention these days due to the novel approach it has taken to building dynamic Web apps. At the heart of the approach is the notion of a virtual DOM. React.js components manipulate an abstraction of the DOM that is then transformed into the physical DOM in a highly optimized fashion. Even more ingeniously, the browser’s DOM is only one of the possible transformations: the virtual DOM can also be serialized into plain HTML, which makes it possible to use it on the server. Even more recently, it can be serialized into native code to address mobile (and even desktop) UI components.

I am old enough to remember Java’s “Write once, run anywhere” slogan, and this looks like the new generation’s attempt to make a run at this chimera. But even putting React Native aside for a moment, the fact that you can render on the server makes React supremely suitable for isomorphic apps, something Angular.js is lacking.

React.js is also refreshingly simple to figure out. Angular.js has its famous adoption roller coaster, and sometimes when you don’t get an Angular peculiarity, you feel the fault is with you, not Angular. React.js took the approach that life is short, and we can do better things with our time than figure out the maddening quirks of a complex framework. There is no two-way binding (because it has been shown to be a double-edged sword – see what I did there). When the model changes, you just naively rebuild the view (sometimes referred to as ‘write pages like it’s the 90s’). Seems massively suboptimal, but remember that you are only rebuilding the virtual DOM – React.js figures out the actual delta and only applies the delta against the real DOM. And since most of the performance (or lack thereof) lies in the physical DOM, React.js promises fast apps without writing a lot of code for smart and surgical updating on model changes.

Configuring React.js as an Express view engine

Alright, I hope this has whetted your appetite for some coding. We will start by cloning the page from part I and adding another view engine in app.js (because I am cheap/lazy and don’t want to run another app for this). For this we need to install React on the server, as well as the Express view adapter.

We will start by installing ‘react’ and ‘express-react-views’ and configuring the jsx view engine:


var react = require('express-react-views');

...

app.engine('jsx', react.createEngine());
app.set('view engine', 'jsx');

The last line above should only be set if you will use JSX as the only view engine for Express. In my case, I had to omit that line because I am already serving some Dust pages, and you can only set one default engine. The only thing I lost this way was the ability to find JSX templates without the extension – they can still be rendered when extension is included.

The controller for our React.js page is almost identical to the one we wrote for Dust.js:


var model = require('../models/todos');

module.exports.get = function(req, res) {
   model.list(req.user, function(err, todos) {
      res.render('isomorphic_react.jsx',
         { title: 'React - Isomorphic', user: req.user, todos: todos });
   });
};

Most of the fun happens in the view, as expected. React.js requires some getting used to. For starters, JSX syntax is actually XML (and not even XHTML), so all elements require termination. Many attribute names require camel case, which is very annoying (I always hated Jade for this mental transformation, and now JSX is doing the same for me). At least the JSX transformer is yelling at you in the console about possible errors you made, so fixing up your JSX is not too hard:

var React = require('react');
var DefaultLayout = require('./rlayout');
var RTodo = require('./rtodo');

var Todos = React.createClass({
  render: function() {
    return (
      <DefaultLayout { ...this.props} selection="react">
        <h1>Using React.js for View</h1>
        <h2>Todos</h2>
        <div className="new">
           <textarea id="new-todo-text" placeholder="New todo"/>
        </div>
        <div className="delete">
           <button type="button" id="delete-all"
              className="btn btn-primary">Delete All</button>
        </div>
        <div id="todos" className="todos">
           {this.props.todos.map(function(todo) {
              return <RTodo key={todo.id} {...todo} />;
           })}
        </div>
        <script src="/js/prettyDate.js"></script>
        <script src="/js/rtodo.js"></script>
        <script src="/js/rtodos.js"></script>
      </DefaultLayout>
    );
  }
});

module.exports = Todos;

The code above requires some explanation. Unlike with Dust.js, both inclusion into a common layout template and instantiation of partials are done through the React.js component model. Notice that we imported the DefaultLayout component that is our standard page boilerplate. The payload of the page is simply specified as the content of the instantiated component in the ‘render’ method above.

Another important point is that unlike Dust.js, properties are not automatically passed down the component hierarchy – we need to explicitly do it (notice the strange “{ …this.props }” expression in the DefaultLayout declaration – what I am saying is ‘pass all the properties down to the child component’). We can also define new properties, which I am doing by passing ‘selection’ that will be used by the header component (to highlight the ‘React’ link).

Another important section of the template is where I am instantiating RTodo component (a single Todo card). Flow control can be tricky in JSX because the entire template is one giant return statement, so everything needs to evaluate to an expression. Notice the trick with using the array map to iterate over the list of todos and render each child todo component.

This code will produce a page very similar to the one with Dust.js, with identical results. In fact, it is possible to go back and forth because both pages are using the same REST service for the model.

JSX compiler

So far we took care of the server side. As with Dust.js, we can compile components we need on the client side, this time using jsx compiler that comes by installing ‘react-tools’:


#!/bin/bash
node_modules/react-tools/bin/jsx --extension jsx views/ public/js/ rtodo

We can compile any number of components and place them into the JS directory under /public folder so that Express can serve them to the browser.

The client side script is very similar to the one used by the Dust.js page. The only difference is in the ‘Add’ action handler:

var socket = io.connect('/');
socket.on('todos', function (message) {
  if (message.type=='add') {
    var newTodo = document.createElement('div');
    React.render(React.createElement(RTodo, message.state),
              newTodo);
    $(".todos").prepend(newTodo);
  }
  ...

The code is remarkably similar – instead of calling ‘dust.render’ to render the partial using the element we received via the Socket.io message, we ask React to render the compiled element into a new DOM element we created on the fly. We then prepend this element into the parent DIV.

Commentary and comparisons

First off, I would say that this second attempt at writing an isomorphic app was a success because I was able to replicate Dust.js example from part I with identical behaviour. However, it is not as good a fit for React.js. A better example would see us modifying a model and asking React.js to re-render an existing DOM branch. Now that I feel reasonably comfortable around React.js, I think I will create something more dynamic for it in the near future. A true React-y way of doing the list of todos would be to simply re-render the entire list on each Socket.io message. We would let React.js figure out that all it needs to do is insert a new Todo DIV into the parent node. This way we would not need to create DOM elements ourselves, as in the code above.
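For the curious, the more React-y approach described above could look something like the following sketch. It uses the same 2015-era React.createClass API as the rest of this post; the component and the exact message handling are hypothetical, not code from the actual example:

```jsx
var React = require('react');
var RTodo = require('./rtodo');

// Sketch: keep the todos in component state and re-render the whole
// list on every Socket.io message; React diffs the virtual DOM and
// only inserts the one new Todo DIV into the real DOM.
var TodoList = React.createClass({
  getInitialState: function () {
    return { todos: this.props.todos };
  },
  componentDidMount: function () {
    var self = this;
    socket.on('todos', function (message) {
      if (message.type == 'add') {
        // Replace the state wholesale - no manual DOM surgery needed.
        self.setState({ todos: [message.state].concat(self.state.todos) });
      }
    });
  },
  render: function () {
    return (
      <div className="todos">
        {this.state.todos.map(function (todo) {
          return <RTodo key={todo.id} {...todo} />;
        })}
      </div>
    );
  }
});
```

Note how the Socket.io handler only touches the model (state); the view catches up on its own, which is the whole point of the virtual DOM.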

After spending a year working with Dust.js, JSX took some getting used to. As I said, I hated Jade because it created an additional layer of abstraction between me and HTML, and I never quite knew what final HTML it would produce. JSX evokes the same feelings in me, but the error/correction loop has shortened as I have learned it better. In addition, the value I get in return is much higher with JSX than with Jade.

Nevertheless, certain things will not get better with time. JSX is awkward to work with when it comes to logic. Remember, the entire template is an expression, so statements are really hard to fit in. Ternary conditionals work, and as you saw, it is possible to use tricks with maps to iterate over children in a collection. I still prefer Dust.js for straightforward pages, but I can see how React.js can work with components in a very dynamic app.
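To see why expressions-only pushes you towards ternaries and map(), consider a toy stand-in for the compiled output, with a simple 'el' function playing the role of React.createElement (none of this is from the app; it just mimics the shape of compiled JSX):

```javascript
// JSX compiles the whole template into one nested expression, so
// statements (if, for) cannot appear inside it. Ternaries and
// Array.map() are the expression-shaped replacements:
function el(tag, props, children) {
  return { tag: tag, props: props, children: children };
}

var todos = ['write post', 'fix typo'];

var list = el('ul', null,
  todos.length === 0
    ? [el('li', null, 'Nothing to do')]   // ternary instead of if/else
    : todos.map(function (t, i) {         // map() instead of a for loop
        return el('li', { key: i }, t);
      })
);
```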

I like the React.js component model, as well as the fact that code and markup live close to each other – for a developer this is very useful. I also like that, JSX quirks aside, there is much less magic compared to Angular.js. Of course, React.js is really just the View of MVC, so it is not a fair comparison. On the other hand, I am now dying to hook it into Backbone as a view – it feels like a great combination (and of course, there are already articles exploring this exact combination). The more I think and read about it, the more it seems that Backbone models/collections/router and React.js views may just end up being my favorite stack for writing highly dynamic apps, with a server-side bonus for SEO and initial experience.
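One way the wiring could look (a sketch, not from the app; TodoComponent stands in for a compiled React class): a Backbone view that owns a model and delegates all rendering to React.

```javascript
var TodoView = Backbone.View.extend({
  initialize: function () {
    // Any model change triggers a re-render; React diffs the DOM.
    this.listenTo(this.model, 'change', this.render);
  },
  render: function () {
    React.render(
      React.createElement(TodoComponent, this.model.toJSON()),
      this.el
    );
    return this;
  },
  remove: function () {
    // Unmount the React tree to avoid leaks when the view goes away.
    React.unmountComponentAtNode(this.el);
    return Backbone.View.prototype.remove.call(this);
  }
});
```

The division of labour is clean: Backbone handles routing, data and events, while React owns everything between the view's root element and the pixels.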

A word of caution

If your system has elements of both a site and an app, use micro-services and implement the site portions with a more straightforward templating solution (as I covered in the previous blog post). This will make content authoring easier and widen the pool of contributors: content providers with only cursory knowledge of HTML will feel confident authoring and modifying something like Dust.js templates. Leave React.js to the 10x developers working on the highly dynamic 'app' portions of the system. By the way, this is an area where micro-services shine – this kind of partitioning is one of their key selling points. You can easily have micro-services using Dust.js and micro-services using React.js (and, as I have already shown, even both mixed in the same Node app).

One of the downsides of a mixed system (one using both Dust.js and React.js) is that content pages sometimes have dynamic components sprinkled in them. The challenge is invoking those components without making your casual developers afraid to touch such pages. Invoking a React.js component in a Dust.js page requires inserting JavaScript tags, which is less than ideal. This is where Web Components are much easier to reason about, and there are already attempts to bridge the two worlds by invoking React.js components as custom Web Components.
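In practice, the interim 'script tag' approach to a React island in a Dust.js page might look like this (a sketch with hypothetical names – LiveComments, article – not taken from the app):

```html
{! Dust template: mostly static content, plus one React 'island' !}
<div class="article-body">
  {article.body}
</div>

<div id="live-comments"></div>
<script src="/js/LiveComments.js"></script>
<script>
  // This is the part casual content authors should never need to touch:
  React.render(
    React.createElement(LiveComments, { articleId: '{article.id}' }),
    document.getElementById('live-comments')
  );
</script>
```

A custom element such as `<live-comments article-id="...">` would read more naturally to an HTML-literate author, which is exactly the appeal of the Web Components bridge.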

And that’s a wrap

As before, you can browse the source code as an IBM DevOps Services project, and the latest version of the app is running in Bluemix. In the final instalment of this trilogy, I will make our example a bit more dynamic (to let React.js show its true potential) and add some structure using Backbone.js. Until then, React away!

© Dejan Glozic, 2015

Micro-Services for Dysfunctional Teams

Jan Steen, Argument over a Card Game, Wikimedia Commons.

Update: I have received a ton of feedback on this post, and some of the well-meaning criticism is concerned with the term 'dysfunctional', considering it a bit 'judgy' coming from somebody who is supposed to help these same teams. Apart from yielding a catchy title, a Hacker News reader was spot on when he declared my use of the word a 'term of endearment' more than anything else – not unlike a smart person calling herself 'stupid', or a workaholic calling himself 'lazy' for sleeping in one morning. In the article that follows, 'dysfunctional' describes most teams made of real people, while the opposite is the ideal we are all striving towards, always just beyond our reach.

I am back from Las Vegas and IBM InterConnect 2015, and fully recovered from the onslaught on the senses. Man, does that city ever shut up. Time to return to regular programming. Today's topic is my surprising realization about who the main backers of micro-services in large enterprises really are. As they say in clickbait, it's not who you think.

For the last year or so I have been a vocal evangelist for both Node.js and micro-services in IBM and elsewhere (using the former as the platform of choice for the latter) – or, as a dear former colleague of mine kindly put it, 'evangelist, coach, and referee'. That role put me in contact with a number of teams finding themselves on the verge of the now familiar 'from monolith to micro-services' journey.

What I find over and over again is that micro-services appeal to leadership more than to developers. This is a somewhat confusing revelation, given that micro-services are regarded as an architectural approach, and project managers are not supposed to fall in love with an architecture (at best, they are wary of it, because 'architecture' is typically a code word for more boxes, increased cost, and longer time to delivery). And yet.

Micro-services are not (only) about technology

When I am asked to do an elevator pitch about advantages of micro-services, this list typically comes to mind:

  1. Individually deployable pieces of running software each responsible for a small number of tasks
  2. Each micro-service can be implemented using a different stack
  3. Horizontal scalability decisions can be made at a micro-service level

When you analyze this list, none of these points really makes your system better from a purely technical point of view. In fact, a monolithic system is definitely easier to work with when you are alone or have a small, 'war room' kind of team. When a monolith is relatively small, deploying it is not a big deal, and cookie-cutter scaling does not seem too wasteful (assuming the monolith does not depend on in-memory state that is hard to distribute).

Each of the points actually promises to fix long-standing systemic problems of very large teams responsible for equally large monoliths that are at the bursting point.

Breaking the logjam

The promise of individually deployable pieces seems to always light a fire in project managers’ eyes. I don’t blame them – most large monolithic systems are a bitch to deploy. If they use compiled languages such as Java, the build times are nontrivial. With every new line of code, deploy times keep growing, and it increasingly feels that there must be a better way to do this.

Monoliths are the first thing we build in the cloud because that’s what we used to do for on-premise deployment. Turns out, the price we pay to get the monolith built and deployed is too steep given the high bar set by ‘born in the cloud’ unicorns. Therefore, breaking up the monolith into smaller, more manageable parts seems as natural as mitosis is for single-cell organisms.

Beyond solving the sheer size problem, micro-services promise to solve the 'different rate of change' problem. As I have blogged recently, a typical system today has elements of Web sites as well as Web apps rolled into one. Elements acting as a site tend to want to change more often than the app part. Site sections carry a lot of marketing material that is time-sensitive, while app sections are trickier and need to be changed more carefully (and may require data migration every once in a while). I often joke that these types of systems feel like a donkey and a horse strapped to the same harness – they just cannot find the right rhythm. One of them is always either too fast or too slow. In fact, a lot of systems feel like a donkey, a horse, a cow and a goat all trying to pull the carriage together – not a pretty picture (funny, though).

In these kinds of situations, micro-services offer an organizational, or governance solution, not a technical one. They often result in more moving parts and more complexity, but the relief of letting the metaphorical donkey and the horse run at their own pace is too hard to resist, overhead be damned. The alternative is having a complex process executed with utmost precision, and so far I know only one team (Facebook) that can pull it off with any regularity. Micro-services offer a more realistic alternative for the rest of us (the ‘dysfunctional teams’ from the title, which is really most of the teams).

No more intergalactic technology consensus

Anybody who has tried to get a number of teams in a large organization to agree on a common technology can sympathize with this. We are all human, and tend to have passionate and strong opinions on technologies we like and hate. Put enough of these strong opinions together and they tend to cancel each other out, leaving no common ground. This is bad news for the poor architect who needs to pick an approach for a large project. I once heard a saying learned through hard-won experience: "Even if we agree on a common technology or approach on Monday, we will slide back into disagreement by Thursday".

In this context, micro-services offer not so much a solution as an agreement to disagree. The focus moves from common technology to common interfaces, integration techniques, and protocols for passing data around. There is enough shared understanding of the advantages of stable protocols and APIs that this part is much easier to close with a solid and lasting agreement.
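What such an agreement looks like in practice is a payload shape, not a stack. As a hedged illustration (the endpoint and field names are hypothetical), squads might agree that whatever technology each service uses internally, GET /api/todos always answers with this shape, and each consumer can verify it:

```javascript
// A sample response conforming to the hypothetical shared contract:
var sample = {
  items: [{ id: 'e1', title: 'Write post', done: false }],
  total: 1
};

// Contract check any consuming team can run against a response body,
// regardless of the language the producing service is written in:
function validTodoResponse(body) {
  return Array.isArray(body.items) &&
    typeof body.total === 'number' &&
    body.items.every(function (item) {
      return typeof item.id === 'string' &&
             typeof item.title === 'string' &&
             typeof item.done === 'boolean';
    });
}
```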

A word of caution: I personally don't think that, just because we could write each micro-service in a different technology, we should. There is much to be said for code reuse, and micro-services quickly minted by Yeoman generators tend to yield more productive teams than 'let's write the same authentication library in six different languages'. We found that by limiting our choices to Node.js and Java, we can move faster.

Nevertheless, it is just a matter of time until a new platform is touted as revolutionary or trending. When the time comes, we can risk one micro-service without betting the farm on it – just in case Go does not turn out to be the giant killer it is claimed to be, for example.

Cookie cutter is no fun with giant cookies

Finally, making clustering decisions at the micro-service level is more of a bean-counter issue than an architectural one. Clustering a small monolith is very simple – put a load balancer in front of the monolith copies and you are done (again, assuming the monolith nodes do not critically depend on in-memory data that needs to be kept in sync).

As the monolith grows, it needs more CPU and RAM to operate properly, times the number of nodes. As it normally happens, 'heat points' are not distributed evenly across the monolith – there are sections that work very hard and sections that barely move. Cookie-cutter clustering becomes more and more expensive, with an increasing percentage of unused and therefore wasted capacity.

Micro-services promise to be more efficient at using resources because we can make individual clustering decisions. We can beef up busy nodes and run a relatively small number of instances of rarely used micro-services. This is a purely economic (and ecological) issue – if we didn’t care about waste, we could just continue to run multiple monolith instances.
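Assuming a Cloud Foundry-style platform (such as the Bluemix deployment mentioned earlier), these individual decisions can literally be one command per service; the app names below are hypothetical:

```shell
# The busy UI-facing micro-service gets more instances and memory...
cf scale todo-ui -i 6 -m 512M

# ...while a rarely used admin micro-service idles on a single instance.
cf scale todo-admin -i 1 -m 128M
```

With a monolith, the only knob available is scaling everything at once, including the parts that are barely moving.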

Of course, this is all assuming our monolith is clusterable to begin with. If it is not, micro-services become a way out for a system that has hit a limit of its ability to scale.

Keep the excitement to yourself

Next time you are in a position to pitch micro-services to a worried project manager or product owner, don't forget that technology is not really what you are selling – you are selling a solution to process, governance, cost-of-operation and scalability problems. You are selling the ability to fix a typo on a prominent page of your large system within minutes, without touching the rest of the system. You are promising the ability to maneuver an oil tanker as if it were a canoe, in a world full of oil tankers.

You can still be in love with the technology, just make it our little secret. I’ll never tell.

© Dejan Glozic, 2015