PayPal, You Got Me At ‘Isomorphic ReactJS’


I love PayPal’s engineering department. There, I’ve said it. I have followed what Jeff Harrell and the team have been doing ever since I started reading about their wholesale jump into Node.js/Dust.js waters. In fact, their blogs and presentations finally convinced me to push for Node.js in my IBM team as well. I had the pleasure of talking to them at conferences multiple times, and I continue to like their overall approach.

PayPal is at its core a no-nonsense enterprise that moved from Java to Node.js. Everything I have seen coming from them had the same pragmatic approach, with concerns you can expect from running Node.js in production – security, i18n, converting a large contingent of Java engineers to Node.js.

More recently, I kept tabs on PayPal’s apparent move from Dust.js to ReactJS. Of course, this time around we learned faster and were already playing with React ourselves (using Dust.js for simpler, content-heavy pages and reserving ReactJS for more dynamic use cases). However, we hadn’t really started pushing on ReactJS because I was still looking at how to take advantage of React’s ability to render on the server.

Well, the wait is over. PayPal has won my heart again by releasing a React engine that connects the dots in a way so compatible with what we needed that it made me jump with joy. Unlike the version I used for my previous blog post, this one allows server side components to be client-mountable, offering true isomorphic goodness. Finally, a fat-free yogurt that does not taste like paper glue.

Curate and connect

The key importance of PayPal’s engine is in what it brings together. One of the reasons React has attracted so much attention lately is its ability to render into a string on the server, then render to the real DOM on the client using the same code. This is made possible by using NodeJS, which is by now our standard stack (I haven’t written a line of Java code for more than a year, on my honour).

But that is not enough – in order to carry over into the client with the same code, you need ‘soft’ page switching – showing boxes inside other boxes and updating the browser history as these boxes are swapped. This is brought to us by another great library – react-router. This module, inspired by Ember’s amazing router, is quickly becoming ‘the’ router for React applications.

What PayPal did in their engine was connect all these great libraries, then write important glue code to put it all together. It is now possible to mix normal server side templates with pages that start their life on the server, then continue on the client with the state preserved and ready to go as soon as JavaScript is loaded.

Needless to say, this was the solution we were looking for. As far as we are concerned, this will put an end to needless ‘server vs client’ wars, and allow us to have our cake and eat it too. Mmm, cake.

Show us some sample code

OK, let’s get our hands dirty. What many people need when writing applications is a hybrid between a site and an app – the ability to put together a web site that has one or more single-page apps embedded in it. We will build an example site that has two plain ReactJS pages rendered on the server, while the third page is really an SPA taking advantage of react-engine and the ability to go full isomorphic.

We will start by creating a shared layout JSX component to be consumed by all other pages:

var React = require('react');
var Header = require('./header.jsx');

module.exports = React.createClass({

  render: function render() {
    var bundle;

    if (this.props.addBundle)
      bundle = <script src='/bundle.js'/>;

    return (
      <html>
        <head>
          <meta charSet='utf-8' />
          <link rel="stylesheet" href="/css/styles.css"/>
        </head>
        <body>
          <Header {...this.props}></Header>
          <div className="main-content">
            {this.props.children}
          </div>
          {bundle}
        </body>
      </html>
    );
  }
});

We extracted the common header as a separate component that we require and inline:

var React = require('react');

module.exports = React.createClass({

  displayName: 'header',

  render: function render() {
    var linkClass = 'header-link';
    var linkClassSelected = 'header-link header-selected';

    return (
      <section className='header' id='header'>
        <div className='header-title'>{this.props.title}</div>
        <nav className='header-links'>
          <ul>
            <li className={this.props.selection=='header-home'?linkClassSelected:linkClass} id='header-home'>
              <a href='/'>Home</a>
            </li>
            <li className={this.props.selection=='header-page2'?linkClassSelected:linkClass} id='header-page2'>
              <a href='/page2'>Page 2</a>
            </li>
            <li className={this.props.selection=='header-spa'?linkClassSelected:linkClass} id='header-spa'>
              <a href='/spa/section1'>React SPA</a>
            </li>
          </ul>
        </nav>
      </section>
    );
  }
});

The header shows three main pages – ‘Home’, ‘Page2’ and ‘React SPA’. The first two are plain server side pages that are rendered by express and sent to the client as HTML:

var Layout = require('./layout.jsx');
var React = require('react');

module.exports = React.createClass({

  render: function render() {

    return (
      <Layout {...this.props}>
        <p>An example of a plain server-side ReactJS page.</p>
      </Layout>
    );
  }
});

On to the main course

The third page (‘React SPA’) is where all the fun is. Here, we want to create a single-page app so that when we navigate to it by clicking on its link in the header, all subsequent navigation inside it is client-side. However, true to our isomorphic requirement, we want the initial content of the ‘React SPA’ page to be rendered on the server, after which react-router and React components will take over.

To show the potential of this approach, we will build a very useful layout – a page with a left nav containing three links (Section 1, 2 and 3), each showing different content in the content area of the page. If you have seen such a page once, you have seen it a million times – this layout is the internet’s bread and butter.

We start building our SPA top-down. Our top level ReactJS component will reuse Layout component:

var Layout = require('./layout.jsx');
var React = require('react');
var Nav = require('./nav.jsx');
var Router = require('react-router');

module.exports = React.createClass({

  render: function render() {

    return (
      <Layout {...this.props} addBundle='true'>
        <Nav {...this.props}/>
        <Router.RouteHandler {...this.props}/>
      </Layout>
    );
  }
});

We have loaded the left nav as a Nav component:

var React = require('react');
var Link = require('react-router').Link;

module.exports = React.createClass({

  displayName: 'nav',

  render: function render() {
    var activeClass = 'left-nav-selected';

    return (
      <section className='left-nav' id='left-nav'>
        <div className='left-nav-title'>{}</div>
        <nav className='left-nav-links'>
          <ul>
            <li className='left-nav-link' id='nav-section1'>
              <Link to='section1' activeClassName={activeClass}>Section 1</Link>
            </li>
            <li className='left-nav-link' id='nav-section2'>
              <Link to='section2' activeClassName={activeClass}>Section 2</Link>
            </li>
            <li className='left-nav-link' id='nav-section3'>
              <Link to='section3' activeClassName={activeClass}>Section 3</Link>
            </li>
          </ul>
        </nav>
      </section>
    );
  }
});

This looks fairly simple, except for one crucial difference: instead of adding plain ‘a’ tags for links, we used Link components coming from the react-router module. They are the key to the magic here – on the server, they will render as normal links, but with ‘breadcrumbs’ allowing the React router to mount click listeners on them and cancel the normal navigation behaviour. Instead, clicks will cause the React components registered as handlers for these links to be shown. In addition, browser history will be maintained so that the back button and address bar work as expected for these ‘soft’ navigations.
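The idea can be sketched in a few lines of plain JavaScript. Note that this is an illustration of the concept, not react-router’s actual code – the function name and route list below are made up:

```javascript
// Toy model of 'soft' navigation: a router knows its own routes and only
// intercepts clicks on links it can handle, letting every other link fall
// through to a normal full-page load.
var clientRoutes = ['/spa/section1', '/spa/section2', '/spa/section3'];

function shouldIntercept(href) {
  return clientRoutes.indexOf(href) !== -1;
}

// A click handler would then do something along these lines:
//   if (shouldIntercept(link.pathname)) {
//     event.preventDefault();                      // cancel normal navigation
//     history.pushState(null, '', link.pathname);  // keep the address bar honest
//     // ...show the component registered for this route...
//   }

console.log(shouldIntercept('/spa/section2')); // true – handled on the client
console.log(shouldIntercept('/page2'));        // false – normal server round-trip
```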

The RouteHandler component is responsible for rendering whatever our route definition specifies for the current path:

var React = require('react');
var Router = require('react-router');
var Route = Router.Route;

var SPA = require('./views/spa.jsx');
var Section1 = require('./views/section1.jsx');
var Section2 = require('./views/section2.jsx');
var Section3 = require('./views/section3.jsx');

var routes = module.exports = (
  <Route path='/spa' handler={SPA}>
    <Route name='section1' handler={Section1} />
    <Route name='section2' handler={Section2} />
    <Route name='section3' handler={Section3} />
    <Router.DefaultRoute handler={Section1} />
  </Route>
);

As you can infer, we are not declaring all the routes for our site, just the section for the single-page app (under the ‘/spa’ path). There we have built three subpaths and designated React components as handlers for these routes. When a Link component whose ‘to’ property is equal to the route name is activated, the component designated as handler will be shown.

Server needs to cooperate

In order to get our HTML5 pushState-enabled router to work, we need server side cooperation. In the olden days, when SPAs used hashes to ensure that client side navigation did not cause page reloading, we didn’t need to care about the server because hashes stayed on the client. Those days are over – we want true deep URLs on the client, and we can have them using HTML5 pushState support.
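To make the hash-versus-pushState difference concrete, here is a toy comparison of how the same deep location looks in each style (illustrative helper functions, not a real API):

```javascript
// The same 'deep' location, old style vs. new style. The hash fragment
// never reaches the server; the pushState URL does, so the server must
// be prepared to handle it.
function hashDeepLink(base, path) {
  return base + '/#' + path;
}

function pushStateDeepLink(base, path) {
  return base + path;
}

console.log(hashDeepLink('http://example.com', '/spa/section2'));
// http://example.com/#/spa/section2
console.log(pushStateDeepLink('http://example.com', '/spa/section2'));
// http://example.com/spa/section2
```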

However, once we start using true links for everything, we need to tell the server to not try to render pages that belong to the client. We can do this in express like this:

app.get('/', function(req, res) {
  res.render('home', {
    title: 'React Engine Demo',
    name: 'Home',
    selection: 'header-home'
  });
});

app.get('/page2', function(req, res) {
  res.render('page2', {
    title: 'React Engine Demo',
    name: 'Page 2',
    selection: 'header-page2'
  });
});

app.get('/spa*', function(req, res) {
  res.render(req.url, {
    title: 'SPA - React Engine Demo',
    name: 'React SPA',
    selection: 'header-spa'
  });
});

Notice that we have defined controllers for two routes using the normal ‘res.render’ approach, but the third one is special. First off, we have used a wildcard so that all URLs under /spa are handled by the same controller, leaving the navigation inside /spa to the React router. Notice also that instead of passing a normal view name to res.render, we are passing the entire URL coming from the request. This particular detail is what makes ‘react-engine’ ingenious – the ability to mix react-router and normal views by looking for the presence of the leading ‘/’ character.
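That dispatch heuristic is simple enough to sketch. This is an illustration of the idea described above, not react-engine’s actual internals – the function name and return shapes are made up:

```javascript
// A render call receives either a view name ('home') or a request URL
// ('/spa/section2'); a leading '/' is the tell-tale sign that react-router
// should match the path instead of a view file being loaded directly.
function resolveRenderTarget(name) {
  if (name.charAt(0) === '/') {
    return { type: 'route', url: name };          // hand off to react-router
  }
  return { type: 'view', file: name + '.jsx' };   // plain server-side view
}

console.log(resolveRenderTarget('home'));
// { type: 'view', file: 'home.jsx' }
console.log(resolveRenderTarget('/spa/section2'));
// { type: 'route', url: '/spa/section2' }
```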

A bit more boilerplate

Now that we have all these pieces, what else do we need to get this to work? First off, we need the JS file to configure the react router on the client, and start the client side mounting:

var Routes = require('./routes.jsx');
var Client = require('react-engine/lib/client');

// Include all view files. Browserify doesn't do
// this automatically as it can only operate on
// static require statements.
require('./views/**/*.jsx', {glob: true});

// boot options
var options = {
  routes: Routes,

  // supply a function that can be called
  // to resolve the file that was rendered.
  viewResolver: function(viewName) {
    return require('./views/' + viewName);
  }
};

document.addEventListener('DOMContentLoaded', function onLoad() {
  Client.boot(options);
});

And to round it all out, we need to deliver all this JavaScript, and the JSX templates, to the client somehow. There are several ways to approach JavaScript modularization on the client, but since we are using Node.js and singing the isomorphic song, what could be more apt than using Browserify to carry CommonJS over into the client? The following command line will gather the entire dependency tree of index.js into one tidy bundle:

browserify -t reactify -t require-globify public/index.js -o public/bundle.js
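Rather than typing that by hand, the command can be wired into npm scripts – a sketch of the relevant package.json fragment (the script names and server file name here are assumptions, not necessarily what the demo repo uses):

```json
{
  "scripts": {
    "build": "browserify -t reactify -t require-globify public/index.js -o public/bundle.js",
    "start": "npm run build && node server.js"
  }
}
```

With something like this in place, ‘npm start’ rebuilds the bundle before starting express.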

If you circle back all the way to Layout.jsx, you will notice that we are including a sole script tag for /bundle.js.

The complete source code for this app is available on GitHub. When we run ‘npm install’ to bring in all the dependencies, then ‘npm start’ to run browserify and start express, the site comes up in the browser.


When we click on the header links, they cause a full page reload, rendered by the express server. However, clicks on the left nav links change the content of the SPA page without a page reload. Meanwhile, the address bar and browser history are dutifully updated, and deep links are available for sharing.


You can probably tell that I am very excited about this approach because it finally brings together fast initial rendering and SEO-friendly server-side pages with the full dynamic ability of client side apps. All excitement aside, we need to remember that this covers just the views – we would need to write more code to add an action dispatcher and data stores in order to implement a full Flux architecture.

Performance-wise, the combined app renders very quickly, but one element sticks out. Bundle.js in its full form is about 800KB of JavaScript, which is a lot. Minified, it is trimmed down to 279KB, and with compression enabled in express, it goes down to 62.8KB sent over the wire. We should bear in mind that this is ALL the JavaScript we need – ReactJS itself, as well as our own components. It should also be noted that this JavaScript is loaded asynchronously and that we are already sending rendered content from the server – we will not see a white page while the script is being downloaded and parsed.

In a more complex application, we would probably want to segregate JavaScript into more than one bundle so that we can load code in chunks as needed. Luckily, react-router already addressed this.

The app is deployed on Bluemix. Try it out, play with the source code and let me know what you think.

Great job, PayPal! As for me, I am completely sold on ReactJS for most real world applications. We will be using this approach in our current project from now on.

© Dejan Glozic, 2015

Should I Build a Site or an App? Yes!

Minnesota State Capitol Woodworkers Toolbox, circa 1900, Wikimedia Commons.

Yes, I know. I stopped blogging to take a desperately needed break. Then I returned only to be hit with a mountain of fresh, ‘hit the ground running’, honest to God January work that knocked the air out of my lungs and pinned me down for a while. Then an IBM colleague tried to ask me a Dust.js question, my doors were closed due to a meeting, and he found his answer in one of my blog posts.

So my blog is actually semi-useful, but it will stop being so without new content, so here is the first 2015 instalment. It is about one of my favorite hobbies – being annoyed with people being Wrong on the Internet. Judging by various discussion threads, developers are mostly preoccupied by these topics:

  1. All the reasons why AngularJS is awesome/sucks and will be the next jQuery/die in agony when 2.0 ships (if it ever ships/it will be awesome/cannot wait).
  2. Picking the right client side MVC framework (lots of people out there frozen into inaction while looking at the subtle differences of TODO app implementations in 16 different incarnations)
  3. Declaring client side single-page apps ‘the cool way’ and server side rendering ‘the old way’ of Web development

These topics are all connected, because if you subscribe to the point of view in (3), you either pray at the church of AngularJS (1) or you didn’t drink the Kool-Aid and subsequently need to pick an alternative framework (2).

Dear fellow full-stack developers and architects, that’s pure nonsense. I didn’t put an image of a toolbox at the top because @rands thinks it would nicely fit a Restoration Hardware catalog. It is a metaphor for all the things we learn along the way and stash in our proverbial tool box.

Sites and apps

The boring and misleading ‘server or client side apps’ discussion has its origin in the evolution of Web development. The Web started as a collection of linked documents with a strong emphasis on indexing, search and content. Meanwhile, desktop applications were all about programming – actions, events, widgets, panes. Managing content in desktop apps was not as easy as on the Web. On the flip side, having application-like behaviour on the Web was hard to achieve at first.

When Ajax burst onto the scene, this seemed possible at last, but many Ajax apps were horrible – they broke the Back button, didn’t respect the Web, were slow to load due to tons of JavaScript (the dreaded blank page), and the less I say about hashes and hash bangs in URLs, the better.

It is 2015 now and the situation is much better (and at least one IBM Fellow concurs). Modern Ajax apps are created with more predictable structure thanks to the client side MV* frameworks such as BackboneJS, AngularJS, EmberJS etc. HTML5 pushState allows us to go back to deep linking. That still does not mean that you should use a hammer to drill a hole in the wall. Right tool for the right job.

And please don’t look at native mobile apps in envy (they talk to the server using JSON APIs only, I should do that too). They are physically installed on the devices, while your imposter SPA needs to be sent over mobile networks before anything can be seen on the screen (every bit of your overbuilt, 1MB+ worth of JavaScript fatness). Yes, I know about caching. No, your 1MB+ worth of JavaScript still needs to be parsed every time with the underpowered JavaScript engine of the mobile browser.

But I digress.

So, when do you take out site tools instead of Web app tools? There are a few easy questions to ask:

  1. Can people reach pages of your app without authenticating?
  2. Do you care about search engine optimization of those pages? (I am curious to find people who answer ‘No’ to this question)
  3. Are your pages mostly linked content with a little bit of interactivity?

If this describes your project, you would be better off writing a server-side Web app (say, using NodeJS, express and a rendering engine like Handlebars or Dust.js), with a bit of jQuery and Bootstrap with a custom theme to round things out.

Conversely, these may be the questions to ask if you think you need a single-page app:

  1. Do people need to log in in order to use my site?
  2. Do I need a lot of complex interactive behaviour with smooth transition similar to native apps?
  3. Do I expect users to spend a lot of time in my app doing something creative and/or collaborative?

What if I need both?

Most people actually need both. Your site must have a landing page, some marketing content, documentation, support – all mostly content based, open to search engine crawlers and must be quick to download (i.e. no large JS libraries please).

Then there is the walled up section where you need to log in, and then interact with stuff you created. This part is an app.

The thing is, people tend to think they need to pick an approach first, then do everything using that single approach. When site people argue with app people on the Internet, they sound to me like Abbott and Costello’s ‘Who’s on First?’ routine. Site people want the home page to be fast, and don’t want to wait for AngularJS to download. They also don’t want content people to have to learn Angular to produce new pages. App people shudder at the thought of implementing all the complex interactions by constantly redrawing the entire page (sooner or later Web 1.0 is mentioned).

The thing is, they are both right and wrong at the same time. It may appear they want to have their cake and eat it too, but that is fairly easy to do. All you need to do is apply some care in how your site is structured, and give up on the ideological prejudice. Once you view server and client side techniques as mere tools in the toolbox, all kinds of opportunities open up.

Mixing and matching

The key to mixing sites and apps is your navigational structure. Where SPA people typically lose it is when they assume EVERYTHING in their app must be written in their framework of choice. This is not necessary, and most frameworks are embeddable. If you build your site navigation using normal deep links, you can render your navigational areas (for example, your site header) on the server and use these links as per usual. Your ‘glue’ navigational areas should not be locked into the client side MV* component model because they will not work on the server for the content pages.

What this means is that you should not write your header as an Angular directive or a jQuery plug-in. Send it as plain HTML from the server, with some vanilla JavaScript for dynamic effects. Keep your options wide open.

For this to work well, the single page apps that are folded into this structure need to enable HTML5 mode in their routers so that you can transparently mix and match server and client side content.

Now add micro-services and stir for 10 minutes

To make things even more fun, these links can be proxied to different apps altogether if your site is constructed using micro-services. In fact, you can create a complex site that mixes server-side content with several SPAs (handled by separate micro-services). This is the ultimate in flexibility, and if you are careful, you can still maintain a single site experience for the user.
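That proxying arrangement can be sketched with an illustrative Nginx fragment – the ports and paths below are made up for the sake of the example:

```nginx
# Each path prefix maps to a different micro-service, so the user sees
# one site while several Node apps do the work behind the scenes.
location /angular-seed/ {
    proxy_pass http://127.0.0.1:3001;   # SPA micro-service
}

location / {
    proxy_pass http://127.0.0.1:3000;   # server-rendered content pages
}
```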

To illustrate the point, take a look at the demo I created for the Full Stack Toronto conference last year. It is still running on Bluemix, and the source code is on GitHub. If you look at the header, it has several sections listed. They are powered by multiple micro-services (Node apps with an Nginx proxy in front). It uses the UI composition technique described in one of the previous posts. The site looks like this when you click on the ‘AngularJS’ link:


The thing is, this page is really a single-page app folded in, and a NodeJS micro-service sends AngularJS content to the browser, where it takes over. In the page, there are two Angular ‘pages’ that are selectable with two tabs. Clicking on the tabs activates Angular router with HTML5 mode enabled. As a result, these ‘pages’ have normal URLs (‘/angular-seed/view1’ and ‘/angular-seed/view2’).

Of course, when clicking on the links in the browser, Angular router will handle them transparently, but if you bookmark the deep URL and paste in the browser address bar, the browser will now hit the server first. The NodeJS service is designed to handle all links under /angular-seed/* and will simply serve the app, allowing Angular router to take over when loaded.

The really nice thing is that Angular SPA links can sit next to links such as ‘About’ that are a plain server-side page rendered using express and Dust.js. Why wrestle with Angular when a straightforward HTML page will do?

Floor wax and dessert topping

There you go – move along, nothing to see here. There is no point in wasting time on Reddit food fights. A modern Web project needs elements of server and client side approaches because most projects have heterogeneous needs. Once you accept that, real fun begins when you realize you can share between the server and the client using a technique called ‘isomorphic apps’. We will explore these techniques in one of the future posts.

© Dejan Glozic, 2015

Swimming Against The Tide

True story: I visited the Ember.js web site and saw three required hipster artifacts: ironic mustaches, cute animals and Ray-Ban Wayfarer glasses (on a cute animal). A tweet was in order, and within minutes it was favorited by a competing client side framework from Google (Angular). Who would have guessed client side frameworks are so catty? I can almost picture the Angular News guy clicking the ‘Favorite’ button and yelling ‘Oh, Burn!!’ And it wasn’t even a burn – I actually like the Ember web site, it is so … cute.

The reason I visited Ember (and Angular, and Backbone, and Knockout) was to figure out what was going on. There is this scene in the 2002 movie Gangs Of New York where Leonardo DiCaprio leads his gang of Dead Rabbits to fight the competing gang (the Natives), and he has to wade through a river of people running in the opposite direction to avoid cannons fired by the Navy from the harbor. Leonardo and his opponent, a pre-Lincoln Daniel Day-Lewis, were so engrossed in their epic fight that they missed the wider context of the New York Draft Riots happening around them. Am I like Leo (minus the looks, fame and fortune), completely missing the wider historic context around me?

Not long ago, I posted a repentant manifesto of a recovered AJAX addict. I swore off the hard stuff and pledged to only consume client-side script in moderation. The good people from Twitter and 37Signals all went through the same trials and tribulations and adopted a similar approach (or I adopted theirs). Most recently, Thomas Fuchs, the author of Zepto.js, expressed a similar change of heart based on his experiences in getting the fledgling product Charm off the ground. Against that backdrop, the noise around the client-side MVC frameworks mentioned above is reaching deafening levels, with all these people apparently not caring about the problems that burned us so much. So what gives?

There are currently two major camps in Web development right now, and they mostly differ in the role they allocate to the server side. The server side guys (e.g. Twitter, Basecamp, Thomas, yours truly) have been burned by heavy JavaScript clients and want to render the initial page on the server, subsequently using PJAX and modest amounts of JavaScript for delicious interactivity and crowd pleasers. Meanwhile, a large population of developers still wants to develop one-page Web apps in which the script takes over from the browser for long periods of time, relegating the server to the role of a REST service provider. I don’t want to repeat myself here (kindly read my previous article), but the issues of JavaScript size, parsing time, performance, memory leaks, browser history and SEO didn’t go away – they still exist. Nevertheless, judging by the interest in Angular, Backbone, Ember and other client side JavaScript frameworks, a lot of people think the tradeoffs are worth it.

To be fair, there is a third camp, populated mostly by the LinkedIn engineering team. They are in a category of their own because they are definitely not a one-page app, yet they do use Dust.js for client side rendering. But they also use a whole mess of in-house libraries for binding pages to services, assembling them, delaying rendering when below the fold, etc. You can read about it on their blog – suffice to say that, similar to Facebook’s Big Pipe, the chances you can repeat their architecture in your project are fairly slim, so I don’t think their camp is of practical value to this discussion.

Mind you, nobody is arguing the return to the dark ages of Web 1.0. There is no discussion whether JavaScript is needed, only whether all the action is on the client or there is a more balanced division of labor with the server.

I thought long and hard (that is, a couple of days tops) about the rise of JavaScript MVC frameworks. So far, this is what I have come up with:

  1. Over the last few years, many people have written a lot of crappy, unmaintainable, messy jumble of JavaScript. They now realize the value of architecture, structure and good engineering (client or server).
  2. A lot of people realize that the really smart people writing modern JavaScript frameworks will probably do a better job providing this structure than they could themselves.
  3. Many projects are simply not large enough to hit the general client side scripting turning point. This puts them in a sweet spot for client side MVC – large enough to be a mess and benefit from structure, not large enough to be a real pig that makes desktop browsers sweat and mobile browsers kill your script execution due to consuming too much RAM.
  4. These projects are also not easily partitioned into smaller contexts that can be loaded as separate Web pages. As a result, they rely on MVC JavaScript frameworks to perform data binding, partitioning, routing and other management.
  5. Modern templating engines such as Mustache or Handlebars can run both on the client and on the server, opening up the option of rendering the initial page server side.
  6. The JavaScript community is following the same path that Web 1.0 server side MVC went through: the rise of opinionated and prescriptive MVC frameworks that try to box you into good practices and increase your productivity at the price of control and freedom.
  7. The people using these frameworks don’t really have performance as their first priority.
  8. The people using these frameworks plan to write a separate or native mobile client.
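Point 5 deserves a small illustration. A toy stand-in for Mustache-style interpolation shows why the same template can run on either side – it is just a pure string-to-string function with no dependency on the DOM or the server. This is a simplification for the sake of the argument, not how Mustache or Handlebars are actually implemented:

```javascript
// A pure function like this runs identically in node and in the browser,
// which is what makes server-side initial rendering an option.
function render(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, function(match, key) {
    return data[key] !== undefined ? data[key] : '';
  });
}

var template = '<h1>{{title}}</h1><p>{{body}}</p>';
console.log(render(template, { title: 'Hello', body: 'World' }));
// <h1>Hello</h1><p>World</p>
```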

There could be truth in this, or I could be way off base. Either way, my team has no intention of changing course. To begin with, we are allergic to loss of control these frameworks demand – we subscribe to the camp that frameworks are bad. More importantly, we like how snappy our pages are now and want to keep them that way. We intend to keep an eye on the client MVC frameworks and maybe one day we will hit a use case where client side data binding and templates will prove useful (say, if we attempt something like Gmail or Google Docs or Google Calendar). If that happens, we will limit it to that particular use case, instead of going all in.

Meanwhile, @scottjehl perfectly describes my current state of mind thusly:


© Dejan Glozic, 2013