We Don’t Have MongoDB – is CouchDB OK?

First Choice Colas, Wikimedia Commons, 2007, Erik1980

‘It can’t be helped,’ the saddlebum said. ‘What’s happened is, you’ve overloaded your analogizing faculty, thereby blowing a fuse. Accordingly, your perceptions have taken up the task of experimental normalization. This state is known as “metaphoric deformation”.’

 

Robert Sheckley, “Mindswap”, 1966.

One of the sci-fi books that left a deep impression on my young mind was Mindswap by Robert Sheckley (no, I didn’t read it in 1966, in case you were wondering – I am not that old). It introduced a number of hilarious concepts that were surprisingly applicable to the real world, and not just to the crazy ones he created in the book. One of them was the aforementioned ‘metaphoric deformation’. According to Sheckley, it is a disease that often befalls interstellar travellers: a mind overwhelmed by stimulation it cannot process and make sense of starts generating more bearable hallucinations.

I think about metaphoric deformation a lot these days, when the world of software development is changing at an unprecedented pace, and in multiple dimensions at once. In our development workflow, we create a set of roles for libraries, databases, languages and platforms. We understand that things have changed and are willing to replace A with B, but we desperately try to keep the old structure in place and just swap the players.

Case in point – the MEAN stack. The open source world was happy with the LAMP stack (Linux, Apache, MySQL, PHP). Facebook now sits on the world’s biggest mountain of PHP and stores its precious big data in racks upon racks of MySQL, because that’s what Mark reached for in his Harvard dorm room to create thefacebook.com (OK, I know they bolted Hack on top of PHP, but still). Now that the world has changed unrecognizably, developers are searching for another four-letter stack lest they slip into a metaphoric deformation of their own.

And yet, that’s just a crutch. Let’s analyze what MEAN stands for (a sketch of the assembled stack follows the list):

  • M is for MongoDB – the go-to NoSQL database for MySQL withdrawal; it recommends itself mostly because of the SQL-like query language that makes porting manageable.
  • E is for Express.js – a middleware framework for Node.js that lets you recreate the familiar world of server-side MVC.
  • A is for Angular.js – because if MVC on the server is good, MVC on the client must be great, and the good people of Google have thought about everything so that you don’t have to (and if you do, it’s kind of hard to wrestle control back anyway).
  • N is for Node.js – because awesome.
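
For concreteness, here is roughly what the assembled stack looks like in action – a minimal sketch in which the database, model and route names are all made up, with the Mongoose module standing in as the usual MongoDB companion:

```js
// a hypothetical MEAN endpoint: Express routing, a Mongoose model,
// JSON out -- the old server-side MVC world with the players swapped
var express = require('express');
var mongoose = require('mongoose');

mongoose.connect('mongodb://localhost/shop'); // made-up database

// passing a plain definition creates the schema for you
var Product = mongoose.model('Product', { name: String, price: Number });

var app = express();

app.get('/products', function (req, res) {
  // not a million miles from 'SELECT * FROM products WHERE price < 100'
  Product.find({ price: { $lt: 100 } }, function (err, products) {
    if (err) return res.status(500).send('query failed');
    res.json(products);
  });
});

app.listen(3000);
```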

See what we have done? We just recreated our old world using the new tech. Very human, but the old world is gone for good, and we will have to do better.

Fearful for the fragile minds of their fellow citizens, the first engineers to experiment with cars simply set out to create ‘horseless carriages’. Replace a horse with an engine, keep everything else as it has always been since the dawn of time. Now look around you – does the ten-airbag, GPS-connected, direct-fuel-injecting, microprocessor-controlled car that just whizzed by bear any resemblance to a horseless carriage?

Nowhere is this inclination to replace the parts but keep the structure more bizarre than in the current software world. I am not the first to notice this – Forrest Norvel from New Relic observed it in his presentation Beyond the MEAN Stack.

Forrest noticed that Node.js is all about modularity and freedom of choice. For every letter in the MEAN stack (except N, of course – let’s not get crazy here) there is an alternative. Instead of MongoDB, there are other NoSQL databases. Instead of Express.js, there is Hapi.js (created by Eran Hammer, a person I have deeply admired ever since he left the OAuth 2.0 working group with a bang). Even the author of Express, the prolific TJ Holowaychuk, has moved on to the next-generation framework Koa, which uses ES6 generators and looks very promising. Instead of Angular, there is Ember.js, or Backbone.js, or Knockout.js, or none at all (why do you think you absolutely need MVC on the client if some jQuery or even just vanilla JS would suffice?).
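
For a taste of why Koa looks promising, here is roughly what its middleware style looks like – a sketch against Koa’s early, generator-based API, with the response-time middleware borrowed from Koa’s canonical example:

```js
var koa = require('koa');
var app = koa();

// middleware as a generator: everything before 'yield next' runs on the
// way downstream, everything after it runs on the way back upstream
app.use(function *(next) {
  var start = Date.now();
  yield next;
  this.set('X-Response-Time', (Date.now() - start) + 'ms');
});

app.use(function *() {
  this.body = 'Hello from Koa';
});

app.listen(3000);
```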

Adrian Rossouw dared to ask the right question – why the crazy stampede to MongoDB? There are many situations where another NoSQL database is a better fit, so why instinctively reach for the ‘MySQL of NoSQL databases’, other than to keep yourself from slipping into metaphoric deformation?

Due to some legal issues (the AGPL license of the MongoDB server gives our lawyers heartburn), we were slow to make that instinctive choice ourselves. Then IBM bought Cloudant, and it became an easy decision – use our very own NoSQL DB, and let somebody else manage it to boot. True master-master replication can’t hurt either.

Not so fast. In case you are not familiar with it, Cloudant is a hosted CouchDB fork with Apache Lucene* on top. CouchDB itself set out to do the key things well, not to help developers transition to the new world by dragging in their old habits. We were staring at map/reduce views, feeling the anxiety of the New and looking for ways to make dynamic queries.

It turns out you can’t. That is what Apache Lucene is there for – if you need to create ad hoc queries based on computed constraints, you use Apache Lucene. Map/reduce views are for predefined queries – things you can craft in advance.
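
To make the split concrete, here is a sketch of a single design document carrying both styles – a map/reduce view for the query you crafted in advance, and a Cloudant search index (the Lucene part) for the queries you make up on the spot. The database, document type and field names are all hypothetical:

```js
// CouchDB/Cloudant store view and index functions as strings, hence the
// toString() calls; emit() and index() are provided by the server when
// the functions actually run
var designDoc = {
  _id: '_design/app',
  views: {
    filters_by_user: {
      // predefined query: emit every filter keyed by its owner
      map: function (doc) {
        if (doc.type === 'filter') { emit(doc.userId, null); }
      }.toString()
    }
  },
  indexes: {
    filters: {
      // ad hoc queries: tell Lucene which fields to make searchable
      index: function (doc) {
        if (doc.type === 'filter') {
          index('name', doc.name);
          index('tag', doc.tag);
        }
      }.toString()
    }
  }
};

// predefined: GET /db/_design/app/_view/filters_by_user?key="u123"
// ad hoc:     GET /db/_design/app/_search/filters?q=tag:sales AND name:quart*
```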

Paul Klicnik, a developer from our team, spent a lot of time banging his head against the Cloudant wall, looking wistfully over into MongoDB land, where he could have simply formed a dynamic query in JavaScript. However, once he realized he could use Apache Lucene for that, things started looking up. We ended up with a very fast, solid implementation of a Node.js micro-service that uses the nano module to persist data in Cloudant.
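
A minimal sketch of what that looks like with nano, assuming a hypothetical Cloudant account, database and document shape:

```js
var nano = require('nano')('https://user:password@account.cloudant.com');
var db = nano.db.use('filters');

// persist a document
db.insert({ type: 'filter', userId: 'u123', name: 'My open defects' },
  function (err, body) {
    if (err) return console.error(err);
    console.log('stored as', body.id);
  });

// query it back through the predefined map/reduce view
db.view('app', 'filters_by_user', { key: 'u123' }, function (err, body) {
  if (err) return console.error(err);
  body.rows.forEach(function (row) {
    console.log('found filter', row.id);
  });
});
```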

In Paul’s own words:

My experience is that map/reduce is a little tricky to get the hang of, but it’s very powerful. Now that I’ve experienced it for myself, I can say I like it a lot. I’m actually using a regular view with map/reduce for retrieving user filters, and it works great. There is something very simple and elegant about Couch. It’s focused on being very good in certain scenarios, and not trying to be a general purpose database which can solve every problem. I also like that the API is RESTful – I know it’s a small detail, but REST is already pretty natural for us, and it makes it easy to work with.

Apparently, Paul is firmly in the new world now – using CouchDB the way it was intended, not pretending it is an SQL database that no longer requires you to create tables.

Of course, not everything is perfect:

Without the Lucene functionality, we would not be able to use CouchDB for the type of querying we need to do. I have mixed feelings about this. So far, I have had zero problems with Lucene and it performs great, but I’m uneasy about the fact that someone essentially shoe-horned in a major feature. I view Cloudant as a two-headed monster – one head being normal CouchDB that behaves as expected, the other a weird appendage that seems to be behaving normally but you’re never quite sure if it’ll bite your head off. I’m being a bit dramatic, but I’m not fond of solutions where someone alters the functionality and attempts to make a product do something that it was never intended to do, ultimately compromising the system on the way. I realize that this is a purist view; maybe under the covers everything is technically sound.

I have added Paul’s second comment in case you interpreted this post as an ad for Cloudant. We are just finding our way in the new world and trying to use it as intended, and sometimes it gets a little weird. I guess that’s half the fun.

As I always say, “si fueris Rōmae, Rōmānō vīvitō mōre” (rough translation from Latin: “when in Rome, for the love of God stop eating at McDonald’s – there are like a million trattorias serving actual Italian pizza and stuff”). Stop searching for new stacks to recreate your familiar world and instead open your mind to new ways of looking at things.

Or even better – don’t create a single stack at all. For each element of the old stack, go through the several awesome choices available today and take your pick based on suitability for the task at hand. Contractors don’t own a single hammer, a single saw and a single drill – why should you?

Oh, and one more thing – stop drinking caffeinated sugary drinks. Both of them.

 

*The original version of this article claimed Cloudant uses ‘Lucene Elastic Search’. This was a mistake, although Apache Lucene (definitely not Elastic Search, nor ‘Lucene Search’) is involved, as per the company website.

© Dejan Glozic, 2014


Of Mice and Men


The best laid schemes of mice and men
Often go awry.

Robert Burns, 1785.

Dear readers of this blog, you may have noticed a lapse in the rhythm that I faithfully followed from the very beginning (a new post every Tuesday, more recently switching to Monday). You may have inferred that I am ‘recharging’, away, or simply taking a holiday break.

Far from it. I had a plan to look back at the blog and the topics I have covered over the last six months. For example, looking back at the post on databases, I have second thoughts about using an SQL database for problems that are a better fit for something like MongoDB. Our practical experience is that schema changes in a rapidly evolving environment are a real drag, and being able to evolve the data on demand in a schema-less database would be really beneficial to us.

Other posts are holding up. We are as averse to Extreme AJAX as ever, but we are now seeing use cases where some client-side structure (possibly using Backbone.js) would be beneficial, particularly when using Web Sockets to push updates to the client. We are still allergic to frameworks and their aggressive takeover of your life and control over your code. And we are still unhappy with the horrible hacks required to implement client-side templating. One day, when Web Components are supported by all the modern browsers, we will remember this time as an ugly intermezzo in which we had to resort to a lot of JavaScript in lieu of the first-class Web composition that Web Components provide (you can bookmark this claim and come back to laugh at me if Web Components do not deliver on their promise).

As I said, I planned to cover all that, and I had also lined up a holiday project. We had recently implemented nice dashboards for our needs in Jazz Platform and Jazz Hub, and I was curious how hard it would be to re-implement the server side using Node.js and MongoDB (instead of JEE and an SQL DB). The client side is already fine, using jQuery, Bootstrap and Require.js, and does not need to change. I also wanted to see how hard it would be to configure Express.js to use the LinkedIn fork of the Dust.js templating library. The PayPal team recently had a lot of fun moving their production applications to exactly that stack, and I was inspired by Bill Scott’s account of that effort.
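
For the curious, the wiring I was planning to try looks roughly like this – a sketch, assuming the dustjs-linkedin module with the consolidate.js engine shim doing the glue work:

```js
// npm install express consolidate dustjs-linkedin
var express = require('express');
var cons = require('consolidate');

var app = express();

app.engine('dust', cons.dust);           // map *.dust templates to Dust.js
app.set('view engine', 'dust');
app.set('views', __dirname + '/views');

app.get('/', function (req, res) {
  // renders views/index.dust with this model
  res.render('index', { title: 'Dashboards, the Node.js way' });
});

app.listen(3000);
```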

The plan was to write the blog post on Sunday the 22nd and play with Node.js over the next two weeks. On Saturday the 21st, the eastern US and Canada were hit by a nasty ice storm. The storm deposited huge amounts of ice on the electrical wires and trees in midtown Toronto, where I live. Ice is very heavy, and after a while huge mature trees that are otherwise great to have around started giving up, losing limbs and knocking down power lines in the process. By 3am on Sunday the 22nd we had lost power, along with about a million other people in Toronto at the peak.

Losing power in a low-rise means no heat, no Internet or cable, no cooking and no hot water (add ‘no water’ for a high-rise). We conserved our phones, following the progress of the Toronto Hydro power restoration via the #darkTO hashtag on Twitter (everything else was useless). The Toronto Hydro outage map crashed under load, resulting in Toronto Hydro posting the map via Twitter picture updates (kudos to Twitter’s robust infrastructure, which is more than I can say for Toronto Hydro’s servers). After a while, we drained our phones. I charged my phone from a fully charged MacBook Pro (an operation I was able to repeat twice before the MacBook Pro lost its juice). We had four laptops to use as door stops/phone chargers. I could have read some eBooks on a fully charged iPad, but somehow I was not in the mood. Dinner was cold sandwiches by candlelight. Not as romantic in a cold room.

By Sunday night, the temperature in the apartment had dropped to 19.5C (that’s 67F for my American friends). We slept fully clothed. On Monday morning we packed up and went to the IBM building in Markham that still had power, to shower, get some core temperature back, eat a warm meal and charge all the devices. We also used the opportunity to book a hotel room in downtown Toronto (no big trees to knock down power lines – yay for the big soulless downtown buildings). By the time we went back home to pack, the room temperature had dropped to 18C. The temperature outside fell to a bitterly cold -10C, heading to -14C overnight.

Overnight on Monday, the power was restored. We returned on Tuesday morning, only to find the Internet and cable inoperative. Estimated time for repair – 22 hours. In addition, our building is somewhat old and its hot water furnace does not like going from zero to full blast quickly, resulting in a temperature 2-3 degrees lower than usual. It was only a matter of time until I succumbed to the common cold.

Internet and cable were restored on Wednesday, four days after the outage started. Over the last couple of days the winter outside has let up a bit, allowing the ice on the trees to melt and the furnace to bring the temperature back to normal levels. My cold is on the downswing – enough for me to write this blog post. I will still need to wait for the cold-induced watery eyes to return to normal before taking the planned photos for my 2014 Internet profiles.

Why am I writing all this? Just to show you that the real takeaway message for 2013 is not the horrible first-world problems we grapple with daily (should I use Node.js or not, should I use MongoDB or CouchDB or just stay with RDBs), but that most of us are privileged to live with the trappings of civilization, allowing us not to worry about warm water, heat, food and clean clothing. On Wednesday, when my daughter was seriously unhappy that the Internet was not back yet, I felt a bit like ‘the most ungrateful generation’ from the Louis CK bit (“Everything is amazing and nobody is happy”). As I am writing this, some of my fellow Torontonians are still without power. Their houses are probably icicles by now. My heart goes out to them, and I hope they get power back as soon as possible.

As for myself, I can tell you that the fact that writing Node.js code may land you in callback hell didn’t mean much while I was sitting in a cold room, lit by candlelight, frantically refreshing the #darkTO thread while my battery was slowly draining. A lesson in humility, and a reason to count your blessings, delivered in the time of the year when we normally watch the re-runs of It’s a Wonderful Life on TV (if you still watch TV, that is).

Therefore, all of you reading this, have a great 2014! If you are in a warm room, have clean clothes and a warm meal, and can read this (i.e. your wifi is operational), your life is amazing and you are better off than many of your fellow human beings. Now go back to the most pressing topics of your lives.

© Dejan Glozic, 2013

Sitting on the Node.js Fence


I am a long-standing fan of Garrison Keillor and his Prairie Home Companion. I fondly remember a Saturday on which he delivered his widely popular News From Lake Wobegon containing the following conundrum: “Is ambivalence a bad thing? Well, yes and no”. It also accurately describes my feelings towards Node.js and the exploding Node community.

As I am writing this, I can hear the late, great Joe Strummer singing Should I Stay or Should I Go in my head (“If I stay it will be trouble, if I go it will be double”). It is really hard to ignore the passion that Node.js has garnered from its enthusiastic supporters, and if the list of people you follow on Twitter is full of JavaScript lovers, Node.js becomes the Kim Kardashian of your Twitter feed (as in, every third tweet is about it).

In case you were somehow too preoccupied by, say, living life to the fullest to notice or care about Node.js: it is a server-side framework written in JavaScript, sitting on a bedrock of C++, compiled and executed by Google’s Chrome V8 engine. A huge cottage industry has sprung up around it, and whatever you may need for your Web development, chances are there is a module that does it somewhere on GitHub. Two years ago, in a widely re-tweeted event, Node.js overtook Ruby on Rails as the most watched GitHub repository.

Recently, in a marked trend of maturation beyond exploration and small side projects, some serious companies have started deploying Node into production. I heard all about how, this US Thanksgiving, most traffic to Walmart.com came from mobile devices and was handled by Node.js servers.

Then I heard from Nick Hart how PayPal switched from JEE to Node.js. A few months ago I stumbled upon an article about Node.js taking over the enterprise (like it or not, to quote the author). I am sure more heartwarming stories were traded at the recent #NodeSummit. Still, not to get carried away: apart from mobile traffic, Node.js only serves a tiny panel of the Walmart.com web site, so it is more of a ‘foot in the door’ than a total rewrite for the desktop. Even earlier, the LinkedIn engineering team blogged about their use of Node.js for LinkedIn Mobile.

It is very hard to cut through the noise and get to the core of this surge in popularity. When you dig deeper, you find multiple reasons, not all of them technical. That is why discussions about Node.js sometimes sound to me like Abbott and Costello’s ‘Who’s on First?’ routine. You may be discussing technology while other participants are discussing hiring perks, or the skill profile of their employees, or context switching. So let’s count the ways why Node.js is popular, and note how only one of them is actually technical:

  1. Node.js is written from the ground up to be asynchronous. It is optimized to handle problems where there is a lot of waiting for I/O. Because tasks that are not waiting for I/O can be processed while others are queued up, throughput can increase without the processing and memory overhead of the traditional ‘one blocking thread per request’ model (or, even worse, ‘one process per request’). The sweet spot is when all you need to do is lightly process the result of I/O and pass it along. If you need to spend more time on procedural number-crunching, you have to spawn a ‘worker’ child process – at which point you are re-implementing threading (see the sketch after this list).
  2. Node.js uses JavaScript, allowing front-end developers already familiar with it to extend their reach into the server. Most new companies (i.e. not the Enterprise) have a skill pool skewed towards JavaScript developers (compared to the Enterprise world, where Java, C/C++, C# and similar are in the majority).
  3. New developers love Node.js and JavaScript and are drawn to it like moths to a candelabra, so having Node.js projects is a significant attraction when competing for resources in a developer-starved new IT economy.
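
To illustrate the first point, here is a minimal sketch of both halves: I/O-bound work that rides the event loop for free, and CPU-bound work pushed into a forked child process. The report file and the crunch.js worker module are made up:

```js
var http = require('http');
var fs = require('fs');
var fork = require('child_process').fork;

http.createServer(function (req, res) {
  if (req.url === '/report') {
    // I/O-bound: the read is handed off to the OS; the event loop is
    // free to serve other requests until the callback fires
    fs.readFile(__dirname + '/report.json', function (err, data) {
      res.end(err ? 'no report' : data);
    });
  } else {
    // CPU-bound: number-crunching would block the loop, so fork a
    // worker (crunch.js is hypothetical) and reply when it messages back
    var worker = fork(__dirname + '/crunch.js');
    worker.on('message', function (result) {
      res.end(JSON.stringify(result));
    });
    worker.send({ from: 0, to: 1e8 });
  }
}).listen(3000);
```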

Notice how only the first point is actually engineering-related – an observation so prevalent that it can fit into a tweet.

Then there is the issue of storage. Many libraries have been made available for Node.js to connect to SQL databases, but that would be like showing up at an Arcade Fire concert in a suit. Oh, wait, that actually happened.

Never mind then. Back to Node.js: why not go all the way and use something like MongoDB, storing JSON documents and using JavaScript for DB queries? Now not only do front-end developers not need to chase down server-side Java guys to change the served HTML or JSON output, they don’t have to chase DBAs to change the DB models either. My point is that once you add Node.js, it is highly likely you will revamp your entire stack and architecture, and that is a huge undertaking even without deadlines to spice up your life with a wallop of stress (making you park in the wrong parking spot in your building’s garage, as I did recently).
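
This is the kind of thing that makes the all-JSON stack seductive – a query assembled on the fly in plain JavaScript, straight from the Node.js MongoDB driver. A sketch with a made-up collection and fields:

```js
var MongoClient = require('mongodb').MongoClient;

// imagine this flag arrived from the client with the request
var inStockOnly = true;

MongoClient.connect('mongodb://localhost:27017/shop', function (err, db) {
  if (err) throw err;

  // assemble the query object dynamically -- no DBA to chase,
  // no schema to migrate
  var query = { price: { $lt: 100 } };
  if (inStockOnly) { query.inStock = true; }

  db.collection('products').find(query).toArray(function (err, docs) {
    if (err) throw err;
    console.log(docs);
    db.close();
  });
});
```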

Now let’s try to apply all this to a talented team in a big corporation:

  1. The team is well versed in both Java (on the server) and JavaScript (on the client), and has no need to chase down Java developers to ‘add a link between pages’, as in PayPal’s case. The team also knows how to write decent SQL queries and handle the model side of things.
  2. The team uses IDEs chock-full of useful tools, debuggers, incremental builders and other tooling that make for a great workflow, particularly when the going gets tough.
  3. The team needs to deliver something new on a tight schedule and is concerned about adding to the uncertainty of ‘what to build’ by also figuring out ‘how to build it’.
  4. There is a fear that things the team has grown to expect as gravity (debugging, logging etc.) are still missing or immature.
  5. While there is a big deal made about ‘context switching’, the team does it naturally – they go from Java to JavaScript without missing a beat.
  6. There is a whole slew of complicated licensing reasons why some Open Source libraries cannot be used (a problem startups rarely face).

Mind you, this is a team of very bright developers who eat JavaScript for breakfast and feel very comfortable around jQuery, Require.js, Bootstrap etc. When we overlay the reasons to use Node.js on this picture, the only one that still remains is handling a huge number of requests without paying the RAM/CPU tax of blocking, one-thread-per-request I/O.

As if things were not murky enough, the Servlet 3.1 spec, supported by JEE 7, now comes with asynchronous request processing (using NIO) and protocol upgrade (from HTTP/S to WebSockets). These two things together mean that the team above has the option of using non-blocking I/O from the comfort of its Java environment, and can write non-blocking code that pushes data to the client using WebSockets. Both capabilities are currently the key technical reasons to jump ship from Java to Node. I am sure Oracle added them feeling the heat from Node.js, but now that they are here, they may be just enough of a reason not to make the transition yet, considering the cost and overhead.

Update: after posting the article, I remembered that Twitter has used Netty for a while now to get the benefit of asynchronous I/O coupled with the performance of the JVM. Nevertheless, the preceding paragraph is for JEE developers, who can now easily start playing with async I/O without changing stacks. Move along, nothing to see here.

Then there is also the ‘Facebook effect’. Social scientists have noticed the emergence of a new kind of depression caused by feeling bad about your life compared to carefully curated projections of other people’s lives on Facebook. I have yet to see a posting like “my life sucks and I did nothing interesting today” or “I think my life is passing me by” (I subsequently learned that Sam Roberts did say that in Brother Down, but he is just pretending to be depressed, so he does not count). Is it possible that I am only hearing about Node.js success stories, while failed Node.js projects are quietly shelved, never to be spoken of again?

Well, not everybody is suffering in silence. We all heard about the famous Walmart memory leak that is now thankfully plugged. Or, since I already mentioned MongoDB, how about going knee-deep into BSON to recover your data after your MongoDB has had a hardware failure? Or the brouhaha about the MongoDB 100GB scalability warning? Or a sane article by Felix Geisendörfer on when to use and, more importantly, when NOT to use Node.js (as a Node.js core alumnus, he should know better than many)? These are all stories from the front wave of adopters, and the inevitable rough edges will be filed down over time. The question is simply – should we be the ones doing the filing, or can somebody else do it for us?

In case I sound like I am justifying reasons why we are not using Node.js yet, the situation is quite the opposite. I completely love the all-JavaScript stack and play with it constantly in my pet projects. In a hilarious twist, I am acting like a teenage Justin Bieber fan, and the (younger) lead of the aforementioned team needs to bring me down with a cold dose of reality. I don’t blame him – I like the Node.js-based stack in principle, while he would have to burn the midnight oil making it work at enterprise scale and debugging the hard problems that are inevitable with anything non-trivial. Leaving aside my bad case of FOMO (Fear Of Missing Out), he will only be persuaded by a problem where Node.js is clearly superior, not just another way of doing the same thing.

Ultimately, it boils down to whether you are trying to solve a unique problem or just play with a new and shiny technology. There is a class of problems that Node.js is perfectly suitable for. Then there are problems where it is not a good match. Leaving your HR and skills reasons aside, the proof is in the pudding (or as a colleague of mine would say, ‘the devil is in the pudding’). There has to be a unique problem where the existing stack is struggling, and where Node.js is clearly superior, to tip the scales.

While I personally think that Node.js is a big part of an exciting future, I have no choice but to agree with Zef Hemel that it is wise to pick your battles. If we were rebuilding something for the third time and knew exactly what we were building, Node.js would be a great way to make that project fun. However, since we are in ‘white space’ territory, choosing tried and true (albeit somewhat boring) building blocks and focusing our uncertainty on what we want to build is a good tradeoff.

And that’s where we are now – with me sitting on the proverbial fence, on a constant lookout for that special and unique problem that will finally open the Node.js floodgates for us. When that happens, you will be the first to know.

© Dejan Glozic, 2013

Making Gourmet Pizza in the Cloud

A while ago I had lunch with my wife in a Toronto midtown restaurant called “Grazie”. As I was perusing the menu, a small rectangle at the bottom attracted my attention. It read:

We strongly discourage substitutions. The various ingredients have been selected to complement each other. Substitutions will undermine the desired effect of the dish. Substitutions will slow down our service.

When I read it first, I filed it under ‘opinionated chef’ (as if there is any other kind) and also remembered another chef named Poppie from a Seinfeld episode, who opined on this topic thusly:

POPPIE: No, no. You can’t put cucumbers on a pizza.

KRAMER: Well, why not? I like cucumbers.

POPPIE: That’s not a pizza. It’ll taste terrible.

KRAMER: But that’s the idea, you make your own pie.

POPPIE: Yes, but we cannot give the people the right to choose any topping they want! Now on this issue there can be no debate!

Fast forward to 2013. This whole topic came back to my volatile memory as I was ruminating about what the cloud platform model means for web app developers. In the years before the cloud, we would carefully consider our storage solution, our server and client technology stack and other elements, with an eye on how the application was going to be deployed. You wanted to minimize support and maintenance costs and administration headaches, and also to achieve economies of scale. A large web application is composed of components, and economies of scale can be achieved if one architectural style is followed throughout – one database type, one web framework that supports components, and so on.

In this architecture, it is just not feasible to give components a say in the matter. Maintaining three different databases, or forcing three web frameworks to cooperate, is just not practical, even if those choices would be perfect for the components in question. The ‘good enough’ fit was a fair compromise for consistency and ease of support and maintenance.

In contrast, a modern web application is becoming a loose collection of smaller apps running on a cloud platform and communicating via agreed-upon protocols. One of the key selling points of the cloud platform is that it is easy to write and publish an app and have it running within minutes. I am sure early successes are important for learning, and many apps will be tactical and time-boxed – up for a couple of months, then scrapped. But a much more important aspect of the new architecture to me is the opportunity to carefully curate the components of your federated application, the way an opinionated chef selects ingredients for a gourmet pizza.

Let me explain. A modern cloud platform such as EC2, Heroku or Cloud Foundry provides a veritable buffet of databases (MySQL, Postgres, MongoDB, SQL Server), caching solutions (Memcached, Redis), message queues (RabbitMQ), language environments (Java, Scala, Python, Ruby, JavaScript) and frameworks (Django, Spring, Node/Express, Play, Rails). You can then package a carefully curated collection of client-side JavaScript frameworks to include in your app. You have the freedom to choose, and not only in one way:

  1. You have the freedom to select the database, server and client side that best fit the problem a single app is trying to solve. You can write a computationally intensive app in Java or Scala, a front-end app with a lot of I/O and back-and-forth chatter in Node.js, and another in Django or Ruby on Rails simply because you like it and it makes you more productive. You don’t have to use whatever the entire team has chosen if it does not fit your exact needs.
  2. You then have the freedom to compose your federated application out of individual apps cooperating as services, each one optimized for its particular task (a minimal sketch of two such cooperating apps follows this list). As I said earlier, this would have been prohibitively expensive and impractical in the past, but it is now attainable for smaller and smaller providers. The cloud platform has subsumed the role of the economy-of-scale vehicle previously played by large applications deployed on premises.
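
As a minimal illustration of the second point, here are two Node.js apps cooperating as services, with plain HTTP and JSON as the agreed-upon protocol. The port and payload are made up, and on a real platform each app would also pick its own storage and stack:

```js
var http = require('http');

// app A: a 'menu' service, free to use whatever stack and storage fit it
http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify([{ name: 'Margherita' }, { name: 'Quattro Formaggi' }]));
}).listen(3001, function () {
  // app B: a consumer that only knows app A's HTTP contract
  http.get({ host: 'localhost', port: 3001, path: '/' }, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
      console.log('menu:', JSON.parse(body));
    });
  });
});
```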

The vision of a polyglot future has already been declared for programming languages by Dave Fecak et al and for storage solutions by Martin Fowler. A common thread in both is ‘best fit for the task’. In the former, an additional angle was the employment prospects of polyglot developers; in the latter, a caveat that polyglot persistence comes at the expense of additional complexity.

I think my point of view is subtly different: on cloud platforms, the complexity of managing many different systems is mitigated by the fact that the platform is responsible for most of it. And while CFOs are salivating over the prospect of slashed IT expenses in hosted solutions, I am more intrigued by another unexpected side-effect: using the right tool for the right job, and using multiple tools within the same project, is now attainable for smaller players. All of a sudden, an application running multiple servers, each using a different database type, is trivially possible for even the smallest startups.

We will never again be asked to put cucumber on our pizza just because the team as a whole has decided so, even though we hate cucumber. Conversely, we CAN put cucumber on just our pizza, even though the rest of the team doesn’t like it, if it works wonderfully in our unique combination. In the new world of cloud platforms, we can all make gourmet-pizza-style applications, each carefully put together, each uniquely delicious, and each surprisingly affordable. The opinionated chef from Grazie now seems almost prophetic: in the cloud, we DO want to select various ingredients to complement each other, because substitutions will undermine the desired effect and slow down our service(s).

You could almost pass the restaurant’s menu off as the white paper.

© Dejan Glozic, 2013