Lean Startup Inc.

Y2K was a banner year for behavioral scientists. In the years leading up to the bursting of the Internet bubble, most investors reported on their profiles that their risk tolerance was ‘high’. At the time, they assumed that the risk meant they could get an even higher return on their speculative investments than they had hoped for in their greedy minds. Then came Y2K and the realization that ‘risk’ is a double-edged sword, and both sides are very, very sharp. Many investors got so burnt that for more than a decade they reacted to the stock market the way Bela Lugosi reacted to garlic (now, Bauhaus told me in my youth that Bela Lugosi’s dead, but there will never be another Dracula for me but Bela).

The Lean Startup movement, which I already mentioned in one of my previous posts, is meant first and foremost for actual startups. It applies perfectly to a couple of founders and a few first employees rapidly burning through Angel Investors’ money, walking around with starry eyes, dreaming of becoming ‘the next Facebook’ and drawing up a short list of islands they will buy once that happens. It is a mad race to get to the business end of the ‘hockey stick’ chart – the part where revenue takes off to actually finance the business (or, in the case of Snapchat, becomes greater than zero). The challenge is to reach this point before venture capital runs out and the repo men cometh.

A more expansive reading of the methodology is that it can be applied to any new effort amid extreme uncertainty, even in huge multinationals or governments. Any corporate team attempting to build version 1.0 of a product or a service is effectively a startup. Instead of VCs, there are executive sponsors, and instead of actual money, there is headcount – the number of people working on the project until the business declares that time’s up.

I am a big fan of Michael Lopp and his alter ego Rands. I followed his blog for years, and even bought his book Being Geek. In the chapter called ‘A Deliberate Career’, he described a ‘third option’, apart from a startup and an established company:

There’s a constant threat in a start-up, and that’s the threat of failure. You can ignore it when you’re busily working three weekends straight, but it’s always there: “We could fail.” The larger company’s success has hidden this threat under a guise of predictability, domesticity, and sheer momentum.
Still, you can find the same [start-up] attributes in a large company in a specific group that has been tasked with the new and sexy.

It appears that working on an exciting new project in an established company is a win-win situation – all the fun of a startup but no risk. But there is a bug here. Fear is a powerful motivator. Remove it, and everything slows down, because if the cheque clears like clockwork, what’s the hurry?

In my recent post on client-side frameworks, I kept mentioning Angular.js. On the surface, it seems to fit the description of a ‘best of both worlds’ project – Open Source exposure backed by the security of Google. Then you have to remember that Google takes the spaghetti approach to its services – throw a plateful at the wall and see which strands stick. So Angular.js is great, unless your team works on Closure or GWT or Dart, and you know that at any point the axe can fall and people will be leaving flowers for it in the graveyard of Google Services. That’s enough fear for me, thank you.

A friend from my youth went to spend some time in Greece. He didn’t have a lot of money and the beautiful island of Santorini was not exactly cheap, so he had to do all kinds of jobs to support himself. He told me and my wife something that stuck:

Not knowing where your next meal will come from does wonders to focus your mind.

Since lack of fear means lack of alertness, sense of urgency and get-up-and-go, teams working on new projects in large companies often lose sight of the possibility that the project may not result in an actual product. Your mind is playing tricks on you, similar to the DotCom investors who had ‘high risk tolerance’. Large companies don’t fail, you say. And you are right – not in the way startups do. With no buffer and no sheer mass to amortize the blow, any kind of snag can knock you down if you are a startup. Still, in a big company you can fail to see your vision through to completion, and miss the joy of customers actually finding your product useful. You may still have your job, but unless you are in it just for the paycheque and your true passion lies elsewhere, that’s got to hurt a bit. And if your heart is not in it, what exactly are you doing all day? Checking Facebook? Laughing at Doge pictures?

A much healthier approach for you if you want to continue to be on the bleeding edge is to accept that all startups can fail, even those in big companies (in their own soft way). The fact that in the latter case the salary keeps landing in your bank account already puts you at an advantage over most of Silicon Valley. Accept both sides of risk, not just the good one, accept that ‘extreme uncertainty’ is a serious business (unlike extreme ironing), and act accordingly.

Oh, and read The Lean Startup book carefully. You will find plenty of examples of tactical failures along the way – failures to accurately predict the customer base, the feature set, the essence of the product’s value-add, the customer interest. While pivots are a normal part of the startup experience, they can be truly unsettling to a corporate developer used to stability and predictability.

Even in a big corporation, you cannot have it both ways. Either be truly lean, agile and ready for all kinds of curve balls, ready to turn on a dime, or move to a more established project already shipping products or services, where incremental innovation is more important than starting from scratch for the third time. In both real and corporate startups, if you are not afraid, it’s not a real startup.

© Dejan Glozic, 2013

The 12 Amp Limit


Sometimes movies I watch leave a lasting impression for completely unexpected reasons. Case in point – the 1995 hit “Apollo 13”. The movie offered plenty of fodder for people who like to play Six Degrees of Kevin Bacon, and gave us Tom Hanks in a clean 1970 NASA-style haircut – a lot easier to take than the abomination on top of Robert Langdon’s head. Of course, the movie is packed with nail-biting scenes, if you bite your nails even when you know perfectly well that they are going to make it in the end. But that is not why I remember the movie.

Somewhere in the second half, the crew on the ground is trying to devise a plan for an onboard system startup that is not going to suck all the juice from the remaining batteries. The battered ship’s power supply can only take up to 12 amps of current before it cries Uncle, and they have a nice analogue instrument with a needle to watch it creep up. Gary Sinise starts clean, carefully adds one module at a time, and before you know it, the 12 amp limit is hit and the metaphorical crew is dead. Ultimately, the solution was found not in the proper startup sequence but in adding another partially shot battery – useless on its own, but giving just the needed push when hooked into the circuit as a booster.
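If you want the ground crew’s dilemma in code form, here is a toy sketch of the startup-sequence problem – bring modules up one at a time and stop before the total draw exceeds the budget. The module names and current draws are invented for illustration, not taken from the movie:

```python
# A toy version of the ground crew's problem: power up modules one at
# a time, stopping before the total draw exceeds the 12 amp budget.
# Module names and current draws below are made up for illustration.
POWER_BUDGET_AMPS = 12.0

modules = [
    ("guidance", 4.0),
    ("radio", 3.0),
    ("environment", 3.5),
    ("telemetry", 2.5),  # this one would push the needle past the limit
]

def startup_sequence(modules, budget):
    """Return the modules that fit under the budget, in order, plus the total draw."""
    powered, total = [], 0.0
    for name, amps in modules:
        if total + amps > budget:
            break  # the needle would creep past the limit
        powered.append(name)
        total += amps
    return powered, total

powered, total = startup_sequence(modules, POWER_BUDGET_AMPS)
print(powered, total)  # ['guidance', 'radio', 'environment'] 10.5
```

The interesting part, of course, is what the movie teaches: when no ordering fits everything under the limit, the answer is to change the constraint (add the booster battery), not to keep shuffling the sequence.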

I keep thinking of this scene looking back at software development projects I was on. It all starts new and shiny – real and metaphorical junk cleaned up, new ground broken, everything seems possible, we make crazy progress unencumbered by restrictions and customers. But soon enough, code piles up, requirements keep coming, constraints set in and before you know it, you hit the limit. It could be too many features, too much code, pages taking too long to load, UI too complex, too much customization needed – anything that marks the transition away from innocence into maturity. The problems you need to handle now are not cool, they are adult stuff.

When this limit is hit, different people react differently. Some people are forever chasing the new project high. As soon as it is gone, they start looking for another ‘new’ project in order to recreate the wonderful sense of freedom and freshness, the ‘new project smell’. Others accept the maturity process and ride it all the way. Then there is a group that actually thrives in a mature project environment – they like the structure and the boundaries it provides, engaging in a series of incremental improvements within the constraints of a mature project.

The point I am trying to argue here is that for all the allure of the early days of a project, it is the period after the metaphorical 12 amp limit that separates boys from men. The early days value your ability to dream, to conjure up new, exciting and innovative concepts, new ways of doing things, things never done before. The next phase is all about execution, about turning that vision into reality, into something that actually delivers on the early promise. Many a startup crashed and burned on its inability to deliver, to scale when faced with real-world problems and customers. The bridge from an early prototype to a real product is the hardest one to cross. All the shortcuts and omissions you made in the rush to the prototype will come to a head and bury you if you are not careful.

For all the head-swooning allure of the early days, it is the maturation of a new product that is the most exciting to me, and if you bail as soon as things become too complicated, too hard, not fun any more, you are missing out on the best part. Don Draper from Mad Men was described by a bitter girlfriend as one who ‘only likes the beginnings of things’. Don’t be the Don Draper of software projects – see the project through to successful maturation and you will be rewarded with the payoff of a job completed, not merely started.

© Dejan Glozic, 2013

Plan for the Delivery Aftershocks

My home country is not itself in an earthquake-prone region, but every once in a while we did get jolted by an echo of a truly damaging quake in the neighborhood. People who have experienced earthquakes know that after the main event, a series of progressively smaller tremors is normal, indicating the plates settling into a new stable state. They are called ‘aftershocks’, and even though they are not nearly as damaging as the real deal, they can rattle frail nerves.

As a team leader in various incarnations, I established The Rule of Aftershocks as it is applied to software integration. It works with such a casual certainty that each time we had a snafu caused by a big code delivery, my team would shrug their collective shoulders and say ‘yup, aftershocks’. This is how it normally plays out:

  1. You work on a big, sweeping feature that touches a lot of files. It is very exciting, and it is going to be great, that is, when you finally finish it.
  2. Weeks are passing by, and you are working like mad. Your teammates are working too, delivering code changes around you into the repository. You are trying to keep up, frequently merging their code into your changes.
  3. The code starts to burn a hole in your hard disk, begging you to release it already. You test and test and test, trying to leave no stone unturned.
  4. Finally you deliver all 800 pounds of it. It immediately breaks the integration build because you forgot that it has a different way of managing dependencies from the test builds you were running. You fix that (#1).
  5. Sanity test of the integration build fails because the database and/or the server software is slightly different than the one you used. IT SHOULD NOT MATTER, you say, these are all APIs, but somehow it still fails. You find out what the problem is (grumble, grumble) and fix it (#2).
  6. The build is now deployed and people are starting to use it. They discover all kinds of glitches only real-life use can uncover. You are fixing like mad, trying to stay ahead of the bug reports as they pour in. (#3+)
  7. After you fix all the obvious bugs, you get to the bottom of the barrel. People report mysterious, hard to diagnose and reproduce problems that seem to only happen every second Friday if it’s a full Moon and you had tuna sandwich for lunch. (#4)
  8. You forgo social life, family, natural light and even personal hygiene (if you work from home) trying to fix these maddening bugs. Eventually you do, after two milestones/sprints/whatever-you-call-iterations.

In the scenario above, your initial delivery of the code bomb counts as Event Zero, and I counted at least four aftershocks. Here is the maddening thing: it is really, really hard, if not impossible, to completely avoid them. No amount of testing and re-testing can spare you from them; it only affects their number and concentration. At some point your focus should shift to minimizing their number and ensuring they all occur early, while the iron is still hot.

OK, so aftershocks are like death and taxes – if you can’t avoid them, why bother? Well, you should, because they make you look bad as a developer or a team leader, and because you CAN do something about them. You simply need to gauge the size of the code you are about to release into the wild and leave an aftershock buffer in your plan. If somebody on your team is delivering a big code bomb, leave one iteration for aftershock management. If you expect an epic code bomb to drop, leave two iterations. And woe unto you if you allow a Fat Bastard sized code delivery on the last Friday of the last coding iteration. Aftershocks cannot be completely avoided, but they can be managed and planned for. A prudent team lead front-loads big deliveries, accepting aftershocks as a price of progress, knowing that chasing the zero-aftershock chimera leads to an overly conservative team. You don’t want to become so afraid of breaking anything that it leads to the heat death of the project.

As a side note, I would say that epic code bombs are themselves a problem – very few features require working in such large batches. Therefore, I would amend The Rule of Aftershocks to be: for a big code drop, plan a one-iteration aftershock buffer, and simply don’t allow code drops that require more. This compromise strikes a nice balance between making progress and causing people at the receiving end of your bugs to hate you with a passion.
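If you prefer your rules of thumb as code, the amended Rule of Aftershocks can be sketched as a tiny planning helper. The line-count thresholds here are invented for illustration only – calibrate them to whatever ‘big’ and ‘epic’ mean on your team:

```python
def aftershock_buffer(changed_lines):
    """Return the number of iterations to reserve for aftershock
    management after a code drop of the given size.

    The thresholds are made up for illustration; the rule itself
    says only: big drop -> one iteration, epic drop -> not allowed.
    """
    if changed_lines < 2000:
        return 0  # routine delivery: no dedicated buffer needed
    if changed_lines < 10000:
        return 1  # a 'big code bomb': one iteration of aftershocks
    # the amended rule: epic code bombs are simply not allowed
    raise ValueError("epic code bomb - break it into smaller batches")

print(aftershock_buffer(500))   # routine delivery
print(aftershock_buffer(5000))  # big code bomb
```

The interesting design choice is the exception: instead of returning a two-iteration buffer for epic drops, the amended rule rejects them outright, forcing the delivery back into smaller batches.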

© Dejan Glozic, 2013

Don’t Get Attached To Your Code


Many years ago, when I moved to Canada, my father-in-law came to visit. He was showing interest in what I did for a living, and I tried to explain to the best of my abilities. I failed miserably, leaving him befuddled that people actually paid me money for lining up the bytes ‘just so’. For the longest time, software development, or ‘anything computers’, was a black art for most people, simultaneously feared and ridiculed (except when they needed advice on what computer to buy on Boxing Day or how to find lost files that they saved ‘in Word’). While those same people could not perform brain surgery or represent somebody in a trial, at least they understood the key ‘value propositions’ of those professions. What developers did every day was a mystery.

This all changed when people started carrying millions of lines of code running in their pockets – NOW they know what we do (sort of). But do we?

Sometimes when the light of the Indian summer sun hits my office window at a particular angle, I find myself caught in a ‘what is our contribution to humanity’ stream of thought. However cool what we work on is at the moment, its very nature is ephemeral. Teenage girls will not cry to our code surrounded by lit candles. Tourists will not make goofy pictures with our code precariously leaning in the background. And our code will not be the last to survive After Humans, giving the pyramids a run for their money (eat that, pharaohs!). No matter how important our code seems to us, an object of a lasting value it is not.

Ours is not the only profession where the fruits of thy labor are of a fleeting nature. Bakers used to wake up at 2am to produce beautiful bread that had to be eaten by the same evening, lest it turn into a hard object you could bludgeon somebody to death with (I am talking artisan bread here, not the mutant Ninja variety sold in plastic bags nowadays). But at least they spent only a few hours on their creation. What about the wine makers? They toil year round, harvest the grapes, ferment them, let the wine sit in wooden casks for years, and bottle it with meticulous attention to detail. To what end? As Stereophile’s Michael Fremer used to say, no matter how expensive the wine, in the end you are left with memories and urine, and then only memories.

Developers invest a lot of time crafting their code. It is the ultimate expression of their intellect, and if they are not careful, even their souls and their very creative essence. I say ‘if they are not careful’ because code, like bread or wine, has an expiration date, and getting attached to an artifact of a fleeting nature is not wise and can lead to heartbreak. There are many ways a piece of code can end up on the chopping block: change in requirements, target environment, new OS or browser version that makes your code obsolete, refactoring, performance improvements, ‘what were we thinking’ moments, you name it. Or you can get assigned to a new task and somebody else (the horror!) ends up owning it.

Why do we invest so much personal value in code? It may be the effort required to craft it, or the sacrifices needed along the way (I wish I had a dollar for every perfect day I observed through the window of my office while writing the latest absolutely awesome installment of the future legacy code). Some people go as far as to invest a lot of meaning in the actual syntax and how all the statements and punctuation are lined up (the best way to turn such a developer into a ball of rage is to run their code through an automatic formatter). We can also write code with an intention to impress, which is a sure sign it will be too smart for its own good.

Another common reason for clinging to code is that it represents our self-worth and importance. If I give up code I own now, what will I do the whole day? Typically this is an illusion, similar to what Jerry Seinfeld was told as a kid (‘don’t eat cookies before lunch, you will ruin your appetite’). As a grown-up, Jerry now understands that even if you ruin that particular appetite, a perfectly good appetite is just around the corner – there is no danger of running out of appetites. Or problems for which new code needs to be written.

We should learn from those before us who engaged in professions that by their very nature do not produce long-lived objects (even though you could argue that the Cobol software still running in banks and airline reservation systems is pushing the meaning of the word ‘fleeting’). We should focus on the positive effect of our code: how many lives it improved, how much time it saved its users, how much faster it made other developers for a while. A long-gone bottle of wine that started a romance that blossomed into a lifelong marriage is worth its weight in gold. Good code can inspire, generate many more ideas, be a stepping stone to even greater heights. Even bad code can be a learning experience, if only as in ‘we should not do that again’.

So there you have it. Focus on the transcendental value of your code – what it means to your users and how it makes their life better, at least for a moment, and cherish that value. While physical manifestations of your code may succumb to the vagaries of the fast-moving industry (phone app development, anyone?), nobody can take away the memories and the learning that your code brought you.

And if you are still yearning for something physical to create, maybe you can take up painting. Or you can build a pyramid in your back yard. Even if it fails to become the world’s 8th wonder, you can still use it as a tool shed.

© Dejan Glozic, 2013

Avro Arrow, Tick-Tock and Small Batches

I have just returned from a month-long vacation, enjoying an overdue change of context. Among other things, it helped me free up enough brain cycles to play one of my favorite games – ‘look for a pattern’.

It appears (as my Jazz Platform colleagues tell me) that I have a giant database for a brain. Anything that enters is promptly tagged and grouped together with existing entries that somehow match. This gift allows me to entertain/annoy friends and family with speeches that start with “that reminds me of a movie/song/novel/article in which…”. I would like to take credit for this ability, but there is nothing I did to develop it. Although I am diligently working on sabotaging it through my newly acquired taste for single malt whiskies.

My colleagues are always amused that no matter how tangential the detour is, there is always a point where I bring the narrative back to what Ward Cunningham liked to call the ‘aha moment’, where it all suddenly makes sense. Allow me to try to bring you to such a moment via a very meandering path.

In 1997 I had a lot of pleasure watching a two-part docudrama about a Canadian attempt at greatness via a bold new supersonic fighter jet design called the Avro Arrow (also a reminder of how much weight Dan Aykroyd had put on since The Blues Brothers). The movie caused a small (albeit polite) burst of patriotism in Canada. My takeaway from the movie was a detail where they attempted to build a completely new airframe and a completely new engine at the same time. It had never been done before – normally a new design would use tried and tested engines (Rolls-Royce, in this instance). My favorite quote from the government official displeased with this risk-taking: “It’s a little bit like wearing two left shoes. It may be distinctive, but it’s not too bright”. File under: minimize the number of variables.

Many years later, and completely unrelated, I stumbled upon a project model called Tick-Tock practiced by Intel. Using this model, Intel starts with an established micro-architecture and shrinks the process technology (a ‘tick’), followed by a new micro-architecture once the process is proven (a ‘tock’). If something goes wrong, it is easier to pinpoint why (and fire the right people, I guess). Variable minimization again.

Finally, this particular set of neurons in my index of a brain lit up again when I hit upon The Power of Small Batches by Eric Ries (an excerpt from his book/movement/religion The Lean Startup). To quickly paraphrase: in a context of extreme uncertainty, working on many features simultaneously is not a good strategy, because it does not help us learn when things fail, and does not help us build on successes (again, because we don’t know which one of the many things we did was the turning point). Making a small number of changes, validating with users, and either pivoting or persevering has a better chance of resulting in project success, particularly when we are building something completely new. Variable minimization.

And there you have it: a bold new (albeit ill-fated) fighter jet project, a model for managing complexity of CPU evolution, and the light at the end of the tunnel for sleep-deprived and overly caffeinated startup founders. In all these stories, a common theme is the importance of keeping the number of variables down so that informed decisions can be made about the course of the project as early as possible.

Now, few of us will start a new fighter jet program or build a new CPU in our spare time, so these examples may seem entertaining but impractical. Yet they translate quite well into any technology project where something completely new is attempted. And that is exactly what we did in the Rational Jazz Platform project. As we were building the first prototype, we wanted to take the opportunity to reset the technology base, but it was deemed too risky considering that we were just figuring out what we were building in the first place. For this reason, we built the prototype using the Jazz Foundation code base we intimately knew (our ‘tick’). Once the demo was finished and presented at IBM Innovate 2013, we reset and are now writing real code on a completely new stack (our ‘tock’).

With the experience of the prototype behind us, I am glad we didn’t go the ‘distinctive, but not too bright’ route of doing both at the same time. If we found out certain features were wrong and not needed, it would really suck to have spent a lot of time building the stack to support them first.

Jolly good, let’s celebrate with a glass of Glenlivet, neat.

© Dejan Glozic, 2013

The Turtleneck and the Hoodie

I have always envied people with a clarity of purpose. As far back as my memory reaches, I have been pulled in multiple directions. Not that this is a particularly rare affliction, but it can on occasion make one’s life more difficult than necessary. Being able to describe yourself as one thing makes introductions and elevator pitches easier.

Youthful exuberance aside, it soon becomes clear that in order to become really good at something, many interests and pursuits must lapse to the level of hobbies at best. Certainly a hobby is still better than ‘that thing I did in my previous life’. For me, playing guitar in a band, hi-fi, SCUBA, recording and producing others are all either hobbies or things I fondly remember when I look at the old photos. However, sometimes even the spring cleaning of responsible adulthood does not leave you refreshingly focused and defined. You may have a destiny to be a ‘mixie’.

For me, struggling with the forces pulling me in multiple directions continued when I was a research and teaching assistant after graduation. See, even the title ‘research AND teaching’ has the conflict built into it. And I was not alone. An older colleague of mine used to say: “This university would have been much better without students”. Of course, at that point it would cease to be a “university” but I grant that “research” part would be much easier without all the time spent on pesky undergraduates.

Joining IBM in 1994 (19 years already?), the tug of war returned as soon as I started doing interesting things in user interfaces. From joining a cool new open source project called Eclipse, to creating an Eclipse component called PDE, to moving to the Rational Jazz project, I noticed that I cared not only about how things worked but also about how they looked and felt. However, in the early days, a developer who also cared about pixels was like a “dog playing a piano”, to borrow the words of Freddy Rumsen from Mad Men Season 1. So much so that when the beta version of IBM’s Visual Age for Java was reviewed by a magazine, it was lovingly called “ugly as a dump truck”. Of course, between the beta and the final product, the designers sprinkled it with pixel dust, but I always thought that developers should care about visuals from the get-go.

Well, I definitely did – when I was writing UI code, and later when I led teams doing the same, I tried to infect others with the idea that things should not only work great, they should look polished and beautiful. Some projects I started in Eclipse (say, UI Forms) made it really hard not to care (OK, they look dated today, but so does everything else). Later on, a team I led as part of the Rational Jazz project created beautiful dashboards we still use daily while self-hosting. Still, even though I managed to infect a growing number of people with the thought that caring about code AND visuals is a false dichotomy, the best was yet to come.

Fast-forward to 2013. After the iPod, iPhone and iPad (OK, fine, even Windows 8 and the new Outlook), everybody cares about beauty. In fact, design is now a driving force in the company I work for. Design is not called in at the last moment to ‘do its thing’, or completely done at the beginning with a big thud (BDUF) – it is a partner at the table, where great things are created collaboratively, using Lean UX techniques. A table where the turtlenecks and the hoodies can live in peace, complement each other’s strengths and watch for the blind spots. It is OK to be a hybrid – we even have a manifesto now – Manifesto Ibridi. How cool is that?

It is a great time to have both of these passions in any ratio – caring about how things work and about how your users go about putting them to daily use. This blog is dedicated to topics I encounter living and leading teams in this crossover area. I hope I can infect you too, and maybe wake up a passion that has lain dormant for a long time.

As for me, I can now proudly stand up and say: “My name is Dejan Glozic. I care about design AND I care about code. And there is absolutely nothing wrong with that. In fact it is just awesome. Let me show you why.”

© Dejan Glozic, 2013