
Slices, layers, tiers, components – oh my!

A few days ago, I wrote about my experience delivering a large web project and compared it to the Hertz/Accenture lawsuit. A friend asked me to go into more detail – here’s the gist of our conversation.

X: isn’t there a risk that by delivering slices, you’ll introduce lots of duplication? If I want to implement a business rule saying “all orders must have a validated shipping address”, and I have two slices – “website user creates order” and “contact centre creates order”, won’t they duplicate that logic?

Sure – that’s a risk. You could well end up with a mess of duplicate code, especially if you have lots of teams working in parallel. There are broadly speaking two ways to mitigate this risk.

The most common is through “architecture” – you agree a software design where there is a single component which manages orders and their business rules. This is perfectly reasonable – but it often comes with some baggage. There is a temptation to design this architecture in great detail, and assign a team to each component. This often means that the component is designed up front, and cannot change in the light of real requirements that are uncovered during development. I’m all for architecture, and I’m all for cleanly defined components which do one thing and do it well. But I believe architecture should be lightweight and focus on principles and infrastructure, and the actual implementation should evolve along with the application.

The second way to avoid duplication is through process and culture – you make sure the teams communicate about their design decisions, and know how and where to look for existing code which is related to their work. You make sure the team has enough time to refactor and extend existing components, and that everybody understands that the team is evolving the architecture together.
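As a concrete illustration, here is a minimal sketch of the kind of shared component both approaches converge on – the module and function names are hypothetical, not from any real order system. Both slices call one validation routine instead of re-implementing the business rule:

```python
# Hypothetical shared order module. Both the "website user creates order" and
# "contact centre creates order" slices call validate_order() rather than
# duplicating the shipping-address rule.

class InvalidOrder(Exception):
    pass

def validate_order(order: dict) -> None:
    """Enforce the rule: all orders must have a validated shipping address."""
    address = order.get("shipping_address")
    if not address or not address.get("validated"):
        raise InvalidOrder("order requires a validated shipping address")

def create_order_from_website(order: dict) -> dict:
    validate_order(order)          # shared rule, not duplicated in the slice
    order["channel"] = "website"
    return order

def create_order_from_contact_centre(order: dict) -> dict:
    validate_order(order)          # same rule, same code path
    order["channel"] = "contact-centre"
    return order
```

The point is not the code itself but where it lives: one module that either an architecture decision or team communication nominates as the single home for order rules.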

X: What about multiple UIs? What if I want to build a platform that supports web, mobile apps, smart TVs, VR glasses?

Sure, another really important consideration. I’ve worked on projects with multiple brands, multiple markets, multiple devices – the possibility for lots of duplication is even bigger. The same answer applies as above, really – yes, a clean architecture is the answer, but no, designing that architecture up front and dedicating a team to the “re-usable” layer is often a bad idea. The user experience on those different devices is often very different, and it’s much better to evolve the re-usable layer as you learn about what you really need than trying to design it in detail up front.

X: And what about skills? Front-end developers want to work with other front-end developers, you get much better code if you have people work together on the same technology layer.

This is indeed a challenge – especially when you have skill requirements you can’t realistically embed in each “slice” team, or where you have particularly complex requirements in one part of the slice. But in my experience, once developers are used to working in cross-functional teams, they really enjoy it – they learn a lot from working with other disciplines. Some of the most useful code review sessions have come from front-end developers and back-end developers looking at each other’s code – they tend to spot mistaken assumptions, rather than stylistic problems.

I don’t think there’s a nice structural solution to this question – a neat org chart you can draw showing how teams work. I think the solution lies in culture, rather than structure – coding in the open, joint ownership, a shared understanding of “what the project is about”.

X: OK, but how big should a “slice” be?

Good question. As usual, it depends.

The key goal of “slices” is to make sure everyone – developers, business sponsors, QA, designers – has the same perspective on “what the project is”, and “how much is done”. So a slice needs to be demonstrable to business people, in a way that clearly communicates what works and what doesn’t. It needs to have a user interface (even if most of it is placeholder), and it needs to push logic all the way down the stack.

But it doesn’t need to scale, or be fast, or be pretty. It can have some technical debt, and it can cover only the “happy” path. It might only work for one of the different types of users, or one type of content, or one workflow.

X: I think I understand. What do I do after my first slice?

Another good question. I like to structure projects so that we build the first slice as quickly as possible (4 weeks or so is great); I typically ask the entire team to focus on this goal. Once the first slice is delivered, I like to add more slices every 2 weeks. At this point, it may make sense to have some of the team working on cross-cutting concerns (build/deployment, shared look & feel, etc.). It may also make sense to build larger components outside a slice – some things just take more than 2 weeks to deliver. However, I try to keep at least half the team working on the “slice”.

Stories and numbers: OKRs

“Where are we, and what should we do” is a question I’ve had to answer on many consulting engagements. It’s often wrapped in very different language – “How do I deliver this product?”, “Can we invent a new service to serve this type of client?”, “How do we counter this competitor’s move?” are all variants of the same fundamental question.

“Where are we” is often a really hard question. Qualitative assessments – “we’re the quality leader in our market”, “we’re broadly on track”, “we have happy clients” are great, but very hard to analyze for patterns and underlying issues. Quantitative measures – “our velocity is 33 points / sprint”, “we need to be finished in 6 weeks”, “we’ve spent £x out of a budget of £y” are much easier to track, but metrics are often gamed, and future projections are often unreliable. Some important attributes are hard to capture in numbers – how loyal are your customers really? How productive is a developer? How “big” is this project?

Nevertheless, understanding where you are is key to deciding what to do next. I worked with a client once who had a fundamental usability problem with one of the key customer interactions. This manifested in really low loyalty – customers would try the service, get a good experience during purchase, but a bad experience during delivery. They asked us to help improve customer loyalty – they had correctly found that loyalty was well below industry standard. But the reason for poor loyalty (“where are we?”) was not a shortage of features or benefits in the loyalty program. Our initial briefing was to find innovative loyalty ideas; yet the first 5 customers we spoke to all told us that the service had been below par, and that this was their primary reason for not returning. Once this became clear, “what should we do” was obvious.

The numbers, in this case, were all there – we had “low loyalty” numbers, as well as indicators that customer satisfaction with the core experience was low. Digging into application statistics, there were plenty of numerical indicators showing the point where customers were dropping out. What was missing was the story making sense of those numbers.

This is one reason I like Objectives and Key Results (OKRs): the objective is a story. We’re going to dominate the market! We’re going to build an amazing product! We’ll create a fantastic team! The key results are the numbers that tell you whether your story is going to come true.

Most people I know are not motivated by numbers – even numbers like “salary” are only marginally interesting – it’s the stories people tell about the numbers that matter. “Once I get a salary of £x, I will go on that safari holiday I’ve always wanted”. The story matters.

Accenture and Hertz – slices versus layers

I wrote earlier about Hertz suing Accenture over the failed web replatforming. Again – I have no knowledge other than what I read in the press, and this is all conjecture.

There’s a line in the article that stood out for me: “Despite having missed the deadline by five months, with no completed elements and weighed down by buggy code, …”.

A few years ago, I picked up a large project which, by the looks of it, used the same technology stack (Adobe Experience Manager, Angular). The project had been running for a few months, and our client was unhappy – they couldn’t tell if we were making real progress. “Nev, can you have a look and see what’s going on?” asked my boss, so I went for lots of coffee with the team, and ended up taking over the delivery of the project.

The key red flag was this: the team had been working for around 6 months, but didn’t have anything they could show me other than some passing integration tests. We were building a website, but there were no actual web pages. There was some JSON that could be transformed into web pages, there was a content structure that could create and store JSON, but the only people who could assess progress were developers.

Our client was not a developer – they were subject matter specialists, product owners, business people. They understood the “first you have to build the foundations” logic of building the underlying structures before worrying about making web pages. But they felt that 6 months was a long time to wait, and they felt the team was unable to explain when they might see a working web page. They also felt there was a real risk that once we started making web pages, we’d have to revisit lots of “foundation” code. They were right, as it happens.

The first thing we did with the team was to create placeholder pages for the major parts of the site. AEM has the concept of “component” – a widget which shows a bit of a web page – and “template” which defines which components go on the page, and how they fit together. So, we started to build all the templates we needed, and placeholder components to go on those templates. We made sure that the templates and components reflected the key design decisions (how they’d render on different screen sizes, basic colour and styling), and created a basic version of the site.

This took a few weeks, and raised lots of questions. “Where does this content come from? How does this component work on mobile? How do you get from this page to the next?”. It was uncomfortable – we found out exactly how much we didn’t yet know. It also exposed assumptions we’d made in designing the “foundation” which were totally incorrect. We had uncomfortable conversations with the client – as we found answers to our questions, we discovered many areas where we weren’t aligned on requirements. Some of those misalignments reflected significant amounts of effort.

But overall, the trust between our team and our client improved. The conversations were concrete and limited – instead of asking “how should content workflow be set up?”, we could ask “how do we manage content for this widget on the homepage?”. Many of our assumptions were tangible – “we thought the navigation would be a static component, you think it’s data-driven”. By focusing teams on a component (front-end, back-end, design, QA), we could demonstrate that we could make progress in ways our client understood. We’d agree how a component was supposed to look, how it worked, where the content came from, and then assigned a team to deliver that. Within days, the client would see progress, and their level of confidence would grow.

I refer to this as the “layers versus slices” challenge. Logically, it makes sense to build the foundation before you worry about hanging pictures on the wall – but I think there’s a better metaphor than building a house. I see it more like building a city – you want to put down major infrastructure like roads, sewage, utilities first, but then you build each house individually – foundation, walls, roof, interior. You can build several houses concurrently, but you don’t build the foundations of all the houses in the city first, then the walls, then the roof etc. (I may have been playing Sim City).

On a web project, the infrastructure is setting up development environments (a huge pain on AEM!), basic content repository structure (how do you manage sub-sites, language variants etc.), deployment pipelines, BDD/TDD testing framework, design system (e.g. material design) with default styling for the project, and the source code control system.

Once you have the infrastructure layer, I build the user interface, using the basic design system (which may look more like “boxes and arrows” style wireframes than the finished product), and minimum versions of all the components. This should give you a web site with placeholder content, and minimal styling. You now have two layers – infrastructure and user interface.

The next phase is to focus on slices – build up all the components so they work properly (however you’ve defined that!), look right, and have the correct content.

I may be reading too much into the line in the article – but it sounds to me like the Hertz project focused on “layers”, at the expense of “slices”.

Thoughts on the Hertz – Accenture lawsuit

Let’s start with a disclaimer – I have no knowledge of this situation other than what I’ve read on the news. This post is conjecture and opinion, not fact!

There’s a news story about Hertz suing Accenture over the design and build of the new Hertz digital ecosystem. Many of the challenges sound horribly familiar – and there are lots of smart people commenting on Twitter. When reading the articles, two things really stood out for me.

Firstly, Hertz seems to have treated the engagement as a one-off project, and secondly, they outsourced pretty much the entire project to someone else. I think those two aspects of the project are fascinating.

Let’s start with treating the re-design of your platform as a project. I may be entirely wrong, but I assume that for Hertz (prospective) customers, the digital experience is key. If they are like most consumer brands, somewhere between 25 and 60 percent of their customers interact with them online during the purchase process. I’m guessing that a very large number of customers transact directly with Hertz using their web platform, and that those interactions are more profitable for Hertz than transactions via other channels – low cost to serve, no commissions, lots of cross-sell/up-sell. It’s also very likely that the role of the digital channels is growing in relative and absolute terms, and that key differentiation opportunities will come from digital. Oh, and their major competitors (on-demand services like Uber and Lyft) use digital as their main interaction channel.

So treating their digital platforms as a “project”, or even a series of projects, strikes me as wrong. The digital platform is a core aspect of the way they engage with customers, with no obvious end date, and a roadmap that evolves in the light of market conditions. It’s not a marketing campaign, with a big-bang go-live, or an SAP implementation, with an upfront project and ongoing maintenance – it’s a sequence of releases, each solving one or more problems for customers, the business, regulators, suppliers. It should be treated like a product or service in its own right, not just a customer acquisition channel.

This matters, because in most large companies, a customer acquisition channel is treated differently to a core service. Customer acquisition is a tactical process – invest in what works, reduce spend in what doesn’t, and most of the spend tends to be external – Facebook ads, marketing campaigns, media budgets. If you’re in charge of a customer acquisition channel, your key skill is to extract as much value as possible from your suppliers.

If you’re building a core service for the business, however, your outlook is very different – your time horizon tends to be years rather than months, and your core skill is defining and delivering the product roadmap. You typically assemble a range of skills, from a range of vendors, to do that, but the overall vision belongs to your team. Building that team’s capability is a huge part of your ongoing success.

The second aspect, bringing in an outside firm to deliver the new platform, is another worry. It’s never black and white – even if you have a big product team, you’re likely to face skill and capacity gaps. But outsourcing the whole thing – including product ownership – and relying on a contractual specification to get what you want, means success is determined by your procurement department’s ability to write a contract, and your upfront ability to specify requirements in a way that your vendor can’t dodge. And once the project is delivered, you remain dependent on your supplier – because you haven’t got the internal skills to evolve your platform.

The article suggests Hertz wrote a pretty comprehensive set of requirements, with forward-thinking deliverables like a design system, re-use across brands, and a “core component” library. But – in my experience – those deliverables don’t really add much value when the future arrives. Test-driven development, behaviour-driven development, continuous delivery – those really help. Those are things a vendor would often skip – they just want to deliver the project as cheaply as possible, get paid, and move onto maintenance and support (and also get paid).

Compare and contrast – a previous client (an airline) noticed that more and more customers were using mobile to buy tickets and check-in. They had an app, but it had not received a lot of investment, and the focus was primarily on cross-sell/up-sell. Customer satisfaction was terrible (the app didn’t work particularly well), and the team was heavily dependent on a supplier. Once they recognized that mobile check-in was the way their premium customers tended to interact, they set up an internal team to create a mobile product vision. They aligned on key value propositions, and assembled a team; my company provided a range of specialists in product management, design and development. We worked on a product roadmap, with lots of small-ish releases, a beta programme, and the gradual transition to an internal team.

And how about the money? The Accenture budget ran to $32 million. I know that sounds like a lot of money – but it’s not unusual for large-scale digital platforms to cost in that region. It’s (presumably) multi-market, multi-lingual, with transactions, payment management etc. But, let’s look at how you might spend that money in a universe where you care about developing internal skills. I’m going to take a 5 year period to spend that $32 million.

The following numbers are full of assumptions – but give an order of magnitude indication. If you have an annual budget of around $6 million to spend on a team, and the fully-loaded cost of a team member (designer, developer, product owner, QA, DevOps person) is around $200K/year, you can hire a permanent team of around 15 people. In year one, you probably want to bring in external folk with skills you don’t have; let’s assume their fully loaded cost is around $300K/year. So in year one, you probably have a team of 5 – 8 consultants, and 2 or 3 internal people; you change that balance over time.
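The arithmetic above can be checked on the back of an envelope. The dollar figures are the post’s own assumptions; the 50% “people share” is my addition – it reconciles “$6M budget, $200K/person, around 15 people” by leaving the rest of the budget for licences, hosting, travel, and everything else that isn’t salary:

```python
# Back-of-envelope check of the budget numbers. Dollar figures are the post's
# assumptions; PEOPLE_SHARE is my guess at how much of the budget goes on staff.

TOTAL_BUDGET = 32_000_000
YEARS = 5
annual_budget = TOTAL_BUDGET / YEARS            # $6.4M/year

PEOPLE_SHARE = 0.5                              # assumption: half on salaries
people_budget = annual_budget * PEOPLE_SHARE    # ~$3.2M/year on staff

INTERNAL_COST = 200_000                         # fully-loaded internal cost/year
CONSULTANT_COST = 300_000                       # fully-loaded consultant cost/year

steady_state_team = int(people_budget // INTERNAL_COST)
print(steady_state_team)                        # 16 -- close to "around 15"

# Year one: mostly consultants while internal capability is built up
year_one_cost = 7 * CONSULTANT_COST + 3 * INTERNAL_COST
print(year_one_cost)                            # 2700000 -- within the annual budget
```

The exact split doesn’t matter much; the point is that $32 million funds a modest, sustainable team for years, rather than one big-bang project.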

So, can a team of 10 – 20 people deliver a whole new platform in 5 years? Yes, they can. Of course, their first release will be nothing like “a whole new platform”. It might feel disappointing – “we wanted a whole new platform, and all we got was a better sign-up form!”. But getting that feature out into the world, figuring out whether it delivers what you expected, and getting the next feature out shortly after – it’s a sustainable way of building software products. It gives you actual control (as opposed to the illusion of control which you get from project plans stretching over years).

Whilst the articles I’ve read suggest many things went wrong on the project, I think the biggest decisions organisations need to make are “is this core to our interaction with customers”, and “should we outsource this or build internal capability?”.

Serverless is (mostly) about money.

I’ve been working on software projects for a living for 30 years. In 1989, I worked on a COBOL application which managed orders, production schedules, billing and payroll for a manufacturing company. Then I worked on a client-server project for professional service automation; next came a range of web projects, along with mobile, video, and VR/AR, and data applications.

The overwhelming trend over that time is that the proportion of effort spent on domain-specific code has increased dramatically, and infrastructure has shifted from upfront capital investment to pay-as-you-go cost.

The latest phenomenon that’s about to break through into the mainstream is serverless. The definition is a little bit open to interpretation, but this is probably the best I’ve seen.
The concept is broken down into two parts: “Backend as a Service” (BaaS) provides functionality to applications on a per-use basis – it’s a little hard to say exactly where the boundaries lie, but in principle the commercial trade off is “do I invest effort building feature x, or do I pay on a per-use basis?”.
The second type is “Function as a Service” (FaaS) – developers write code, and the service provider runs it in the location determined by the commercial arrangement. Here the commercial trade-off is “do I invest in a hosting platform (which could be Cloud), which I manage myself, or do I pay a tiny amount every time the code runs, and let someone else sort out the hosting platform?”.
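To make the FaaS trade-off concrete, here is a minimal function in the handler style popularized by AWS Lambda’s Python runtime (the thumbnail logic is a toy example of mine, not a real service). You write and pay for only this function’s executions; the provider worries about scaling, availability and the runtime:

```python
# Minimal FaaS-style function (AWS Lambda Python handler signature).
# The provider invokes handler() per event; you pay per invocation.

import json

def handler(event, context=None):
    """Decide whether an uploaded image needs resizing (toy example)."""
    width = event.get("width", 0)
    height = event.get("height", 0)
    needs_resize = width > 1024 or height > 1024
    return {
        "statusCode": 200,
        "body": json.dumps({"resize": needs_resize}),
    }
```

There is no server, process, or container for you to size here – the unit of deployment, and of billing, is the function.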

So why is that mostly about money?

Firstly, it (theoretically) reduces the cost of building solutions – you don’t have to worry about scaling, or availability, or antivirus, or backups, or configuring the run-time for your application just so. More of the effort goes into the unique, special thing your solution does, rather than the plumbing that keeps it up and running.
This is an ongoing trend, and – arguably – the biggest savings came much earlier, with Cloud, containerization and Platform as a Service.

Secondly, you pay for what you use at a feature level, not infrastructure level. In most Cloud/PaaS models, you pay for running a process, whether it’s busy or not, and it is your job to figure out how many processes you need. In serverless designs, you only pay for running the feature you care about.
This is a big deal – it reduces the fixed costs for building solutions even further. This changes the commercial model – the more cost you can move from “fixed and upfront” to “scales with use”, the easier it is to create a business case that works.

That’s because most organisations look at “return on investment” as a key metric for deciding whether a project will go ahead. They typically look for an RoI as a multiple – “every pound we invest will pay back x pounds over y years”. So, every pound you can shave off the upfront cost makes that return on investment calculation easier. This, in turn, makes more projects commercially viable.
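The effect on the ROI calculation can be shown with illustrative numbers (mine, not the post’s): the same feature and the same return, but with cost shifted from upfront to usage-based:

```python
# Illustrative only: how moving cost from "fixed and upfront" to
# "scales with use" changes a return-on-investment multiple.

def roi_multiple(upfront_cost, annual_variable_cost, annual_return, years):
    """Return 'pounds back per pound invested' over the period."""
    total_cost = upfront_cost + annual_variable_cost * years
    return (annual_return * years) / total_cost

# Same feature, same 150K/year return over 3 years:
provisioned = roi_multiple(upfront_cost=200_000, annual_variable_cost=20_000,
                           annual_return=150_000, years=3)
serverless = roi_multiple(upfront_cost=40_000, annual_variable_cost=60_000,
                          annual_return=150_000, years=3)
print(round(provisioned, 2))   # 1.73 -- marginal business case
print(round(serverless, 2))    # 2.05 -- easier business case
```

Note that total spend over the period is similar; the smaller upfront commitment is what makes the second case easier to approve.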

Artificial Intelligence, Big Data, and story-telling apes

I’ve been reading, and focusing on AI. In the fiction area, Gnomon is a complete mind-melt. One of the many premises of the book is that a “system” will run society. If Then takes it a step further, positing a system that runs multi-variate testing on communities to optimize itself.

In the popular science range, the best description of AI I’ve found is The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World – an intelligible history of machine learning and AI.

I listened to this You Are Not So Smart podcast on how biases are baked into AI, and what we might do about that.

And most recently, I’ve read this description of an online retailer’s efforts to predict whether a user will be profitable, based on all the attributes that retailer knows. It’s probably the state of the art in this sort of thing – it’s a fairly general problem, where online retailers have lots of data, and getting it right has big financial pay-offs. Instinctively, it feels right – a shopkeeper recognizes signs that a customer might spend a lot of money, and knows that treating high-value customers better will make them even more valuable. Though shopkeepers get it wrong too – remember the scene in Pretty Woman? A sex worker, played by Julia Roberts, goes into an expensive boutique, and the shop assistants size her up and discourage her from shopping there; it’s only when her client (Richard Gere) shows up and they recognize his spending power that they treat her like a valued customer.

Pretty Woman shop scene

The fun bit is to see how hard this problem is, and how the economics work.

The paper describes a few ways the retailer has approached the “how can we predict the value of a customer” question. They’ve created over 130 different attributes, and looked for which ones were most predictive. Those 130 attributes sound like a lot – but many of them are really simple, like “how many orders have you placed”, “how often do you place orders” etc. And with those 130 attributes, they get fairly good results – around 75% accurate.
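A toy sketch of what the “simple attributes” approach looks like – the attribute names and weights here are illustrative inventions of mine, not the paper’s; the retailer uses 130+ attributes and a properly trained model:

```python
# Toy sketch of attribute-based customer value prediction: hand-picked
# features, hand-tuned weights. Illustrative only.

import math

def attributes(customer: dict) -> dict:
    """Derive simple attributes like the paper's 'order count' examples."""
    orders = customer.get("orders", [])          # list of order values
    days_active = max(customer.get("days_since_first_order", 1), 1)
    return {
        "order_count": len(orders),
        "order_frequency": len(orders) / days_active,
        "avg_order_value": (sum(orders) / len(orders)) if orders else 0.0,
    }

# Weights would normally be learned from historical data; these are made up.
WEIGHTS = {"order_count": 0.3, "order_frequency": 40.0, "avg_order_value": 0.01}
BIAS = -2.0

def high_value_probability(customer: dict) -> float:
    feats = attributes(customer)
    score = BIAS + sum(WEIGHTS[k] * v for k, v in feats.items())
    return 1 / (1 + math.exp(-score))   # logistic squash to a probability
```

A loyal, big-basket customer scores high; a first-time visitor with no order history scores low regardless of what she browses – which is exactly the Pretty Woman failure mode the next section describes.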

The second part of the paper describes how they’re trying to include “self-learning” algorithms, that look at all of the data they have, and use it to create a better prediction. The website tracks everything you do – surely, there must be a correlation between looking at certain products and spending more money (there is) – so how do you let the algorithm work that out?

It turns out to be possible, but the cost of training the system to find these behavioural correlations on fast-moving product catalogues is high – and with AI, that cost is often measured in money, not just time.

What does that mean?

It means that the AI would apply a statistical model to Julia Roberts’ web traffic, and unless she scores highly on those 130 attributes used to predict lifetime value, the AI would assume she’s “just another customer”. However, if she then browses the expensive, high-fashion dresses category and spends a lot of money, the AI would not learn that customers who look at expensive, high-fashion products are likely to spend a lot of money.

Why is this interesting? Because a state-of-the-art machine learning system, with significant commercial incentives, cannot economically learn to identify behaviour triggers without human intervention.

It boils down to “using traditional methods, we can consider perhaps a handful of attributes to predict your lifetime value. Current machine learning allows us to expand that to hundreds of attributes. But to learn from the thousands of subtle clues each of us leaves is currently orders of magnitude too hard”. It’s the difference between categorizing you by crude methods, categorizing you by less crude methods, or treating you as an individual.

And this brings me to the second topic here – human intelligence is very much tied up in story-telling. Sure, humans can manipulate symbols and abstractions, and use formal logic to prove or disprove things. But explaining phenomena in the real world is largely story-telling. There’s an example I read somewhere about a finance editor on the TV news explaining the stock market behaviour, and using exactly the same underlying fact to explain opposite trends – something like “positive figures on the labour market lead to higher stock prices as the market anticipated stronger consumer spending”, and “positive figures on the labour market lead to lower stock prices as the market anticipated a rise in interest rates to curb inflation”.

If you read the academic paper on customer lifetime value prediction, the authors do some story-telling – “it makes sense that high value customers look at the more recent products, because they are more fashion conscious”. Story-telling apes observe things in the world – the rising of the moon, an erupting volcano, a pattern in the data – and we tell stories to explain those things. As we’ve built better models of the world, the stories have become more accurate (and therefore more useful); many stories have become quantifiable and predictable – we no longer believe the sun rises because a deity drags it across the sky on a chariot; instead we can calculate to the second what time the sun will rise thanks to Newton’s formulae.

So what is the point of this long-winded musing?

Whilst ecommerce sites are non-trivial, they are certainly not the most complex system you might imagine when considering the uses of artificial intelligence. And they have relatively clear outcomes, within a meaningful timeframe – you buy something or you don’t, and you usually do it within a fairly short time of visiting the site. And even at this scale, we struggle to identify actions that correlate cleanly with the outcomes using current machine learning techniques. We need story-telling apes to at least identify hypotheses for the AI to test.

If you try to expand the scope of AI to “look at human behaviours, and anticipate health outcomes”, or “anticipate criminal behaviour”, or “anticipate political choices” – we’re still a long way off.


Distributed agile development teams

The key to distributed, Agile software development is to get good velocity by making sure the work the developers pick up is “workable”. This means validating requirements before adding them to the backlog.

The last few projects I’ve managed have been larger than most we do in my company. We needed very specific technical skills, and simply couldn’t find them all in London or Amsterdam. So, we bit the bullet, and brought in developers from suppliers, partners, other offices in the company, and freelancers working remotely. At one stage, the only people “in the office” were the QA lead, the delivery manager, and me, running the development team – all the developers were in different locations. 4 worked for a Polish specialist software shop; 4 were freelancers working from home, 2 worked in a different office, and 2 worked in our Ukrainian tech hub. Oh, and our client and product owners were in a different country, and the user interface design team was in a different office.

I was discussing this experience with a friend yesterday – he has a co-located team, with the product owner 2 doors away, but was complaining about lots of suboptimal process issues.

I realized then that running a distributed team forces you to answer process questions early on, because problems that can be dealt with informally if you sit next to each other quickly become intractable if you’re remote. One example was that we agreed between everyone – client, delivery people, developers, QA – that the only source of truth was our task database (Jira, if you must know). If it isn’t in Jira, it doesn’t exist. If you want something, write it in a Jira ticket, and follow the Jira process to get it on the backlog. Developers had to deliver what the Jira ticket specified; QA had to verify that this is what the software did. This way, you have a crude but accurate view of what you have to do – x Jira tickets – and what you have done so far – y Jira tickets.

My friend’s project, on the other hand, collected work items in user stories, screen designs, casual clarifications from the product owner, bug reports etc. His developers were complaining they were spending a lot of time picking through all those requests to work out what they actually had to deliver. My distributed team, on the other hand, could focus on ticking off their Jira backlog.

I know, this sounds the antithesis of Agile – I’ll get to that.

The second benefit of this approach was that I could ask the developers to reject any Jira ticket they didn’t believe was “workable”. Requirements could be expressed in whatever way the originator was happiest with – but unless a developer could pick it up and start coding in 10 minutes, they could reject it. Lots of bugs got rejected – if you don’t include steps to reproduce the bug, or actual versus expected behaviour, the developers can’t work on it. Lots of features got rejected for being unclear, incomplete, or contradictory.
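The “workable” gate can be sketched as a simple checklist – the field names below are illustrative, not Jira’s actual schema, and a real team would tune the criteria to their own definition of ready:

```python
# Sketch of the "workable" gate: a ticket a developer can't start within
# 10 minutes gets bounced back. Field names are illustrative, not Jira's.

def rejection_reasons(ticket: dict) -> list:
    """Return the reasons a developer could reject this ticket (empty = workable)."""
    reasons = []
    if ticket.get("type") == "bug":
        if not ticket.get("steps_to_reproduce"):
            reasons.append("bug has no steps to reproduce")
        if not (ticket.get("expected") and ticket.get("actual")):
            reasons.append("bug has no expected vs actual behaviour")
    else:
        if not ticket.get("acceptance_criteria"):
            reasons.append("feature has no acceptance criteria")
        if not ticket.get("design_reference"):
            reasons.append("feature has no design to build against")
    return reasons

def is_workable(ticket: dict) -> bool:
    return not rejection_reasons(ticket)
```

The value isn’t in automating the check – it’s that the whole team agrees, explicitly, on what a developer is entitled to expect before they pick work up.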

Again – doesn’t sound very “Agile”, does it?

Velocity is the key to Agile.

In my experience, the key to implementing Agile is to build trust with the business that the team will deliver business value, at the expected quality level, for a reasonable cost. And that means establishing a decent level of velocity.

When I work with clients and we agree on “ways of working”, the deal I make is this.

You make sure I’ve got a prioritized list of workable items, and I make sure the team delivers the best possible level of velocity and quality.

Business people like this – they control the priority, and get the best possible value for their development buck. When they see that top priority requests do indeed get turned around in hours or days, and that the team is consistently delivering the priority items in the backlog, they become a lot more confident in Agile as a process.

Chunking is the key to velocity.

We’ve seen numerous studies on developer productivity – but in my experience, the key is to give developers as much time as possible with clear, complete problems to solve, of a size that allows progress every day. By clear and complete, I mean that a developer should have all the context available when they start work, so that they can focus purely on the technical design and implementation, not the requirements or UI design.

Most Agile methodologies recognize this – the “unblocking” part of the Scrum Master’s role, and the “I am blocked by” element of the Scrum stand-up, for instance, are designed to allow a developer to hand off tasks that aren’t executable.

But with a distributed team, this is much harder – the communication channels are much narrower, and much slower.

Avoiding blocks is the key to chunking.

So, if we want developers to be able to work on executable chunks of work, we need to minimize the number of blocks they encounter. And in distributed teams, that means we give them complete, executable pieces of work, defined in one or two artefacts – a task and a visual design, for instance, or a bug report, or a non-functional specification and test report.

However, often “requirements” (in whatever form) are not complete or executable.

Verifying requirements is the key to avoiding blocks.

In “traditional” Agile, this is often done mid-sprint. It’s a bit of a pain, but the product owner sits with the team, and it’s their job to clarify. In distributed teams, this is much better done before the sprint starts – get everyone to verify each story before it goes on the sprint backlog, and reject anything that isn’t ready.

Business stakeholders appreciate this. The clarification process often surfaces problems in the product vision, highlights prioritization mistakes, or shows up organisational issues that shouldn’t be solved by software.

Microservices in the enterprise – breaking out of the IT silo.

Microservices are entering the “early adopter” phase in large, established corporates.  I was talking to a friend who works for a large systems integrator, and he told me that this is now a common topic of discussion with CIOs. His client is a large insurer, and they are migrating the back-office IT systems to a microservices architecture. Interestingly, they are also using this as a lever for organisational change, creating “pillars” around core business areas, rather than departmental “layers”. The entire IT infrastructure is being transformed, with containerization and API gateways as key enablers, and the solution architecture team is now focusing on time-to-market as a key driver – their previous focus was on standards compliance and long-term strategic fit.

It sounded like a very cool project – not just turning around a supertanker, but reconfiguring it into something fundamentally different. So, when I asked which new customer propositions the insurer would deliver, I was confused when my friend said “oh, that’s not really a priority”. The project is driven by IT, and their goal is to be more responsive to the needs of the business, but the microservices solution will not extend to the customer touchpoints – the website, the call centre applications, and so on. When I asked why not, the answer was a little complicated, but I think it boiled down to “the teams responsible for those touchpoints had large, complex software packages which they were afraid to change”.

Microservices can have a dramatic effect on back-office solutions – but the true value comes from transforming the way businesses interact with their customers, by opening up new business opportunities or channels. Seeing it purely as a “back-office” solution just means you get better at delivering yesterday’s service, rather than opening up tomorrow’s opportunities.

Compose first, build last.

I was brainstorming a new product with a client the other day. We had all sorts of amazing ideas, ranging from cool user-interface tweaks to (almost) entirely new business models. “Wouldn’t it be cool if…” was probably the most commonly uttered phrase.

And then we came back to ground. Brainstorming is fun – but we wanted to launch a product, and as quickly as possible. So we looked at what we could do by putting together existing services, frameworks and solutions, rather than building things from scratch. We settled on a common user interface framework, and agreed to use content-as-a-service and commerce-as-a-service platforms; we will run the production site on Amazon Web Services, and we’ll use GitLab to manage our code repositories and deployment pipelines. We think we can probably limit the amount of custom development to a couple of weeks – and most of that time will go to our secret sauce idea, rather than plumbing and housekeeping tasks.

It made me think of my first start-up, back in 1999 – an ecommerce site selling prints and frames. We hand-built an ecommerce engine, a pick/pack/dispatch module, a label printer for the warehouse, integration with our payment gateway, a basic CMS, and a product catalogue solution.

We had a designer invent our very own navigation metaphor, and our check-out journey was only loosely based on Amazon’s process. We built a custom “tooltip” solution when it became clear that not everyone knew how to check out online. I spoke to some of the guys at another ecommerce company and learnt how they used analytics to see what worked – it blew our minds!

We rented a rack at a hosting provider in West London, and I physically wired in our server (we could only afford the one, to begin with). It took about a month to set up our credit card processing facility – the bank didn’t really have much faith in start-up ecommerce companies taking payment online. We built the site with lots of hand-crafted code – no frameworks for us!

So much has changed. We could build that same solution today mostly by composing existing solutions, for a fraction of the cost. The platform that took 5 of us 6 months to build would probably take 2 people a few weeks at most. The capital required would follow a similar trend – from tens of thousands for hosting, servers etc. to hundreds.

For a similar project today, we’d use one of the many ecommerce platforms (I like Solidus, but we’d probably end up with Magento or Shopify), and use all of the established design artefacts – navigation structures, page layouts, etc.; our user journeys would be like every other ecommerce site’s (and that would be a good thing!), and we’d host on Amazon or similar. All of our energy would go to the unique, distinguishing features and content which defined our proposition.

On the other hand… it seemed so easy to launch back then. We built it, they came. We did some PR, a little SEO, and our site got traffic. I remember the go-live date, and somehow, from somewhere, the first visitor arrived. They left after a short while, but we got a dozen or so orders on the first day. It was like magic! Today, it’s much easier to build things – but much, much harder to get people to pay attention, let alone visit your site. Sure, we’ve got Facebook ads, Twitter sponsored posts, Google ads, and the banner ad is still going strong – but you’re competing against every other entity in the world for attention. In recent business modelling exercises, we found that while the cost of “building” propositions (the design and technical aspects) has dropped around 10-fold in the last 15 years, the cost of acquiring customers online has risen by at least 10-fold – and in some areas by much, much more.

Enterprise innovation through escalating bets

A lot of my work involves working with large, established enterprises to find new ways to reach customers. Sometimes, that’s “just” marketing, sometimes it’s product development, and sometimes it’s business model design.

Enterprise innovation is challenging. Most established models for innovation in software build on the concepts of “lean” and “iterative/incremental” development. The most common point of view is that you get better results by experimenting and integrating feedback than by planning and designing in the absence of customer interaction. For a 3-person start-up, a “minimum viable product” can be very minimal, but for a company with billions of dollars in revenue, and a reputation to protect, a “minimum viable product” might be a huge commitment.

The best model I’ve seen is with a client known for their innovation (no names, of course); I’ve called their approach “escalating bets”.

Escalating bets as a portfolio strategy

Our client creates a portfolio of innovation “bets”. Anyone who has an idea they believe in, and can clear a fairly low initial investment bar, can get a limited amount of time (typically 6–12 months) and money to prove their idea is a winner. It’s a small bet from an enterprise point of view – many companies spend that much on discussing why an idea shouldn’t go ahead.

Each idea has to have an agreed proof point for the first bet, usually related to whether the idea can attract customers. “We can get 1000 people to sign up”. “We can get at least 10% of our users to spend more than an hour a day on this”. “Our first users will recommend us to at least 1 other person”.

If the bet pays off – if the team hits its goals – the company places a second, larger bet: 9–18 months, roughly double the original budget. The team has to agree another proof point – typically involving customer acceptance and some kind of business case. And so on, and so forth. I’m pretty sure you’ve used products delivered this way.
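The escalating structure is easy to put numbers on. As a sketch – with a purely illustrative first bet of 250,000 and a budget that roughly doubles each round, as described above – the company’s cumulative exposure grows like this:

```python
# Illustrative numbers only: a hypothetical first bet of 250,000 that
# roughly doubles at each stage, as the "escalating bets" model describes.

def bet_schedule(initial_budget: float, rounds: int, multiplier: float = 2.0):
    """Return (round number, budget, cumulative exposure) for each round."""
    schedule, total = [], 0.0
    for r in range(rounds):
        budget = initial_budget * multiplier ** r
        total += budget
        schedule.append((r + 1, budget, total))
    return schedule

for round_no, budget, total in bet_schedule(250_000, 4):
    print(f"Bet {round_no}: {budget:>9,.0f}  (cumulative {total:>9,.0f})")
```

The point the numbers make: even after four rounds, total exposure is only a few times the seed bet, and each escalation is gated on the previous proof point paying off.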

Why do escalating bets work?

Innovation involves risk. Companies focus on risk to reputation, risk to brand, risk to strategic priorities, risk to margin – but the bigger risks in innovation are finding a market, and – very simply – execution. The way most companies approach innovation is to place a small number of large bets – large R&D projects, consulting engagements with specialist companies, large product development teams. As each bet is large, and involves the careers and reputations of several senior people, the bets tend to be fairly conservative – incremental innovations, close to the defendable core of the business.

And if one of those large bets fails, it tends to make the business more conservative. I know of a large IoT innovation project that ran out of steam after 2 or 3 years and a significant investment. The project was large, attracted a lot of management attention, and – from my point of view – imploded under its own weight. Different departments wanted to impose their own priorities on the innovation. Decision making was consensus-driven and slow. The sheer number of internal and external stakeholders and participants made the communication overhead almost unmanageable.

The basic concepts were great, it was the sheer size of the bet that brought the project to its knees. This doesn’t mean “IoT” was a bad bet – as a competitor proved just a few months later.

Escalating bets have two benefits.

The large number of bets means you have a better chance of success – VC firms expect only a small fraction of their start-up investments to pay off. Running a portfolio of smallish innovation projects gives you a better chance of finding a winner.
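The portfolio arithmetic behind this is simple. Assuming independent bets and an illustrative 10% hit rate per bet (both assumptions, not figures from any client), the chance of at least one winner is 1 − (1 − p)ⁿ:

```python
# A sketch of the portfolio argument: with many independent small bets,
# the chance of at least one winner rises quickly, even when each
# individual bet is a long shot. The 10% hit rate is an assumption.

def p_at_least_one_winner(n_bets: int, p_win: float) -> float:
    """P(at least one success) = 1 - P(all fail), assuming independent bets."""
    return 1 - (1 - p_win) ** n_bets

# One big bet versus a portfolio of ten small ones, each with a 10% chance:
print(round(p_at_least_one_winner(1, 0.10), 2))   # 0.1
print(round(p_at_least_one_winner(10, 0.10), 2))  # 0.65
```

Ten long-shot bets give you roughly a two-in-three chance of at least one winner – which is the whole case for many small bets over one big one.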

More importantly, though, a small project with a clear goal has a better chance of success in most companies than a large project with a high-level goal. “Take 3 people and find 1000 customers” is clear, doesn’t threaten other departments in the company, and focuses attention.