
Compose first, build last.

I was brainstorming a new product with a client the other day. We had all sorts of amazing ideas, ranging from cool user-interface tweaks to (almost) entirely new business models. “Wouldn’t it be cool if…” was probably the most commonly uttered phrase.

And then we came back to ground. Brainstorming is fun – but we wanted to launch a product, and as quickly as possible. So we looked at what we could do by putting together existing services, frameworks and solutions, rather than building things from scratch. We agreed to settle on a common user interface framework, and to use content-as-a-service and commerce-as-a-service platforms; we will run the production site on Amazon Web Services, and we’ll use Gitlab to manage our code repositories and deployment pipelines. We think we can probably limit the amount of custom development to a couple of weeks – and most of that time will go to our secret sauce idea, rather than plumbing and housekeeping tasks.

It made me think of my first start-up, back in 1999 – an ecommerce site selling prints and frames. We hand-built an ecommerce engine, a pick/pack/dispatch module, a label printer for the warehouse, integration with our payment gateway, a basic CMS, and a product catalogue solution.

We had a designer invent our very own navigation metaphor, and our check-out journey was only loosely based on Amazon’s process. We built a custom “tooltip” solution when it became clear not everyone knew how to check out online. I spoke to some of the guys there and learnt how they used analytics to see what worked – it blew our minds!

We rented a rack at a hosting provider in West London, and I physically wired in our server (we could only afford the one, to begin with). It took about a month to set up our credit card processing facility – the bank didn’t really have much faith in start-up ecommerce companies taking payment online. We built the site with lots of hand-crafted code – no frameworks for us!

So much has changed. We could build that same solution today mostly by composing existing solutions, for a fraction of the cost. The platform that took 5 of us 6 months to build would probably take 2 people a few weeks at most. The capital required would follow a similar trend – from tens of thousands for hosting, servers etc. to hundreds.

For a similar project today, we’d use one of the many ecommerce platforms (I like Solidus, but we’d probably end up with Magento or Shopify), and use all of the established design artefacts – navigation structures, page layouts etc.; our user journeys would be like every other ecommerce site’s (and that would be a good thing!), and we’d host on Amazon or similar. All of our energy would go to the unique, distinguishing features and content which define our proposition.

On the other hand…it seemed so easy to launch back then. We built it, they came. We did some PR, a little SEO, and our site got traffic. I remember the go-live date, and somehow, from somewhere, the first visitor arrived. They left after a short while, but we got a dozen or so orders on the first day. It was like magic! Today, it’s much easier to build things – but much, much harder to get people to pay attention. Let alone visit your site…sure, we’ve got Facebook ads, and Twitter sponsored posts, and Google ads, and the banner ad is still going strong – but you’re competing against every other entity in the world for attention. In recent business modelling exercises, we found that while the cost of “building” propositions (the design and technical aspects) has dropped around 10-fold in the last 15 years, the cost of acquiring customers online has risen by at least 10-fold, and in some areas by much, much more.

Enterprise innovation through escalating bets

A lot of my work involves working with large, established enterprises to find new ways to reach customers. Sometimes, that’s “just” marketing, sometimes it’s product development, and sometimes it’s business model design.

Enterprise innovation is challenging. Most established models for innovation in software build on the concept of “lean” and “iterative/incremental”. The most common point of view is that you get better results by experimenting and integrating feedback than by planning and designing in the absence of customer interaction. For a 3-person start-up, a “minimum viable product” can be very minimal, but for a company with billions of dollars in revenue, and a reputation to protect, “minimum viable product” might be a huge commitment.

The best model I’ve seen is with a client known for their innovation (no names, of course); I’ve called their approach “escalating bets”.

Escalating bets as a portfolio strategy

Our client creates a portfolio of innovation “bets”. Anyone who has an idea they believe in and can meet a fairly low initial investment decision bar can get a limited amount of time (typically 6 – 12 months) and money to prove their idea is a winner. It’s a small bet from an enterprise point of view – many companies spend that much on discussing why an idea shouldn’t go ahead.

Each idea has to have an agreed proof point for the first bet, usually related to whether the idea can attract customers. “We can get 1000 people to sign up”. “We can get at least 10% of our users to spend more than an hour a day on this”. “Our first users will recommend us to at least 1 other person”.

If the bet pays off – if the team hit their goals – the company places a second, larger bet. 9 – 18 months, roughly double the original budget. They have to agree another proof point – typically involving customer acceptance and some kind of business case. And so on, and so forth. I’m pretty sure you’ve used products delivered in this way.

Why do escalating bets work?

Innovation involves risk. Companies focus on risk to reputation, risk to brand, risk to strategic priorities, risk to margin – but the bigger risks in innovation are finding a market, and – very simply – execution. The way most companies approach innovation is to place a small number of large bets – large R&D projects, consulting engagements with specialist companies, large product development teams. As each bet is large, and involves the careers and reputations of several senior people, the bets tend to be fairly conservative – incremental innovations, close to the defendable core of the business.

And if one of those large bets fails, it tends to make the business more conservative. I know of a large IoT innovation project that ran out of steam after 2 or 3 years, and a significant investment. The project was large, with a lot of management attention, and – from my point of view – imploded under its own weight. Different departments wanted to impress their own priorities on the innovation. Decision making was consensus-driven and slow. The large number of internal and external stakeholders and participants made the communication overhead almost unmanageable.

The basic concepts were great, it was the sheer size of the bet that brought the project to its knees. This doesn’t mean “IoT” was a bad bet – as a competitor proved just a few months later.

Escalating bets have two benefits.

The large number of bets means you have a better chance of success – VC companies expect only a small number of their start-up investments to pay off. Running a portfolio of small-ish innovation projects gives you a better chance of finding a winner.
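The portfolio argument can be made concrete with a toy simulation. Everything here is invented for illustration – the success probability, the payoff and the doubling rule are assumptions, not data from any client:

```python
import random

def escalating_portfolio(n_bets=20, stake=1.0, p_success=0.2,
                         rounds=3, payoff=40.0):
    """Toy model of a portfolio of escalating bets: every idea gets a
    small initial stake; only ideas that hit their proof point receive
    the next, doubled stake. Losers cost only what was staked so far."""
    total_cost, total_return = 0.0, 0.0
    for _ in range(n_bets):
        current_stake = stake
        alive = True
        for _ in range(rounds):
            total_cost += current_stake
            if random.random() > p_success:  # proof point missed
                alive = False
                break
            current_stake *= 2  # the next bet is roughly double the last
        if alive:
            total_return += payoff  # one winner pays for many failures
    return total_return - total_cost

random.seed(1)
print(escalating_portfolio())  # net result of one simulated portfolio
```

The structural point survives any choice of parameters: a killed idea costs only its stakes to date, while a single big bet of the same total budget risks everything on one roll of the dice.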

More importantly, though, a small project with a clear goal has a better chance of success in most companies than a large project with a high-level goal. “Take 3 people and find 1000 customers” is clear, not threatening to the other departments in the company, and focuses attention.


Sleep habits.

I found an article online about re-setting your sleep patterns. 

I’ve tried this myself, repeatedly – it’s a great way to re-set. I still struggle with insomnia sometimes, and this routine is what I follow when it all gets out of whack. The thing I’d add is that I make sure I’m properly hydrated before going to sleep – typically by drinking a pint of water half an hour before I go to sleep.

Working with large, distributed teams

My last few projects have involved large-ish (up to 50 or so) teams, spread across multiple locations. I’ve been reading about this, and had the occasional conversation with @seldo about how NPM runs its teams, and I’ve realized a few things.

Firstly – language is crucial. Invest in a common idiom!

My current development team has roughly 30% native English speakers; we have French, Polish, Romanian, Russian, Gujarati and Dutch speakers too. While everyone speaks great English, there is a lot of subtlety in language. Developers use a specific vocabulary when communicating with each other – is it a bug, or a defect? Is it a pull request or a review? What, exactly, is the front-end – is it HTML/CSS/JavaScript, or is it “the website”? Is a task “pending” or “open”? Having a common, shared language makes things easier – especially when agreeing the status of the project. Is work that’s “in review” done, or not done? Do you accept “80% done” as done?
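One lightweight way to make that shared vocabulary concrete is to write it down in code. This is a minimal sketch – the status names and the definition of “done” are invented for illustration, not any team’s standard:

```python
from enum import Enum

class TaskStatus(Enum):
    """An agreed task vocabulary, written down once so "done" means
    the same thing in every office. The names are illustrative."""
    OPEN = "open"                # ready for development, dependencies clear
    IN_PROGRESS = "in progress"
    IN_REVIEW = "in review"      # explicitly not done yet
    DONE = "done"                # reviewed, merged, acceptance criteria met

def is_done(status: TaskStatus) -> bool:
    """'80% done' and 'in review' both count as not done."""
    return status is TaskStatus.DONE
```

The value isn’t in the code itself – it’s that the team had to argue about the definitions once, up front, instead of during every stand-up.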

But the business domain is where the real benefits are – making sure the team (and the client) all share the same language is always important (Evans’ Domain-Driven Design emphasises this too), but with a distributed team, it’s crucial. I spent about half a day on Slack with a developer in a far-away land trying to work out how to reproduce a bug the client had logged – it turned out all 3 of us had a slightly different interpretation of “product” on the ecommerce site. You can work that out in 5 minutes face-to-face, but at a distance, it can become a huge time-sink.

The second issue is – ridiculously – audio quality. One of our offices has beautiful high ceilings, and stylishly sparse decor in some of the meeting rooms – as a result, the audio quality on audio conferencing is terrible. We’ve got great speakers from Jabra which help – but it’s really worth investing in the best quality audio kit when using Hangouts for a Scrum.

The third issue is a little more subtle. It’s a culture thing – and I’m not sure I always get this balance right. Especially when working remotely, it’s really important that developers get “workable” assignments – a clearly described problem to solve. If it’s a development task, the story/task/whatever should be ready for development – all the dependencies should be clear, the task should have enough requirement detail to allow the developer to work on it. If it’s a bug, there need to be clear steps to reproduce the bug, expected and actual behaviours etc. This is hard to achieve with a remote team – backlog grooming, sprint planning etc. is hard to do remotely, and we tend to use a more centralized model, with a small group of developers and managers assessing and assigning work. Some developers enjoy this – “I just want to code, give me my todo list and don’t bother me”. Others want to be part of the process, and feel ownership of their piece of work. It takes a while to find the balance here – but I’ve learnt to hire “heads-down, coding” people alongside developers who are more engaged in the process.


Customer centric businesses need to be able to iterate. Quickly

I’m seeing the concept of “customer centric” business in more of my work. Focusing on your customer’s experience is obviously a good thing.


I was chatting to a friend whose business is trying to become more customer centric. My friend is the lead for this project, and he’s encountering all the classic challenges – the data they have is in silos, different departments have a different interpretation of “customer centric” behaviour, and short-term sales targets don’t always support long-term customer focus. But his biggest frustration is the lack of velocity. His team is focused on a culture change – focusing the entire business on the customer requires a shift in perspective from the entire organisation. But they also need to deliver tangible change – new business processes, an update to the web platform (that’s how we got chatting), new reports for senior management etc. And this is where he’s stuck – his IT department has a 3 year planning horizon, and is currently scheduling new projects for a start in 2 years. This encourages every project to be huge – if you have to wait years to get your software, you want to make sure you get everything you might possibly need!

This, in turn, has made my friend very nervous about his project – if you get one attempt to change the business process (through a software project), you have to get everything right first time round. So, he’s investing in lots of research, and prototyping, and customer interviews, and KPI development. He sees this as a make-or-break event in his career, and he’s making sure he’s thought of everything before finalizing the software specification; he’s currently expecting the first release in around 3 years.

And that is a problem.

In 3 years, customers will have moved on. Our expectations are no longer shaped by TV, or how we’re treated in a chain restaurant, or the supermarket. Customer expectations are increasingly shaped by web-native experiences. Even though they may be ethically challenged, Amazon, AirBnB and Uber create expectations of instant, seamless gratification (at a low price). It’s unlikely those experiences will remain frozen for the next 3 years – so a process designed around today’s expectations is likely to be outdated by the time it ships. And of course, we have no way of knowing what new things may emerge in the meantime – it took many large companies 3 to 5 years to make their websites work on mobile devices.

So, if you want to be “customer centric”, you need to accept that customer expectations are evolving faster than ever – and you need to be able to keep up.

Can an agency be a consultant?

Adweek published an article noting the rise of “consulting services” within agencies. I think it misses the point.

I’ve worked at both “agencies” and “consulting shops” – and I’ve worked on several engagements where agencies and consultancies worked together. I’ve spoken to many friends in agencies and consulting firms, and the fundamental difference is not one of scale, or “competence”, or job title – it’s a fundamentally different view of what matters.

My friends who work for consulting firms believe that “the business” matters – how it’s organized, where the value is created, how the product range fits into the marketplace, pricing models, how it interacts with suppliers and regulators. They see “the market” as a primarily statistical entity, with “segments” and “channels”, and believe that marketing and advertising uses a magical process called “creativity” to convince those market segments to buy the goods and services.

Some of my agency friends believe that it’s all about “the brand” – how consumers perceive it, how to tell stories about the brand, how to reach new people who might love the brand, how to measure the brand’s impact. Many of my agency friends go further, thinking about how “the brand” contributes to sales – how do we convert love for the brand into sales, how do we measure that contribution, where can we open up new opportunities for people to interact and buy? They see the rest of the business primarily as a black box which provides money for marketing, and products or services to sell to the consumer, with a magical process called “operations” which somehow delivers all this stuff.

Okay, okay, I’m exaggerating. But not much.

This dichotomy – separating customers from “the business” – has always been a bit problematic, but the Internet is breaking down those walls ever faster. Advertising – taking attention from consumers and using it to push your message – is ever more difficult as media fragments and consumers gain more choice, in both the information they can access and the products and services they can buy. Today, “brand” is about how you treat your customers, not about your logo, or your strap line, or your colour palette.

But a business that focuses purely on operational efficiency is doomed – you have to innovate and find new markets, or you will end up competing on price at best, or becoming irrelevant at worst. While there is obvious value in improving operational capabilities, real innovation combines operational capabilities with customer needs.

And that brings me back to the Adweek article.

Where agencies can add new value is not in “marketing consulting” – we have been doing that for ages. Instead, agencies can bring that deep understanding of how to create a link with consumers – a story, a delightful user experience, a slick and natural-feeling technical implementation – to the operational improvements you can get from Accenture or McKinsey.

Offices suck.

About 5 years ago, people realized that they had better IT from Google, Microsoft and LinkedIn than they got from their own IT department. Their home devices were much nicer to use, and much easier to live with, than their work laptop. They got free, unlimited email from Google, with amazing search, while their IT department capped their mailbox size.


This caused a bit of a corporate revolution, commonly known as “BYOD” or “bring your own device”. My girlfriend works for a local council (hardly a trailblazer), and doesn’t get a work laptop – instead, she gets a budget and some limits, and goes and buys her own laptop.


The same will happen with IoT.


At home I can tell Alexa to turn up the heat, or dim the lights, or play some music. I can see what the weather’s going to be when I leave home by looking at my alarm clock, and I know that the heating will come on when people are home, and stop when they leave.


At work, the HVAC works on a timer. The lights are on or off (mostly on) because I switch them on or off. If I’m cold, I put on a sweater. If I’m hot – well, HR have asked me not to disrobe anymore. If my colleagues decide to play pumping dance music, I have the option of wearing headphones, suffering, or physical violence.

My home is a more productive environment than the office when I have to get stuff done.

There goes my job – again?

A fairly widely reported story last week explains how Microsoft Research has created an AI that can write software. Hacker News went crazy – as you might expect.

Can an AI write software? Yes – it clearly can. Writing software means converting sentences a human understands into instructions for a computer; if Google Translate can convert “where’s the post office please?” into 8 other languages, there’s no obvious reason it couldn’t convert “add 12 to 88” into computer-executable form. In fact, this concept is older than I am – the venerable Cobol language was created in 1959 with the goal of converting “business language” into computer programs – Cobol stands for “common business-oriented language”. And compared to C, or Fortran, it kinda succeeds – Cobol source code at first glance is less dense, and most of the keywords look like “English”. But programming in Cobol is still programming – it’s unlikely that a sales director would be able to use Cobol to work out the monthly commission report.

In the 1990s, we got a lot of hype around 4GLs; in the early 2000s, model-driven development and round-trip engineering promised to make software development much more business-friendly. Business people could express their requirements, and they would be converted to executable code automagically.

None of these things really worked. Cobol was a hugely successful language – but not because it did away with programmers; rather, it was a widely available language that matched the needs of enterprises which were automating for the first time. I don’t think I’ve heard anyone say “4GL” for about a decade; round-trip engineering foundered on the horrible tools that supported it, which did little to simplify life for either developers or business people.

The defining skill of a software developer isn’t the language they code in – it’s the ability to convert requirements into working software. Computers already help with this by compiling or interpreting “human-readable” code into machine-executable code. It’s not ridiculous to believe an AI could use a unit test to write code which passes that test, and it’s not ridiculous to assume an AI could convert a BDD-style requirement into working software. The Microsoft research paper says they have taken the first step – their AI solves coding test problems, which are typically specified as “write a program which takes a sequence of numbers – 8, 3, 1, 21 – and sorts them in ascending order, returning 1, 3, 8, 21”. Extending that to a unit test is a logical and manageable step; I could see an environment where a programmer defines the basic structure of the application – classes with public methods and properties, for instance – along with the unit tests to specify the behaviour, and has an AI fill in the details.

The next jump – from “programmer designs structure, AI fills in behaviour” to “AI designs structure” – would be a huge one. It would likely run into the same problems you get with model-based development, or with many object-relational mapping tools – the level of detail required to let the AI make the choices you want it to make would be high, and a specification at that level of detail might be indistinguishable from writing software.

The jump beyond that – “business person defines requirement, AI interprets and builds solution” – well, I’ve been wrong before, but I don’t think that’s credible in the next decade, and possibly longer. It would require natural language processing to reach full maturity, and the AI would need a deep understanding of business domains, of the way humans view and interact with business processes, and of user interface design.

So, I think my job is safe for now. Not sure about any computer science graduates leaving university right now, though…
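The “unit test as specification” step is easy to illustrate. This sketch is purely hypothetical – it’s not how the Microsoft system works – but it shows the division of labour: a human writes the test, and a generator (here, a hand-written body standing in for the AI) fills in the behaviour:

```python
# The specification: a unit test precise enough for either a human
# or a code-generating model to target.
def test_sort_ascending():
    assert sort_ascending([8, 3, 1, 21]) == [1, 3, 8, 21]
    assert sort_ascending([]) == []

# The behaviour to be filled in - written by hand here, standing in
# for whatever a code generator would produce to satisfy the test.
def sort_ascending(numbers):
    return sorted(numbers)

test_sort_ascending()  # the generated body meets the spec
```

The interesting question is what happens when the tests under-specify the behaviour – a human fills the gaps with domain knowledge, which is exactly what the AI in this scenario lacks.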

Don’t worry what you’ll do when you leave education – your job hasn’t been invented yet.

I was listening to a podcast the other day – Tim Ferriss talking to Chris Young – and there was a great quote from Chris when he discussed the relationship he had with his father. At some stage, Chris’ father told him: “Don’t worry what you’ll do when you leave education – your job hasn’t been invented yet.”

I am the father of a bunch of teenagers, and they regularly tell me I’m too demanding/my expectations are too high/they are not sure what they want to do, and can I please leave their room? The thing Chris said on the podcast really rang true. I think I might be wrong in my expectations of my kids.

I’m getting on a bit, but when I was at secondary school, computer programming wasn’t part of the curriculum (though you could take extra classes in Hebrew, typing, and bookkeeping). When I went to university, we had access to a computer lab – it ran BBC Micros – and proudly touted access to JANET – but the idea of a global network with access for all was pure science fiction. Oh, and TV in the UK was limited to 4 channels; in the Netherlands we got more, but only because we could pick up TV from Germany, the UK, France, Belgium and – for some reason – PBS. Ordering a book that wasn’t available in our local bookshop took about 2 weeks – if you happened to know the ISBN. Phone calls were expensive – international ones were extravagant. The adjective “social” was more commonly associated with “disease” than “media”.

So, no, the jobs I’ve done for the last 18 years weren’t invented when I was at school; nor were many of the technical skills I have used over the last 25 years (Web development, Visual Basic, PHP, Ruby, Java, DevOps, Agile development). Some of the others (SQL, C/C++, object orientation, software project management) were around, but not really commonly known. Some of the things I use every day to do my job – video conferencing, online chat with people around the world, shared knowledge and code repositories – would have sounded like the deluded ramblings of a madman back in the 80s.

On the other hand…many of the skills and habits I picked up in my teens and early twenties continue to serve me well every day. I learnt to work hard when I worked as a waiter on a passenger ship. I learnt to write at university, and to write in a business sense at PA Consulting, early in my career. I learnt SQL in my first year out of university. I learnt to think about meta-processes and team work in a band. I learnt how to lead a team at school, organising a music festival. I learnt how business finance works in the first two years out of university – when I also learnt the basics of marketing, sales and presenting. I learnt how to pick up new skills in the first few years – I was a graduate trainee, doing 6 months in every department in the company.

I’m still trying to find the common thread here – all the things I learnt involved me being engaged and busy (I’m glad I hadn’t yet found Civilization back in my teens…). They involved exposure to new things, people and experiences outside my normal circle, and surprisingly little formal education.

Am I just a (late) baby boomer riding the technology wave? My father was born before World War II, and trained as a merchant seaman. He learnt to navigate by the stars, using a sextant and watch. He learnt how to adjust magnetic compasses, and spent time training on tall ships. By the time he was 40, most of those skills were still in daily use, though modern navigation devices like chart plotters were coming onto the market. Many of those skills are still being taught at naval colleges today. My mother trained as a secretary – she could take shorthand, and type an ungodly number of words per minute. Again – her job still existed by the time she reached 40, though the mechanical typewriter was being replaced by word processors.

My grandfathers were both born before World War I. They trained as craftsmen, and the basics of their trade didn’t change all that much during their working lives – though one of them trained as an air mechanic in the Royal Flying Corps in World War I, and worked on aircraft design in World War II (he had a Hurricane prop in the garden shed). By the time he died in the 1990s, much of his training in aircraft maintenance was obviously redundant – but his knowledge of the internal combustion engine never went out of date.

So, yes – I think change is accelerating.

Last time I spoke to one of my teenagers, I tried to summarize it thus – as a teenager, your job is to work out what you like doing, and what you’re good at. If you have any extra time or energy, work out how to learn new skills, and communicate. Any educational achievements are a bonus.


Aligot – mashed potato that will kill you (but it’s worth it).

We went to Paris for a few days last week, and ended up in La Petite Perigourdine for dinner. It’s a corner restaurant, a few hundred yards from the tourist hotspots near Notre Dame on the left bank, and we chose it because it looked busy with local people.

The food was great – the onion soup was pretty much the perfect implementation of a French classic: rich, dark, wintery. My steak was perfectly cooked, and the seasoning was superb – it took a relatively simple cut of beef and turned it into a classic. We had a great bottle of wine – the Cuvée Mirabelle from Château de la Jaubertie. Not hugely expensive, but as a dry white it’s amazingly complex, with oak notes and a great mouth-feel.

One of the new discoveries for me was served with my steak – a dish called aligot. My steak arrived on a big plate, otherwise empty; the waiter arrived with a copper pan with a semi-liquid substance, and poured it on my plate with some panache. The smell was amazing – cheese and garlic, but not overwhelming. When I tasted it, the texture was rather dense – but pleasingly so. The flavour was rich and intense – a combination of fragrant garlic, tangy cheese and soft potato. It was clear that this dish would take years off my life, but it would be worth it.

Once home, I set about recreating the dish. I found a few recipes, but none were convincing – so I experimented, and I think I’ve stumbled on the correct way. It’s an easy enough dish, but the timing is fairly unforgiving – once you’ve created the mash, you should serve it immediately or it turns into glue.


This recipe is for 2 people – scale up as required.

Boil a kettle.

Then, start by peeling potatoes – I use charlotte potatoes, they’re nice and waxy – and cut them into similarly sized chunks. Depending on their size, I use 5 small or 3 medium size potatoes to feed 2.

Put the potatoes in a steamer, add a bit of salt, and pour boiling water from the kettle into the pan under the steamer. Steam the potatoes until done – around 15 minutes.

Put a big knob of butter – around 50 grams – into a sauce pan, and heat very gently.

Finely chop or mince 3 cloves of garlic, and add to the butter. Don’t let the butter turn brown – you want it warm, but don’t let the garlic change colour.

Once the potatoes are cooked, tip them into a mixing bowl or into a clean, dry saucepan. A little moisture is okay, but you want the potatoes to be fairly dry. If you can keep the bowl or pan warm, it will help the process.

Pour the garlic-infused butter into the potatoes.

Add three generous handfuls of grated Lancashire cheese to the potatoes (the French use a cheese called Cantal), and use an electric whisk to turn this mixture into mash. Add salt and pepper whilst whisking – I also like to add a tiny bit of nutmeg.

The whisking will be messy – but after a few minutes, the substance will turn soft, fluffy, almost like bread dough. Serve immediately.