Working with large, distributed teams

My last few projects have involved large-ish (up to 50 or so) teams, spread across multiple locations. I’ve been reading about this, and had the occasional conversation with @seldo about how NPM runs its teams, and I’ve realized a few things.

Firstly – language is crucial. Invest in a common idiom!

My current development team has roughly 30% native English speakers; we have French, Polish, Romanian, Russian, Gujarati and Dutch speakers too. While everyone speaks great English, there is a lot of subtlety in language. Developers use a specific vocabulary when communicating with each other – is it a bug, or a defect? Is it a pull request or a review? What, exactly, is the front-end – is it HTML/CSS/JavaScript, or is it “the website”? Is a task “pending” or “open”? Having a common, shared language makes things easier – especially when agreeing the status of the project. Is work that’s “in review” done, or not done? Do you accept “80% done” as done?

But the business domain is where the real benefits are – making sure the team (and the client) all share the same language is always important (Evans' Domain-Driven Design emphasises this too), but with a distributed team, it's crucial. I spent about half a day on Slack with a developer in a far-away land trying to work out how to reproduce a bug the client had logged – it turned out all 3 of us had a slightly different interpretation of "product" on the ecommerce site. You can work that out in 5 minutes face-to-face, but at a distance, this can become a huge time-sink.

The second issue is – ridiculously – audio quality. One of our offices has beautiful high ceilings, and stylishly sparse decor in some of the meeting rooms – as a result, the audio quality on audio conferencing is terrible. We’ve got great speakers from Jabra which help – but it’s really worth investing in the best quality audio kit when using Hangouts for a Scrum.

The third issue is a little more subtle. It's a culture thing – and I'm not sure I always get this balance right. Especially when working remotely, it's really important that developers get "workable" assignments – a clearly described problem to solve. If it's a development task, the story/task/whatever should be ready for development – all the dependencies should be clear, and the task should have enough requirement detail to allow the developer to work on it. If it's a bug, there need to be clear steps to reproduce it, expected and actual behaviours, etc.

This is hard to achieve with a remote team – backlog grooming, sprint planning etc. are hard to do remotely, so we tend to use a more centralized model, with a small group of developers and managers assessing and assigning work. Some developers enjoy this – "I just want to code, give me my todo list and don't bother me". Others want to be part of the process, and to feel ownership of their piece of work. It takes a while to find the balance here – but I've learnt to hire "heads-down, coding" people alongside developers who are more engaged in the process.


Customer centric businesses need to be able to iterate. Quickly

I’m seeing the concept of “customer centric” business in more of my work. Focusing on your customer’s experience is obviously a good thing.

But…

I was chatting to a friend whose business is trying to become more customer centric. My friend is the lead for this project, and he’s encountering all the classic challenges – the data they have is in silos, different departments have a different interpretation of “customer centric” behaviour, and short-term sales targets don’t always support long-term customer focus. But his biggest frustration is the lack of velocity. His team is focused on a culture change – focusing the entire business on the customer requires a shift in perspective from the entire organisation. But they also need to deliver tangible change – new business processes, an update to the web platform (that’s how we got chatting), new reports for senior management etc. And this is where he’s stuck – his IT department has a 3 year planning horizon, and is currently scheduling new projects for a start in 2 years. This encourages every project to be huge – if you have to wait years to get your software, you want to make sure you get everything you might possibly need!

This, in turn, has made my friend very nervous about his project – if you get one attempt to change the business process (through a software project), you have to get everything right first time round. So, he’s investing in lots of research, and prototyping, and customer interviews, and KPI development. He sees this as a make-or-break event in his career, and he’s making sure he’s thought of everything before finalizing the software specification; he’s currently expecting the first release in around 3 years.

And that is a problem.

In 3 years, customers will have moved on. Our expectations are no longer shaped by TV, or how we're treated in a chain restaurant, or the supermarket. Customer expectations are increasingly shaped by web-native experiences. Even though they may be ethically challenged, Amazon, AirBnB and Uber create expectations of instant, seamless gratification (at a low price). It's unlikely those experiences will remain frozen for the next 3 years – so a process designed around today's expectations is likely to be outdated. And of course, we have no way of knowing what new things may emerge in the meantime – it took many large companies 3 to 5 years to make their websites work on mobile devices.

So, if you want to be “customer centric”, you need to accept that customer expectations are evolving faster than ever – and you need to be able to keep up.

Can an agency be a consultant?

Adweek published an article noting the rise of “consulting services” within agencies. I think it misses the point.

I’ve worked at both “agencies” and “consulting shops” – and I’ve worked on several engagements where agencies and consultancies worked together. I’ve spoken to many friends in agencies and consulting firms, and the fundamental difference is not one of scale, or “competence”, or job title – it’s a fundamentally different view of what matters.

My friends who work for consulting firms believe that “the business” matters – how it’s organized, where the value is created, how the product range fits into the market place, pricing models, how it interacts with suppliers and regulators. They see “the market” as a primarily statistical entity, with “segments” and “channels” and believe that marketing and advertising uses a magical process called “creativity” to convince those market segments to buy the goods and services.

Some of my agency friends believe that it’s all about “the brand” – how consumers perceive it, how to tell stories about the brand, how to reach new people who might love the brand, how to measure the brand’s impact. Many of my agency friends go further, thinking about how “the brand” contributes to sales – how do we convert love for the brand into sales, how do we measure that contribution, where can we open up new opportunities for people to interact and buy? They see the rest of the business primarily as a black box which provides money for marketing, and products or services to sell to the consumer, with a magical process called “operations” which somehow delivers all this stuff.

Okay, okay, I’m exaggerating. But not much.

This dichotomy – separating customers from "the business" – has always been a bit problematic, but the Internet is breaking down those walls ever faster. Advertising – taking attention from consumers and using it to push your message – is ever more difficult as media fragments and consumers gain more choice in both the information they can access and the products and services they can buy. Today, "brand" is about how you treat your customers, not about your logo, or your strap line, or your colour palette.

But a business that focuses purely on operational efficiency is doomed – you have to innovate, and find new markets, or you will have to compete on price at best, or become irrelevant at worst. While there is obvious value in improving operational capabilities, real innovation combines operational capabilities with customer needs.

And that brings me back to the Adweek article.

Where agencies can add new value is not in "marketing consulting" – we have been doing that for ages. Instead, agencies can bring that deep understanding of how to create a link with consumers – a story, a delightful user experience, a slick and natural-feeling technical implementation – to the operational improvements you can get from Accenture or McKinsey.

Offices suck.

About 5 years ago, people realized that they had better IT from Google, Microsoft and LinkedIn than they got from their own IT departments. Their home devices were much nicer to use, and much easier to live with, than their work laptop. They got free, unlimited email from Google, with amazing search, while their IT department limited their mailbox.


This caused a bit of a corporate revolution, commonly known as “BYOD” or “bring your own device”. My girlfriend works for a local council (hardly a trailblazer), and doesn’t get a work laptop – instead, she gets a budget and some limits, and goes and buys her own laptop.


The same will happen with IoT.


At home I can tell Alexa to turn up the heat, or dim the lights, or play some music. I can see what the weather’s going to be when I leave home by looking at my alarm clock, and I know that the heating will come on when people are home, and stop when they leave.


At work, the HVAC works on a timer. The lights are on or off (mostly on) because I switch them on or off. If I’m cold, I put on a sweater. If I’m hot – well, HR have asked me not to disrobe anymore. If my colleagues decide to play pumping dance music, I have the option of wearing headphones, suffering, or physical violence.

My home is a more productive environment than the office when I have to get stuff done.

There goes my job – again?

A fairly widely reported story last week explains how Microsoft Research has created an AI that can write software. Hacker News went crazy – as you might expect.

Can an AI write software? Yes – it clearly can. Writing software means converting sentences a human understands into instructions for a computer; if Google Translate can convert "where's the post office please?" into 8 other languages, there's no obvious reason it couldn't convert "add 12 to 88" into computer-executable form. In fact, this concept is older than I am – the venerable Cobol language was created in 1959 with the goal of converting "business language" into computer programs – Cobol stands for "common business-oriented language". And compared to C, or Fortran, it kinda succeeds – Cobol source code at first glance is less dense, and most of the keywords look like "English". But programming in Cobol is still programming – it's unlikely that a sales director would be able to use Cobol to work out the monthly commission report.

In the 1990s, we got a lot of hype around 4GLs; in the early 2000s, model-driven development and round-trip engineering promised to make software development much more business-friendly. Business people could express their requirements, and they could be converted to executable code automagically.

None of these things really worked. Cobol was a hugely successful language – but not because it did away with programmers; rather, it was a widely available language that matched the needs of enterprises which were automating for the first time. I don't think I've heard anyone say "4GL" for about a decade; round-trip engineering foundered on the horrible tools that supported it, which hardly simplified life for either developers or business people.

The defining skill of a software developer isn't the language they code in – it's the ability to convert requirements into working software. Computers already help with this by compiling or interpreting "human-readable" code into machine-executable code. It's not ridiculous to believe an AI could use a unit test to write code which passes that test, and it's not ridiculous to assume an AI could convert a BDD-style requirement into working software. The Microsoft Research paper says they have taken the first step – their AI solves coding test problems, which are typically specified as "write a program which will take a sequence of numbers (8, 3, 1, 21) and sort them in ascending order, returning 1, 3, 8, 21". Extending that to a unit test is a logical and manageable step; I could see an environment where a programmer defines the basic structure of the application – classes with public methods and properties, for instance – along with the unit tests that specify the behaviour, and has an AI fill in the details.

The next jump – from "programmer designs structure, AI fills in behaviour" to "AI designs structure" – would be a huge one. It would likely run into the same problems you get with model-based development, or many object-relational mapping tools – the level of detail required to allow the AI to make the choices you want it to make would be high, and a specification at that level of detail might be indistinguishable from writing software.

To then jump to "business person defines requirement, AI interprets and builds solution" – well, I've been wrong before, but I don't think that's credible in the next decade, and possibly longer. It would require natural language processing to reach full maturity, and the AI would need a deep understanding of business domains, the way humans view and interact with business processes, and user interface design.

So, I think my job is safe for now. Not sure about any computer science graduates leaving university right now, though…
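To make that concrete, here is a minimal, purely illustrative sketch in Python – my own invention, not anything from the Microsoft paper. The programmer writes the structure (a function signature) and the tests; the body, written by hand here, is the part such an AI would be asked to fill in:

from typing import List


def sort_ascending(numbers: List[int]) -> List[int]:
    # Structure defined by the programmer. In the imagined workflow, the body
    # below would be generated to satisfy the tests; here it's written by hand.
    return sorted(numbers)


def test_sort_ascending():
    # The behavioural specification the programmer writes up front.
    assert sort_ascending([8, 3, 1, 21]) == [1, 3, 8, 21]
    assert sort_ascending([]) == []


if __name__ == "__main__":
    test_sort_ascending()
    print("tests pass")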

Don’t worry what you’ll do when you leave education – your job hasn’t been invented yet.

I was listening to a podcast the other day – Tim Ferriss talking to Chris Young – and there was a great quote from Chris when he discussed the relationship he had with his father. At some stage, Chris' father told him "Don't worry what you'll do when you leave education – your job hasn't been invented yet."

I am the father of a bunch of teenagers, and they regularly tell me I’m too demanding/my expectations are too high/they are not sure what they want to do, and can I please leave their room? The thing Chris said on the podcast really rang true. I think I might be wrong in my expectations of my kids.

I’m getting on a bit, but when I was at secondary school, computer programming wasn’t part of the curriculum (though you could take extra classes in Hebrew, typing, and bookkeeping). When I went to university, we had access to a computer lab – it ran BBC Micros – and proudly touted access to JANET – but the idea of a global network with access for all was pure science fiction. Oh, and TV in the UK was limited to 4 channels; in the Netherlands we got more channels, but only because we got access to TV from Germany, the UK, France, Belgium and – for some reason – PBS. Ordering a book that wasn’t available in our local bookshop took about 2 weeks – if you happened to know the ISBN. Phone calls were expensive – international ones were extravagant. The adjective “social” was more commonly associated with “disease” than “media”.

So, no, the jobs I’ve done for the last 18 years weren’t invented when I was at school; many of the technical skills I have used over the last 25 years (Web development, Visual Basic, PHP, Ruby, Java, DevOps, Agile development) weren’t invented when I was at school. Some of the others (SQL, C/C++, object orientation, software project management) were around, but not really commonly known. Some of the things I use every day to do my job – video conferencing, online chat with people around the world, shared knowledge and code repositories – would have sounded like the deluded ramblings of a madman back in the 80s.

On the other hand…many of the skills and habits I picked up in my teens and early twenties continue to serve me well every day. I learnt to work hard when I worked as a waiter on a passenger ship. I learnt to write at university, and to write in a business sense at PA Consulting, early in my career. I learnt SQL in my first year out of university. I learnt to think about meta-processes and team work in a band. I learnt how to lead a team at school, organising a music festival. I learnt how business finance works in the first two years out of university – when I also learnt the basics of marketing, sales and presenting. I learnt how to pick up new skills in the first few years – I was a graduate trainee, doing 6 months in every department in the company.

I’m still trying to find the common thread here – all the things I learnt involved me being engaged and busy, and they involved exposure to new things, people and experiences outside my normal circle, and surprisingly little formal education. I’m glad that I hadn’t yet found Civilization back in my teens…

Am I just a (late) baby boomer riding the technology wave? My father was born before World War II, and trained as a merchant seaman. He learnt to navigate by the stars, using a sextant and watch. He learnt how to adjust magnetic compasses, and spent time training on tall ships. By the time he was 40, most of those skills were still in daily use, though modern navigation devices like chart plotters were coming onto the market. Many of those skills are still being taught at naval colleges today. My mother trained as a secretary – she could take shorthand, and type an ungodly number of words per minute. Again – her job still existed by the time she reached 40, though the mechanical typewriter was being replaced with word processors.

My grandfathers were both born before World War I. They trained as craftsmen, and the basics of their trade didn’t change all that much during their working lives. One of them trained as an air mechanic in the Royal Flying Corps in World War I, and worked on aircraft design in World War II (he had a Hurricane prop in the garden shed) – by the time he died in the 1990s, much of his training in aircraft maintenance was obviously redundant. But his knowledge of the internal combustion engine didn’t go out of date.

So, yes – I think change is accelerating.

Last time I spoke to one of my teenagers, I tried to summarize it thus – as a teenager, your job is to work out what you like doing, and what you’re good at. If you have any extra time or energy, work out how to learn new skills, and communicate. Any educational achievements are a bonus.


Aligot – mashed potato that will kill you (but it’s worth it).

We went to Paris for a few days last week, and ended up in La Petite Perigourdine for dinner. It’s a corner restaurant, a few hundred yards from the tourist hotspots near Notre Dame on the left bank, and we chose it because it looked busy with local people.

The food was great – the onion soup was pretty much the perfect implementation of a French classic – rich, dark, wintery. My steak was perfectly cooked, and the seasoning was superb – it brought a relatively simple cut of beef and turned it into a classic. We had a great bottle of wine – the Cuvée Mirabelle from Château de la Jaubertie. Not hugely expensive, but as a dry white, it’s amazingly complex, with oak notes, and a great mouth feel.

One of the new discoveries for me was served with my steak – a dish called aligot. My steak arrived on a big plate, otherwise empty; the waiter arrived with a copper pan with a semi-liquid substance, and poured it on my plate with some panache. The smell was amazing – cheese and garlic, but not overwhelming. When I tasted it, the texture was rather dense – but pleasingly so. The flavour was rich and intense – a combination of fragrant garlic, tangy cheese and soft potato. It was clear that this dish would take years off my life, but it would be worth it.

Once home, I set about recreating the dish. I found a few recipes, but none were convincing – so I experimented, and I think I’ve stumbled on the correct way. It’s an easy enough dish, but the timing is fairly unforgiving – once you’ve created the mash, you should serve it immediately or it turns into glue.

Recipe

This recipe is for 2 people – scale up as required.

Boil a kettle.

Then, start by peeling potatoes – I use Charlotte potatoes, they’re nice and waxy – and cut them into similarly sized chunks. Depending on their size, I use 5 small or 3 medium-sized potatoes to feed 2.

Put the potatoes in a steamer, add a bit of salt, and pour boiling water from the kettle into the pan under the steamer. Steam the potatoes until done – around 15 minutes.

Put a big knob of butter – around 50 grams – into a sauce pan, and heat very gently.

Finely chop or mince 3 cloves of garlic, and add to the butter. Don’t let the butter turn brown – you want it warm, but don’t let the garlic change colour.

Once the potatoes are cooked, tip them into a mixing bowl or into a clean, dry saucepan. A little moisture is okay, but you want the potatoes to be fairly dry. If you can keep the bowl or pan warm, it will help the process.

Pour the garlic-infused butter into the potatoes.

Add three generous handfuls of grated Lancashire cheese to the potatoes (the French use a cheese called Cantal), and use an electric whisk to turn this mixture into mash. Add salt and pepper whilst whisking – I also like to add a tiny bit of nutmeg.

The whisking will be messy – but after a few minutes, the substance will turn soft, fluffy, almost like bread dough. Serve immediately.

Requirements – notes on value in software.

I was chatting with an old friend recently. We worked together in the 90s, building a custom software solution for a large, complicated multi-national company. The requirements for the system were owned by several senior stakeholders, across several offices, departments and timezones. I don’t recall a single meeting where all stakeholders were present, and one of the project’s major challenges was to get a consistent point of view on each feature’s scope and priority.

“Agile” was not yet commonplace – we had JAD (Joint Application Development) sessions with our key requirements owners to work out what they wanted. As our software was “client server”, and there was no virtualization or automated deployment, it was very hard to show people outside the team what we’d built, or what we might build if they agreed.

We had business analysts who converted the output of the JAD sessions into semi-formal requirement statements, and we planned our development effort based on those requirements. Of course, this was not a particularly reliable process – the JAD sessions with busy, senior people were hard to manage, and would yield requirements ranging from “we want a nice user interface, maybe something like Netscape Navigator” to arcane rules on rounding financial calculations. The business sponsors were unusually responsive – we could usually get answers in a few days when we had specific questions. However, there was no comprehensive statement of objectives and requirements, and the business analysis team couldn’t substitute for the business sponsors.

We developers would regularly end our week in the pub around the corner muttering into our beer that if only someone could give us a complete, clear set of requirements, we could be finished with the project in a couple of months and go home. We lived in re-work hell – we’d finish a piece of software, the QA team would approve it, and when we showed it to the business owner they’d change something, and we’d start again. This feedback loop was typically 3 months or longer.

We weren’t following a traditional waterfall methodology – but it was close enough. Releases were painful and expensive, so we did one or two a year. Our team was measured on how many features we delivered according to specification, even if that specification was wrong. The quality of our requirements was low, and the feedback time was too long – so our instinct was to improve the quality of the requirements, and to create a process to prevent change to requirements. If our business sponsor gave us “bad” requirements, they should bear the cost.

Where was the “value” in our software? Even back then, in the glory days of client/server development, the code was the easy bit. It was incredibly laborious compared to today – but once we all agreed on what to build, writing the software rarely took more than a few days per feature. The real effort went into understanding, agreeing, refining, clarifying, validating the requirements, the re-work, the edge cases, the “but this requirement isn’t compatible with that requirement”. The project was a success – it saved the business tens of millions of pounds once live, and helped drive a culture shift within the business. But the value wasn’t in the code – it was in the agreed, prioritized requirements we’d implemented.

Fast-forward to today.

Most of the teams I work with can get a development release out in minutes, and feedback from clients in no more than a day. On most projects, we communicate using online tools like Jira and Confluence to capture requirements and design decisions. We use online chat, email and voice calls to discuss requirements and ideas, as well as team progress. Teams are distributed – my last few projects have had developers in at least 5 locations, and clients in 3 or 4 different offices.

And yet, on many engagements, we still treat code as “expensive” – we spend a significant proportion of our effort capturing, refining, grooming, prioritizing, designing, mocking up and visualizing requirements. It’s not uncommon for a software project to spend only around 30% of its budget on developers. Source code and the final product effectively become the output of a long, complicated process of turning PowerPoint into working software. I’ve seen this in both agile and “traditional” projects – though of course making public-facing, mass-audience applications for large brands is always going to be design-intensive.

While we have faster communications than in the 90s, and our software cycle time has gone from months to minutes, the challenge remains coming up with a product feature set that everyone agrees on, is feasible given the other project constraints, and which is captured in a way that can be used to manage the project.

It turns out that the solution to this is both simple, and impossible – the project needs a single, consistent point of view, which combines at the very least the team which is commissioning the software, and the team which is delivering it.

Inevitable futures – manufacturing

I recently finished Kevin Kelly’s “The Inevitable” – it’s good, positive, often revealing. But I want to work through some of the ideas and see what scenarios they might open up. First up – manufacturing.

When I left university in the late 1980s, I worked for a small multinational manufacturing conglomerate, and I saw a fair few factories on the inside. They were dirty, noisy places, with humans and machines interacting to transform one thing into another – aggregate, lime and cement into concrete, wood, laminate and hardware into kitchens, etc. The factories were large, and housed multiple specialized machines, storage areas for raw materials, intermediate products and finished goods. Human beings both controlled the process and did the work machines could not – from driving forklift trucks to cleaning the machines, or fixing them when they broke. Controlling the process was a big deal – most of the factories I worked in had roughly the same number of “administrative” staff as shop floor workers. Even though the factories made similar or even identical products every day, there were regular crises – machines breaking down, suppliers delivering late, customers changing their orders at the last minute.

Recently, I was lucky enough to visit the Rolls Royce Motor Car factory in the Sussex countryside. The contrast was amazing – it’s quiet, clean, controlled. Even though every car they produce is different, the process was almost serene. Far less of the factory was dedicated to “storing stuff”, and there were far fewer dedicated machines.

Of course, that’s because Rolls Royce mostly assemble and finish cars in their factory – most of the components that go into the car are made somewhere else. At Goodwood, they are put together, painted, polished, and generally glammed up with leather, wood, and all the other items that make a luxury car.

Now, I also got to have a look inside the engine plant of a motorcycle manufacturer a few years ago. I was expecting much more industrial grit – after all, engines are big, complicated things, made out of metal. Surely there would be lots of noise, and flashing lights and…well, no. Turns out that building an engine is also mostly assembling components delivered by suppliers.

I’m pretty sure it’s turtles all the way down.

The modern factory is possible only because we can process and exchange data across the globe, instantaneously. In the late 80s, we would fax or phone through orders to our suppliers; I spent a few months in the “planning” department, working out different ways to sequence customer orders to optimize production efficiency by shuffling index cards on a big felt board. We would then feed those plans into our manufacturing resource planning software, which in turn would spit out purchase orders (which we’d fax or phone through to our suppliers). We had lots of people throughout the factory collecting data (usually with a clipboard), and then feeding that into the computer.

Today, of course, most companies communicate orders directly, and factories gather their own data; the computer is much better at optimizing production capacity than a human could ever be, and as a result, the role of the human is increasingly about doing the things machines can’t do (yet).

I’m also pretty sure that this is just the beginning.

Once we have robots that can do tasks only humans can do today, self-driving lorries, 3D printing and nano-manufacturing, it’s easy to imagine lots of different scenarios. I’d like to consider one.

The local manufactury.

Right now, the cost of labour determines where we make most things – and as that’s cheap in China, Vietnam, Mexico, etc., our global economy takes raw materials, sends them (usually over great distances) to those cheap-labour places where they get transformed into products we want to buy, and then ships them halfway around the world again for consumption in the West.

What happens once robots can replace that cheap labour?

Of course the other reason to have a “car factory” or a “shoe factory” or a “phone factory” is to have a store of knowledge and skills. Some of those skills are directly related to the product – welding, sewing, assembling small electrical components. Many of those skills are organisational – “how do we do things around here?”. Some relate to design – the development of new products.

It’s not ridiculous to imagine that much of this knowledge – especially the production and organisational skills – can migrate into computers.

If these trends continue, maybe the cost of shipping things around the world becomes critical. Maybe every neighbourhood gets a local manufactury – a building with pluripotent robots, 3D printers and nano-bots, managed by a scheduling AI, integrated into a supply network. Customers choose a product – from an “off-the-shelf” design, or by customizing a design, or by commissioning a design from a specialist, and send the order to the manufactury. The manufactury looks at the bill of materials, and places orders with its supply network; self-driving vehicles deliver the materials, and the manufactury schedules the robots to build the finished product, which – of course – is then delivered to the customer using a self-driving delivery van. Or a drone.

To create a shirt, the manufactury would order cotton, buttons, etc. – either in bulk (if the purchasing algorithm decides that keeping a stock of cotton makes sense) or “just enough”. The nanobots would create dyes to colour the cotton, and a robot would follow the pattern to cut the cotton into the components for a shirt, and stitch it together.

You could easily imagine such a manufactury making clothes, furniture, electrical components, household goods etc.

The economics would be interesting – but I imagine that the price of an object would be driven partly by the cost of the design and raw materials, and partly by the time the customer is prepared to wait. The economies of scale don’t go away – clearly making dozens, hundreds or thousands of the same product would be much cheaper than one-offs. You could imagine clever scheduling algorithms, aggregating demand from multiple neighbourhoods, so that when the threshold is reached for a particular product, one of the manufacturies configures itself to satisfy that demand. Of course, this could apply to finished goods and to intermediate products – manufacturies converting raw cotton to thread, thread to cloth etc. You can also imagine how specialized equipment – weaving looms, injection moulding presses etc. – would continue to offer significant cost advantages.
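As a toy illustration of that aggregation idea – entirely hypothetical, with invented product names and an invented threshold – a scheduling sketch in Python might pool orders from several neighbourhoods and only flag a product for a production run once the pooled demand crosses a batch size:

from collections import defaultdict
from typing import Dict, List

# Minimum batch size at which a production run is assumed to be worthwhile
# (an invented figure, purely for illustration).
BATCH_THRESHOLD = 50

# Pooled demand per product, aggregated across neighbourhoods.
pending_orders: Dict[str, int] = defaultdict(int)


def record_order(product: str, quantity: int) -> None:
    # Add one neighbourhood's order to the pooled demand for a product.
    pending_orders[product] += quantity


def products_ready_for_a_run() -> List[str]:
    # Products whose pooled demand has crossed the batch threshold.
    return [p for p, units in pending_orders.items() if units >= BATCH_THRESHOLD]


# Orders trickling in from different neighbourhoods:
record_order("plain white shirt", 20)
record_order("plain white shirt", 35)
record_order("oak dining chair", 4)

print(products_ready_for_a_run())  # ['plain white shirt']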

When? How?

This is just speculation. There are many leaps of faith – I’m pretty sure I made up “pluripotent robot” as a phrase, and while 3D printing and nano-materials are not purely speculation, they’re also not yet ubiquitous. Lights-out factories are still not mainstream, let alone factories that can re-configure themselves every day.

But ecommerce and digitisation mean we’re all spending less time on the high street, and becoming more accustomed to ordering stuff on the internet and having it turn up. Amazon especially is innovating in logistics and supply chains – I can order coffee beans and printer ink on my phone, and they will deliver them within 2 hours.

So, if this happens, I’d bet it would be a company like Amazon that leads the way – they already have highly automated distribution centres, so the jump to manufacturing isn’t quite such a big one. They have the computing power, and the customer insight.

Europe.

I feel European. If I shared any of cousin Dirk’s talents, I’d qualify to play football for 3 countries. I grew up speaking English at home, Dutch at school, and Frisian with my friends in the playground (though I never got the hang of Sneekers). Growing up, school and music trips went to France, Belgium and Germany; I can read a newspaper in French, German, Italian and Spanish. I have friends and colleagues from around half the 27 remaining EU countries.

I love classical music from the continent – Bach, Mozart, Vivaldi, de Falla, Lully, Beethoven, Sweelinck. I love continental food. I love continental cities. I love continental European comics – Franquin, Hergé, Toonder.

I’ve chosen to live in the UK for the last 30 years – I love the UK too. London is an amazing city. Many of my favourite authors – Martin Amis, William Boyd, David Mitchell – are British. The BBC is amazing. Even the food is getting better.

But now, after the vote to leave the EU, it feels like I have to choose. It’s not clear what the UK’s relationship with Europe will be – but I fear the worst.