Aligot – mashed potato that will kill you (but it’s worth it).

We went to Paris for a few days last week, and ended up in La Petite Perigourdine for dinner. It’s a corner restaurant, a few hundred yards from the tourist hotspots near Notre Dame on the left bank, and we chose it because it looked busy with local people.

The food was great – the onion soup was pretty much the perfect implementation of a French classic – rich, dark, wintery. My steak was perfectly cooked, and the seasoning was superb – it took a relatively simple cut of beef and turned it into a classic. We had a great bottle of wine – the Cuvée Mirabelle from Château de la Jaubertie. Not hugely expensive, but as a dry white it’s amazingly complex, with oak notes and a great mouthfeel.

One of the new discoveries for me was served with my steak – a dish called aligot. My steak arrived on a big plate, otherwise empty; the waiter arrived with a copper pan with a semi-liquid substance, and poured it on my plate with some panache. The smell was amazing – cheese and garlic, but not overwhelming. When I tasted it, the texture was rather dense – but pleasingly so. The flavour was rich and intense – a combination of fragrant garlic, tangy cheese and soft potato. It was clear that this dish would take years off my life, but it would be worth it.

Once home, I set about recreating the dish. I found a few recipes, but none were convincing – so I experimented, and I think I’ve stumbled on the correct way. It’s an easy enough dish, but the timing is fairly unforgiving – once you’ve created the mash, you should serve it immediately or it turns into glue.

Recipe

This recipe is for 2 people – scale up as required.

Boil a kettle.

Next, peel the potatoes – I use charlotte potatoes, as they’re nice and waxy – and cut them into similarly sized chunks. Depending on their size, I use 5 small or 3 medium-sized potatoes to feed 2.

Put the potatoes in a steamer, add a bit of salt, and pour boiling water from the kettle into the pan under the steamer. Steam the potatoes until done – around 15 minutes.

Put a big knob of butter – around 50 grams – into a sauce pan, and heat very gently.

Finely chop or mince 3 cloves of garlic, and add to the butter. Don’t let the butter turn brown – you want it warm, but don’t let the garlic change colour.

Once the potatoes are cooked, tip them into a mixing bowl or a clean, dry saucepan. A little moisture is okay, but you want the potatoes to be fairly dry. If you can keep the bowl or pan warm, it will help the process.

Pour the garlic-infused butter into the potatoes.

Add three generous handfuls of grated Lancashire cheese to the potatoes (the French use a cheese called Cantal), and use an electric whisk to turn this mixture into mash. Add salt and pepper whilst whisking – I also like to add a tiny bit of nutmeg.

The whisking will be messy – but after a few minutes, the substance will turn soft, fluffy, almost like bread dough. Serve immediately.

Requirements – notes on value in software.

I was chatting with an old friend recently. We worked together in the 90s, building a custom software solution for a large, complicated multi-national company. The requirements for the system were owned by several senior stakeholders, across several offices, departments and timezones. I don’t recall a single meeting where all stakeholders were present, and one of the project’s major challenges was to get a consistent point of view on each feature’s scope and priority.

“Agile” was not yet commonplace – we had JAD (Joint Application Development) sessions with our key requirements owners to work out what they wanted. As our software was “client server”, and there was no virtualization or automated deployment, it was very hard to show people outside the team what we’d built, or what we might build if they agreed.

We had business analysts who converted the output of the JAD sessions into semi-formal requirement statements, and we planned our development effort based on those requirements. Of course, this was not a particularly reliable process – the JAD sessions with busy, senior people were hard to manage, and would yield requirements ranging from “we want a nice user interface, maybe something like Netscape Navigator” to arcane rules on rounding financial calculations. The business sponsors were unusually responsive – we could usually get answers in a few days when we had specific questions. However, there was no comprehensive statement of objectives and requirements, and the business analysis team couldn’t substitute for the business sponsors.

We developers would regularly end our week in the pub around the corner muttering into our beer that if only someone could give us a complete, clear set of requirements, we could be finished with the project in a couple of months and go home. We lived in re-work hell – we’d finish a piece of software, the QA team would approve it, and when we showed it to the business owner they’d change something, and we’d start again. This feedback loop was typically 3 months or longer.

We weren’t following a traditional waterfall methodology – but it was close enough. Releases were painful and expensive, so we did one or two a year. Our team was measured on how many features we delivered according to specification, even if that specification was wrong. The quality of our requirements was low, and the feedback time was too long – so our instinct was to improve the quality of the requirements, and to create a process to prevent change to requirements. If our business sponsor gave us “bad” requirements, they should bear the cost.

Where was the “value” in our software? Even back then, in the glory days of client/server development, the code was the easy bit. It was incredibly laborious compared to today – but once we all agreed on what to build, writing the software rarely took more than a few days per feature. The real effort went into understanding, agreeing, refining, clarifying, validating the requirements, the re-work, the edge cases, the “but this requirement isn’t compatible with that requirement”. The project was a success – it saved the business tens of millions of pounds once live, and helped drive a culture shift within the business. But the value wasn’t in the code – it was in the agreed, prioritized requirements we’d implemented.

Fast-forward to today.

Most of the teams I work with can get a development release out in minutes, and feedback from clients in no more than a day. On most projects, we communicate using online tools like Jira and Confluence to capture requirements and design decisions. We use online chat, email and voice calls to discuss requirements and ideas, as well as team progress. Teams are distributed – my last few projects have had developers in at least 5 locations, and clients in 3 or 4 different offices.

And yet, on many engagements, we still treat code as “expensive” – we spend a significant proportion of our effort capturing, refining, grooming, prioritizing, designing, mocking up and visualizing requirements. It’s not uncommon for a software project to spend only around 30% of its budget on developers. Source code and the final product effectively become the output of a long, complicated process of turning PowerPoint into working software. I’ve seen this in both agile and “traditional” projects – though of course making public-facing, mass-audience applications for large brands is always going to be design-intensive.

While we have faster communications than in the 90s, and our software cycle time has gone from months to minutes, the challenge remains coming up with a product feature set that everyone agrees on, that is feasible given the other project constraints, and that is captured in a way that can be used to manage the project.

It turns out that the solution to this is both simple, and impossible – the project needs a single, consistent point of view, which combines at the very least the team which is commissioning the software, and the team which is delivering it.

Inevitable futures – manufacturing

I recently finished Kevin Kelly’s “The Inevitable” – it’s good, positive, often revealing. But I want to work through some of the ideas and see what scenarios they might open up. First up – manufacturing.

When I left university in the late 1980s, I worked for a small multinational manufacturing conglomerate, and I saw a fair few factories on the inside. They were dirty, noisy places, with humans and machines interacting to transform one thing into another – aggregate, lime and cement into concrete, wood, laminate and hardware into kitchens, etc. The factories were large, and housed multiple specialized machines, storage areas for raw materials, intermediate products and finished goods. Human beings both controlled the process and did the work machines could not – from driving forklift trucks to cleaning the machines, or fixing them when they broke. Controlling the process was a big deal – most of the factories I worked in had roughly the same number of “administrative” staff as shop floor workers. Even though the factories made similar or even identical products every day, there were regular crises – machines breaking down, suppliers delivering late, customers changing their orders at the last minute.

Recently, I was lucky enough to visit the Rolls Royce Motor Car factory in the Sussex countryside. The contrast was amazing – it’s quiet, clean, controlled. Even though every car they produce is different, the process was almost serene. Far less of the factory was dedicated to “storing stuff”, and there were far fewer dedicated machines.

Of course, that’s because Rolls Royce mostly assemble and finish cars in their factory – most of the components that go into the car are made somewhere else. At Goodwood, they are put together, painted, polished, and generally glammed up with leather, wood, and all the other items that make a luxury car.

Now, I also got to have a look inside the engine plant of a motorcycle manufacturer a few years ago. I was expecting much more industrial grit – after all, engines are big, complicated things, made out of metal. Surely there would be lots of noise, and flashing lights and…well, no. Turns out that building an engine is also mostly assembling components delivered by suppliers.

I’m pretty sure it’s turtles all the way down.

The modern factory is possible only because we can process and exchange data across the globe, instantaneously. In the late 80s, we would fax or phone through orders to our suppliers; I spent a few months in the “planning” department, working out different ways to sequence customer orders to optimize production efficiency by shuffling index cards on a big felt board. We would then feed those plans into our manufacturing resource planning software, which in turn would spit out purchase orders (which we’d fax or phone through to our suppliers). We had lots of people throughout the factory collecting data (usually with a clipboard), and then feeding that into the computer.

Today, of course, most companies communicate orders directly, and factories gather their own data; the computer is much better at optimizing production capacity than a human could ever be, and as a result, the role of the human is increasingly about doing the things machines can’t do (yet).

I’m also pretty sure that this is just the beginning.

Once we have robots that can do the tasks only humans can do today, self-driving lorries, 3D printing and nano manufacturing, it’s easy to imagine lots of different scenarios. I’d like to consider one.

The local manufactury.

Right now, the cost of labour determines where we make most things – and as labour is cheap in China, Vietnam, Mexico, etc., our global economy takes raw materials, sends them (usually over great distances) to those cheap-labour places where they get transformed into products we want to buy, and then ships them halfway around the world again for consumption in the West.

What happens once robots can replace that cheap labour?

Of course the other reason to have a “car factory” or a “shoe factory” or a “phone factory” is to have a store of knowledge and skills. Some of those skills are directly related to the product – welding, sewing, assembling small electrical components. Many of those skills are organisational – “how do we do things around here?”. Some relate to design – the development of new products.

It’s not ridiculous to imagine that much of this knowledge – especially the production skills and the organisational know-how – can migrate into computers.

If these trends continue, maybe the cost of shipping things around the world becomes critical. Maybe every neighbourhood gets a local manufactury – a building with pluripotent robots, 3D printers and nano-bots, managed by a scheduling AI, integrated into a supply network. Customers choose a product – from an “off-the-shelf” design, or by customizing a design, or by commissioning a design from a specialist, and send the order to the manufactury. The manufactury looks at the bill of materials, and places orders with its supply network; self-driving vehicles deliver the materials, and the manufactury schedules the robots to build the finished product, which – of course – is then delivered to the customer using a self-driving delivery van. Or a drone.

To create a shirt, the manufactury would order cotton, buttons, etc. – either in bulk (if the purchasing algorithm decides that keeping a stock of cotton makes sense) or “just enough”. The nanobots would create dyes to colour the cotton, and a robot would follow the pattern to cut the cotton into the components of a shirt and stitch it together.

You could easily imagine such a manufactury making clothes, furniture, electrical components, household goods etc.

The economics would be interesting – but I imagine that the price of an object would be driven partly by the cost of the design and raw materials, and partly by the time the customer is prepared to wait. The economies of scale don’t go away – clearly making dozens, hundreds or thousands of the same product would be much cheaper than one-offs. You could imagine clever scheduling algorithms, aggregating demand from multiple neighbourhoods, so that when the threshold is reached for a particular product, one of the manufacturies configures itself to satisfy that demand. Of course, this could apply to finished goods and to intermediate products – manufacturies converting raw cotton to thread, thread to cloth etc. You can also imagine how specialized equipment – weaving looms, injection moulding presses etc. – would continue to offer significant cost advantages.
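
Purely as an illustration of that demand-aggregation idea, here is a minimal sketch in Python – the products, thresholds and neighbourhood names are all invented, and a real scheduler would obviously weigh set-up costs and waiting times rather than use a fixed threshold.

```python
from collections import defaultdict

# Invented numbers: the batch size at which a production run becomes worthwhile.
BATCH_THRESHOLD = {"shirt": 50, "chair": 20, "kettle": 100}

class DemandAggregator:
    """Collects orders from many neighbourhoods and triggers a batch run
    once pending demand for a product crosses its threshold."""

    def __init__(self, thresholds):
        self.thresholds = thresholds
        self.pending = defaultdict(int)  # product -> units ordered but not yet built

    def add_order(self, neighbourhood, product, quantity):
        self.pending[product] += quantity
        print(f"{neighbourhood} ordered {quantity} x {product} "
              f"(pending: {self.pending[product]})")
        return self._maybe_schedule(product)

    def _maybe_schedule(self, product):
        if self.pending[product] >= self.thresholds.get(product, 1):
            batch, self.pending[product] = self.pending[product], 0
            # This is the point where a manufactury would re-configure its
            # robots and place material orders with its supply network.
            print(f"-> schedule a production run of {batch} x {product}")
            return batch
        return None

agg = DemandAggregator(BATCH_THRESHOLD)
agg.add_order("Wanstead", "shirt", 30)      # below threshold - just accumulates
agg.add_order("Leytonstone", "shirt", 25)   # crosses 50 - triggers a run of 55
```

The interesting part isn’t the threshold itself – it’s that the aggregation loop, not a human planner, decides when a manufactury re-configures itself.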

When? How?

This is just speculation. There are many leaps of faith – I’m pretty sure I made up “pluripotent robot” as a phrase, and while 3D printing and nano-materials are not purely speculation, they’re also not yet ubiquitous. Lights-out factories are still not mainstream, let alone factories that can re-configure themselves every day.

But ecommerce and digitisation mean we’re all spending less time on the high street, and becoming more accustomed to ordering stuff on the internet and having it turn up. Amazon in particular is innovating in logistics and supply chains – I can order coffee beans and printer ink on my phone, and they will deliver them within 2 hours.

So, if this happens, I’d bet it would be a company like Amazon that leads the way – they already have highly automated distribution centres, so the jump to manufacturing isn’t such a big one. They have the computing power, and the customer insight.

Europe.

I feel European. If I shared any of cousin Dirk’s talents, I’d qualify to play football for 3 countries. I grew up speaking English at home, Dutch at school, and Frisian with my friends in the playground (though I never got the hang of Sneekers). Growing up, school and music trips went to France, Belgium and Germany; I can read a newspaper in French, German, Italian and Spanish. I have friends and colleagues from around half of the 27 remaining EU countries.

I love classical music from the continent – Bach, Mozart, Vivaldi, de Falla, Lully, Beethoven, Sweelinck. I love continental food. I love continental cities. I love continental European comics – Franquin, Hergé, Toonder.

I’ve chosen to live in the UK for the last 30 years – I love the UK too. London is an amazing city. Many of my favourite authors – Martin Amis, William Boyd, David Mitchell – are British. The BBC is amazing. Even the food is getting better.

But now, after the vote to leave the EU, it feels like I have to choose. It’s not clear what the UK’s relationship with Europe will be – but I fear the worst.

Project management job number one: land the f****ing plane

I’ve been making software for a few decades now, and worked on all sorts of projects – small, large, complex, simple, fun, and not-so-fun. One of the biggest problems with software is the amount of information a developer needs to keep in his head (I believe Dijkstra once wrote that software developers were unique in having to be able to understand, simultaneously, 7 levels of abstraction). The same is true for those who manage developers.

On a large project I was involved with recently, I noticed that the project management team was working really hard, but not making much progress. I looked at all the streams of activity, and I noticed that the project had lots of outstanding decisions. When will we do the training? Who will manage QA? What day will we have the management call? Which version of the API should we use?

It reminded me of an iPhone game I’d played for a bit – I think it was called “Air traffic control” – in which you have an airfield, and planes arrive on the screen; the job is to land the airplanes. As the game goes on, it throws more airplanes at you, and eventually you’re overwhelmed by the number of aircraft, they crash, and the game ends.

It’s mildly diverting, and a good way to while away the tube journey.

It occurred to me that our project management team wasn’t landing enough planes – and the more planes there are circling the runway, the more likely it is they’ll crash. Most people I know can keep a handful of things in their brain at one time (there’s some scientific research to confirm this), and the whole “Getting Things Done” system is designed around this.

The issue with project management, of course, is dependencies. One pending decision can block 4 other decisions, and before you know it, you end up looking like that guy from Airplane!, trying to keep the whole thing spinning, and dedicating all your energy to stopping the planes from crashing into each other, rather than to landing the planes.
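
To make the circling-planes picture a little more concrete, here is a minimal sketch in Python – the decisions and dependencies are entirely made up, not from any real project – that counts how much work each open decision is blocking, directly and transitively.

```python
from collections import deque

# Hypothetical backlog: "X blocks Y" means Y can't land until X is resolved.
blocks = {
    "which API version?": ["build integration", "write API tests"],
    "who manages QA?": ["write API tests", "plan UAT"],
    "plan UAT": ["book training dates"],
    "build integration": [],
    "write API tests": [],
    "book training dates": [],
}

def transitively_blocked(decision, graph):
    """Everything that can't move until `decision` is resolved."""
    seen, queue = set(), deque(graph.get(decision, []))
    while queue:
        item = queue.popleft()
        if item not in seen:
            seen.add(item)
            queue.extend(graph.get(item, []))
    return seen

for decision in blocks:
    blocked = transitively_blocked(decision, blocks)
    if blocked:
        print(f"{decision!r} is holding up {len(blocked)} other item(s): {sorted(blocked)}")
```

Run something like this over a real backlog and the handful of decisions at the top of the list are usually the planes to land first.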

And this, of course, affects everyone. The developers find that they can’t work on something because we’re waiting for a decision. The number of items that aren’t “done” grows every day – and when a decision finally is made, the work of updating all the dependent items has grown with it. The project sponsor sees an ever-longer list of open topics, none of which make much progress, and eventually everyone forgets what they were about. Risks that could have been avoided with a small amount of effort earlier suddenly erupt into craziness.

So, project management job number one: land the plane.

My kids don’t watch TV. How will you sell them anything?

Disclaimer – views entirely my own, nothing to do with my employer.

Familiarity ≠ best

Advertising seeks to persuade human beings to make one choice over another. A big part of this has been taking advantage of our tendency to replace hard questions (“which can of beans would rationally be the best choice?”) with easier ones – very often swapping “best” for “most familiar”. Daniel Kahneman’s book Thinking, Fast and Slow includes a chapter on this.

Much of the effectiveness of advertising depends on this principle – instead of evaluating the price, quality, nutritional benefits of a can of beans, the advertisers hope we’ll remember “Beans means Heinz”.

That strategy works – especially for products where we don’t expect a big upside from expending the effort to make a “better” choice (will that other can of beans really be so much better?), or where the downside of a wrong choice is (perceived as) high – I’ve never heard of car brand X, so it’s safer to stay with a brand I have heard of.

But there are some powerful forces eroding the magic bullet of familiarity.

Howling into the void

Becoming “familiar” was never easy – you’d need a memorable message, you’d need a big budget to put it in front of your target audience, and you’d have to hammer home the message over many years. Today, I don’t think it’s even possible any more – no matter how much advertising spend you have, becoming “familiar” through advertising alone is unachievable (if you’re aiming at a mainstream audience).

People are actively avoiding advertising if they can; if they can’t, they ignore it. The decline of print audiences, and the fragmentation of linear TV, means the “old” channels are becoming much less effective (even ignoring the fact that linear TV as a medium doesn’t look like it has a great future – nobody I know watches “TV” – it’s all streaming, on-demand, box-set and event-based viewing).

The big business success stories of the last decade – Facebook, Google, Amazon, Uber, AirBnB etc. – don’t advertise much. They use word-of-mouth and built-in mechanics like referral schemes – but most of all, they have a great, useful product.

From information scarcity to abundance

The “familiarity” model is based on information scarcity. If I have to choose a product in a supermarket, and I have no other information to hand, then instead of reading the label and making a comparison with similar products, it’s tempting to go for “which product have I heard of?”.

And it’s not all that long ago that consumers didn’t have access to much other information. Before the Internet, you might know a few friends’ and family members’ opinions on something; you might read a magazine or a book; you might, for an important purchase, order a report from a consumers’ organisation.

Today, you can find out instantly what all your friends and family think about a product by asking on a social channel. You can find out what strangers think on review sites. You can find out every aspect of the product or service by running a quick search. And as we are all consuming more “information” every day, the chances of having no other information available are declining.

So, to become “familiar” is harder, more expensive, and less effective.

The end of the “Friday afternoon car”

In the 1980s, a friend bought a brand new car; it was an MG Metro. She owned the car for about 2 months before it broke down, so she took it back to the garage for a repair. 3 weeks later, it broke down again; after 6 months, the car had been back for 4 repairs. The mechanic at the garage introduced me to the phrase “Friday afternoon car” – the idea was that the factory workers wanted to get home for the weekend, so cars built on a Friday afternoon would be rushed, and suffer from problems.

It’s now pretty much impossible to buy a Friday afternoon car – even the cheapest, least prestigious car manufacturer is delivering a high-quality product that will do exactly what you expect, and will easily outlast its warranty period.

The same is true of most consumer goods (financial services are a notorious exception) – supermarket own-brand beans may taste different to the brand names, but they aren’t “worse”. Clothes from a discount store will last just as long as those from a high-street chain. You can watch Netflix on a discount laptop just as well as on an Apple.

The value of “brand” and “familiarity” in customer decision making is declining – now that you can hardly buy a “bad” product any more, the safety of going with the familiar brand matters less and less.

The tragedy of the mega-pixel

A few years ago, I met an executive from a large camera company. Before digital photography came along, this company’s marketing (and manufacturing) emphasis had been on the quality of their lenses. This is a subjective field – you can use focal length and aperture as a proxy measure, but no serious photographer would equate a “no-name” lens with a lens from a well-known manufacturer just because the metrics match.

And you know what? It was broadly right – a good lens meant a better photo.

Then the digital camera came along – and now people buy cameras based on one simple metric: the number of megapixels. This is not really correlated to image quality for most people (unless you want to print a photo to cover a bus shelter). But it allows consumers to compare products using a nice, simple metric – “camera x has 12 megapixels for $200, camera y has 15 megapixels for $200 – camera y is the better deal”.

The camera executive called this phenomenon “the tragedy of the mega-pixel” – he said his company culture had changed. The focus on lens quality was still there – but it wasn’t commercially meaningful in the short term. When it came to dollars, it was better to invest in megapixels than in glass.

Restaurant review: Provender, Wanstead

Last week, we went for Sunday lunch at Provender, on Wanstead High Street, in North East London.

It’s a small French restaurant, and pretty much every table was taken, including the small area outside (we’d booked in advance). The interior is bright, and tasteful, without being pretentious.

We had the big starter platter – Hors d’oeuvre “Royale” – which was frankly amazing. Charcuterie, a very nice rillette, and a celeriac remoulade that I will have to experiment with. The other side of the platter was fish – smoked salmon, salmon mousse, and small sections of what looked like swordfish (I don’t eat fish). The charcuterie was spectacular – two sliced dried sausage varieties that were subtle, but each had a distinctive flavour I can still taste 2 days later…

My main was the steak tartare. I love tartare – it’s hard to find in the UK. The Provender tartare was minced by hand, and it makes a difference – the texture was much more interesting. The flavour was robust – the seasoning was just the right side of aggressive, and the meat was clearly from a good butcher.

My lunch partner had the coquilles St Jacques, served with saffron risotto. She smiled beatifically, and reported a state of bliss.

For dessert, I had the blackcurrant sorbet – rich with cassis liqueur, and very fruity. My lunch partner chose the chocolate tart, which may well be the most chocolatey thing I have ever tasted.

We had a nice bottle of wine; the total bill came to just over £100. Excellent value for money.
Thoroughly recommended.

What does “good” look like?

In traditional “waterfall” software development, “good” software meets the written requirements. No matter how bad the requirements – “good” software meets the requirements.

In agile software development, “good” software meets the quality goals set by the team, and delivers the features defined by the product owner. No matter how deluded the product owner.

Both models seem unsatisfactory.

Is web design work drying up?

I was listening to the Tim Ferriss podcast episode where he interviews Seth Godin. Seth publishes a new post every day. I’m going to try to write more.

I stumbled across a post on Hacker News called “The Elephant in the Room: Web design work is drying up” (sazzy.co.uk). The conversation got pretty heated.

The original article is about life as a web design freelancer – which is a fair way from my life as an employee of a large digital agency, working in technology. But the basic trends are probably the same.

I think it boils down to three trends.

Firstly – anyone who needs a website probably already has one. And many businesses have decided (for better or worse) to stick with a social media presence, and not to put too much effort into their website.

Secondly – companies that do need a website almost certainly need more than just a nice static web presence. They may need a complex ecommerce site, or a dynamic web application, or – well, whatever. These projects typically need specialist skills, rather than a generalist.

Thirdly – for the simpler web sites, the “off the shelf” tools are getting better. A WordPress site, with a custom theme and a few nice plug-ins, delivers the same business value as a custom-developed website did 5 years ago.