
There goes my job – again?

A fairly widely reported story last week explains how Microsoft Research has created an AI that can write software. Hacker News went crazy – as you might expect.

Can an AI write software? Yes – it clearly can. Writing software means converting sentences a human understands into instructions for a computer; if Google Translate can convert “where’s the post office please?” into 8 other languages, there’s no obvious reason it couldn’t convert “add 12 to 88” into computer-executable form. In fact, this concept is older than I am – the venerable Cobol language was created in 1959 with the goal of converting “business language” into computer programs – Cobol stands for “common business-oriented language”. And compared to C, or Fortran, it kinda succeeds – Cobol source code at first glance is less dense, and most of the keywords look like “English”. But programming in Cobol is still programming – it’s unlikely that a sales director would be able to use Cobol to work out the monthly commission report.

In the 1990s, we got a lot of hype around 4GLs; in the early 2000s, model-driven development and round-trip engineering promised to make software development much more business-friendly. Business people could express their requirements, and they could be converted to executable code automagically.

None of these things really worked. Cobol was a hugely successful language – but not because it did away with programmers; rather, it was a widely available language that matched the needs of enterprises which were automating for the first time. I don’t think I’ve heard anyone say “4GL” for about a decade; round-trip engineering foundered on the horrible tools that supported it, which hardly helped to simplify life for either developers or business people.

The defining skill of a software developer isn’t the language they code in – it’s the ability to convert requirements into working software. Computers are already helping with this by compiling or interpreting “human-readable” code to machine-executable code. It’s not ridiculous to believe an AI could use a unit test to write code which passes that test, and it’s not ridiculous to assume an AI could convert a BDD-style requirement into working software. The Microsoft research paper says they have taken the first step – their AI solves coding test problems, which are typically specified as “write a program which will take a sequence of numbers (8, 3, 1, 21) and sort them in ascending order, returning 1, 3, 8, 21”. Extending that to a unit test is a logical and manageable step; I could see an environment where a programmer defines the basic structure of the application – classes with public methods and properties, for instance – along with the unit tests to specify the behaviour, and has an AI fill in the details.

The next jump – from “programmer designs structure, AI fills in behaviour” to “AI designs structure” – would be a huge one. It would likely run into problems similar to those you get with model-based development, or many object-relational mapping tools – the level of detail required to allow the AI to make the choices you want it to make would be high, and a specification at that level of detail might be indistinguishable from writing software.

The jump after that – “business person defines requirement, AI interprets and builds solution” – well, I’ve been wrong before, but I don’t think that’s credible in the next decade, and possibly longer. It would require natural language processing to reach full maturity, and the AI would need a deep understanding of business domains, the way humans view and interact with business processes, and user interface design.

So, I think my job is safe for now. Not sure about any computer science graduates leaving university right now, though…
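To make that middle scenario a little more concrete, here’s a minimal sketch (in Python, with hypothetical names – not anything from the Microsoft paper) of “programmer defines the structure and the unit tests, AI fills in the behaviour”. The class skeleton and the tests are human-written; only the method body would be generated.

```python
import unittest

class SequenceTools:
    """Human-written structure: the public method name and signature are fixed."""

    def sort_ascending(self, numbers):
        # In the scenario above, an AI would generate this body so that the
        # tests below pass; a hand-written placeholder is shown here.
        return sorted(numbers)

class SequenceToolsTests(unittest.TestCase):
    """Human-written tests: the behavioural specification the AI works to."""

    def test_sorts_ascending(self):
        self.assertEqual(SequenceTools().sort_ascending([8, 3, 1, 21]), [1, 3, 8, 21])

    def test_empty_sequence(self):
        self.assertEqual(SequenceTools().sort_ascending([]), [])

if __name__ == "__main__":
    unittest.main()
```

The tests play the role of the coding-test problem statement: they pin down the behaviour precisely enough for a machine to aim at, without saying anything about how to achieve it.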

Don’t worry what you’ll do when you leave education – your job hasn’t been invented yet.

I was listening to a podcast the other day – Tim Ferriss talking to Chris Young – and there was a great quote from Chris when he discussed the relationship he had with his father. At some stage, Chris’ father told him “Don’t worry what you’ll do when you leave education – your job hasn’t been invented yet.”

I am the father of a bunch of teenagers, and they regularly tell me I’m too demanding/my expectations are too high/they are not sure what they want to do, and can I please leave their room? The thing Chris said on the podcast really rang true. I think I might be wrong in my expectations of my kids.

I’m getting on a bit, but when I was at secondary school, computer programming wasn’t part of the curriculum (though you could take extra classes in Hebrew, typing, and bookkeeping). When I went to university, we had access to a computer lab – it ran BBC Micros – and proudly touted access to JANET – but the idea of a global network with access for all was pure science fiction. Oh, and TV in the UK was limited to 4 channels; in the Netherlands we got more channels, but only because we got access to TV from Germany, the UK, France, Belgium and – for some reason – PBS. Ordering a book that wasn’t available in our local bookshop took about 2 weeks – if you happened to know the ISBN. Phone calls were expensive – international ones were extravagant. The adjective “social” was more commonly associated with “disease” than “media”.

So, no, the jobs I’ve done for the last 18 years weren’t invented when I was at school; many of the technical skills I have used over the last 25 years (Web development, Visual Basic, PHP, Ruby, Java, DevOps, Agile development) weren’t invented when I was at school. Some of the others (SQL, C/C++, object orientation, software project management) were around, but not really commonly known. Some of the things I use every day to do my job – video conferencing, online chat with people around the world, shared knowledge and code repositories – would have sounded like the deluded ramblings of a mad man back in the 80s.

On the other hand…many of the skills and habits I picked up in my teens and early twenties continue to serve me well every day. I learnt to work hard when I worked as a waiter on a passenger ship. I learnt to write at university, and to write in a business sense at PA Consulting, early in my career. I learnt SQL in my first year out of university. I learnt to think about meta-processes and team work in a band. I learnt how to lead a team at school, organising a music festival. I learnt how business finance works in the first two years out of university – when I also learnt the basics of marketing, sales and presenting. I learnt how to pick up new skills in the first few years – I was a graduate trainee, doing 6 months in every department in the company.

I’m still trying to find the common thread here – all the things I learnt involved me being engaged and busy. I’m glad that I hadn’t yet found Civilization back in my teens… They involved exposure to new things, people and experiences outside my normal circle, and surprisingly little formal education.

Am I just a (late) baby boomer riding the technology wave? My father was born before World War 2, and trained as a merchant seaman. He learnt to navigate by the stars, using a sextant and watch. He learnt how to adjust magnetic compasses, and spent time training on tall ships. By the time he was 40, most of those skills were still in daily use, though modern navigation devices like chart plotters were coming onto the market. Many of those skills are still being taught at naval colleges today. My mother trained as a secretary – she could take shorthand, and type an ungodly number of words per minute. Again – her job still existed by the time she reached 40, though the mechanical typewriter was being replaced with word processors.

My grandfathers were both born before World War I. They trained as craftsmen, and the basics of their trade didn’t change all that much during their working life – though one of my grandfathers trained as an air mechanic in the Royal Flying Corps in World War I, and worked on aircraft design in World War II (he had a Hurricane prop in the garden shed) – by the time he died in the 1990s, much of his training in aircraft maintenance was obviously redundant. But his knowledge of the internal combustion engine didn’t go out of date.

So, yes – I think change is accelerating.

Last time I spoke to one of my teenagers, I tried to summarize it thus – as a teenager, your job is to work out what you like doing, and what you’re good at. If you have any extra time or energy, work out how to learn new skills, and communicate. Any educational achievements are a bonus.


Aligot – mashed potato that will kill you (but it’s worth it).

We went to Paris for a few days last week, and ended up in La Petite Perigourdine for dinner. It’s a corner restaurant, a few hundred yards from the tourist hotspots near Notre Dame on the left bank, and we chose it because it looked busy with local people.

The food was great – the onion soup was pretty much the perfect implementation of a French classic – rich, dark, wintery. My steak was perfectly cooked, and the seasoning was superb – it took a relatively simple cut of beef and turned it into a classic. We had a great bottle of wine – the Cuvée Mirabelle from Château de la Jaubertie. Not hugely expensive, but as a dry white, it’s amazingly complex, with oak notes, and a great mouth feel.

One of the new discoveries for me was served with my steak – a dish called aligot. My steak arrived on a big plate, otherwise empty; the waiter arrived with a copper pan with a semi-liquid substance, and poured it on my plate with some panache. The smell was amazing – cheese and garlic, but not overwhelming. When I tasted it, the texture was rather dense – but pleasingly so. The flavour was rich and intense – a combination of fragrant garlic, tangy cheese and soft potato. It was clear that this dish would take years off my life, but it would be worth it.

Once home, I set about recreating the dish. I found a few recipes, but none were convincing – so I experimented, and I think I’ve stumbled on the correct way. It’s an easy enough dish, but the timing is fairly unforgiving – once you’ve created the mash, you should serve it immediately or it turns into glue.

Recipe

This recipe is for 2 people – scale up as required.

Boil a kettle.

Then, start by peeling potatoes – I use charlotte potatoes, they’re nice and waxy – and cut them into similarly sized chunks. Depending on their size, I use 5 small or 3 medium size potatoes to feed 2.

Put the potatoes in a steamer, add a bit of salt, and pour boiling water from the kettle into the pan under the steamer. Steam the potatoes until done – around 15 minutes.

Put a big knob of butter – around 50 grams – into a sauce pan, and heat very gently.

Finely chop or mince 3 cloves of garlic, and add to the butter. Don’t let the butter turn brown – you want it warm, but don’t let the garlic change colour.

Once the potatoes are cooked, tip them into a mixing bowl or into a clean, dry saucepan. A little moisture is okay, but you want the potatoes to be fairly dry. If you can keep the repository warm, it will help the process.

Pour the garlic-infused butter into the potatoes.

Add three generous handfuls of grated Lancashire cheese to the potatoes (the French use a cheese called Cantal), and use an electric whisk to turn this mixture into mash. Add salt and pepper whilst whisking – I also like to add a tiny bit of nutmeg.

The whisking will be messy – but after a few minutes, the substance will turn soft, fluffy, almost like bread dough. Serve immediately.

Requirements – notes on value in software.

I was chatting with an old friend recently. We worked together in the 90s, building a custom software solution for a large, complicated multi-national company. The requirements for the system were owned by several senior stakeholders, across several offices, departments and timezones. I don’t recall a single meeting where all stakeholders were present, and one of the project’s major challenges was to get a consistent point of view on each feature’s scope and priority.

“Agile” was not yet commonplace – we had JAD (Joint Application Development) sessions with our key requirements owners to work out what they wanted. As our software was “client server”, and there was no virtualization or automated deployment, it was very hard to show people outside the team what we’d built, or what we might build if they agreed.

We had business analysts who converted the output of the JAD sessions into semi-formal requirement statements, and we planned our development effort based on those requirements. Of course, this was not a particularly reliable process – the JAD sessions with busy, senior people were hard to manage, and would yield requirements ranging from “we want a nice user interface, maybe something like Netscape Navigator” to arcane rules on rounding financial calculations. The business sponsors were unusually responsive – we could usually get answers in a few days when we had specific questions. However, there was no comprehensive statement of objectives and requirements, and the business analysis team couldn’t substitute for the business sponsors.

We developers would regularly end our week in the pub around the corner muttering into our beer that if only someone could give us a complete, clear set of requirements, we could be finished with the project in a couple of months and go home. We lived in re-work hell – we’d finish a piece of software, the QA team would approve it, and when we showed it to the business owner they’d change something, and we’d start again. This feedback loop was typically 3 months or longer.

We weren’t following a traditional waterfall methodology – but it was close enough. Releases were painful and expensive, so we did one or two a year. Our team was measured on how many features we delivered according to specification, even if that specification was wrong. The quality of our requirements was low, and the feedback time was too long – so our instinct was to improve the quality of the requirements, and to create a process to prevent change to requirements. If our business sponsor gave us “bad” requirements, they should bear the cost.

Where was the “value” in our software? Even back then, in the glory days of client/server development, the code was the easy bit. It was incredibly laborious compared to today – but once we all agreed on what to build, writing the software rarely took more than a few days per feature. The real effort went into understanding, agreeing, refining, clarifying, validating the requirements, the re-work, the edge cases, the “but this requirement isn’t compatible with that requirement”. The project was a success – it saved the business tens of millions of pounds once live, and helped drive a culture shift within the business. But the value wasn’t in the code – it was in the agreed, prioritized requirements we’d implemented.

Fast-forward to today.

Most of the teams I work with can get a development release out in minutes, and feedback from clients in no more than a day. On most projects, we communicate using online tools like Jira and Confluence to capture requirements and design decisions. We use online chat, email and voice calls to discuss requirements and ideas, as well as team progress. Teams are distributed – my last few projects have had developers in at least 5 locations, and clients in 3 or 4 different offices.

And yet, on many engagements, we still treat code as “expensive” – we spend a significant proportion of our effort capturing, refining, grooming, prioritizing, designing, mocking up, visualizing requirements. It’s not uncommon for a software project to spend only around 30% of its budget on developers. Source code and the final product effectively become the output of a long, complicated process of turning PowerPoint into working software. I’ve seen this in both agile and “traditional” projects – though of course making public-facing, mass-audience applications for large brands is always going to be design-intensive.

While we have faster communications than in the 90s, and our software cycle time has gone from months to minutes, the challenge remains coming up with a product feature set that everyone agrees on, is feasible given the other project constraints, and which is captured in a way that can be used to manage the project.

It turns out that the solution to this is both simple, and impossible – the project needs a single, consistent point of view, which combines at the very least the team which is commissioning the software, and the team which is delivering it.


Inevitable futures – manufacturing

I recently finished Kevin Kelly’s “The Inevitable” – it’s good, positive, often revealing. But I want to work through some of the ideas and see what scenarios they might open up. First up – manufacturing.

When I left university in the late 1980s, I worked for a small multinational manufacturing conglomerate, and I saw a fair few factories on the inside. They were dirty, noisy places, with humans and machines interacting to transform one thing into another – aggregate, lime and cement into concrete, wood, laminate and hardware into kitchens, etc. The factories were large, and housed multiple specialized machines, storage areas for raw materials, intermediate products and finished goods. Human beings both controlled the process and did the work machines could not – from driving forklift trucks to cleaning the machines, or fixing them when they broke. Controlling the process was a big deal – most of the factories I worked in had roughly the same number of “administrative” staff as shop floor workers. Even though the factories made similar or even identical products every day, there were regular crises – machines breaking down, suppliers delivering late, customers changing their orders at the last minute.

Recently, I was lucky enough to visit the Rolls Royce Motor Car factory in the Sussex countryside. The contrast was amazing – it’s quiet, clean, controlled. Even though every car they produce is different, the process was almost serene. Far less of the factory was dedicated to “storing stuff”, and there were far fewer dedicated machines.

Of course, that’s because Rolls Royce mostly assemble and finish cars in their factory – most of the components that go into the car are made somewhere else. At Goodwood, they are put together, painted, polished, and generally glammed up with leather, wood, and all the other items that make a luxury car.

Now, I also got to have a look inside the engine plant of a motorcycle manufacturer a few years ago. I was expecting much more industrial grit – after all, engines are big, complicated things, made out of metal. Surely there would be lots of noise, and flashing lights and…well, no. Turns out that building an engine is also mostly assembling components delivered by suppliers.

I’m pretty sure it’s turtles all the way down.

The modern factory is possible only because we can process and exchange data across the globe, instantaneously. In the late 80s, we would fax or phone through orders to our suppliers; I spent a few months in the “planning” department, working out different ways to sequence customer orders to optimize production efficiency by shuffling index cards on a big felt board. We would then feed those plans into our manufacturing resource planning software, which in turn would spit out purchase orders (which we’d fax or phone through to our suppliers). We had lots of people throughout the factory collecting data (usually with a clipboard), and then feeding that into the computer.

Today, of course, most companies communicate orders directly, and factories gather their own data; the computer is much better at optimizing production capacity than a human could ever be, and as a result, the role of the human is increasingly about doing the things machines can’t do (yet).

I’m also pretty sure that this is just the beginning.

Once we have robots that can do tasks only humans can do today, self-driving lorries, 3D printing and nano-manufacturing, it’s easy to imagine lots of different scenarios. I’d like to consider one.

The local manufactury.

Right now, the cost of labour determines where we make most things – and as that’s cheap in China, Vietnam, Mexico, etc., our global economy takes raw materials, sends them (usually over great distances) to those cheap-labour places where they get transformed into products we want to buy, and then ships them halfway around the world again for consumption in the West.

What happens once robots can replace that cheap labour?

Of course the other reason to have a “car factory” or a “shoe factory” or a “phone factory” is to have a store of knowledge and skills. Some of those skills are directly related to the product – welding, sewing, assembling small electrical components. Many of those skills are organisational – “how do we do things around here?”. Some relate to design – the development of new products.

It’s not ridiculous to imagine that much of this knowledge – especially the product skills and the organisational skills – can migrate into computers.

If these trends continue, maybe the cost of shipping things around the world becomes critical. Maybe every neighbourhood gets a local manufactury – a building with pluripotent robots, 3D printers and nano-bots, managed by a scheduling AI, integrated into a supply network. Customers choose a product – from an “off-the-shelf” design, or by customizing a design, or by commissioning a design from a specialist, and send the order to the manufactury. The manufactury looks at the bill of materials, and places orders with its supply network; self-driving vehicles deliver the materials, and the manufactury schedules the robots to build the finished product, which – of course – is then delivered to the customer using a self-driving delivery van. Or a drone.

To create a shirt, the manufactury would order cotton, buttons, etc. – either in bulk (if the purchasing algorithm decides that keeping a stock of cotton makes sense) or “just enough”. The nanobots would create dyes to colour the cotton, and a robot would follow the pattern to cut the cotton into the components for a shirt, and stitch it together.

You could easily imagine such a manufactury making clothes, furniture, electrical components, household goods etc.

The economics would be interesting – but I imagine that the price of an object would be driven partly by the cost of the design and raw materials, and partly by the time the customer is prepared to wait. The economies of scale don’t go away – clearly making dozens, hundreds or thousands of the same product would be much cheaper than one-offs. You could imagine clever scheduling algorithms, aggregating demand from multiple neighbourhoods, so that when the threshold is reached for a particular product, one of the manufacturies configures itself to satisfy that demand. Of course, this could apply to finished goods and to intermediate products – manufacturies converting raw cotton to thread, thread to cloth etc. You can also imagine how specialized equipment – weaving looms, injection moulding presses etc. – would continue to offer significant cost advantages.
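As a toy illustration of that aggregation idea – entirely made up, with invented product names and thresholds – the core loop might look something like this:

```python
from collections import defaultdict

# Purely illustrative: aggregate demand across neighbourhoods and trigger
# a batch run at a manufactury once a product hits its threshold.
BATCH_THRESHOLDS = {"shirt-pattern-42": 50, "chair-design-7": 20}  # invented products

pending_orders = defaultdict(list)  # product id -> list of (neighbourhood, quantity)

def place_order(product_id, neighbourhood, quantity):
    """Record an order; return a batch to schedule if the threshold is reached."""
    pending_orders[product_id].append((neighbourhood, quantity))
    total = sum(qty for _, qty in pending_orders[product_id])
    if total >= BATCH_THRESHOLDS.get(product_id, float("inf")):
        batch = {"product": product_id, "quantity": total,
                 "deliveries": list(pending_orders[product_id])}
        pending_orders[product_id].clear()
        return batch  # in this sketch, the caller assigns the batch to a manufactury
    return None

# Orders trickle in from different neighbourhoods...
place_order("chair-design-7", "Wanstead", 8)
place_order("chair-design-7", "Leyton", 7)
batch = place_order("chair-design-7", "Walthamstow", 6)  # total 21 >= 20 -> batch run
print(batch)
```

The interesting economics would live in the threshold table: set it too high and customers wait too long, set it too low and you give up the batch savings.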

When? How?

This is just speculation. There are many leaps of faith – I’m pretty sure I made up “pluripotent robot” as a phrase, and while 3D printing and nano-materials are not purely speculation, they’re also not yet ubiquitous. Lights-out factories are still not mainstream, let alone factories that can re-configure themselves every day.

But ecommerce and digitisation mean we’re all spending less time on the high street, and becoming more accustomed to ordering stuff on the internet and having it turn up. Amazon especially is innovating in logistics and supply chains – I can order coffee beans and printer ink on my phone, and they will deliver them within 2 hours.

So, if this happens, I’d bet it would be a company like Amazon that leads the way – they already have highly automated distribution centers, so the jump to manufacturing isn’t quite such a big one. They have the computing power, and the customer insight.

Europe.

I feel European. If I shared any of cousin Dirk‘s talents, I’d qualify to play football for 3 countries. I grew up speaking English at home, Dutch at school, and Frisian with my friends in the playground (though I never got the hang of Sneekers). Growing up, school and music trips went to France, Belgium and Germany; I can read a newspaper in French, German, Italian and Spanish. I have friends and colleagues from around half the 27 remaining EU countries.

I love classical music from the continent – Bach, Mozart, Vivaldi, de Falla, Lully, Beethoven, Sweelinck. I love continental food. I love continental cities. I love continental European comics – Franquin, Hergé, Toonder.

I’ve chosen to live in the UK for the last 30 years – I love the UK too. London is an amazing city. Many of my favourite authors – Martin Amis, William Boyd, David Mitchell – are British. The BBC is amazing. Even the food is getting better.

But now, after the vote to leave the EU, it feels like I have to choose. It’s not clear what the UK’s relationship with Europe will be – but I fear the worst.

Project management job number one: land the f****ing plane

I’ve been making software for a few decades now, and worked on all sorts of projects – small, large, complex, simple, fun, and not-so-fun. One of the biggest problems with software is the amount of information a developer needs to keep in his head (I believe Dijkstra once wrote that software developers were unique in having to be able to understand, simultaneously, 7 levels of abstraction). The same is true for those who manage developers.

On a large project I was involved with recently, I noticed that the project management team was working really hard, but not making much progress. I looked at all the streams of activity, and I noticed that the project had lots of outstanding decisions. When will we do the training? Who will manage QA? What day will we have the management call? Which version of the API should we use?

It reminded me of an iPhone game I’d played for a bit – I think it was called “Air traffic control” – in which you have an airfield, and planes arrive on the screen; the job is to land the airplanes. As the game goes on, it throws more airplanes at you, and eventually you’re overwhelmed by the number of aircraft, they crash, and the game ends.

It’s mildly diverting, and a good way to while away the tube journey.

It occurred to me that our project management team wasn’t landing enough planes – and the more planes are circling the runway, the more likely it is they’ll crash. Most people I know can keep a handful of things in their brain at one time (working-memory research puts the limit somewhere between four and seven items), and the whole “Getting Things Done” system is designed around this.

The issue with project management, of course, is dependencies. One pending decision can block 4 other decisions, and before you know it, you end up looking like that guy from Airplane!, trying to keep the whole thing spinning, and dedicating all your energy to stopping the planes from crashing into each other, rather than to landing the planes.
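To see how quickly one stuck decision fans out, here’s a toy sketch (invented decisions, nothing from a real project) that walks a dependency map and counts everything transitively blocked by a single open item:

```python
# Toy example: which items are transitively blocked by one stuck decision?
# The decision names and dependencies below are invented for illustration.
DEPENDS_ON = {
    "book training venue": ["agree training date"],
    "write training material": ["agree training date", "choose API version"],
    "finalise QA plan": ["appoint QA manager"],
    "schedule UAT": ["finalise QA plan", "book training venue"],
}

def blocked_by(decision, depends_on=DEPENDS_ON):
    """Return every item that cannot move until `decision` is made."""
    blocked = set()
    frontier = [decision]
    while frontier:
        current = frontier.pop()
        for item, prerequisites in depends_on.items():
            if current in prerequisites and item not in blocked:
                blocked.add(item)
                frontier.append(item)
    return blocked

print(sorted(blocked_by("agree training date")))
# ['book training venue', 'schedule UAT', 'write training material']
```

One open date decision quietly blocks three other planes – which is exactly how the circling starts.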

And this, of course, affects everyone. The developers find that they can’t work on something because we’re waiting for a decision. The number of items that aren’t “done” grows every day – and when a decision finally is made, the job of updating all the dependent items grows with it. The project sponsor sees an ever-longer list of open topics, none of which make much progress, and eventually everyone forgets what they were about. Risks that could have been avoided with a small amount of effort earlier suddenly erupt into craziness.

So, project management job number one: land the plane.


My kids don’t watch TV. How will you sell them anything?

Disclaimer – views entirely my own, nothing to do with my employer.

Familiarity ≠ best

Advertising seeks to persuade human beings to make one choice over another. A big part of this has been taking advantage of our tendency to replace hard questions (“which can of beans would be the rationally best choice?”) with easier ones – very often substituting “most familiar” for “best”. Daniel Kahneman’s book Thinking, Fast and Slow includes a chapter on this.

Much of the effectiveness of advertising depends on this principle – instead of evaluating the price, quality, nutritional benefits of a can of beans, the advertisers hope we’ll remember “Beans means Heinz”.

That strategy works – especially for products where we don’t expect a big upside from expending the effort to make a “better” choice (will that other can of beans really be so much better?), or where the downside of a wrong choice is (perceived as) high – I’ve never heard of car brand x, it’s safer to stay with a brand I’ve heard of.

But there are some powerful forces eroding the magic bullet of familiarity.

Howling into the void

Becoming “familiar” was never easy – you’d need a memorable message, you’d need a big budget to put it in front of your target audience, and you’d have to hammer home the message over many years. Today, I don’t think it’s even possible any more – no matter how much advertising spend you have, becoming “familiar” just through advertising would be unthinkable (if you’re aiming at a mainstream audience).

People are actively avoiding advertising if they can; if they can’t, they ignore it. The decline of print audiences and the fragmentation of linear TV mean the “old” channels are becoming much less effective (even ignoring the fact that linear TV as a medium doesn’t look like it has a great future – nobody I know watches “TV” – it’s all streaming, on-demand, box-set and event-based viewing).

The big business success stories of the last decade – Facebook, Google, Amazon, Uber, AirBnB etc. – don’t advertise much. They use word-of-mouth and built-in mechanics like referral schemes – but most of all, they have a great, useful product.

From information scarcity to abundance

The “familiarity” model is based on information scarcity. If I have to choose a product in a supermarket, and I have no other information to hand, instead of reading the label and making a comparison with similar products, it’s tempting to go for “which product have I heard of”.

And it’s not all that long ago that consumers didn’t have access to much other information. Before the Internet, you might know a few friends’ and family members’ opinion on something; you might read a magazine or a book; you might, for an important purchase, order a report from a consumers’ organisation.

Today, you can find out instantly what all your friends and family think about a product by asking on a social channel. You can find out what strangers think on review sites. You can find out every aspect of the product or service by running a quick search. And as we are all consuming more “information” every day, the chances of having no other information available are declining.

So, to become “familiar” is harder, more expensive, and less effective.

The end of the “Friday afternoon car”

In the 1980s, a friend bought a brand new car; it was an MG Metro. She owned the car for about 2 months before it broke down, so she took it back to the garage for a repair. 3 weeks later, it broke down again; after 6 months, the car had been back for 4 repairs. The mechanic at the garage introduced me to the phrase “Friday afternoon car” – the idea was that the factory workers wanted to get home for the weekend, so cars built on a Friday afternoon would be rushed, and suffer from problems.

It’s now pretty much impossible to buy a Friday afternoon car – even the cheapest, least prestigious car manufacturer is delivering a high-quality product that will do exactly what you expect, and will easily outlast its warranty period.

The same is true of most consumer goods (financial services are a notorious exception) – supermarket own-brand beans may taste different to the brand names, but they aren’t “worse”. Clothes from a discount store will last just as long as those from a high-street chain. You can watch Netflix on a discount laptop just as well as on an Apple.

The value of “brand” and “familiarity” in customer decision making is declining – now that you cannot buy a “bad” product any more, the safety of going with the familiar brand is declining in importance.


The tragedy of the mega-pixel

A few years ago, I met an executive from a large camera company. Before digital photography came along, this company’s marketing (and manufacturing) emphasis had been on the quality of their lenses. This is a subjective field – you can use focal length and aperture as a proxy measure, but no serious photographer would equate a “no-name” lens with the same metrics to a lens from a well-known manufacturer.

And you know what? It was broadly right – a good lens meant a better photo.

Then the digital camera came along – and now people buy cameras based on one simple metric: the number of megapixels. This is not really correlated to image quality for most people (unless you want to print a photo to cover a bus shelter). But it allows consumers to compare products using a nice, simple metric – “camera x has 12 megapixels for $200, camera y has 15 megapixels for $200 – camera y is the better deal”.

The camera executive called this phenomenon “the tragedy of the mega-pixel” – he said his company culture had changed. The focus on lens quality was still there – but it wasn’t commercially meaningful in the short term. When it came to dollars, it was better to invest in megapixels than in glass.

Restaurant review: Provender, Wanstead

Last week, we went for a Sunday lunch at Provender, on Wanstead High Street, in North East London.

It’s a small French restaurant, and pretty much every table was taken, including the small area outside (we’d booked in advance). The interior is bright and tasteful, without being pretentious.

We had the big starter platter – Hors d’oeuvre “Royale” – which was frankly amazing. Charcuterie, a very nice rillette, and a celeriac remoulade that I will have to experiment with. The other side of the platter was fish – smoked salmon, salmon mousse, and small sections of what looked like swordfish (I don’t eat fish). The charcuterie was spectacular – two sliced dried sausage varieties that were subtle, but each had a distinctive flavour I can still taste 2 days later…

My main was the steak tartare. I love tartare – it’s hard to find in the UK. The Provender tartare was minced by hand, and it makes a difference – the texture was much more interesting. The flavour was robust – the seasoning was just the right side of aggressive, and the meat was clearly from a good butcher.

My lunch partner had the coquilles St Jacques, served with saffron risotto. She smiled beatifically, and reported a state of bliss.

For dessert, I had the blackcurrant sorbet – rich with a cassis liqueur, and very fruity. My lunch partner chose the chocolate tart, which may well be the most chocolatey thing I have ever tasted.

We had a nice bottle of wine; the total bill came to just over £100. Excellent value for money.
Thoroughly recommended.