
Running a good meeting.


Running a meeting is an art form. I don’t know anyone who actively enjoys meetings, and I often find myself in meetings which feel like a waste of time. But I’ve also been able to turn projects around just by running a few good meetings. Because meetings are social rituals, and we primates thrive on social rituals.


Firstly – define very clearly what the purpose of the meeting is, and whether a meeting really is the best way to achieve that purpose. Usually, a meeting is not a great way to explore new opportunities, solve problems, or even reach decisions. Workshops or other collaborative frameworks are much more efficient – the format invites much more interaction, and they are much better suited to situations where both the process and outcome could change.

I find meetings best suited to situations where a group of people (who don’t normally work together) need to create a shared understanding and (perhaps) confirm decisions.
Another useful scenario is where one party holds information, and needs to broadcast that information to a group.
More generally, a meeting is useful where you can work through a predictable series of topics, and the discussion is limited to a few individuals (in a business setting, that is – I’ve seen meetings in political groups, creative consortia etc. which work well in a much looser setting).

It’s worth explicitly thinking about what a “successful” meeting outcome would be. For instance, “this meeting will have been a success if person X agrees to fund our next phase”, or, “this meeting will be successful if everyone understands and agrees that the situation is X”.


Next, decide who should attend, what their interests are, and how they tend to act in meetings. Often, there will be one or two attendees who are key, and understanding their interaction patterns will be a big part of creating a successful meeting. Sometimes, these are the most senior people who attend, but those senior people will often base their opinion on a trusted advisor.
I once stepped into a project that was seriously behind the planned milestones, and looked like it was going to exceed budget. We organised a meeting with the customer to agree next steps; the decision maker was a director-level executive. We knew that he was generally confused and angry about the project, but trusted a programme director to advise on next steps. The programme director, in turn, trusted her two technical and delivery leads. To make the meeting a success, we had to get all four on side – not just the director.

Once you’ve identified the key influencers in the meeting, it’s worth thinking about their communication style. I know there are lots of personality type theories; for this purpose, I have a simple 2-dimensional model.

Big picture

Some people – including many senior folk – prefer to understand the big picture first, before diving into the detail. In my experience, most senior executives fall into this category. It’s not that they don’t value the detail, but they are interested in the “so what”. They are likely to lose interest unless you phrase the conversation in terms of “big picture” concerns, and they may appear unpredictable as a result.

As a result, it’s worth structuring your agenda topics “top down”: the agenda item should be the conclusion or the question. The conversation should go from conclusion to no more than 3 “because”, “despite”, “as a result of” points; each of those points can then be built up in more detail.

Consider the following hypothetical situation: you want to communicate the likelihood your project is going to overrun, due to a new regulatory requirement.

Your agenda item might be: Project likely to overrun by 5 weeks.
Once you reach that agenda point, you might start with the following:
“The project is likely to overrun by 5 weeks. This is because the government has changed the regulatory requirements. The project is otherwise on track; we have looked at ways to mitigate the regulation changes but cannot find a credible option.”
Someone who cares about the big picture can now decide whether to pay attention to the rest of this topic, because you’ve told them up front what’s happening and why.
The trap many people fall into is to have an agenda topic along the lines of “Regulatory changes”, and to start that conversation with a detailed outline of the legal requirements before reaching the impact on the project. To someone who cares about the big picture, by the time you reach the “so what” (the project will be delayed), they’ve switched off.


At the other end of the spectrum there are people who are focused on the detail, and prefer to build up their understanding “bottom up”. Meetings are usually not great places for exploring a lot of detail, but often the real substance of an issue can be found in complex, subtle details.

One great way to include details in a meeting is to use visualizations. For project progress, burndown/burnup charts, defect counts, or progress towards milestones can show a lot of information, covering a lot of detail, without needing to discuss every item. I typically find the right level of abstraction to be: what the most senior person cares about, plus one level of detail below that.
For instance, on the project that was at risk, the senior stakeholder cared about “when will it be finished, how much will it cost, and how many agreed features will be missing?”. So we drew up a chart that showed “features delivered”, “features remaining” and “bugs” over time. We accepted (and agreed with the hands-on client team) that some of those numbers were very approximate, but good enough to show the overall trend. We used that graph in our ongoing meetings as the common “now we talk about detail” agenda item.
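The kind of chart we used can be sketched in a few lines of code. Every number below is invented, purely to illustrate the shape of the data; the real figures came from the team’s tracker and were, as noted, approximate but good enough to show the trend.

```python
# Illustrative weekly snapshots for a burn-up style progress chart.
# All numbers are made up for illustration.
weeks = [1, 2, 3, 4, 5]
features_delivered = [3, 7, 12, 18, 25]   # cumulative count
features_total = 60                       # agreed scope
open_bugs = [4, 9, 7, 11, 8]              # the "bugs" line on the chart

# Average delivery rate over the observed weeks (features per week).
rate = features_delivered[-1] / weeks[-1]

# Naive linear projection of when the remaining scope will be done.
remaining = features_total - features_delivered[-1]
weeks_to_finish = remaining / rate

print(f"Delivery rate: {rate:.1f} features/week")
print(f"Projected weeks remaining: {weeks_to_finish:.1f}")
```

Even a naive projection like this gives the meeting a shared, concrete “now we talk about detail” anchor, without walking through every work item.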


Next, we talk about decision making and emphasis. Some people focus very heavily on the substance. When buying a car, they’ll focus on the measurable, tangible qualities of the car (rather than the trustworthiness of the sales person). When talking about a meal at a restaurant, they might discuss the quality of the ingredients, the skill in preparation, or describe the decor (rather than how the meal tasted, or how it felt to be in that restaurant).

When running meetings with people who focus on substance, it helps to stay focused on tangible, and ideally measurable, facts. They will often see success as “the thing we agreed to do has been done”, or “we have 5 things to do and everyone knows what they are”.


I’ve struggled to find a better word than “relationship” here – but people who focus on the relationship aspect will use meetings to assess the interactions between people and their confidence levels. They may consider a meeting successful if people formed a better relationship during the meeting. At a more challenging level, they may also judge meetings by the way they personally are treated; if they feel personally slighted, they will consider the meeting a failure no matter what the outcome.
I once attended a meeting with a very senior colleague who was focused on relationships. It went extremely well – we achieved all the aims we had set out to, and got approval for a large project. My colleague, however, told me in the taxi back that we couldn’t work with those people, that they were not good partners and that we should back out – because he hadn’t been offered a cup of coffee at the start of the meeting.
Annoyingly, he was right – the project never got off the ground…


It’s a very simple model, but by trying to place people on the “big picture/detail” and “substance/relationship” axes, you can prepare your meeting in a way that will appeal to the key stakeholders.

The point, though, is that most people have preferences along both axes. Most senior executives I’ve met are “big picture” people, and a large number have a preference for “relationships”.

In structuring a meeting for someone who values “big picture/relationships”, it’s important to remember you cannot persuade them with facts alone – you have to create relationships of trust. I do this by focusing on transparency – by being open and owning up to things that aren’t so great, I try to show that I’m a trustworthy person. I also make sure I remember what they care about, and make it a part of every encounter. One of my clients was very focused on relationships, and in one meeting he explained he had a strategic objective to move his hosting setup to a new cloud provider; even though that wasn’t part of our remit, I made sure to include a check-in on every agenda.

People who value “big picture/substance” are often intellectually demanding. It’s important to be on top of your material, and to have a clear structure to your agenda. One of my first jobs was in a management consulting firm, and my boss (or rather my boss’s boss’s boss) was a classic of this type. He was intellectually curious, and his “big picture” dramatically exceeded my own ability to imagine. My first meeting with him was chaos – he kept moving the goal posts, and kept asking questions until I ran out of answers, then moved to the next topic, usually far off the agenda.
I figured out how to run a good meeting with him – we’d start by explicitly agreeing on the purpose of the meeting, and then for each agenda item, we’d agree what we were trying to achieve. This kept the meetings on track.


The agenda is actually the most important part of the meeting – it’s how you express what you are trying to achieve with the meeting, and it’s the main tool you have to keep things on track.

The exact format of the agenda isn’t super important – I’m quite fond of a presentation which can combine both the agenda and the information required for the discussion. I have settled on the following format:

  • Introduction: date, place, meeting name, start time, end time
  • Attendees: names and contact details for all attendees
  • Meeting purpose: a brief summary of the purpose for the meeting
  • Topic 1: subject, allotted time, topic type (presentation, discussion, confirmation), owner
  • Topic 1 supporting information
  • Topic 2…n
  • Summary: restate meeting purpose to verify it was achieved. Next steps/actions.
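The format above can be captured as a simple structure, which also lets you sanity-check the timings before you send the agenda out. Everything in this sketch – names, times, topics – is hypothetical:

```python
from datetime import datetime

# A hypothetical agenda following the format above.
meeting = {
    "name": "Project steering",
    "start": datetime(2020, 1, 15, 9, 0),
    "end": datetime(2020, 1, 15, 10, 0),
    "purpose": "Agree next steps on the delivery plan",
    "topics": [
        {"subject": "Project likely to overrun by 5 weeks",
         "minutes": 20, "type": "discussion", "owner": "Nev"},
        {"subject": "Revised milestone plan",
         "minutes": 25, "type": "confirmation", "owner": "PM"},
        {"subject": "Summary and actions",
         "minutes": 10, "type": "confirmation", "owner": "Nev"},
    ],
}

# Sanity check: allotted topic times must fit inside the meeting slot.
available = int((meeting["end"] - meeting["start"]).total_seconds()) // 60
allotted = sum(t["minutes"] for t in meeting["topics"])
assert allotted <= available, f"Agenda overruns by {allotted - available} min"
print(f"{allotted} of {available} minutes allotted")
```

A check like this catches the most common agenda mistake – scheduling more discussion than the slot can hold – before anyone sits down.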

It’s usually best to have the most engaging topics as early in the agenda as possible – demos, decisions, significant changes to plans, etc. This means people are less likely to have become distracted, or been called out for other meetings.


Running a meeting to time is a real skill – but it really matters. When I worked in a management consulting firm, every meeting room had a clock on the wall; it wasn’t uncommon for someone to remark on the billable hours every meeting cost. For senior attendees, every minute of their day is booked, and your meeting is just one of a hundred other things they could be doing. So run the meeting to schedule, and finish early if you possibly can.

That means you should make sure every agenda item has an allotted time slot, and that no item exceeds its slot. If you get to 10 minutes from the end of the meeting and you are worried you’re going over time, step in and do a time check; ask what the options are.


Finally – meetings nearly always have some kind of outcome – actions, follow-ups, or meeting notes. Get them out the same day, while the meeting is fresh in everyone’s mind.

Software engineering career ladders


This conversation seems to keep coming up – “how do we give software engineers (in the widest sense – DevOps, QA, etc.) a career ladder that allows them to progress, and keep up with market pay rates?”. A friend recently asked for advice – his team was consistently being raided by better-paying companies, but his HR policy had pay scales linked to seniority, and he couldn’t match the higher pay even if he wanted to.

The other common problem is that hands-on engineers often reach a pay ceiling, and feel they have to turn to “management” to earn more money in the same company. Especially in organisations where software development is not (considered to be) a core competency, there’s often a fairly low ceiling on earnings for technical people.

And of course, many people like to see their career progression reflected in a better job title, additional perks, and/or greater authority. They rightly consider the right job title to be a springboard to new jobs in the future – a CV with impressive job titles opens more doors.

There are lots of ways to address this problem. Construx have a very detailed professional development ladder – but its paths all end up taking you away from the hands-on work and into “management” roles. I quite like the Shuttl model – it shows a divergence between “people and projects” (traditionally associated with management roles) and “systems and services”. This allows people to progress up to the highest levels (VP Engineering and CTO) via either track. However, if salary and other benefit levels are coupled to the ladder, you risk losing people because the market for their skill is hot.

The way I’ve addressed this question in the past is to have a consistent ladder in terms of job titles; the job title reflects the degree of authority and autonomy an individual has.

  • Associate: Works under supervision. Follows guidelines and processes already defined.
  • Senior: Works independently. Follows guidelines and processes, and helps to improve them.
  • Lead: Can take team-level (5-15 people) decisions. Selects guidelines and processes from an agreed list, or works to define new ones.
  • Principal: Can take group-level (up to around 50 people) decisions. Defines selection criteria for guidelines and processes.
  • Director: Can take department-level (up to around 300 people) decisions.
  • Chief: Can take organisation-level decisions.
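The ladder is simple enough to capture as a lookup from job title to the scope of decisions that level is trusted with. The scope descriptions and group sizes mirror the ladder above; the code itself is just an illustrative sketch:

```python
# The ladder above as data: (title, decision scope, rough max group size).
# None for "Chief" means the whole organisation.
LADDER = [
    ("Associate", "works under supervision", 1),
    ("Senior", "works independently", 1),
    ("Lead", "team-level decisions", 15),
    ("Principal", "group-level decisions", 50),
    ("Director", "department-level decisions", 300),
    ("Chief", "organisation-level decisions", None),
]

def level_for(title):
    """Return (decision scope, max group size) for a given job title."""
    for name, scope, people in LADDER:
        if name == title:
            return scope, people
    raise KeyError(title)

print(level_for("Principal"))
```

Having the ladder in one explicit place makes the “can you give us examples of group-level decisions you’ve taken?” conversation easier to ground.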

Within each discipline, those levels have their own skill and experience level expectations. A “discipline” might be “mobile development”, “back-end development”, “QA”, “business analysis”, “delivery lead” etc. Not every organisation will need the full ladder of roles – but it’s better to be honest about that fact, rather than suggest you can work your way up to Chief QA Engineer (for instance).

An unexpected benefit was that it allowed us to reflect realistic expectations of experience – new languages, frameworks and processes are introduced all the time, and insisting on 6 years’ experience in a brand-new framework was obviously ridiculous.

When we introduced this, there was a fair amount of pushback – some individuals felt they matched higher levels in the ladder, others felt the ladder did not recognize their combination of skills. But by bringing it back to the level of authority and autonomy, we could have reasonable conversations. “So, you think you’re a principal software engineer – can you give us some examples of where you’ve taken group-level decisions on software engineering?”.

We then made the salary bands pretty wide – especially at the “associate”, “senior” and “lead” levels, and we varied the salary bands to reflect market pay rates (we paid a consulting firm to provide benchmarks).

It wasn’t perfect – some people really wanted a better job title, and felt that the ladder didn’t provide the flexibility they wanted. Others (rightly) felt the opportunity to demonstrate wider decision making experience was lacking, so they got stuck at their current level for too long.

But the benefits were clear – we had a simple, flexible way to define levels of seniority, and it was easy to communicate. It allowed us to distinguish pay based on what the market was looking for, and we could explain to the rest of the business that we had independent confirmation that mobile developers _really_ were paid that much.

Slices, layers, tiers, components – oh my!

A few days ago, I wrote about my experience delivering a large web project and compared it to the Hertz/Accenture lawsuit. A friend asked me to go into more detail – here’s the gist of our conversation.

X: isn’t there a risk that by delivering slices, you’ll introduce lots of duplication? If I want to implement a business rule saying “all orders must have a validated shipping address”, and I have two slices – “website user creates order” and “contact centre creates order”, won’t they duplicate that logic?

Sure – that’s a risk. You could well end up with a mess of duplicate code, especially if you have lots of teams working in parallel. There are broadly speaking two ways to mitigate this risk.

The most common is through “architecture” – you agree a software design where there is a single component which manages orders and their business rules. This is perfectly reasonable – but it often comes with some baggage. There is a temptation to design this architecture in great detail, and assign a team to each component. This often means that the component is designed up front, and cannot change in the light of real requirements that are uncovered during development. I’m all for architecture, and I’m all for cleanly defined components which do one thing and do it well. But I believe architecture should be lightweight and focus on principles and infrastructure, and the actual implementation should evolve along with the application.

The second way to avoid duplication is through process and culture – you make sure the teams communicate about their design decisions, and know how and where to look for existing code which is related to their work. You make sure the team has enough time to refactor and extend existing components, and that everybody understands that the team is evolving the architecture together.
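Either way, the goal is the same: the rule lives in exactly one place. A minimal sketch of the “validated shipping address” example, with all names hypothetical:

```python
# The shared business rule lives in one function; each "slice"
# (website, contact centre) owns its own workflow but calls the
# same rule. All names here are hypothetical.

def has_validated_shipping_address(order: dict) -> bool:
    """Single source of truth for the shared business rule."""
    address = order.get("shipping_address") or {}
    return bool(address.get("validated"))

def create_order_from_website(order: dict) -> dict:
    if not has_validated_shipping_address(order):
        raise ValueError("order needs a validated shipping address")
    return {**order, "channel": "web"}

def create_order_from_contact_centre(order: dict) -> dict:
    # Same rule, different slice - no duplicated validation logic.
    if not has_validated_shipping_address(order):
        raise ValueError("order needs a validated shipping address")
    return {**order, "channel": "contact-centre"}

order = {"shipping_address": {"line1": "1 High St", "validated": True}}
print(create_order_from_website(order)["channel"])
```

Whether that shared function emerges from an up-front design or from teams refactoring towards it matters less than the fact that both slices end up calling the same code.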

X: What about multiple UIs? What if I want to build a platform that supports web, mobile apps, smart TVs, VR glasses?

Sure, another really important consideration. I’ve worked on projects with multiple brands, multiple markets, multiple devices – the possibility for lots of duplication is even bigger. The same answer applies as above, really – yes, a clean architecture is the answer, but no, designing that architecture up front and dedicating a team to the “re-usable” layer is often a bad idea. The user experience on those different devices is often very different, and it’s much better to evolve the re-usable layer as you learn about what you really need than trying to design it in detail up front.

X: And what about skills? Front-end developers want to work with other front-end developers, you get much better code if you have people work together on the same technology layer.

This is indeed a challenge – especially when you have skill requirements you can’t realistically embed in each “slice” team, or where you have particularly complex requirements in one part of the slice. But in my experience, once developers are used to working in cross-functional teams, they really enjoy it – they learn a lot from working with other disciplines. Some of the most useful code review sessions have come from front-end developers and back-end developers looking at each other’s code – they tend to spot mistaken assumptions, rather than stylistic problems.

I don’t think there’s a nice structural solution to this question – a neat org chart you can draw showing how teams work. I think the solution lies in culture, rather than structure – coding in the open, joint ownership, a shared understanding of “what the project is about”.

X: OK, but how big should a “slice” be?

Good question. As usual, it depends.

The key goal of “slices” is to make sure everyone – developers, business sponsors, QA, designers – has the same perspective on “what the project is”, and “how much is done”. So a slice needs to be demonstrable to business people, in a way that clearly communicates what works and what doesn’t. It needs to have a user interface (even if most of it is placeholder), and it needs to push logic all the way down the stack.

But it doesn’t need to scale, or be fast, or be pretty. It can have some technical debt, and it can cover only the “happy” path. It might only work for one of the different types of users, or one type of content, or one workflow.

X: I think I understand. What do I do after my first slice?

Another good question. I like to structure projects so that we build the first slice as quickly as possible (4 weeks or so is great); I typically ask the entire team to focus on this goal. Once the first slice is delivered, I like to add more slices every 2 weeks. At this point, it may make sense to have some of the team working on cross-cutting concerns (build/deployment, shared look & feel, etc.). It may also make sense to build larger components outside a slice – some things just take more than 2 weeks to deliver. However, I try to keep at least half the team working on the “slice”.

Stories and numbers: OKRs

“Where are we, and what should we do” is a question I’ve had to answer on many consulting engagements. It’s often wrapped in very different language – “How do I deliver this product?”, “Can we invent a new service to serve this type of client?”, “How do we counter this competitor’s move?” are all variants of the same fundamental question.

“Where are we” is often a really hard question. Qualitative assessments – “we’re the quality leader in our market”, “we’re broadly on track”, “we have happy clients” are great, but very hard to analyze for patterns and underlying issues. Quantitative measures – “our velocity is 33 points / sprint”, “we need to be finished in 6 weeks”, “we’ve spent £x out of a budget of £y” are much easier to track, but metrics are often gamed, and future projections are often unreliable. Some important attributes are hard to capture in numbers – how loyal are your customers really? How productive is a developer? How “big” is this project?

Nevertheless, understanding where you are is key to deciding what to do next. I worked with a client once who had a fundamental usability problem with one of the key customer interactions. This manifested in really low loyalty – customers would try the service, get a good experience during purchase, but a bad experience during delivery. They asked us to help improve customer loyalty – they had correctly found that loyalty was well below industry standard. But the reason for poor loyalty (“where are we?”) was not a shortage of features or benefits in the loyalty program. Our initial briefing was to find innovative loyalty ideas; yet the first 5 customers we spoke to all told us that the service had been below par, and that this was their primary reason for not returning. Once this became clear, “what should we do” was obvious.

The numbers, in this case, were all there – we had “low loyalty” numbers, as well as indicators that customer satisfaction with the core experience was low. Digging into application statistics, there were plenty of numerical indicators showing the point where customers were dropping out. What was missing was the story making sense of those numbers.

This is one reason I like Objectives and Key Results (OKRs): the objective is a story. We’re going to dominate the market! We’re going to build an amazing product! We’ll create a fantastic team! The key results are the numbers that tell you whether your story is going to come true.
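The structure is simple enough to sketch: a story, plus the numbers that test it. The objective and targets below are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    """A number that tells you whether the story is coming true."""
    description: str
    target: float
    current: float = 0.0

    def progress(self) -> float:
        return min(self.current / self.target, 1.0)

@dataclass
class Objective:
    """The story, plus the key results that test it."""
    story: str
    key_results: list = field(default_factory=list)

    def progress(self) -> float:
        if not self.key_results:
            return 0.0
        return sum(kr.progress() for kr in self.key_results) / len(self.key_results)

okr = Objective(
    story="We're going to build an amazing product!",
    key_results=[
        KeyResult("Weekly active users", target=10_000, current=4_000),
        KeyResult("App store rating", target=4.5, current=4.5),
    ],
)
print(f"{okr.progress():.0%}")
```

The point of the structure is that the number only means something next to the story: 70% progress towards “an amazing product” invites a very different conversation than 70% towards “dominate the market”.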

Most people I know are not motivated by numbers – even numbers like “salary” are only marginally interesting – it’s the stories people tell about the numbers that matter. “Once I get a salary of £x, I will go on that safari holiday I’ve always wanted”. The story matters.

Accenture and Hertz – slices versus layers

I wrote earlier about Hertz suing Accenture over the failed web replatforming. Again – I have no knowledge other than what I read in the press, and this is all conjecture.

There’s a line in the article that stood out for me: “Despite having missed the deadline by five months, with no completed elements and weighed down by buggy code, …”.

A few years ago, I picked up a large project that appears to have used the same technology stack (Adobe Experience Manager, Angular). The project had been running for a few months, and our client was unhappy – they couldn’t tell if we were making real progress. “Nev, can you have a look and see what’s going on?” asked my boss, so I went for lots of coffee with the team, and ended up taking over the delivery of the project.

The key red flag was this: the team had been working for around 6 months, but didn’t have anything they could show me other than some passing integration tests. We were building a website, but there were no actual web pages. There was some JSON that could be transformed into web pages, there was a content structure that could create and store JSON, but the only people who could assess progress were developers.

Our client was not a developer – they were subject matter specialists, product owners, business people. They understood the “first you have to build the foundations” logic of building the underlying structures before worrying about making web pages. But they felt that 6 months was a long time to wait, and they felt the team was unable to explain when they might see a working web page. They also felt there was a real risk that once we started making web pages, we’d have to revisit lots of “foundation” code. They were right, as it happens.

The first thing we did with the team was to create placeholder pages for the major parts of the site. AEM has the concept of “component” – a widget which shows a bit of a web page – and “template” which defines which components go on the page, and how they fit together. So, we started to build all the templates we needed, and placeholder components to go on those templates. We made sure that the templates and components reflected the key design decisions (how they’d render on different screen sizes, basic colour and styling), and created a basic version of the site.

This took a few weeks, and raised lots of questions. “Where does this content come from? How does this component work on mobile? How do you get from this page to the next?”. It was uncomfortable – we found out exactly how much we didn’t yet know. It also exposed assumptions we’d made in designing the “foundation” which were totally incorrect. We had uncomfortable conversations with the client – as we found answers to our questions, we discovered many areas where we weren’t aligned on requirements. Some of those misalignments reflected significant amounts of effort.

But overall, the trust between our team and our client improved. The conversations were concrete and limited – instead of asking “how should content workflow be set up?”, we could ask “how do we manage content for this widget on the homepage?”. Many of our assumptions were tangible – “we thought the navigation would be a static component, you think it’s data-driven”. By focusing teams on a component (front-end, back-end, design, QA), we could demonstrate that we could make progress in ways our client understood. We’d agree how a component was supposed to look, how it worked, where the content came from, and then assigned a team to deliver that. Within days, the client would see progress, and their level of confidence would grow.

I refer to this as the “layers versus slices” challenge. Logically, it makes sense to build the foundation before you worry about hanging pictures on the wall – but I think there’s a better metaphor than building a house. I see it more like building a city – you want to put down major infrastructure like roads, sewage, utilities first, but then you build each house individually – foundation, walls, roof, interior. You can build several houses concurrently, but you don’t build the foundations of all the houses in the city first, then the walls, then the roof etc. (I may have been playing Sim City).

On a web project, the infrastructure is setting up development environments (a huge pain on AEM!), basic content repository structure (how do you manage sub-sites, language variants etc.), deployment pipelines, BDD/TDD testing framework, design system (e.g. material design) with default styling for the project, and the source code control system.

Once you have the infrastructure layer, I build the user interface, using the basic design system (which may look more like “boxes and arrows” style wireframes than the finished product), and minimum versions of all the components. This should give you a web site with placeholder content, and minimal styling. You now have two layers – infrastructure and user interface.

The next phase is to focus on slices – build up all the components so they work properly (however you’ve defined that!), look right, and have the correct content.

I may be reading too much into the line in the article – but it sounds to me like the Hertz project focused on “layers”, at the expense of “slices”.

Thoughts on the Hertz – Accenture lawsuit

Let’s start with a disclaimer – I have no knowledge of this situation other than what I’ve read in the news. This post is conjecture and opinion, not fact!

There’s a news story about Hertz suing Accenture over the design and build of the new Hertz digital ecosystem. Many of the challenges sound horribly familiar – and there are lots of smart people commenting on Twitter. When reading the articles, two things really stood out for me.

Firstly, Hertz seems to have treated the engagement as a one-off project, and secondly, they outsourced pretty much the entire project to someone else. I think those two aspects of the project are fascinating.

Let’s start with treating the re-design of your platform as a project. I may be entirely wrong, but I assume that for Hertz (prospective) customers, the digital experience is key. If they are like most consumer brands, somewhere between 25 and 60 percent of their customers interact with them online during the purchase process. I’m guessing that a very large number of customers transact directly with Hertz using their web platform, and that those interactions are more profitable for Hertz than transactions via other channels – low cost to serve, no commissions, lots of cross-sell/up-sell. It’s also very likely that the role of the digital channels is growing in relative and absolute terms, and that key differentiation opportunities will come from digital. Oh, and their major competitors (on-demand services like Uber and Lyft) use digital as their main interaction channel.

So treating their digital platforms as a “project”, or even a series of projects, strikes me as wrong. The digital platform is a core aspect of the way they engage with customers, with no obvious end date, and a roadmap that evolves in the light of market conditions. It’s not a marketing campaign, with a big-bang go-live, or an SAP implementation, with an upfront project and ongoing maintenance – it’s a sequence of releases, each solving one or more problems for customers, the business, regulators, suppliers. It should be treated like a product or service in its own right, not just a customer acquisition channel.

This matters, because in most large companies, a customer acquisition channel is treated differently to a core service. Customer acquisition is a tactical process – invest in what works, reduce spend in what doesn’t, and most of the spend tends to be external – Facebook ads, marketing campaigns, media budgets. If you’re in charge of a customer acquisition channel, your key skill is to extract as much value as possible from your suppliers.

If you’re building a core service for the business, however, your outlook is very different – your time horizon tends to be years rather than months, and your core skill is defining and delivering the product roadmap. You typically assemble a range of skills, from a range of vendors, to do that, but the overall vision belongs to your team. Building that team’s capability is a huge part of your ongoing success.

The second aspect, bringing in an outside firm to deliver the new platform, is another worry. It’s never black and white – even if you have a big product team, you’re likely to face skill and capacity gaps. But outsourcing the whole thing – including product ownership – and relying on a contractual specification to get what you want, means success is determined by your procurement department’s ability to write a contract, and your upfront ability to specify requirements in a way that your vendor can’t dodge. And once the project is delivered, you remain dependent on your supplier – because you haven’t got the internal skills to evolve your platform.

The article suggests Hertz wrote a pretty comprehensive set of requirements, with forward-thinking deliverables like a design system, re-use across brands, and a “core component” library. But – in my experience – those deliverables don’t really add much value when the future arrives. Test-driven development, behaviour-driven development, continuous delivery – those really help. Those are things a vendor would often skip – they just want to deliver the project as cheaply as possible, get paid, and move on to maintenance and support (and also get paid).

Compare and contrast – a previous client (an airline) noticed that more and more customers were using mobile to buy tickets and check-in. They had an app, but it had not received a lot of investment, and the focus was primarily on cross-sell/up-sell. Customer satisfaction was terrible (the app didn’t work particularly well), and the team was heavily dependent on a supplier. Once they recognized that mobile check-in was the way their premium customers tended to interact, they set up an internal team to create a mobile product vision. They aligned on key value propositions, and assembled a team; my company provided a range of specialists in product management, design and development. We worked on a product roadmap, with lots of small-ish releases, a beta programme, and the gradual transition to an internal team.

And how about the money? The Accenture budget ran to $32 million. I know that sounds like a lot of money – but it’s not unusual for large-scale digital platforms to cost in that region. It’s (presumably) multi-market, multi-lingual, with transactions, payment management etc. But, let’s look at how you might spend that money in a universe where you care about developing internal skills. I’m going to take a 5 year period to spend that $32 million.

The following numbers are full of assumptions – but they give an order-of-magnitude indication. If you have an annual budget of around $6 million to spend on a team, and the fully-loaded cost of a team member (designer, developer, product owner, QA, DevOps person) is around $200K/year, you can hire a permanent team of around 15 people. In year one, you probably want to bring in external folk, with skills you don’t have, and let’s assume their fully loaded cost is around $300K/year. So in year one, you probably have a team of 5 – 8 consultants, and 2 or 3 internal people; you change that balance over time.
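As a sanity check, here’s that arithmetic as a tiny sketch – all the figures are the assumptions above ($6M/year budget, $200K internal, $300K consultant), not real rates:

```python
# Back-of-the-envelope budget maths, using the assumed figures above.
ANNUAL_BUDGET = 6_000_000
INTERNAL_COST = 200_000     # assumed fully-loaded cost per internal hire
CONSULTANT_COST = 300_000   # assumed fully-loaded cost per consultant

def team_cost(internal, consultants):
    """Annual fully-loaded cost of a mixed internal/consultant team."""
    return internal * INTERNAL_COST + consultants * CONSULTANT_COST

# Year one is consultant-heavy; by year five the team is fully internal.
year_one = team_cost(internal=3, consultants=7)    # $2.7M
year_five = team_cost(internal=15, consultants=0)  # $3.0M

assert year_one <= ANNUAL_BUDGET and year_five <= ANNUAL_BUDGET
```

Both mixes fit comfortably inside the annual budget, leaving headroom for tooling, hosting and the inevitable surprises.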

So, can a team of 10 – 20 people deliver a whole new platform in 5 years? Yes, they can. Of course, their first release will be nothing like “a whole new platform”. It might feel disappointing – “we wanted a whole new platform, and all we got was a better sign-up form!”. But getting that feature out into the world, figuring out whether it delivers what you expected, and getting the next feature out shortly after – it’s a sustainable way of building software products. It gives you actual control (as opposed to the illusion of control which you get from project plans stretching over years).

Whilst the articles I’ve read suggest many things went wrong on the project, I think the biggest decisions organisations need to make are “is this core to our interaction with customers”, and “should we outsource this or build internal capability?”.

Serverless is (mostly) about money.

I’ve been working on software projects for a living for 30 years. In 1989, I worked on a COBOL application which managed orders, production schedules, billing and payroll for a manufacturing company. Then I worked on a client-server project for professional service automation; next came a range of web projects, along with mobile, video, and VR/AR, and data applications.

The overwhelming trend over that time is that the proportion of effort expended on domain-specific code has increased dramatically, and the cost of infrastructure capital investment has mostly been replaced by pay-as-you-go cost.

The latest phenomenon that’s about to break through into the mainstream is serverless. The definition is a little bit open to interpretation, but this is probably the best I’ve seen.
The concept is broken down into two parts: “Backend as a Service” (BaaS) provides functionality to applications on a per-use basis – it’s a little hard to say exactly where the boundaries lie, but in principle the commercial trade off is “do I invest effort building feature x, or do I pay on a per-use basis?”.
The second type is “Function as a Service” (FaaS) – developers write code, and the service provider runs it in the location determined by the commercial arrangement. Here the commercial trade-off is “do I invest in a hosting platform (which could be Cloud), which I manage myself, or do I pay a tiny amount every time the code runs, and let someone else sort out the hosting platform?”.

So why is that mostly about money?

Firstly, it (theoretically) reduces the cost of building solutions – you don’t have to worry about scaling, or availability, or anti-virus, or backups, or configuring the run-time for your application just so. More of the effort goes into the unique, special thing your solution does, rather than the plumbing that keeps it up and running.
This is an ongoing trend, and – arguably – the biggest savings came much earlier, with Cloud, containerization and Platform as a Service.

Secondly, you pay for what you use at a feature level, not infrastructure level. In most Cloud/PaaS models, you pay for running a process, whether it’s busy or not, and it is your job to figure out how many processes you need. In serverless designs, you only pay for running the feature you care about.
This is a big deal – it reduces the fixed costs for building solutions even further. This changes the commercial model – the more cost you can move from “fixed and upfront” to “scales with use”, the easier it is to create a business case that works.
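To make that trade-off concrete, here’s a toy comparison – the prices are invented for illustration, not any provider’s actual rates:

```python
# Hypothetical monthly costs: an always-on process versus pay-per-use.
FIXED_MONTHLY = 200.0      # flat cost of a provisioned process, busy or idle
PER_MILLION_CALLS = 17.0   # illustrative serverless price per million calls

def monthly_cost(calls, serverless=True):
    """Monthly cost for a given call volume under each model."""
    if serverless:
        return calls / 1_000_000 * PER_MILLION_CALLS
    return FIXED_MONTHLY   # fixed, regardless of traffic

# At low traffic, pay-per-use is dramatically cheaper:
print(round(monthly_cost(50_000), 2))          # 0.85
print(monthly_cost(50_000, serverless=False))  # 200.0
```

At low or uncertain volumes the pay-per-use model costs next to nothing, which is exactly the shift from “fixed and upfront” to “scales with use” that makes a business case easier to write.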

That’s because most organisations look at “return on investment” as a key metric for deciding whether a project will go ahead. They typically look for an RoI as a multiple – “every pound we invest will pay back x pounds over y years”. So, every pound you can shave off the upfront cost makes that return on investment calculation easier. This, in turn, makes more projects commercially viable.
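The arithmetic behind that is simple – a sketch, with invented figures:

```python
def roi_multiple(upfront, annual_return, years):
    """'Every pound we invest pays back x pounds over y years.'"""
    return annual_return * years / upfront

# Same returns, but halving the upfront cost doubles the multiple:
print(roi_multiple(upfront=1_000_000, annual_return=400_000, years=5))  # 2.0
print(roi_multiple(upfront=500_000, annual_return=400_000, years=5))    # 4.0
```

A project that only paid back 2x might never get approved; the identical project with half the upfront cost, at 4x, sails through.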

Artificial Intelligence, Big Data, and story-telling apes

I’ve been reading a lot lately, with a focus on AI. In the fiction area, Gnomon is a complete mind-melt. One of the many premises of the book is that a “system” will run society. If Then takes it a step further, positing a system that runs multi-variate testing on communities to optimize itself.

In the popular science range, the best description of AI I’ve found is The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World – an intelligible history of machine learning and AI.

I listened to this You Are Not So Smart podcast on how biases are baked into AI, and what we might do about that.

And most recently, I’ve read this description of an online retailer’s efforts to predict whether a user will be profitable, based on all the attributes that retailer knows. It’s probably the state of the art in this sort of thing – it’s a fairly general problem, where online retailers have lots of data, and getting it right has big financial pay-offs. Instinctively, it feels right – a shop keeper recognizes signs that a customer might spend a lot of money, and knows that treating high value customers better will make them even more valuable. Though shop keepers get it wrong too – remember the scene in Pretty Woman? A sex worker, played by Julia Roberts, goes into an expensive boutique, and the shop assistants size her up and discourage her from shopping there; it’s only when the sex worker’s client (Richard Gere) shows up and they recognize his spending power that they treat her like a valued customer.

Pretty woman shop scene


The fun bit is to see how hard this problem is, and how the economics work.

The paper describes a few ways the retailer has approached the “how can we predict the value of a customer” question. They’ve created over 130 different attributes, and looked for which ones were most predictive. Those 130 attributes sound like a lot – but many of them are really simple, like “how many orders have you placed”, “how often do you place orders” etc. And with those 130 attributes, they get fairly good results – around 75% accurate.
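To illustrate the shape of the problem (with invented data, and two attributes standing in for the real model’s 130):

```python
# Toy version of attribute-based prediction: a crude rule over two
# simple attributes, scored against known outcomes. All data is invented.
customers = [
    # (orders_placed, orders_per_month, actually_high_value)
    (25, 2.0, True),
    (1,  0.1, False),
    (12, 1.5, True),
    (3,  0.2, False),
    (9,  2.0, True),    # borderline customer the crude rule misses
    (15, 0.5, False),
    (11, 1.1, False),   # frequent small-basket shopper: a false positive
    (30, 3.0, True),
]

def predict_high_value(orders, frequency):
    """Stand-in for a model trained over many simple attributes."""
    return orders >= 10 and frequency >= 1.0

hits = sum(predict_high_value(o, f) == actual for o, f, actual in customers)
accuracy = hits / len(customers)
print(f"{accuracy:.0%}")  # 75% – the same ballpark the paper reports
```

The misses are the interesting part: customers whose behaviour doesn’t show up in the simple attributes are exactly the ones the next section is about.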

The second part of the paper describes how they’re trying to include “self-learning” algorithms, that look at all of the data they have, and use it to create a better prediction. The website tracks everything you do – surely, there must be a correlation between looking at certain products and spending more money (there is) – so how do you let the algorithm work that out?

It turns out to be possible, but the cost of training the system to find these behavioural correlations on fast-moving product catalogues is high – and with AI, that cost is often measured in money, not just time.

What does that mean?

It means that the AI would apply a statistical model to Julia Roberts’ web traffic, and unless she matches one of those 130 attributes they use to predict her lifetime value, the AI would assume she’s “just another customer”. However, if she then browses the expensive, high-fashion dresses category and spends a lot of money, the AI would not learn that customers who look at expensive, high-fashion dresses are likely to spend a lot of money.

Why is this interesting? Because a state-of-the-art machine learning system, with significant commercial incentives, cannot economically learn to identify behaviour triggers without human intervention.

It boils down to “using traditional methods, we can consider perhaps a handful of attributes to predict your lifetime value. Current machine learning allows us to expand that to hundreds of attributes. But to learn from the thousands of subtle clues each of us leaves is currently orders of magnitude too hard”. It’s the difference between categorizing you by crude methods, categorizing you by less crude methods, or treating you as an individual.

And this brings me to the second topic here – human intelligence is very much tied up in story-telling. Sure, humans can manipulate symbols and abstractions, and use formal logic to prove or disprove things. But explaining phenomena in the real world is largely story telling. There’s an example I read somewhere about a finance editor on the TV news explaining the stock market behaviour, and using exactly the same underlying fact to explain opposite trends – something like “positive figures on the labour market lead to higher stock prices as the market anticipated stronger consumer spending”, and “positive figures on the labour market lead to lower stock prices as the market anticipated a rise in interest rates to curb inflation”.

If you read the academic paper on customer lifetime value prediction, the authors do some story telling – “it makes sense that high value customers look at the more recent products, because they are more fashion conscious”. Story telling apes observe things in the world – the rising of the moon, an erupting volcano, a pattern in the data – and we tell stories to explain those things. As we’ve built better models of the world, the stories have become more accurate (and therefore more useful); many stories have become quantifiable and predictable – we no longer believe the sun rises because a deity drags it across the sky on a chariot; instead we can calculate to the second what time the sun will rise thanks to Newton’s formulae.

So what is the point of this long-winded musing?

Whilst ecommerce sites are non-trivial, they are certainly not the most complex system you might imagine when considering the uses of artificial intelligence. And they have relatively clear outcomes, within a meaningful timeframe – you buy something or you don’t, and you usually do it within a fairly short time of visiting the site. And even at this scale, we struggle to identify actions that correlate cleanly with the outcomes using current machine learning techniques. We need story-telling apes to at least identify hypotheses for the A.I. to test.

If you try to expand the scope of AI to “look at human behaviours, and anticipate health outcomes”, or “anticipate criminal behaviour”, or “anticipate political choices” – we’re still a long way off.


Distributed agile development teams

The key to distributed, Agile software development is to get good velocity by making sure the work the developers pick up is “workable”. This means validating requirements before adding them to the backlog.

The last few projects I’ve managed have been larger than most we do in my company. We needed very specific technical skills, and simply couldn’t find them all in London or Amsterdam. So, we bit the bullet, and brought in developers from suppliers, partners, other offices in the company, and freelancers working remotely. At one stage, the only people “in the office” were the QA lead, the delivery manager, and me, running the development team – all the developers were in different locations. Four worked for a Polish specialist software shop; four were freelancers working from home; two worked in a different office; and two worked in our Ukrainian tech hub. Oh, and our client and product owners were in a different country, and the user interface design team was in a different office.

I was discussing this experience with a friend yesterday – he has a co-located team, with the product owner 2 doors away, but was complaining about lots of suboptimal process issues.

I realized then that running a distributed team forces you to answer process questions early on, because problems that can be dealt with informally if you sit next to each other quickly become intractable if you’re remote. One example was that we agreed between everyone – client, delivery people, developers, QA – that the only source of truth was our task database (Jira, if you must know). If it isn’t in Jira, it doesn’t exist. If you want something, write it in a Jira ticket, and follow the Jira process to get it on the backlog. Developers had to deliver what the Jira ticket specified; QA had to verify that this is what the software did. This way, you have a crude but accurate view of what you have to do – x Jira tickets – and what you have done so far – y Jira tickets.

My friend’s project, on the other hand, collected work items in user stories, screen designs, casual clarifications from the product owner, bug reports etc. His developers were complaining they were spending a lot of time picking through all those requests to work out what they actually had to deliver. My distributed team, by contrast, could focus on ticking off their Jira backlog.
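The “crude but accurate” progress view falls out of that discipline almost for free – something like this (the ticket structure here is hypothetical, not Jira’s actual data model):

```python
# If the task database is the only source of truth, progress is a count.
tickets = [
    {"key": "PROJ-1", "status": "Done"},
    {"key": "PROJ-2", "status": "Done"},
    {"key": "PROJ-3", "status": "In Progress"},
    {"key": "PROJ-4", "status": "To Do"},
]

done = sum(t["status"] == "Done" for t in tickets)
print(f"{done}/{len(tickets)} tickets done")  # 2/4 tickets done
```

No picking through user stories, screen designs and casual clarifications – one query answers “what’s left?”.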

I know, this sounds the antithesis of Agile – I’ll get to that.

The second benefit of this approach was that I could ask the developers to reject any Jira ticket they didn’t believe was “workable”. Requirements could be expressed in whatever way the originator was happiest with – but unless a developer could pick it up and start coding in 10 minutes, they could reject it. Lots of bugs got rejected – if you don’t include steps to reproduce the bug, or actual versus expected behaviour, the developers can’t work on it. Lots of features got rejected for being unclear, incomplete, or contradictory.
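That rejection rule can be stated almost mechanically – a sketch, with the field names invented for illustration:

```python
# A hypothetical "workable" gate: a ticket bounces unless it carries
# everything a developer needs to start coding within ten minutes.
def is_workable(ticket):
    if ticket["type"] == "bug":
        required = ("steps_to_reproduce", "expected", "actual")
    else:  # feature
        required = ("description", "acceptance_criteria", "design_ref")
    return all(ticket.get(field) for field in required)

bug = {"type": "bug", "steps_to_reproduce": None,
       "expected": "login succeeds", "actual": "500 error"}
print(is_workable(bug))  # False – no reproduction steps, so it goes back
```

The point isn’t the code – it’s that the criteria are explicit enough that a developer in another country can apply them without a conversation.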

Again – doesn’t sound very “Agile”, does it?

Velocity is the key to Agile.

In my experience, the key to implementing Agile is to build trust with the business that the team will deliver business value, at the expected quality level, for a reasonable cost. And that means establishing a decent level of velocity.

When I work with clients and we agree on “ways of working”, the deal I make is this.

You make sure I’ve got a prioritized list of workable items, and I make sure the team delivers the best possible level of velocity and quality.

Business people like this – they control the priority, and get the best possible value for their development buck. When they see that top priority requests do indeed get turned around in hours or days, and that the team is consistently delivering the priority items in the backlog, they become a lot more confident in Agile as a process.

Chunking is the key to velocity.

We’ve seen numerous studies on developer productivity – but in my experience, the key is to have as much time as possible with clear, complete problems to solve, of a size that allows progress every day. By clear and complete, I mean that a developer should have all the context available when they start work so they have to focus only on the technical design and implementation, not the requirements or UI design.

Most Agile methodologies recognize this – the “unblocking” part of the Scrum Master’s role, and the “I am blocked by” element of the Scrum stand-up, for instance, are designed to allow a developer to hand off tasks that aren’t executable.

But with a distributed team, this is much harder – the communication channels are much narrower, and much slower.

Avoiding blocks is the key to chunking

So, if we want developers to be able to work on executable chunks of work, we need to minimize the number of blocks they encounter. And in distributed teams, that means we give them complete, executable pieces of work, defined in one or two artefacts – a task and a visual design, for instance, or a bug report, or a non-functional specification and test report.

However, often “requirements” (in whatever form) are not complete or executable.

Verifying requirements is the key to avoiding blocks

In “traditional” Agile, this is often done mid-sprint. It’s a bit of a pain, but the product owner sits with the team, and it’s their job to clarify. In distributed teams, this is much better done before the sprint starts – get everyone to verify each story before it goes on the sprint backlog, and reject anything that isn’t ready.

Business stakeholders appreciate this process. The clarification process often brings up problems in the product vision, or highlights prioritization mistakes, or shows up organisational issues that shouldn’t be solved by software.

Microservices in the enterprise – breaking out of the IT silo.

Microservices are entering the “early adopter” phase in large, established corporates.  I was talking to a friend who works for a large systems integrator, and he told me that this is now a common topic of discussion with CIOs. His client is a large insurer, and they are migrating the back-office IT systems to a microservices architecture. Interestingly, they are also using this as a lever for organisational change, creating “pillars” around core business areas, rather than departmental “layers”. The entire IT infrastructure is being transformed, with containerization and API gateways as key enablers, and the solution architecture team is now focusing on time-to-market as a key driver – their previous focus was on standards compliance and long-term strategic fit.

It sounded like a very cool project – not just turning around a super tanker, but reconfiguring it to something fundamentally different. So, when I asked which new customer propositions the insurer would deliver, I was confused when my friend said “oh, that’s not really a priority”. The project is driven by IT, and their goal is to be more responsive to the needs of the business, but the microservices solution will not extend to the customer touchpoints – the website, the call centre applications, etc. When I asked why not, the answer was a little complicated, but I think it boiled down to “the teams responsible for those touchpoints had large, complex software packages which they were afraid to change”.

Microservices can have a dramatic effect on back-office solutions – but the true value comes from transforming the way businesses interact with their customers, by opening up new business opportunities or channels. Seeing it purely as a “back-office” solution just means you get better at delivering yesterday’s service, rather than opening up tomorrow’s opportunities.