I’m fortunate – I get to work with some of the best-known companies in the world, and observe the way they approach software and marketing projects. These companies are much bigger than the one I work for – I’m in India this week, meeting with a systems integrator with more than a quarter of a million employees. And at that kind of scale, you simply must have “process” – you can’t have projects figure it all out for themselves every time. When you’re running projects with hundreds of team members, you can’t rely on people who understand the entire project – you have to partition.
However, this leads to something I find worthy of comment: there comes a point where the organisation becomes more interested in the process than the product. The team starts to see itself as a machine whose size and complexity are a victory in themselves; often the actual output of the machine becomes a secondary concern. It can become hard to identify the people who understand how the machine parts relate to the product it’s building – it’s not uncommon to hear “my team does x, y and z so that that other team over there can do a, b and c”, instead of “my team produces feature x, which contributes to the product by doing y and z”.
You hear project managers discuss the team organisation chart, the governance structure, the progress reports, the project metrics – but it’s not uncommon for those project managers to be unable to explain when the product will be finished – “my team’s work is scheduled to complete on date x, subject to dependencies from that other team, and then we start phase 2.b.2, which runs for another 3 months etc.”, instead of “the project goes live on date x, and my team’s on the critical path, so we have to deliver on date y”.
The “quality” process often involves significant focus on documentation and standards – but the actual key project characteristics may not always make it into the process, so websites launch that are unusably slow, or mobile apps go into the app store which break because services from “other” teams change – and that’s not “in the scope” of the app project team.
In such organisations, key software processes can be slow and ponderous – I’ve seen many release processes which take 4 – 6 weeks to complete, because the process includes many checkpoints, mandatory evaluations and test scenarios, and there’s often a “one size fits all” process, so small tweaks have to go through the same process as major changes. The processes often seem to arise by accretion – over time, every mistake that’s happened is converted into a checklist item. Someone released a piece of software with poor performance? Add a “performance test – mandatory” step to the release process.
I get it – I really do: once you reach a certain scale, you can’t rely on smart people doing the right thing. Once your project is too large for a team of 10 – 12 people, you have to find a way of controlling it without understanding every detail. This is not a rant against “process”.
However, losing sight of the product, the end user, the business drivers – applying “one size fits all” processes and judging the project by its adherence to standards rather than the outcome – all these things make it very hard to do innovative, “web-native” work. So, this is a rant for a focus on the product.
How do you know you’ve gone too far down the “process” route? If you can’t answer the following questions off the top of your head, you may be at risk…
- Who is the end customer of the project? Why should they care about what you’re doing? How will your project impact their lives?
If you answer this question with “the VP of organizational excellence or whatever, because he’s the budget holder and needs to spend his budget by the end of this fiscal year” – well, you might be in trouble.
- When does the end product go live? How important is that date to the end user?
If you answer this question with “not sure – my part will be finished on x”, or “we communicated that date to the steering committee” – you might be in trouble.
- What are the major subsystems, and what technologies do they use? How can you be sure they’re going to do what the end user needs?
If you answer this with “well, the hoogenfloomer department is working on project BadgerFoot, and the manager is xxxx; they report to the Vice President of Arm Waving, so he’s making sure it’s standards compliant” – well, you get the idea.
- Why is the end product important? What are the key business drivers?
Okay, you should be getting the hang of this now – if your answer is “because it fits into the departmental strategy for x, y and z”, you’re probably not seeing the big picture. Consider using the “5 whys”. I’m also fond of pirate metrics for web projects…
I’ve seen some product-focused projects fail too – there are no silver bullets. However, the failure rate of process-oriented projects in innovative and/or exploratory work is – in my experience – much higher. Our CEO talks about process being absolutely right for burger flipping – you don’t want the kid in McDonald’s to reinvent the burger for every Happy Meal – but not for innovation.
Failure modes I’ve seen are often not the classic “iron triangle” issues of time, money or scope – at least not in the sense that they are usually measured. They’re also not really correlated to “waterfall versus agile” distinctions.
For example, a long while ago, a partner company was working on an “e-CRM” project. The purpose for the business was to collect good customer data, and to combine the “web” data with their other CRM data. For the end user, the intention was to make the registration and sign-in process easier.
The IT solution provider applied their standard software development process – honed over many years of enterprise solution development – and interviewed all the business stakeholders, gathered functional and non-functional requirements, created a technical architecture based on best-practice service-oriented architecture principles, developed a comprehensive data model, and converted all this into a detailed task list, dependency matrix and project plan.
They executed according to the plan; the QA team evaluated the software against the documented requirements, and reported a decent level of quality.
The project delivered on time, and on budget. The development team ticked off each requirement against the traceability matrix, and showed they had met every single requirement. The bug list from the user acceptance testing phase showed no P1 or P2 bugs (though quite a few lower priority issues). From a very narrow project management point of view, the project was a success.
However, from a business point of view, it was a failure.
The sign-up forms were very long (because the team had accommodated all the data requests from the business stakeholders), and many of the data field descriptions on the web forms were meaningless to consumers, because the team had applied the internal business jargon, e.g. using “point of sale” instead of “country”.
The service-oriented architecture was based on SOAP messages, and provided admirable decoupling of the systems – but it was also rather slow; the front-end was not designed to work asynchronously, so to the consumer, the site often felt unbearably slow.
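To make the synchronous-versus-asynchronous point concrete, here is a minimal sketch – entirely hypothetical, with invented names like `slowCrmLookup` and `responsiveSignIn`, not code from the actual project – of the difference between a front-end that blocks on a slow back-end call and one that renders immediately and fills in the data when it arrives:

```javascript
// Hypothetical sketch only: simulates a slow SOAP-backed CRM call
// (stands in for a multi-second round trip through the service layer).
function slowCrmLookup(userId) {
  return new Promise((resolve) =>
    setTimeout(() => resolve({ userId, status: "registered" }), 50)
  );
}

// Blocking-style flow: nothing is shown until the call returns,
// so the page appears frozen for the whole round trip.
async function blockingSignIn(userId) {
  const result = await slowCrmLookup(userId);
  return `Welcome back, ${result.userId}`;
}

// Asynchronous flow: render immediate feedback, then update the page
// when the CRM data arrives – the back-end is no faster, but the
// perceived latency is far lower.
function responsiveSignIn(userId, render) {
  render("Signing you in…"); // shown immediately
  slowCrmLookup(userId).then((result) =>
    render(`Welcome back, ${result.userId}`)
  );
}
```

Neither approach changes how long the SOAP call takes – the asynchronous version simply decouples rendering from the service round trip, which is exactly what the project’s front-end didn’t do.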
Finally, there were many small user interface problems – none of them significant in their own right, but collectively, they made the sign up process feel clunky.
As a result, the sign-up rate declined after the site went live.
Whilst the narrow business goal was indeed met – what data was collected was of a higher quality than before – the wider business goals suffered.
I watched this from the sidelines – my project was going to use the new eCRM solution to manage authentication and authorisation – but I couldn’t help thinking that someone in that project should have taken a moment to think about the product, rather than trusting blindly in the process.