A fairly widely reported story last week explains how Microsoft Research has created an AI that can write software. Hacker News went crazy – as you might expect.
Can an AI write software? Yes – it clearly can. Writing software means converting sentences a human understands into instructions for a computer; if Google Translate can convert “where’s the post office please?” into 8 other languages, there’s no obvious reason it couldn’t convert “add 12 to 88” into computer-executable form. In fact, this concept is older than I am – the venerable Cobol language was created in 1959 with the goal of converting “business language” into computer programs – Cobol stands for “common business-oriented language”. And compared to C, or Fortran, it kinda succeeds – Cobol source code at first glance is less dense, and most of the keywords look like “English”. But programming in Cobol is still programming – it’s unlikely that a sales director would be able to use Cobol to work out the monthly commission report.
In the 1990s, we got a lot of hype around 4GLs; in the early 2000s, model-driven development and round-trip engineering promised to make software development much more business-friendly. Business people could express their requirements, and those requirements would be converted to executable code automagically.
None of these things really worked. Cobol was a hugely successful language – but not because it did away with programmers; rather, it was a widely available language that matched the needs of enterprises which were automating for the first time. I don’t think I’ve heard anyone say “4GL” for about a decade; round-trip engineering foundered on the horrible tools that supported it, which hardly simplified life for either developers or business people.
The defining skill of a software developer isn’t the language they code in – it’s the ability to convert requirements into working software. Computers already help with this by compiling or interpreting “human-readable” code into machine-executable code. It’s not ridiculous to believe an AI could use a unit test to write code which passes that test, and it’s not ridiculous to assume an AI could convert a BDD-style requirement into working software. The Microsoft research paper says they have taken the first step – their AI solves coding test problems, which are typically specified as “write a program which will take a sequence of numbers ‘8, 3, 1, 21’ and sort them in ascending order, returning ‘1, 3, 8, 21’”. Extending that to a unit test is a logical and manageable step; I could see an environment where a programmer defines the basic structure of the application – classes with public methods and properties, for instance – along with the unit tests that specify the behaviour, and an AI fills in the details.

The next jump – from “programmer designs structure, AI fills in behaviour” to “AI designs structure” – would be a huge one. It would likely run into the same problems you get with model-based development, or with many object-relational mapping tools: the level of detail required to allow the AI to make the choices you want it to make would be high, and a specification at that level of detail might be indistinguishable from writing software.

The jump after that – “business person defines requirement, AI interprets and builds solution” – well, I’ve been wrong before, but I don’t think that’s credible in the next decade, and possibly longer. It would require natural language processing to reach full maturity, and the AI would need a deep understanding of business domains, of the way humans view and interact with business processes, and of user interface design.

So, I think my job is safe for now.
Not sure about any computer science graduates leaving university right now, though…
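As an aside, the “programmer defines structure, AI fills in behaviour” division of labour is easy to sketch. Here’s a minimal, hypothetical example in Python (the `Sorter` class and test names are invented for illustration): a human writes the public interface and the unit test, and the method body is the part an AI would be asked to generate – shown here with a plain hand-written implementation standing in for the generated code.

```python
import unittest


class Sorter:
    """Skeleton a programmer might write: public interface only."""

    def sort_ascending(self, numbers):
        # This body is the part an AI would fill in from the test below;
        # a straightforward implementation stands in for generated code.
        return sorted(numbers)


class SorterTest(unittest.TestCase):
    """The behavioural specification the AI would work from."""

    def test_sorts_ascending(self):
        self.assertEqual(Sorter().sort_ascending([8, 3, 1, 21]),
                         [1, 3, 8, 21])
```

Run with `python -m unittest` in the usual way. The interesting point is that the test doubles as the specification: if the generated body makes the test pass, the behaviour is, by definition, what was asked for.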