It’s well known that we in the tech sector love our acronyms. We also like re-badging stuff. What was once ASP (application service provision) morphed into SaaS (software as a service) and then became part of the cloud computing ecosystem.
And so it is at the moment with AI (artificial intelligence). There have been a number of pronouncements recently, particularly from private practice firms, about their adoption of AI. But is the truth more weighted towards the “artificial” than the “intelligence”?
That well-known convenient one-stop-shop of truth, Wikipedia, defines AI as “a machine acting as a rational agent that perceives its environment and takes actions which maximise its chance for success at an arbitrary goal”. Even colloquially, it says, AI tends to be applied when “a machine uses cutting-edge techniques to competently perform or mimic cognitive functions that we intuitively associate with human minds, such as learning and problem solving”.
A range of recent claims around the adoption of legal technologies appear to me to be taking advantage of the re-labelling opportunities presented by the current hype surrounding AI. Whether it is Kira for due diligence, Cael for legal project management, HotDocs/Exari/ContractExpress for document automation, or a whole gamut of internally developed proprietary systems, they have all been badged as AI.
These systems are not AI. Yes, they are often clever technologies which can greatly enhance efficiencies and deliver benefit, and are therefore to be commended and encouraged for the advancement of the profession. But like OCR (optical character recognition), they will soon be considered essential, and relatively mundane, tools of the trade (if they are not already).
True AI in law presents tricky challenges. Unlike medicine (or quiz shows or board games for that matter), it does not lend itself well to a fixed incontrovertible set of rules – much as many of us in the profession would like to think so. There are multi-layered jurisdictional variations, a vast range of primary and secondary sources, many different disciplines and specialisms, and a level of necessary subjectivity which sometimes goes to the heart of what being human means. Justice and fairness are not easy to teach to a machine (indeed many human operators still grapple with those concepts, particularly in “hard cases”).
Current technology is also open to abuse and corruptibility, most entertainingly demonstrated recently by the unfortunate Tay – the racist, genocidal Microsoft chatbot led astray by online trolls earlier this year. The consequences of a similar episode within legal services organisations or in-house legal departments could set the reputation of the profession back decades rather than advance it, and cause some serious headaches for regulators and PI insurers to boot.
I’m not saying that true AI in law isn’t coming. I firmly believe it will, and great strides are already being made by organisations like ROSS (based on IBM Watson), Judicata and others, predominantly based in the US at the moment. I’m a huge advocate, and we’re looking at a number of options for my own firm. But there is a lot of work to do before we get there. It will be a heavily resource-intensive exercise not only to “teach” expert systems our bodies of law, but also to teach them how to apply that law “intelligently” to best effect in a given legal problem or transaction.
Increasingly, lawyers will work alongside expert systems given specific roles in transactions or court work. As the technology develops and improves, and machines become more capable, we will deconstruct the matters we work on to give more responsibility to systems in areas where they will complete tasks faster and better than humans. This in turn will make human legal jobs more fulfilling (though there may be fewer of them, and those that remain will change in nature), improve access to justice, and help practitioners and the businesses they represent achieve more for less.
But it will be a while yet before an intelligent analysis of AI within legal services organisations has the “intelligence” element taking pride of place over the “artificial”.
Callum Sinclair, partner and head of technology sector, Burness Paull