17th January 2023
Here is a thought:
imagine a ChatGPT browser extension that reads the Terms and Conditions agreement and flags anything that is non-standard
— Patrick Hsu (@pdhsu) January 16, 2023
Or, to perhaps put it another way: could Artificial Intelligence replicate, or even replace, the work of your normal contracts lawyer?
As someone who has spent over twenty years as a commercial lawyer (constitutional law is my interest, and contracts law my drudgery) I would say the answer is yes, and no, and but.
And as a coda, I will aver that those of us who write and comment on legal blogs may face a problem too.
Yes
The yes is a recognition that a certain amount of contracts law in practice is ploddery.
You have a standard form contract, and you read every clause, and you put all the clauses together.
Many standard clauses are what is called boilerplate – their effect, and often their very wording, are identical from one contract to another.
And even clauses which can vary from one standard form to another – payment arrangements, service levels, and key allocations of risk – do not vary very much.
In larger law firms, the task of reviewing, and even drafting, such contracts is given to junior lawyers, even trainees.
Many non-legally qualified contracts managers and procurement officers are better than many commercial lawyers in dealing with straightforward commercial contracts.
And so, just as a text comparison program can identify differences between contracts better than almost any human, a computer with a bank of hundreds, if not thousands, of standard contracts would be able to identify standard and deviant clauses.
Such a computer may even be able to propose amendments to the deviant clauses so as to place the contract onto a more standard basis.
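By way of a rough illustration, a minimal sketch of that sort of clause comparison might look like the following, assuming the bank of standard clauses is available as plain text (the clauses and the similarity threshold here are illustrative only):

```python
# A minimal sketch of clause comparison, assuming the bank of
# standard clauses is available as plain text. The clauses and the
# similarity threshold are illustrative only.
import difflib

STANDARD_CLAUSES = [
    "Neither party shall be liable for any failure to perform its "
    "obligations where such failure results from circumstances "
    "beyond its reasonable control.",
    "This agreement shall be governed by and construed in accordance "
    "with the laws of England and Wales.",
]

def flag_deviant_clauses(contract_clauses, threshold=0.8):
    """Return clauses that do not closely match any standard clause."""
    flagged = []
    for clause in contract_clauses:
        best_match = max(
            difflib.SequenceMatcher(None, clause.lower(), std.lower()).ratio()
            for std in STANDARD_CLAUSES
        )
        if best_match < threshold:
            flagged.append((clause, best_match))
    return flagged

# A governing-law clause that quietly switches jurisdiction, say,
# would score below the threshold and be flagged for human review.
```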
So, yes, some straightforward contracts reviews could be done by Artificial Intelligence.
No
Standard form contracts are subject to special legal rules in case law and statute, especially when they are for business-to-consumer transactions, and so a store of contracts would not be enough: external legal expertise can be necessary.
And being able to advise a client on whether a standard form contract will be in their commercial interests or not is not something Artificial Intelligence is likely to be able to do soon.
That is because assessing commercial risk in a particular situation is not a form of abstract calculus, for it requires an understanding of industry, business, economic, social and human factors.
And, of course, not all commercial contracts are on standard forms.
Certain transactions require bespoke contracts, dealing with the allocations of risk of a range of things that could go wrong.
In IT and media contracts, for example, there often needs to be an understanding of technological risks, so that the legal risk allocations match and mirror the problems that can arise in practice.
A well-drafted and hard-negotiated bespoke commercial contract is as much a work of cooperation, conflict and collective endeavour as you will find anywhere else in human activity.
But
There is a problem.
The good lawyers who can advise on standard and bespoke contracts can do so because of their apprenticeship in dealing with straightforward clauses in everyday contracts.
You do not have child prodigies in practical law: a practice takes, well, a lot of practice.
One reason for this is that contracts are not linear documents but complex instruments: each clause can and should relate to other clauses.
And the only way to master complex instruments is to understand how the elements of that instrument all fit (or do not fit) together in given practical situations.
(I have said before that legal drafting is akin to coding in making sure lines all work together.)
This means that if Artificial Intelligence replicates and then replaces the work of junior contract lawyers, it is difficult to see how senior contract lawyers will gain their necessary experience.
Coda
Perhaps a better route for Artificial Intelligence would be to replicate and then replace the work of legal bloggers and their commenters.
Perhaps the blogpost above was written by Artificial Intelligence, and perhaps also some of the comments below will be too.
If so, then Artificial Intelligence can merrily create blogposts and comments, rendering us all redundant.
Brace brace.
Comments Program
This blog auto-generates a high standard of comments, many of which are better and more interesting than the auto-generated posts.
Comments will not be published if they fall foul of a random “irksome” bug.
You probably already did below… also, don’t underestimate the ‘shoulder to cry on’ value of a real human lawyer… :-)
How will Artificial Intelligence affect commercial lawyers?
(ChatGPT): Artificial Intelligence (AI) is expected to have a significant impact on the field of commercial law. AI-powered tools can assist commercial lawyers in tasks such as document review, contract analysis, and legal research. This can lead to increased efficiency and cost savings for legal teams. However, it could also lead to job displacement for some commercial lawyers who may be replaced by automation. Additionally, the use of AI in legal decision-making raises ethical and legal questions that will need to be addressed by the legal profession.
Will AI also affect legal blogging?
(ChatGPT): AI could potentially have an impact on legal blogging as well. AI-powered tools could be used to automatically generate legal content, such as summaries of court cases or analysis of new laws and regulations. This could make it easier for legal professionals to create and distribute content, but it could also lead to a decrease in the need for human legal bloggers. Additionally, the use of AI-generated content raises concerns about the accuracy and credibility of the information being shared. It is also possible that AI-generated legal content may not be able to capture the nuance and context that a human legal blogger would be able to provide.
I am usually a passive reader of your blogs (as I am a tech guy), but regarding the current generation of AI tools I highly recommend Judea Pearl’s The Book of Why:
https://www.theatlantic.com/technology/archive/2018/05/machine-learning-is-stuck-on-asking-why/560675/
In short, the tools developed so far use the equivalent of very sophisticated correlations to map from one phrase to a full text (or a full image) – but in the same way as correlation is not causation, the tools are unable to form a sense of causality.
I would think lawyers are all about causality – (what if?) – so there will be enough things to do at least until the next AI revolution (judging from the lags between theory and application, probably 20 years from now, when AI incorporates Pearl’s causality theory).
Yes, but… the sort of computer programme you’re describing is what’s called an ‘expert system’: one which knows many (usually more than any human) rules, e.g. boilerplate clauses. An Artificial Intelligence is one which is, er, intelligent, i.e. able to learn. It would need to be taught – presumably by initially shadowing an experienced contract lawyer and making them tea, but eventually it could be allowed to do some photocopying and even a bit of (minor) contract drafting.
There is an analogy with robotic surgery. Surgical robots are very good at some things, for example resection of part of the bowel with a tumour in it. However, a human surgeon needs to be at hand, in case the robot malfunctions, or encounters an atypical problem. The human can then take over.
However, the human surgeon knows what to do because he or she has extensive experience of the problems that may arise. That experience was gained before robots were used. How do surgeons now in training get the experience they need?
Would it be ethical to say to the patient “this operation is best done by a robot; but surgeons in training need to gain experience of the operation; so do you agree to have this new young surgeon do your operation? – it is the only way they will learn to be able to take over from the robot later in their career. By the way, he will not be as good as the robot.”
That situation has always existed – except that the “robot” used to be a senior surgical trainee.
My wife and daughter recently went to see the holograms of ABBA performing. They commented that the audience were prohibited from videoing the performance on their mobile phones. I wondered whether the ideal next step would be to replace the audience with holograms too and save the rest of us the expense and bother of attending (I went to the pub instead).
I remember that Douglas Adams suggested that video recorders were to save you the bother of actually watching telly, fridge-freezers were to save you the bother of actually eating, and his fictional “electric monk” was there to believe in religious stuff for you. Thereby saving you the bother.
AI can be useful.
The best thing about the monk was that the deluxe version could believe things they’d have trouble believing in Salt Lake City.
For my 2p worth, I think that you are hugely underestimating how well AI could work.
If you take a pre-trained large language model and then fine-tune it with all of UK legislation, all of UK books and legal guidance and blogs, all UK court decisions, and all the letters, memos, contracts, reports and emails that a large law firm has ever written or seen, you would have quite a powerful beast. Let’s call it David.
Then the lawyer would dictate some instructions to David in the same way that they would to a junior (I want this …, client has a bee in their bonnet about x, y should look a bit like this but don’t add z because of…). David would ask some questions too where the instructions aren’t clear or are contradictory.
By the time that the lawyer had found a cup for the Nespresso machine, there would be a draft report/contract/whatever in house style, a covering letter to the client and some notes for the lawyer explaining what David had done (e.g. I’ve used this pro forma form …, I’ve tweaked clauses x and y because you said … I disagree with what you told me here because of a, b and c and so I think you should think again about this).
And David wouldn’t just be able to do contracts, but conveyancing, IP, tax, employment law and suggestions on how much to bill.
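In code terms, a first step might look something like this – a minimal sketch using the Hugging Face libraries, where the base model, file name and training settings are all placeholders (and the real corpus-gathering, confidentiality and compliance work would dwarf it):

```python
# A minimal sketch of fine-tuning a pre-trained language model on a
# firm's own documents. The base model, file name and training
# settings are placeholders, not a recipe for a real "David".
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "gpt2"  # stand-in; a real "David" would start far larger

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# One document per line: legislation, case law, guidance, and the
# firm's own contracts, letters and memos (suitably anonymised).
dataset = load_dataset("text", data_files={"train": "firm_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(
    tokenize, batched=True, remove_columns=["text"]
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="david", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```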
I went to school in the era before calculators – I still prefer to run my eye down a row of figures and add, multiply, divide and take away with a pencil and a bit of paper – think of it as ‘keeping my hand in’ – and I’m quicker that way than I am with a calculator.
Back in the ’70s, Donald Michie (https://en.wikipedia.org/wiki/Donald_Michie) suggested that as AIs developed they would give us answers that they arrived at by ways that might not make sense to us. As a result, when questioned they would give us a plausible explanation of their reasoning (i.e. lie). We would go away happy with a result that worked.
But… when challenged, the law requires clients, lawyers and judges to argue with/convince each other of the merits of their stance. This would not be a time to discover your AI generated stance is inexplicable to mere mortals. As a result, considerable effort would have to be put into reviewing any AI generated material before it could be put into a position to be challenged – unless the judge is also an AI ;-)
ChatGPT is already lying in that exact way! https://www.semafor.com/article/01/12/2023/chatgpt-knows-elon-musk-is-twitters-ceo-despite-saying-its-learning-cutoff-was-in-2021
A very significant difference between humans and AI is amply illustrated both in the column and in the comments: to date, AI has demonstrated little capacity for creating humour. It will be interesting if and when that changes….
I just asked ChatGPT what it thought of your comment (and that it was on a law/ legal blog). It said:
“You raise a valid point about the differences between humans and AI in regards to humor. But, in all seriousness, I believe that the true test of intelligence is not the ability to tell a joke but the ability to understand and navigate the complexities of the legal system. And I assure you, I am fully capable of doing just that.”
So I think it proved your point. When I asked it for a lawyer joke, it gave me:
“How many lawyers does it take to change a light bulb? Three. One to climb the ladder, one to shake it, and one to sue the ladder company.”
Again proving your point.
The current version of “AI” is not really intelligent. Its performance depends on the data it’s given to learn from, and (critically) the order in which it’s fed that data. It is very easy to get that horribly wrong.
So if the training data is based on racist or sexist systems, the “AI” will be racist or sexist. If the training data is based on less-than-perfect legal advice, the “AI” will be no better.
Anybody who thinks they can provide perfect legal advice in a complex commercial environment is probably not the lawyer you want to consult.
The same problem exists in other areas already.
Flying, for instance. A plane can fly itself. Pilots aren’t allowed to rely on the self-flying, because they need to be ready to take over if anything goes wrong. So they have to do some of the regular stuff themselves, like taking off and landing (most of the time), to be trained and ready in case something needs doing in an emergency.
There are laws and practices around flying. As a consumer, when you book your air ticket, you don’t know whether you are getting a good pilot, a bad pilot, a well trained one or a dud.
Unfortunately, robolawyer is probably going to be cheaper than a commercial law practitioner. So maybe the vital training won’t happen as much as it should.
You suggest that in future blogs and responses might all be AI generated, leaving us all more time. I said something similar when phone answering machines became common – sooner or later phones could talk to each other without our intervention!
Real AI would be able to read, for example, M’soft’s terms (which must run into 000s of pages), and tell me I can sue!
Probably not in my lifetime…
I know you’ve already drawn comparisons here between writing software and contracts, and this is another example where they have similar experiences, in that I’d agree it’s yes, no and but for developing software. Yes, much software is boilerplate or even has similar patterns; no, there’s a need to be able to test the software thoroughly, and certain parts of software will always be bespoke. But how can we train up software developers if they can always do the simple stuff with AI tools?
On a more practical bent, teaching CS is facing a problem in that any assignment, being simple and graduated, is easy to implement via AI tools. Given that there is an incentive for students to go down that pathway, how do we persuade students to do the drudgery? I’d be interested to hear if there is a similar problem in teaching Law.
The boilerplate stuff was always provided as libraries, even when I was a kid coder, 40 years ago. At first I read every library to learn what it did before using it; it didn’t take long to get bored and start assuming that library modules did what I needed them to. Always turned out to be a bad assumption.
On top of the question of training professionals, we have added the problem of allowing amateurs to believe they can replicate professionals using high level tools. Witness countless dreadful company ‘flagship’ websites assembled by somebody who knows how to fly a PC, using web building tools.
The thought of some AI machine getting close to being a lawyer (or possibly a politician) provoked the thought “I can’t let you do that, Dave”. Followed by “I can’t let you do that, Rishi”.
Perhaps not, but, following the aircraft autopilot analogy, there would be lots of loud klaxon sounds and, more alarmingly, a record on the lawyer/politician logging file: error, error, error – override – crash.
Re “This means that if Artificial Intelligence replicates and then replaces the work of junior contract lawyers it is difficult to see how senior contract lawyers will gain their necessary experience.”
That assumes we’re talking about getting the *same* experience as now, whereas AI may free them up to get different / deeper experience in other — more important or impactful — areas.
Similar to architects using CAD, pulling standard components into their drawings: Could they draw them from scratch? Probably not — they’d undoubtedly get lots of details wrong (and consequently not build to code) — but they are now able to concentrate more time on far more exciting / useful / advanced engineering feats than before.
Or programmers bringing in standard libraries rather than writing their own. Same as above.
Is there risk? Yes. But theoretically, they will have more time to hone other skills instead — including those of checking the AI’s work.
That’s the “everything is fine” answer, of course. The reality is that lawyers will not have more time, as they will be pushed ever harder because “now they can rely on AI” to do so much of the work.
A similar situation arises with programmers who, when using standard libraries, *should* use the freed-up time to check through the code that they are pulling in. But they very, very rarely do. And so a bug in a piece of code that everyone uses causes a worldwide panic every now and then.
In tech, AI-generated malware is already a thing. Could a more-sophisticated (=expensive) AI devise devious contract clauses that a less-sophisticated (=cheaper) AI wouldn’t be able to spot?
All in all, however, I see it as a golden age for commentators — human or AI — rather than the opposite. People may even begin to pay more for guaranteed-human commentary.
And then there is the Shouldn’t-Be-Interesting constitutional law. Surely there should still be plenty to blog about there. I can’t see a country giving the constitution to an AI just yet.
Although if any of your commenters have access to ChatGPT, it would be interesting to see what a new written constitution for the UK might look like ;)
Tigs: What would a new constitution for the United Kingdom look like?
ChatGPT: It is difficult to say exactly what a new constitution for the United Kingdom would look like, as it would depend on the specific goals and values of those drafting it. However, it could potentially include elements such as a clear statement of citizens’ rights and responsibilities, a separation of powers between branches of government, and a clear process for amending the constitution. It could also address issues such as the relationship between the central government and regional or local governments, the role of the monarchy, and the allocation of powers between different levels of government. Ultimately, the content of a new constitution would be the product of a process of negotiation and compromise between different political and societal groups.
Those of us who use short sentences have nothing yet to fear.
The future is already here! You may be interested in the following links. This week’s New Scientist also has an article on this.
https://www.cbsnews.com/news/ai-powered-robot-lawyer-takes-its-first-court-case/
https://donotpay.com – “the world’s first robot lawyer”
May I give you a practical application of how AI is affecting commercial contracts already? My firm uses it to analyse the key provisions of leases as part of due diligence in the acquisition of property portfolios. It is more accurate and thorough than paralegals carrying out the same work.
Apropos, I’ve just seen what the contract guru Ken Adams said last month about ChatGPT and its potential for drafting or reviewing commercial contracts, here: https://www.adamsdrafting.com/chatgpt-wont-fix-contracts/
“… traditional contracts are already dysfunctional—it’s hard to see how ChatGPT could make things worse. If you’re satisfied with cranking the handle of the copy-and-paste machine, you have no reason to look down your nose at ChatGPT.
…
If ChatGPT has a chance of being a better option than what you’re currently doing, you should think long and hard about your choices.
… if you want to improve your contract process, as opposed to just seeming to improve it, you’ll need real expertise, not bogus conventional wisdom. … Otherwise, you’re doomed to endless garbage in, garbage out.”
And as one sage person adds in the comments: “Don’t automate anything you can’t do flawlessly manually.”