Tech Corner: Ethical AI for Family Lawyers

Published: 01/07/2024 07:00

A conversation between Kay Firth-Butterfield and Rhys Taylor

Rhys Taylor: Could you tell me about your current role please?

Kay Firth-Butterfield: Currently, I run my own consultancy, education and thought leadership company called Good Tech Advisory, through which I give speeches and help companies and countries think about their design, development and deployment of AI and responsible AI. I always say that you can’t have successful AI use without responsible AI underpinning it.

Once upon a time you were a family barrister – how did you transition from being a family barrister into your current role?

Well, I think it worked out quite easily. I was doing a lot of work with children and sitting part-time, and I decided that this was not the career I wanted for myself anymore. I didn’t want to be a full-time judge, and if I didn’t want that, then maybe it was time for a change. I was fortunate enough to find a professorship here in Austin, Texas – which is not as odd as it sounds! We already had a home here, because my mother-in-law is Texan. After our move I continued to work on human rights and human trafficking, teaching those subjects, working with the government to set up laws around trafficking in particular, and also working with trafficking victims as a professor.

In 2011 I was writing a book on human rights and human trafficking at the same time as I was reading a Time article about the technological singularity (aka Artificial General or Super Intelligence). I then bought Ray Kurzweil’s book The Singularity is Near and, after reading it, became really fascinated, in a time before the Westworld series, about whether humans would abuse humanoid robots if such things existed. So, I became very interested in how humanity and sentient machines would work together. That set me off on what became the last 10, nearly 15, years of thinking about responsible AI.

What should family lawyers today be thinking about in respect of AI?

I think all sorts of things. It’s easy to think about AI in the courts – how it’s going to be deployed in the courts is an important discussion that lawyers should be having. I think it’s important that none of us get so involved in our careers that we don’t have time to poke our heads up and say, ‘Okay, what’s happening? Is it good for society? Is it the way that we want to go?’. I think family lawyers are, in a way, uniquely positioned because their work brings them into societal conversations. I think they have a huge role to play in thinking through how AI is deployed in the courts.

There are ways in which AI is currently touching your clients that we should be considering. Let’s take children, for example. When children under seven are developing their attitudes, beliefs and values, they are now being exposed to the internet. In fact, some parents are buying their children toys that are AI-enabled, but we don’t have any idea where their data is being stored, what they’re learning, whether the toys are using facial recognition, where that data is going and more. All of those are actually really big issues, and I think family lawyers could and should have a view upon them. It could be that it is wrong for parents to bring those items into their homes without having some knowledge of whether their children’s data is (a) going to, say, China or (b) being resold, because it doesn’t say any of that on the packet. I think the way that children are using AI, particularly the one-to-sevens, is dangerous; we are beta testing on our children.

I think we’re also going to see more cases where children are involved in deep fakes and deep fake pornography – we’re already seeing that. We’ve then got the Online Safety Act that’s going to come into force, and I suspect that’s an area of interest to people who are doing child care work.

I’ve been talking to my own chambers about the fact there’s administrative law issues – how are public services using AI, and is that of benefit to your clients?

It’s also important to keep an eye on the many areas of law which will see an increase of litigation as AI becomes more involved in our lives, for example crime, copyright, tort, and contract to name a few.

You mentioned the use of AI in judicial decision making, i.e. ‘the robo judge’. Do you have any observations about how AI might come to be used in the Family Court in that way? I’m thinking in particular perhaps, in this instance, in the context of financial remedies.

I do quite a lot of education for judges on AI, and you were at the meeting at the Inner Temple where we talked about this. I think what we have to consider is humanity. It’s actually not just a decision about whether judges hear this case or that case; it’s a much wider societal decision. It’s a societal decision about whether we fly the plane in the future or whether AI does. If you think about our sci-fi fantasies in both Star Trek and Star Wars, they’re still flying the plane, but they do have super intelligent computers to help them.

I see the judicial issue, and AI in the law, rather in that way. Do we want law to still have humanity, or do we want law to be an algorithmic process? At the moment I don’t think the algorithm is up to the job, so we’re not at that point, anyway. We know that algorithms get things wrong and they are just as biased as we are. It’s really hard to make them accountable and explainable, whereas, when we’re sitting as judges, we at least write a judgment and can be held accountable and we can explain how we came to our judgment, even if we have biases. So, I don’t think that the tool is fit for purpose at the moment. The question is whether family law still has humanity in it, or whether it’s just a number crunching exercise. If as a society we decide that it’s just a number crunching exercise, then yes, of course, we can use computers. I personally hope that we don’t.

Thinking about AI as it’s currently developed, what risks do you think it poses to introduce AI into family law, and how might these be mitigated?

AI has many risks, but let’s just take one: if you’re thinking of using generative AI, then what we know is that the data used by Large Language Models (LLMs) is appallingly skewed to reflect white men living in the global north, simply because white men living in the global north have been creating more data for many more years than anybody else in the world, including women. If you’re thinking of family law moving forwards, then the current AI that we have is not helpful in creating societies that think differently; it needs the human to do that, and then to look for the data that might be helpful. Jane Austen in Persuasion has her female character say, and I paraphrase, ‘I’m not going to use books to back up my argument about women’s feelings, because all books are written by men and so we can’t know women’s feelings from books’, and that’s true today in that the bulk of the data is not from women or persons of colour.

I think that barristers are in a better position in terms of jobs than perhaps solicitors. Morgan Stanley did research recently that said 44% of all the work solicitors do could be automated today but I don’t think that’s true for family barristers. If you think about what a barrister’s life is – it’s going to court and providing the humanity to the client as well as the legal advice to the client.

You mentioned mediation before we started this conversation. I think that mediators are about providing the human presence that the parties can relate to, in order to enable them to relate to one another in some way and solve the case.

Do you have any recommendations for particular sources for people who want to understand where we are with AI at the moment?

Well, obviously, my ongoing column for The Innovator! There are also a number of newsletters that I subscribe to, from which you can get both the good and the bad about what’s happening in AI. If you want to make sure you are always alert to the things that might go wrong, you need to be reading Gary Marcus, Timnit Gebru or Joy Buolamwini. If you want to be thinking more about understanding new applications and how AI is being deployed, then that’s Benedict Evans. For immediate news, Politico does a great newsletter and the Financial Times also has some really good writing around AI at the moment.

Can I ask you to look into your crystal ball and imagine how AI will have changed the practice of family law in England and Wales over the next ten years?

Well, as I say, I think it depends upon what we decide we want as a society, in that lawyers do need to be part of that conversation, and frankly leading it, as far as family law is concerned. I think it’s unlikely that we will have robot judges in family law in ten years. I think that there will probably be more people coming directly to barristers rather than solicitors, because solicitors are going to see huge turmoil in their practices within the next ten years. It is said that making research easier and taking away the junior role in solicitors’ firms will enable more solicitors to work as sole practitioners, so it’s worth barristers just keeping an eye on that, because that could be a place where solicitors are doing the work that barristers would otherwise be doing.

And finally, if I may, should we fear or should we be embracing AI?

A bit of both, I think! I have always felt AI could do tremendous things for us as human beings. What I think we should be hugely sceptical about is allowing it to take over the jobs that need human interaction. So, for example, there’s a tool just announced that Nvidia is working on, called Nurse AI. If I’m in hospital, I probably want my nurse to be a human rather than a robot. The other day, my oncologist (I am totally better but have checkups, and I wrote about my journey in The Innovator) was talking about the fact we could now have AI to talk a patient through their journey with cancer. My response to that was, ‘well, I’d actually rather you, the human being, talk to me about my journey with cancer and let AI help you with your administrative stuff, which will give you time to talk to me’. So, I think it’s about us having those conversations; going back to who’s flying the plane, we need to be having those conversations now. If we have those conversations now and we discover our humanity, and where it fits in the age of AI, then I think we have a great future.

Kay, thank you very much.

It was a pleasure.

©2023 Class Legal classlegal.com