Technology and Family Law – Opportunities, Challenges and Reflections

In contemporary family law practice, technological advancements are reshaping how legal professionals manage cases and interact with clients. It is vital for legal practitioners to keep abreast of contemporary developments in this field, and to tailor their approach to legal practice accordingly.

Introduction

In contemporary family law practice, technological advancements are reshaping how legal professionals manage cases and interact with clients.[[1]] From complex financial analysis to navigating issues such as digital abuse, technology plays a pivotal role in creating both opportunities and challenges within the field.

In recent years, artificial intelligence (AI) has emerged as a potent transformative force. While there is no single accepted definition of AI, there is a consensus that the core concept refers to machines simulating human intelligence. In the United Kingdom, the Information Commissioner’s Office defines AI as ‘an umbrella term for a range of algorithm-based technologies that solve complex tasks by carrying out functions that previously required human thinking’.[[2]] In Singapore, the Infocomm Media Development Authority defines AI as ‘the study and use of intelligent machine learning to mimic human action and thought’.[[3]]

The use of AI within the legal market has grown rapidly in recent years. According to the Solicitors Regulation Authority of England & Wales (SRA), by the end of 2022, three-quarters of the largest solicitors’ firms in England & Wales were using AI, approximately twice the number from three years previously, and over 60% of large law firms and a third of small firms were examining the potential of new generative systems.[[4]] Similar trends have been observed in Singapore, with multiple major law firms implementing AI to enhance their legal workflows.[[5]] Singapore’s Ministry of Law has provided law firms with funding to cover their initial costs of subscribing to the Legal Technology Platform (LTP), a platform that allows users to incorporate Generative AI into their daily file management and workflows.[[6]] Lawyers can leverage the LTP to view, track and manage both substantive (e.g. drafting, research) and administrative aspects (e.g. time costs, billing) of their cases in a consolidated manner. This streamlines workflow, enhances efficiency and allows lawyers to focus on more substantial matters.

Some AI tools have been widely used by legal professionals for years without much difficulty. However, unlike earlier technology, Generative AI, a subcategory of AI that uses deep learning algorithms to generate novel outputs based on large quantities of existing or synthetic (artificially created) input data,[[7]] can create original or new content, including text, images, sounds and computer code. Generative AI chatbots, such as Google Gemini, Bing Chat, Meta AI and ChatGPT, are computer programs which simulate online human conversations using AI. They rely on large language models (LLMs), which, having been trained on enormous quantities of text, learn to predict the next best word, or part of a word, in a sentence.
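By way of illustration only, the short Python sketch below (a toy bigram model with a made-up corpus, and emphatically not the architecture of Gemini, ChatGPT or any other commercial chatbot) shows the core idea of ‘predicting the next best word’ from learnt statistics. Real LLMs operate over sub-word tokens and billions of learnt parameters, but the principle of continuing text with the statistically likeliest next token is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the 'enormous quantities of text' an LLM is trained on.
corpus = (
    "the court found the husband had forged the documents "
    "the court found the citations were fictitious "
    "the court ordered the solicitor to pay costs"
).split()

# Count, for each word, which words follow it and how often (a bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def continue_prompt(prompt: str, length: int = 5) -> str:
    """Greedily extend a prompt with the most likely next word at each step."""
    words = prompt.split()
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break  # no learnt continuation: the toy model simply stops
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(continue_prompt("the court"))
```

The output reads fluently, yet nothing in the procedure checks it against reality, which is a useful intuition for the ‘hallucination’ cases discussed later in this article.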

AI (including Generative AI) presents unique challenges, including potential breaches of privacy and intellectual property, and the production of misleading or false outputs (e.g. fake case citations).

Other technological advancements have also posed challenges for the judiciary, legal practitioners and lay clients. The prevalence of editing software has made it easier for parties to forge or manipulate documents, the advancement of deepfake technology is posing ever greater evidentiary challenges, and online messaging platforms, some of which incorporate ephemeral messaging (the transmission of messages that automatically vanish from the recipient’s screen after being viewed), have enabled parties to perpetuate family violence beyond geographical boundaries and post-separation.

In light of the fast-evolving opportunities and challenges presented by AI, and the increased accessibility, affordability and sophistication of these tools, it has become increasingly vital for legal practitioners to keep abreast of contemporary developments in this field, as well as to tailor their approach to legal practice accordingly, in order to uphold the principles of justice and fairness in family law. As the Bar Council of England & Wales helpfully points out:

‘there is nothing inherently improper about using reliable AI tools for augmenting legal services, but they must be properly understood by the individual practitioner and used responsibly.’[[8]]

In this article, the authors refer to specific apps and software in their exploration of this topic and to illustrate their arguments. However, they do not endorse any of the apps or software mentioned, nor are they sponsored by them. Further, for the avoidance of doubt, save where the facts are reported in a judgment, the authors do not assert or imply any particular link between any given software or app, and any particular pitfall or reported instance of apparent misuse.

Opportunities

Enhanced case management processes

Like all technologies, AI and Generative AI have the potential to enable increased efficiency and cost savings where processes can be automated and streamlined, with minimal human intervention. For example, many firms now leverage Technology Assisted Review (TAR) in their electronic disclosure processes. TAR is a machine learning system trained using data from lawyers’ manual identification of relevant documents.[[9]] It then utilises the learned criteria to scan, classify and identify potentially relevant documents from very large data sets.
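At its core, most TAR is a supervised text classifier. The sketch below is a minimal, hypothetical illustration using scikit-learn (not any vendor’s actual product): it trains on documents a lawyer has already labelled as relevant or not, then ranks unreviewed documents by predicted relevance so the likeliest candidates are reviewed first.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Documents a lawyer has manually reviewed, with relevance labels (1 = relevant).
reviewed_docs = [
    "transfer of funds to offshore trust account",    # relevant
    "bank statement showing undisclosed property",    # relevant
    "office christmas party seating arrangements",    # not relevant
    "canteen menu for the week of 3 june",            # not relevant
]
labels = [1, 1, 0, 0]

# Learn which textual features distinguish relevant from irrelevant documents.
vectoriser = TfidfVectorizer()
classifier = LogisticRegression().fit(vectoriser.fit_transform(reviewed_docs), labels)

# Score a large unreviewed set and surface the likeliest relevant documents first.
unreviewed = [
    "wire transfer to mauritius trust",
    "reminder to book meeting room",
]
scores = classifier.predict_proba(vectoriser.transform(unreviewed))[:, 1]
for doc, score in sorted(zip(unreviewed, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```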

Mainstream legal research products, such as Westlaw, LexisNexis and LawNet Singapore employ AI-enhanced capabilities to automate searches.[[10]] Some legal writing tools, such as Lexis Create, utilise machine learning to analyse legal documents, offer suggestions for improvement, catch typographical errors, clean up incorrect citations and streamline sentences.[[11]] Other case management software, such as Divorcemate, provides verified AI answers to legal questions, and assists lawyers with conducting legal research, drafting letters and affidavits, creating precedent orders and proofreading.[[12]]

However, lawyers must ensure that such documents are drafted with human oversight and care, and that the end result is checked for accuracy before use. Several reported decisions highlight the pitfalls for legal practitioners in this regard (see more below).

In the financial remedies field, family lawyers are now able to utilise AI-powered financial analysis software to review financial data in divorce proceedings. Examples include MLTPL, Capitalise and Open Banking. Machine learning embedded within the software extracts transactions from bank statements, credit card statements and other financial documents to help lawyers identify possible financial obfuscation.[[13]] Lawyers can then take a deeper dive to uncover any hidden assets.
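The pattern-spotting stage can be as simple as a statistical screen over the extracted transactions. The sketch below, with invented figures, is a hypothetical first pass (not the method used by MLTPL, Capitalise or any other named product): it flags outgoing payments that deviate sharply from a party’s usual spending, which a lawyer might then investigate further.

```python
from statistics import mean, stdev

# Transactions as extracted from bank statements: (description, amount out).
transactions = [
    ("supermarket", 182.40), ("utilities", 96.10), ("supermarket", 201.95),
    ("petrol", 74.30), ("transfer to unknown account", 25_000.00),
    ("restaurant", 88.75),
]

amounts = [amount for _, amount in transactions]
mu, sigma = mean(amounts), stdev(amounts)

# Flag anything more than two standard deviations above typical spending.
for description, amount in transactions:
    if amount > mu + 2 * sigma:
        print(f"REVIEW: {description}: {amount:,.2f}")
```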

Beyond processes within individual cases, AI could also be incorporated in broader case management systems to classify and route cases to the relevant teams more efficiently. For example, courts in Florida have leveraged robotic process automation technology to classify incoming e-filings, extract information in tagged fields and docket them in the court’s case management systems.[[14]]

Increased access to justice

When used responsibly, AI undoubtedly also has the potential to increase access to justice and to reduce unmet legal needs. First, AI-powered chatbots and virtual assistants can be utilised to help court users, especially litigants-in-person, in identifying their legal issues, accessing relevant resources, completing forms and navigating legal processes. An example would be iLAB, an Intelligent Legal Assistance Bot introduced by the Legal Aid Bureau of Singapore. The chatbot helps litigants identify their legal issues and provides relevant information pertaining to divorce, custody, maintenance, family violence and the division of assets. It also helps ascertain a user’s eligibility for legal aid and directs them to legal advice by lawyers. Similar chatbots incorporated in the New Jersey and New Mexico Courts utilise AI to respond to complex public queries and help self-represented litigants locate the necessary resources.[[15]]

Secondly, AI, especially LLMs, can help enhance court users’ understanding of Family Court processes by translating legal jargon into plain English or other languages.[[16]] An example would be the Legalese Decoder.[[17]] Similar software has also emerged in the Los Angeles County Superior Court, where chatbot ‘Gina’ assists with a variety of traffic court services in multiple languages.[[18]] This helps enhance access to justice, especially for non-English speaking court users.

Thirdly, there are now AI-powered dispute resolution tools aimed at helping couples separate in an amicable and collaborative manner. For example, Amica, an AI-powered online platform launched by National Legal Aid and the Legal Services Commission of South Australia, aims to help divorcing couples reach financial settlements and co-parenting agreements.[[19]] Amica leverages AI to take into account the parties’ circumstances, similar cases and the factors that would typically influence the court’s decision-making, in order to suggest potential settlement terms. The software then records the agreement upon the completion of negotiations.

AI has also been used in the software ‘Lex Machina’, which provides predictive analytics of cases in specific courts or by certain judges, giving users insight into litigation trends.[[20]] It is argued that these predictive analytics can help litigants strategically assess the merits of their claim before commencing litigation, helping them save on costly legal fees.

However, the utility of such software remains to be seen in the family law context, given the factual idiosyncrasies across cases and the wide discretion afforded to Family Court judges. For example, in making financial orders, both the English and Singapore courts must have regard to all the circumstances of the case, and take into account a variety of factors, such as the parties’ financial resources, their needs, their contributions to the marriage, etc. Furthermore, under English law, the welfare of any minor child of the family is a first consideration. In the search for fairness, individual judges may place more weight on one factor over another, depending on the factual circumstances of each case, which results in highly discretionary decisions.

Singapore courts

Notably, the Singapore courts have been exploring the use of Generative AI to assist court administrators, registrars and judges in their work. For example, the Singapore courts have experimented with applications such as Microsoft’s ‘Copilot Chat’ to assist their officers with their daily work (‘Copilot Chat’ is now also available as a ‘secure AI tool’ to judicial office holders in England and Wales, and is specifically addressed in the Lady Chief Justice’s latest guidance to the judiciary on the use of AI[[21]]).

Singapore court judges have experimented with using AI to query and interrogate submissions and evidence, which allows them to examine the coherence and strength of arguments before them. The potential for hallucinations is limited, given that AI-generated answers are grounded in specific document references.

In a speech given by the Honourable Justice Aidan Xu (Judge of the General Division of the High Court, Supreme Court, and Judge in Charge of Transformation and Innovation, Singapore Judiciary), it was said that the Singapore courts are also actively investigating, with due care, the potential of AI assistance in judgment writing.[[22]] AI-generated first drafts would be subject to review, consideration and adaptation by judges. It is said that, to date, Generative AI has produced drafts that are fairly readable and accurate when given the appropriate prompts, detail and documents, albeit with certain hallucinations or creative leaps. Notwithstanding this, Generative AI has been noted for its capability for red teaming, i.e. adopting adversarial positions and generating draft judgments for different outcomes, in order to help judges critically examine the vulnerabilities of a given stance and facilitate judicial ‘thought experiments’. This allows judges to identify strengths and weaknesses in their reasoning and conclusions.

In this regard, however, the Honourable Justice Aidan Xu has emphasised that AI would not be removing the need for human input and interpretation in the near future, especially in family law practice, where the human element is necessary.

First-tier Tax Tribunal (UK)

In VP Evans (as executrix of HB Evans, deceased) & Ors v HMRC [2025] UKFTT 1112 (TC), which appears to be the first published UK judicial decision of its kind, Tribunal Judge McNall, sitting in the First-tier Tax Tribunal, has confirmed that he used AI to help draft his decision on a disclosure application. The underlying appeals challenged closure notices issued by HMRC concerning capital gains tax liabilities arising from tax planning arrangements involving offshore trusts and double taxation conventions between the United Kingdom and New Zealand, and the United Kingdom and Mauritius. The application was dealt with on the papers.

In the Tribunal’s postscript, entitled ‘The Use of AI’, Judge McNall indicated that he had used AI in the preparation of his decision, stating (at [48]), ‘I have used AI to summarise the documents, but I have satisfied myself that the summaries – treated only as a first-draft – are accurate. I have not used AI for legal research’. The judge had used Microsoft’s ‘Copilot Chat’, available to judicial office holders through the eJudiciary platform. All data entered into Copilot Chat on that platform remains secure and private.

Judge McNall noted the ‘Practice Direction on Reasons for Decisions’, released on 4 June 2024, in which the Senior President of Tribunals wrote:

‘Modern ways of working, facilitated by digital processes, will generally enable greater efficiencies in the work of the tribunals, including the logistics of decision-making. Full use should be made of any tools and techniques that are available to assist in the swift production of decisions.’

The judge explained that the disclosure application was suited to the use of AI as it was a paper-only, case management matter. He had not heard any evidence, nor had he been called upon to make any decision in relation to the credibility of any party. He concluded by stating that he was the decision-maker, and was responsible for the material created by AI, and that ‘the critical underlying principle is that it must be clear from a fair reading of the decision that the judge has brought their own independent judgment to bear in determining the issues before them’.

Challenges

Reliance on incorrect AI-generated results

As stated above, Generative AI is able to create new content based on patterns and data it has learnt from existing content. In particular, Generative Pre-trained Transformer (GPT) models are built on a neural network architecture known as the Transformer, which enables the model to analyse the surrounding context of the text it generates, so that the language produced is coherent and aligned with the context provided.[[23]]
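The mechanism that allows a Transformer to weigh ‘surrounding context’ is known as attention: each word’s representation is recomputed as a weighted blend of every other word in the sequence. The sketch below is a bare-bones illustration with invented numbers (a trained model learns these vectors, and further projection matrices omitted here): it computes scaled dot-product attention over a three-word phrase.

```python
import numpy as np

# Made-up 4-dimensional vectors for a three-word sequence; a real GPT learns
# these representations (and additional projections) during training.
words = ["court", "cited", "authority"]
x = np.array([
    [0.9, 0.1, 0.3, 0.0],
    [0.2, 0.8, 0.1, 0.4],
    [0.7, 0.3, 0.9, 0.2],
])

# Scaled dot-product attention: how strongly should each word attend to the others?
scores = x @ x.T / np.sqrt(x.shape[1])
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # row-wise softmax

# Each word's new representation blends in its context per the attention weights.
contextualised = weights @ x
for word, row in zip(words, weights):
    print(word, np.round(row, 2))
print(np.round(contextualised, 2))
```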

Generative AI does present unique risks, as demonstrated in recent decisions involving lawyers or litigants misusing AI, including reliance in court proceedings on false AI-generated content (known as ‘hallucinations’, i.e. where a Generative AI system produces highly plausible but incorrect results); the topic of AI-generated hallucinations has been covered in greater detail elsewhere.[[24]] Research indicates that legal hallucinations are alarmingly prevalent when AI models are asked specific, verifiable questions about random federal court cases heard in the USA, occurring between 58% of the time (with ChatGPT 4) and 88% of the time (with Llama 2).[[25]]

Such difficulties are illustrated by the case studies below.

Mata v Avianca, Inc, Case No 22-cv-1461

In this case, decided in June 2023, two New York attorneys had relied on authorities generated by ChatGPT in their court submissions. When ordered by the court to file an affidavit with annexed copies of the cited cases, the lawyers submitted an affidavit purporting to contain all but one of the decisions. However, they had in fact asked ChatGPT to show them the whole opinion, which ChatGPT acted upon by inventing a much longer text. The cases cited were fictitious and could not be located in any reputable legal database.

The court found that both attorneys had acted in bad faith and had posed serious risks to the integrity of judicial proceedings. In particular, the court shed light on the various harms that flow from the submission of fake opinions. These include, but are not limited to, the opposing party wasting time and money in exposing the deception, the court’s time being taken away from other more important endeavours, and the fostering of cynicism about the legal profession and the judicial system.

Following the decision, a federal judge in the Northern District of Texas issued a standing order, directing anyone who appeared before the court to either attest that ‘no portion of any filing would be drafted by Generative AI’ or to flag any language that was drafted by AI to be checked for accuracy.[[26]] He noted that AI in its current state of development is prone to hallucinations and biases of the programmer, ‘unbound by any sense of justice, honour or duty’, and acts based on programming over principle.

Felicity Harber v HMRC [2023] UKFTT 1007 (TC)

AI is also open to abuse by litigants. This case centred on the failure of Mrs Harber to notify HM Revenue and Customs (HMRC) of her liability to pay capital gains tax on the disposal of a property. In her appeal as a litigant-in-person, she argued that she had a reasonable excuse because of her mental health and/or because it was reasonable for her to have been ignorant of the law. In her written response document, she provided the Tribunal with the names, dates and summaries of nine decisions which supported her case.

It transpired that none of the authorities were genuine and had most likely been generated by AI. The Tribunal noted that the summaries of the cases provided were ‘plausible but incorrect’, bearing some resemblance to a number of genuine cases but with material differences, including the outcome of the decisions. The genuine cases had in fact all been determined in favour of HMRC.

In giving judgment, the Tribunal cited (at [20]) the SRA’s Risk Outlook report, which had warned that:

‘All computers can make mistakes. AI language models such as ChatGPT, however, can be more prone to this. That is because they work by anticipating the text that should follow the input they are given, but do not have a concept of “reality”. The result is known as a “hallucination”, where a system produces highly plausible but incorrect results.’

Handa v Mallik [2024] FedCFamC2F 957; Dayal [2024] FedCFamC2F 1166

In Handa v Mallik [2024] FedCFamC2F 957 an Australian solicitor tendered a list of authorities and case summaries pertaining to an application to enforce consent orders. However, the list had in fact been generated by AI-powered legal software and the authorities cited did not exist. The court emphasised that ‘Generative AI does not relieve the responsible legal practitioner of the need to exercise judgment and professional skill in reviewing the final product to be provided to the court’. The solicitor was ordered to file submissions as to why he should not be referred to the Victorian Legal Services Board and Commission. Those submissions were later considered in Dayal [2024] FedCFamC2F 1166, when the court determined that the matter should be referred, those bodies being best placed to consider whether further investigation and/or action was necessary.

Nina Zhang v Wei Chen [2024] BCSC 285

In this Canadian case, the husband’s counsel had inserted two non-existent cases invented by ChatGPT into the notice of application. While the court found that the lawyer had not done so with the intent to deceive, it made a costs order against her, to reflect the additional effort and expense incurred due to her inclusion of fake cases.

(1) Birgitte Wagner Olsen (2) Karsten Olsen v Finansiel Stabilitet A/S [2025] EWHC 42 (KB)

In this case, the appellants, who were litigants-in-person, had relied on a summary of a case, ‘Flynn v Breitenbach’, which they had also included in the authorities bundle. It soon emerged that the case did not exist.

The court decided not to issue a summons for contempt of court against the appellants, given that it could not be sure that the appellants had known the case summary was false. The appellants were elderly and had otherwise behaved properly in the proceedings. They had not been advantaged by the patently false case summary, and they would be entitled to legal aid should contempt proceedings be commenced, which would further diminish public resources. However, it is unlikely that qualified lawyers who attempted the same would be given as much latitude, given the stringent professional standards to which they are subject. This is illustrated by the next case.

R (Ayinde) v London Borough of Haringey; Hamad Al-Haroun v Qatar National Bank QPSC & Anor [2025] EWHC 1383 (Admin)

These two English cases were listed before the Divisional Court under the Hamid jurisdiction,[[27]] which relates to the court’s inherent power to regulate its own procedures and to enforce duties owed by lawyers to the court.

In the first case, the junior counsel concerned had cited five fictitious cases (including a fake Court of Appeal case) in her grounds for judicial review. While she accepted that she was at fault to some degree, she denied that she had used AI tools when preparing her grounds for judicial review.

In this regard, the court opined that there were two possible scenarios: either she had deliberately included fake citations, or she had in fact used AI tools, in which case the denial in her witness statement was untruthful. Either scenario would amount to a contempt of court. While the court decided not to initiate contempt proceedings or refer the case to the Law Officers due to the specific factual idiosyncrasies, it clarified that this does not serve as a precedent for future cases, and that lawyers who do not comply with such professional obligations ‘risk severe sanction’. The court subsequently referred her to the regulator.

In the second case, the claimant and his solicitor had, in correspondence with the court and in their respective witness statements, relied on AI-generated authorities. Of the 45 citations that had been placed before the court, in 18 instances, the case cited did not exist. In respect of those cases that did exist, in many instances they did not contain the quotations that were attributed to them, did not support the propositions for which they were cited and did not have any relevance to the subject matter of the application.

The court observed that the solicitor concerned had failed to comply with his professional responsibility to verify the accuracy of material that was put before the court, and that he was not entitled to rely on his lay client to ensure the accuracy of citations. As the lawyer had not demonstrated a deliberate attempt to mislead the court, his conduct did not merit contempt proceedings, but the court did refer the solicitor to the SRA.

The court made it clear that legal professionals with individual leadership responsibilities (such as heads of chambers and managing partners), and those responsible for regulating the provision of legal services, must take practical and effective measures to uphold the administration of justice and public confidence. Such measures must ensure those providing legal services understand and comply with their professional and ethical obligations, and their duties to the court, when they utilise AI. The court stated thus (at [9]):

‘For the future, in Hamid hearings such as these, the profession can expect the court to inquire whether those leadership responsibilities have been fulfilled.’

XAI v XAH and another matter [2025] SGFC 93

In Singapore, there have also been reported cases of litigants and legal professionals using fictitious AI-generated authorities. In this case, the father had used ChatGPT to assist him in ‘identifying relevant local legal precedents’. This caused him to include 14 fictitious case citations proposed by ChatGPT in his written submissions.

The court ordered the father to pay the mother’s costs and specifically directed that, should the father represent himself in any future matter before the family courts and use Generative AI in preparing court documents, he must declare this in writing to the court and state that he has complied with the Guide on the Use of Generative Artificial Intelligence Tools by Court Users.[[28]] This approach strikes a fair balance between allowing the use of Generative AI and ensuring that the court is not taken by surprise by AI hallucinations. That the father was self-represented did not absolve him of the responsibility to ensure that the information he submitted in court proceedings was accurate.

Further, the court noted that in a common law system, the citation of cases plays an important role in light of the principles of stare decisis and ratio decidendi. The court must be able to take parties’ citations at face value, and citing fictitious cases has a deeply corrosive impact on the legal system.

Tajudin bin Gulam Rasul & Anor v Suriaya bte Haja Mohideen [2025] SGHCR 33

This case is the first reported judgment in Singapore involving legal practitioners using fictitious AI-generated authorities and provides instructive guidance as regards the Singapore court’s approach to this issue. In the case, counsel for the claimants had cited a fictitious authority in their written submissions, which had been produced by a Generative AI tool. Notably, in line with the Australian case of Luck v Secretary, Services Australia [2025] FCAFC 26, the court omitted the name and the case number of the fictitious authority to prevent the further propagation of false information.

Citing the Guide on the Use of Generative Artificial Intelligence Tools by Court Users (which is elaborated upon below), the court explained the three principles embodied in the Guide, namely:

(1) that court users are not prohibited from using GenAI tools to prepare court documents if they comply with the Guide;

(2) that court users must independently verify that all materials placed before the court are in existence and accurate in nature, including any AI-generated references, and that advocates and solicitors bear an additional professional duty to do so; and

(3) that court users remain fully responsible for the content in all their court documents, and they must continue to comply with existing rules and practice directions.

The court ordered counsel for the claimants to pay costs to the defendant. When considering the egregiousness of the advocate and solicitor’s conduct, the court stated that it would consider various factors, including:

(1) whether the fictitious AI-generated authority was intentionally cited to mislead or deceive the court;

(2) whether the advocate and solicitor had previously cited fictitious AI-generated authorities to the court;

(3) whether an immediate, full and truthful explanation is given to the court and the counterparty, specifically whether the advocate and solicitor expeditiously informed the court that a fictitious AI-generated authority was cited and took appropriate steps, in consultation with the court, to remedy the mistake; and

(4) the impact on the underlying litigation, in particular if the legal proposition purportedly supported by the fictitious authority exists and could have been supported by a genuine authority.

Given the severity of the matter, the court ordered both parties’ counsel to provide a copy of the court directions to their respective clients. Notably, the court also opined on the gravity of a lawyer’s conduct in citing fictitious authorities to the court (at [3]):

‘When an advocate and solicitor cite a fictitious authority to the court, the gravity of his improper conduct does not lie solely in the loss of valuable judicial time and the unnecessary expenditure of his counterparty’s resources in uncovering his actions. Even more pernicious is the fissure that he foments in the public’s perception of the legal profession.’

Breach of confidentiality

Additionally, the misuse of AI may lead to a breach of client confidentiality. When lawyers use public Generative AI programs, they risk entering confidential information into these systems. Such information may be stored and subsequently accessed by the provider, thus amounting to a breach of the duty of confidentiality in the process. Moreover, such input will likely be further used in training AI systems and form part of the system’s database, which is drawn upon when the AI responds to queries from other users. As a result, such input may become public knowledge.[[29]]

Forged or manipulated documents

Additionally, technological advancements raise concerns pertaining to falsified evidence in proceedings. Probably the best-known example is Crypto Open Patent Alliance v Wright [2024] EWHC 1198 (Ch), in which Dr Craig Wright was found to have forged documentation in support of his claim that he was Satoshi Nakamoto, the creator of Bitcoin.

For family lawyers, this phenomenon cuts across all areas of family law, and the falsity of such documents may only be subsequently discovered when lawyers or litigants locate the original documents by chance. Alterations may be made through Adobe Acrobat, PDF Expert, Microsoft Word or Google Drive. There are also various publicly accessible and free websites and apps, which allow individuals to create and then screenshot or print out fake messages. These fake messages may subsequently be relied upon in proceedings.[[30]] The following two decisions are good examples.

X v Y [2022] EWFC 95

This English case, litigated before HHJ Hess in the Family Court, drew attention to the ease with which unscrupulous litigants can create realistic bank statements and other documents.[[31]] Here, the parties had been living overseas but the husband wished to move to London. To persuade the wife that the move would be financially beneficial to the family, the husband showed the wife a draft sale contract offering him £80m to purchase his company, as well as a bank statement demonstrating that a down payment of £8m had been paid.

When divorce proceedings commenced several years later, the court found that the husband had fabricated the bank statements to convince his wife to move to London. This was exposed when bank statements were obtained directly from the bank, confirming that there was never a payment of £8m.

Filatona Trading Limited & Anor v Quinn Emanuel Urquhart & Sullivan [2024] EWHC 2573 (Comm)

In this case, an unnamed business intelligence consultancy had provided a well-known law firm with a report which was deployed in proceedings with a view to setting aside an arbitration award. That report was subsequently revealed to be a forgery.

Although the lawyers at the firm did not know about the wrongdoing, the court rejected the argument that they were ‘mere onlooker[s] or witness[es]’ advising on a document. Instead, the court found that they ‘were actively involved in the (unwitting) verification and deployment’ of the forged document in legal proceedings and were ‘accordingly mixed up in the alleged wrongdoing and enabled the purpose of that wrongdoing to be furthered’ (at [80]).

Deepfake technology

Deepfake technology employs AI to generate realistic fake videos, audio and images, creating the illusion that someone is doing or saying something that they did not actually do or say.

The failure to detect such manipulated evidence has the potential to skew the court’s decision-making, resulting in unsafe findings. This is compounded by the fact that photos and videos can now be doctored quite easily, facilitated by the increased accessibility of sophisticated editing software, and by the tendency of many to take evidence at face value.

Deepfake technology can also be deployed with great effect to threaten, blackmail and abuse victims, for example by turning everyday images into sexually explicit material without victims’ consent. Ephemeral messaging (mobile-to-mobile transmission of multimedia messages that automatically disappear from the recipient’s screen after the message has been viewed or is later edited) can also be easily leveraged by perpetrators of domestic abuse. Litigants have also relied on the existence of deepfakes to argue that authentic media portray them in a negative light, or authentic evidence harmful to their cases is fake, which poses further challenges to the integrity of the court system.[[32]]

The problem is amplified in family proceedings, where (in Singapore, for example) trials are commonly dispensed with to promote a more expedient outcome. Mediation and negotiation often take centre stage, with the court recording consent orders upon the parties reaching an agreement. Manipulated evidence could be used in out-of-court discussions, undetected by the parties and without being subject to court scrutiny. In ex parte hearings, common in the context of applications for emergency personal protection orders, freezing injunctions and the like, fake AI-generated evidence may easily fly under the radar.

Guidance and possible solutions

Lawyers should ensure compliance with judicial and regulatory guidance, which provides a useful point of reference. In England and Wales, the Artificial Intelligence (AI) – Guidance for Judicial Office Holders, 14 April 2025, issued by the Lady Chief Justice,[[33]] aims to support the judiciary in their interactions with AI. Similarly, the Solicitors Regulation Authority[[34]] and the Bar Council have published recent guidance in this regard.[[35]]

The Bar Council’s guidance for practitioners when using AI (including LLMs, such as ChatGPT), in broad terms, is as follows:

(1) Due to possible hallucinations and biases, it is important for barristers to verify the output of LLM software and maintain proper procedures for checking generative outputs.

(2) LLMs should not be a substitute for the exercise of professional judgment, quality legal analysis and the expertise that clients, courts and society expect from barristers.

(3) Barristers should be extremely vigilant not to share with an LLM system any legally privileged or confidential information.

(4) Barristers should critically assess whether content generated by LLMs might violate intellectual property rights and be careful not to use words which may breach trademarks.

(5) It is important to keep abreast of relevant court rules, which in the future may implement rules/practice directions on the use of LLMs, for example, requiring parties to disclose when they have used Generative AI in the preparation of materials, as has been adopted by courts elsewhere.

In Singapore, the Supreme Court has disseminated the Guide on the Use of Generative Artificial Intelligence Tools by Court Users, which sets out general principles and guidance regarding the use of Generative AI tools in court proceedings.[[36]] Notably, the Guide provides that ‘any output generated should only be used on the basis that the Court User assumes full responsibility for the output’ (p 3). In particular, lawyers should assess whether the AI-generated output is suitable for their specific case, and ensure that any output used in court documents is accurate, relevant and does not infringe intellectual property rights. The Guide makes clear that breaching its provisions will carry consequences, including:

(1) cost orders against a litigant, or personally against the lawyer;

(2) documents or other material submitted to court being disregarded (in part or in full), or being given less evidentiary weight;

(3) disciplinary action against lawyers who are in breach; and

(4) appropriate action being taken in accordance with existing laws in respect of intellectual property rights, personal data protection, the protection of legal privilege and contempt of court.

Helpfully, the Singapore courts have also proposed a ‘traffic light’ system for the use of AI in carrying out legal tasks, which could serve as a reference point for practitioners worldwide.[[37]] Activity in the green zone, such as using AI for evidence review, generating summaries and conducting speech-to-text transcription, would be permissible. Using AI for access-to-justice purposes and to help draft documents would sit within the yellow zone: still permissible, but requiring moderate caution in relation to the creativity and precision of AI. However, using AI to predict case outcomes and decide cases would fall in the red zone, requiring heightened caution in use, especially given the fine line between language generation and knowledge generation in the context of AI.

Practitioners must remain vigilant to signs of forged or manipulated evidence. Information should not necessarily be accepted at face value, especially when clients indicate discrepancies between their understanding of the matter and the narrative presented by the financial documents, or where they express doubts about the accuracy of the opposing party’s claims. Software could be leveraged to detect forged or manipulated documents,[[38]] but as with other technologies, the use of AI to combat AI must remain subject to human oversight, to minimise error and increase detection rates. In some cases, engaging a forensic IT specialist may be necessary (which can be expensive and simply out of reach for many litigants).

Given that the judiciary are an essential part of ensuring the authenticity of any evidence submitted, some have also canvassed the possibility of an internal procedure involving independent forensic experts, who are able to assess the authenticity of visual and audio evidence.[[39]] However, this is likely to pose undue cost considerations for the judiciary and the court system, and raises additional questions as to whether the judiciary are currently sufficiently trained to identify manipulated evidence.[[40]]

Conclusion

The increasing use of AI tools in the legal sector is inevitable. As Sir Geoffrey Vos MR aptly puts it, it may soon be negligent for lawyers not to use AI, in particular where failing to do so jeopardises client interests.[[41]] Whilst technological advancements can be utilised to enhance access to justice, drive efficiency and facilitate informed decision-making, such benefits come with corresponding challenges.

As technology evolves, legal frameworks must adapt to safeguard vulnerable parties and uphold the principles of justice and fairness in family law. Ultimately, AI should not be a substitute for the exercise of professional judgment, the application of sound ethics and quality legal analysis by individual judges and lawyers.

[[1]]: The authors wish to acknowledge Maisie Ng (Drew & Napier LLC) for her assistance with this article.

[[2]]: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/explaining-decisions-made-with-artificial-intelligence/part-1-the-basics-of-explaining-ai/definitions

[[3]]: www.imda.gov.sg/about-imda/emerging-technologies-and-research/artificial-intelligence

[[4]]: www.sra.org.uk/sra/research-publications/artificial-intelligence-legal-market

[[5]]: See for example, www.rajahtannasia.com/rajah-tann-singapore-catapults-its-innovation-journey-by-partnering-with-gen-ai-pioneer-harvey-to-develop-an-ai-driven-workforce; www.wongpartnership.com/news/detail/wongpartnership-llp-enters-into-groundbreaking-partnership-with-harvey-to-drive-generative-ai-integration-in-singapores-legal-industry

[[6]]: www.mlaw.gov.sg/enhanced-productivity-for-law-firms-in-singapore-with-the-legal-technology-platform

[[7]]: The Law Society Guidance on Generative AI: the essentials (20 May 2025): www.lawsociety.org.uk/topics/ai-and-lawtech/generative-ai-the-essentials

[[8]]: www.barcouncil.org.uk/resource/new-guidance-on-generative-ai-for-the-bar.html; www.barcouncilethics.co.uk/wp-content/uploads/2024/01/Considerations-when-using-ChatGPT-and-Generative-AI-Software-based-on-large-language-models-January-2024.pdf

[[9]]: www.judiciary.uk/wp-content/uploads/2023/12/AI-Judicial-Guidance.pdf

[[10]]: See for example, www.lexisnexis.com/en-us/products/lexis-plus.page?srsltid=AfmBOoogfFD74quc1aBow-phteMkqhfAYA95zwsn_UqXUgDQBxPmTdNE; https://sal.org.sg/articles/building-tomorrow-together-speech-by-the-honourable-justice-kwek-mean-luck-at-techlaw-fest-2024/; www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/factsheets/2024/gpt-legal; https://release-notes.lawnet.com/2024/10/15/gen_ai

[[11]]: www.lexisnexis.com/community/insights/legal/b/thought-leadership/posts/3-ways-ai-can-help-in-house-counsel-draft-legal-documents-faster-in-microsoft-word

[[12]]: www.divorcemate.com/family-law/

[[13]]: www.valid8financial.com/resource/artificial-intelligence-speeds-lifestyle-analysis-for-family-law-firms

[[14]]: https://ncsc.contentdm.oclc.org/digital/collection/tech/id/1191/rec/2

[[15]]: https://ncsc.contentdm.oclc.org/digital/collection/tech/id/1191/rec/2

[[16]]: www.nuffieldfjo.org.uk/wp-content/uploads/2024/05/NFJO_AI_Briefing_Final.pdf at p 8.

[[17]]: https://legalesedecoder.com

[[18]]: https://ncsc.contentdm.oclc.org/digital/collection/tech/id/1191/rec/2

[[19]]: www.ag.gov.au/families-and-marriage/families/family-law-system/amica-online-dispute-resolution-tool

[[20]]: https://lexmachina.com/legal-analytics

[[21]]: Artificial Intelligence (AI) – Guidance for Judicial Office Holders, 14 April 2025: www.judiciary.uk/wp-content/uploads/2025/04/Refreshed-AI-Guidance-published-version-website-version.pdf

[[22]]: Justice Aidan Xu, Speech at the IT Law Series 2025: Legal and Regulatory Issues with Artificial Intelligence: www.judiciary.gov.sg/news-and-resources/news/news-details/justice-aidan-xu—speech-at-the-it-law-series-2025—legal-and-regulatory-issues-with-artificial-intelligence

[[23]]: www.ibm.com/think/topics/gpt

[[24]]: See Jennifer Lee’s blog for the FRJ, ‘Fabricated Judicial Decisions and “Hallucinations” – a Salutary Tale on the Use of AI’ (March 2024): https://financialremediesjournal.com/fabricated-judicial-decisions-and-hallucinations-a-salutary-tale-on-the-use-of-ai/. See also Alexander Chandler’s recent, excellent blog for the FRJ, ‘Legal Research, AI and the Canary in the Mineshaft’ (May 2025): https://financialremediesjournal.com/legal-research-ai-and-the-canary-in-the-mineshaft/

[[25]]: Matthew Dahl, Varun Magesh, Mirac Suzgun and Daniel E Ho, ‘Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models’, 16 J Legal Analysis 64 (2024), https://doi.org/10.48550/arXiv.2401.01301

[[26]]: Hon Brantley Starr, ‘Mandatory Certification Regarding Generative Artificial Intelligence [Standing Order]’, (ND Tex); https://guides.lib.uchicago.edu/AI/Practice; www.dcbar.org/for-lawyers/legal-ethics/ethics-opinions-210-present/ethics-opinion-388

[[27]]: R (Hamid) v Secretary of State for the Home Department [2012] EWHC 3070 (Admin), [2013] CP Rep 6; R (DVP) v Secretary of State for the Home Department [2021] EWHC 606 (Admin), [2021] 4 WLR 75 at [2].

[[28]]: www.judiciary.gov.sg/docs/default-source/news-and-resources-docs/guide-on-the-use-of-generative-ai-tools-by-court-users.pdf?sfvrsn=3900c814_1

[[29]]: www.judiciary.uk/wp-content/uploads/2023/12/AI-Judicial-Guidance.pdf

[[30]]: See for example ifaketextmessage.com or fakewhats.com. There are also free apps available for download, such as the Fake Text Message app on Google Play or Faker 2 on Apple Store.

[[31]]: For further discussion, see www.phb.co.uk/article/manufacturing-documents-in-divorce-proceedings. X v Y [2022] EWFC 95 at [28].

[[32]]: https://legal-forum.uchicago.edu/print-archive/deepfakes-court-how-judges-can-proactively-manage-alleged-ai-generated-material

[[33]]: www.judiciary.uk/wp-content/uploads/2025/04/Refreshed-AI-Guidance-published-version-website-version.pdf

[[34]]: www.lawsociety.org.uk/topics/ai-and-lawtech/generative-ai-the-essentials

[[35]]: www.barcouncil.org.uk/resource/new-guidance-on-generative-ai-for-the-bar.html; www.barcouncilethics.co.uk/wp-content/uploads/2024/01/Considerations-when-using-ChatGPT-and-Generative-AI-Software-based-on-large-language-models-January-2024.pdf

[[36]]: www.judiciary.gov.sg/docs/default-source/circulars/2024/registrar’s_circular_no_1_2024_family_justice_courts.pdf?sfvrsn=d0e5ad96_1

[[37]]: https://insight.thomsonreuters.com/sea/legal/posts/ai-in-the-judiciary-a-singapore-courts-perspective

[[38]]: Examples include Resistant.AI, designed to identify falsified evidence in documents which may not be apparent to the human eye, Amped Five, which can be used to validate the integrity of image and video evidence, and Pindrop, which helps detect audio deepfakes.

[[39]]: www.counselmagazine.co.uk/articles/deepfakes-in-the-courts

[[40]]: www.judiciary.gov.sg/docs/default-source/circulars/2024/registrar’s_circular_no_1_2024_family_justice_courts.pdf?sfvrsn=d0e5ad96_1

[[41]]: www.legalfutures.co.uk/latest-news/it-will-soon-be-negligent-not-to-use-ai-master-of-the-rolls-predicts
