
AI in Arbitration: Faster Cases, Fragile Awards?

By Omar Guerrero and Eduardo Lobatón | Hogan Lovells
Office Managing Partner
Thu, 12/11/2025 - 08:00

(In collaboration with Fernanda Serrano, Associate at Hogan Lovells)

Arbitration has long been the preferred dispute-resolution mechanism for cross-border business. Its appeal lies in party autonomy, confidentiality, the specialization of its decision-makers, its suitability for complex disputes, and procedural flexibility, but above all in the production of a binding award that can be recognized and enforced internationally under the New York Convention. This enforceability is arbitration's "superpower," enabling parties to resolve disputes with confidence that the outcome will be respected across jurisdictions. However, this very feature now faces new challenges from the rapid integration of artificial intelligence into the arbitration process.

The arrival of generative AI in 2023 marked a paradigm shift in knowledge work, including legal practice. By 2024, what began as cautious experimentation had quickly become normalized, with law firms and arbitral institutions adopting AI tools to streamline case preparation and management. By 2025, the pace of adoption had accelerated further, prompting the development of internal policies and procedural-language templates, along with robust debates about disclosure obligations and ethical boundaries. AI is no longer a novelty in arbitration; it is becoming an integral part of how cases are prepared, managed, and even decided.

AI tools now assist with a range of tasks: summarizing submissions, organizing and analyzing evidence, conducting legal research, and even drafting procedural documents or sections of awards. These technologies promise significant benefits, including reduced costs, faster case resolution, and improved efficiency. For parties and counsel, the ability to process large volumes of information quickly can be a game-changer, especially in complex, document-heavy disputes.

Despite these advantages, the use of AI in arbitration introduces new risks, particularly regarding the enforceability of awards. Arbitration is ultimately judged not by the speed of its proceedings but by the strength of its results: specifically, whether an award can withstand scrutiny at the seat (in annulment proceedings) and before foreign courts at the enforcement stage. The use of AI, if not carefully managed and transparently disclosed, can undermine the perceived legitimacy and independence of the arbitral process.

The core issue is not the use of AI as an assistant tool, but rather concerns about attribution, independence, and due process. If parties suspect that a tribunal has delegated its reasoning to an AI tool, relied on information outside the evidentiary record, or introduced opaque “analysis” into its decision-making, they may challenge the award on grounds of excess of mandate or breach of due process. In enforcement proceedings, perception is critical: courts expect a process that is fair, transparent, and firmly anchored in the evidence and arguments presented by the parties.

The ongoing US case of LaPaglia v. Valve Corp. has become a focal point in this debate. The claimant alleges that the arbitral tribunal improperly relied on AI in rendering its award, pointing to the unusually rapid issuance of a 29-page decision just 15 days after final submissions during a holiday period, references to facts not in the record, and even an “authorship test” suggesting AI-generated text. The respondent, Valve Corp., counters that stylistic quirks and factual errors are not unique to AI and that mere speculation about AI use should not meet the high bar required for annulment or a finding of misconduct.

This case highlights a new frontier in annulment and enforcement proceedings: challenges not based on misapplication of the law, but on the alleged outsourcing of judicial reasoning to machines. While the court has yet to decide whether AI involvement alone can justify setting aside an award, the case serves as an early warning for arbitrators and practitioners alike.

The lesson for arbitrators and practitioners is not to avoid AI altogether, but to ensure that the enforceability of awards is protected through traceability and transparency. Awards must clearly reflect human judgment, grounded in the evidentiary record and legal arguments presented. In an era of generative tools, best practice requires a “defensive architecture” approach: disclosing material AI assistance when appropriate, rigorously verifying every fact and citation, and ensuring that the chain of reasoning remains the tribunal’s own.

Arbitral institutions and practitioners are beginning to respond by developing guidelines and protocols for AI use. These may include requirements for disclosure of AI assistance, human verification of AI-generated content, confidentiality restrictions, and even sanctions for improper use. Some institutions are considering the inclusion of AI-specific provisions in procedural orders, addressing issues such as the provenance of digital evidence and the maintenance of audit trails documenting how AI outputs were validated.

For businesses and their legal teams, the key takeaway is the need for a proactive enforceability strategy. This means addressing AI use at the outset of proceedings and building clear provisions into procedural orders on disclosure, verification, and confidentiality. Parties should demand transparency about the use of AI in the preparation of evidence and submissions, and insist on robust validation processes to ensure the integrity of digital evidence.

Maintaining an audit trail of how AI outputs were generated and validated can be crucial in defending the enforceability of an award. In addition, parties should stay informed about evolving best practices and legal standards regarding AI in arbitration, as these will continue to develop in response to new technologies and emerging case law.
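Purely as an illustration for the technical teams supporting counsel, one simple way to make such an audit trail tamper-evident is to hash-chain its entries, so any later alteration of an earlier record breaks the chain. This is a minimal sketch, not a prescribed standard; all field names and tool names below are hypothetical:

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_entry(tool, task, output_summary, reviewer, prev_hash=""):
    """Build one tamper-evident audit-trail record for an AI-assisted step.

    Each record notes which tool was used, what it was asked to do, what it
    produced, and the human who verified the output, then seals the record
    with a SHA-256 hash that also covers the previous entry's hash.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                       # AI tool used (hypothetical name)
        "task": task,                       # what the tool was asked to do
        "output_summary": output_summary,   # what it produced
        "verified_by": reviewer,            # human who validated the output
        "prev_hash": prev_hash,             # links entries into a chain
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record

# Two chained entries: editing the first afterwards would invalidate
# the prev_hash stored in the second.
e1 = audit_entry("summarizer", "summarize exhibit C-14",
                 "3-page summary, checked against the record", "F. Serrano")
e2 = audit_entry("citation-check", "verify authorities cited in draft",
                 "2 citations corrected", "F. Serrano", prev_hash=e1["hash"])
```

The chaining means the log need only be stored append-only; validating it later is a matter of recomputing each hash in sequence.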

Arbitration will undoubtedly remain a powerful forum for resolving cross-border disputes, but the integration of AI introduces new complexities that must be carefully managed. While speed and efficiency are valuable, they cannot come at the expense of enforceability and legitimacy. The real measure of success in the AI era will be the ability to deliver awards that not only resolve disputes quickly, but also withstand scrutiny and enforcement across jurisdictions. By embracing transparency, traceability, and robust procedural safeguards, the arbitration community can harness the benefits of AI while preserving the integrity of its most valuable asset: the enforceable award.
