Many thanks to FRA Associate, Dinara Afaunova, for her outstanding contributions to this article and for going above and beyond to ensure the success of the event.
FRA joined this year’s Paris Arbitration Week (PAW), where arbitration and ADR practitioners from around the world gathered to hear eminent speakers discuss the latest industry insights and challenges. FRA Partner and Head of the Paris office, Yousr Khalil, moderated and co-hosted the panel ‘AI meets AI: International Arbitration in the Era of Artificial Intelligence’.
Over 60 PAW attendees joined to hear the expert panellists Noah Rubins KC (Freshfields Bruckhaus Deringer LLP), Kathryn Khamsi (Three Crowns LLP), Alexander Leventhal (Quinn Emanuel Urquhart & Sullivan, LLP), José Ricardo Feris (Squire Patton Boggs) and Laura Galindo-Romero (Meta) debate the potential for using Artificial Intelligence (AI) in International Arbitration, including its impact on decision-making. Key takeaways and photographs from the panel discussion are included below.
Key Discussion Points
- AI and its impact on the business and practice of law
- AI and bias in decision making
- AI and evidentiary challenges
Key Takeaways
The Need for a Targeted Use of AI for Arbitration Law
Today, AI tools can review far larger quantities of documents than humans, drawing correlations across their content and producing an “answer” within seconds. For example, AI can analyse a draft contract from an opposing party and point lawyers to potentially problematic clauses that warrant closer examination, an analysis based on the processing of similar contracts available online. Despite this technological prowess, panellists emphasized that what AI can do at this stage remains relatively limited when it comes to critical reasoning and creative problem solving; for example, AI is not able to assess the fairness of an arbitral decision-making process. Nevertheless, they recognized that AI helps simplify certain menial tasks for lawyers while creating a need for a different set of skills.
One factor limiting the use of AI in International Arbitration is the confidentiality of most arbitral awards. Judicial decisions related to arbitration cases offer little additional data, as many jurisdictions do not publish them.
Awareness of Biases and Limitations in AI Tools
The issue of AI as a ‘black box’ was raised during the discussion, as we often have only a limited understanding of how an AI algorithm is developed. Panellists emphasized that the data fed into models must be diverse to limit biases: processing data is what AI does best, but the output is only as good as the data the AI is ‘fed’. A crucial point is the disparity in the volume of data coming from different parts of the world and in different languages, which means AI is biased in favour of the largest content creators, such as the US.
The Predictive Power of AI for Decision-Making
Computer models can effectively recreate historical events to understand the consequences of decisions, and they can serve as predictive tools to anticipate how a system will behave in the future. Nevertheless, no such model can be perfect at this stage, as societal systems do not always behave in ways that are readily quantifiable (e.g. measuring popular support or the rule of law). This is compounded by an occasional lack of relevant data, which complicates the deployment of this kind of technology in International Arbitration, and solutions such as expanding the dataset may not always be straightforward.
The panellists concluded that while AI can serve as a powerful tool to assist throughout the proceedings, human judgement remains essential in arbitration.
View Photos from the Event
Please take a moment to browse through a selection of photographs from the panel session below.