This is a guest post from Gabriela Sánchez of the Universidad de los Andes, Bogotá
The International Arbitration system is experiencing a growing and novel phenomenon that could change the way we conceive of it today: the use of Artificial Intelligence (AI). The use and impact of AI on the legal profession is fast becoming a hot topic in legal, technology, and academic circles.[1] The benefits of AI are undeniable: it can enhance legal representation, augment human cognitive abilities, and automate time-consuming work. AI tools already play a significant role throughout the arbitration process and are used both by the parties to disputes and by arbitral tribunals. Indeed, AI-infused products and services are being used to draft legal documents, automate management and research tasks, quantify damages, assist in discovery, appoint arbitrators, and even draft arbitral awards.[2] AI clearly has the potential to change the International Arbitration system as we know it.
However, not all that glitters is gold: the implementation of AI also involves risks. With respect to its use by arbitral tribunals, I would like to highlight two features of AI that should be handled carefully: on the one hand, the lack of explainability of some systems, known as black box systems; and on the other, the lack of neutrality, or bias, in AI. These two risks are especially important in International Arbitration because they have the potential to weaken the reasonableness, coherence, and legitimacy of the system.
Lack of explainability
Firstly, the problem of lack of explainability arises when the developers and implementers of an AI system cannot explain how the program reaches a conclusion or prediction. Such systems are known as black boxes, and the problem occurs especially with Deep Learning programs. This is troubling because, as AI-enabled systems become critical to life, death, and personal well-being, the need to trust them is paramount. In response to these concerns, instruments such as the Data Protection Directive and the General Data Protection Regulation (GDPR) of the European Union establish a “right to explanation”, allowing individuals to demand an explanation of, and access to, the logic behind decisions made using AI.[3] Explainable AI therefore aims to ensure that developers and operators can answer key questions such as: Why did the AI system make a specific prediction or decision? Why didn’t it do something else? When did it succeed and when did it fail? When does it give enough confidence in a decision that you can trust it, and how can it correct errors that arise?[4]
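To make the point concrete, here is a minimal sketch in Python (using invented data and a generic model, not drawn from the sources above) of why a trained neural network is a black box: it produces a prediction and even a confidence score, yet its internals are nothing more than matrices of learned weights, with no legal reasoning attached.

```python
# A minimal, hypothetical illustration of the "black box" problem:
# a small neural network learns to predict outcomes, but the only
# "explanation" it can offer is a stack of numeric weight matrices.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 4))                   # hypothetical case features
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)  # hypothetical past outcomes

model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X, y)

new_case = rng.random((1, 4))
print("Predicted outcome:", model.predict(new_case)[0])
print("Model confidence: ", model.predict_proba(new_case)[0].max())

# Asking the model "why?" yields only raw numbers, not reasons:
for i, w in enumerate(model.coefs_):
    print(f"Layer {i} weights: {w.shape[0]}x{w.shape[1]} matrix of floats")
```

A human arbitrator asked the same question would cite facts, law, and argument; the model can only point to its weights.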
Not being able to answer these questions is especially problematic in judicial activity, where providing reasons and justifications is precisely one of its fundamental features. Indeed, in International Arbitration, parties expect arbitrators to reach duly reasoned and consistent decisions, which helps legitimize the entire system. For this reason, the fact that AI programs may not be able to explain their decisions the way humans do supports the view that AI will not replace arbitrators in decision-making in the near future.
Lack of neutrality
Secondly, AI may not be as “objective” and “neutral” as is often believed.[5] AI systems are trained, fed, and operated on data sets. The problem arises when the training data are themselves biased, in which case AI systems can become vehicles for reproducing human biases and prejudices.[6] Likewise, AI systems used to assist the decision-making of arbitral tribunals could reproduce the patterns embedded in the cases used as training data. Critical questions include: Which cases will be used for this purpose? Who will choose them? How will diversity be guaranteed, given that many cases are kept confidential?[7] In other words, a worldview or set of values inadvertently embedded in the prior cases used to train these programs can influence how the systems behave and the decisions they make. For these reasons, the use of AI has the potential to affect the neutrality of, and equality between, the parties, which are essential elements for maintaining the legitimacy of the International Arbitration regime.
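Again as a hypothetical sketch (invented data, not drawn from any real award database), the following Python snippet shows how a model trained on historically skewed outcomes reproduces that skew: two cases with identical merits receive different predicted outcomes simply because the party type differs.

```python
# A hypothetical illustration of bias reproduction: if past outcomes
# favoured one party type regardless of the merits, a model trained on
# those outcomes carries the same preference into its predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
party_type = rng.integers(0, 2, n)  # hypothetical label, e.g. 0 = investor, 1 = state
merits = rng.random(n)              # stand-in for the actual strength of each case

# Skewed history: party type 0 "won" even weak cases.
won = ((merits > 0.5) | (party_type == 0)).astype(int)

model = LogisticRegression()
model.fit(np.column_stack([party_type, merits]), won)

# Identical merits, different party type -> different predictions:
for p in (0, 1):
    prob = model.predict_proba([[p, 0.3]])[0, 1]
    print(f"party_type={p}, merits=0.30 -> predicted P(win) = {prob:.2f}")
```

Nothing in the code is malicious; the inequality of treatment comes entirely from the historical record the model was trained on.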
In conclusion, the rapid technological development we are witnessing today is a reality that cannot be ignored. AI has virtues that must be recognized and exploited in every field, from science and healthcare to finance and law. However, its application in International Arbitration should be handled with caution. In particular, the promised efficiency and neutrality of these systems should be treated with a healthy dose of skepticism if the legitimacy of International Arbitration is to be preserved. It should not be forgotten that, when AI is used by arbitral tribunals, the lack of explainability and the biases present in some AI-infused tools can threaten the neutrality, reasonableness, and legitimacy of the entire system.
[1] Bento, L. (2018). International Arbitration and Artificial Intelligence: Time to Tango? Kluwer Arbitration Blog. Retrieved from http://arbitrationblog.kluwerarbitration.com/2018/02/23/international-arbitration-artificial-intelligence-time-tango/.
[2] Ibid.
[3] Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Transparent, Explainable, and Accountable AI for Robotics. Science Robotics, 2(6). Retrieved from http://robotics.sciencemag.org/content/2/6/eaan6080.
[4] Schmelzer, R. (2019). Understanding Explainable AI. Forbes, July 23, 2019. Retrieved from https://www.forbes.com/sites/cognitiveworld/2019/07/23/understanding-explainable-ai/.
[5] See, for example, the reference to AI as “neutral and bias-free” in Scherer, M. (2019). The Vienna Innovation Propositions, International Arbitration 3.0 – How Artificial Intelligence Will Change Dispute Resolution, Austrian Yearbook on International Arbitration, Volume 2019.
[6] See, for example, the controversy regarding Hewlett-Packard computers with “racist” facial recognition systems at: https://www.wired.com/2009/12/hp-notebooks-racist/.
[7] Bento, L. (2018). International Arbitration and Artificial Intelligence: Time to Tango? Kluwer Arbitration Blog. Retrieved from http://arbitrationblog.kluwerarbitration.com/2018/02/23/international-arbitration-artificial-intelligence-time-tango/.