
TSUKUBA FRONTIER


#042 Exploring "Moderate Trust" between Humans and AI: Desirable AI Regulations in Legal and Behavioral Science

Professor KIMURA Makiko, Business Law Group, Institute of Business Sciences


In recent years, progress in artificial intelligence (AI) and robotics has been remarkable, and as automation becomes increasingly prevalent, discussions about regulation and governance have followed. Amid this progress, people tend either to embrace these machines unquestioningly or, conversely, to harbor a strong aversion toward them. Integrating insights from behavioral science is therefore crucial in shaping the future legal framework for the responsible and appropriate use of AI.


The Intersection of Automation Technology and Law


Consider online shopping in which a computer algorithm operates without human intervention. This challenges the traditional understanding of contract formation set out in the Civil Code: it is ambiguous whether a sales agreement has actually been formed in such transactions. Moreover, in systems where users input personal information and receive personalized product suggestions, users cannot tell whether the system has been programmed to steer them toward particular choices.


As automation through robots and AI advances, numerous legal issues emerge across domains such as contract law, intellectual property, personal information protection, and constitutional law. For instance, when an automated vehicle causes an accident, determining responsibility, whether it lies with the driver, the manufacturer, or the seller, requires a nuanced examination of the causal relationships among various events. The opacity of AI computational processes, however, particularly in technologies like machine learning and deep learning, makes such legal determinations difficult.


Considering Human Bias in Regulation

To address these challenges, preemptive regulation is under consideration: establishing ethical principles for AI at the development stage in order to mitigate risks. Legal experts point to the dangers of algorithms whose inner workings are poorly understood, while engineers question to what extent it is acceptable to rely on such algorithms for the sake of convenience, and how development risks should be managed.


Human biases, however, remain elusive. Studies in behavioral science note that individuals tend to fall into cognitive patterns at both extremes: excessive trust in and reverence for AI, or complete distrust and aversion toward it. Defining a "moderately trusting relationship" with AI is complicated because attempts to mitigate bias are themselves subject to bias. Consequently, a new avenue of research has emerged that frames laws and regulations on the premise that human judgment inherently contains biases, drawing on findings from behavioral science.


Toward a New Legal Research Method

My research initially focused on the validity of contracts based on automated algorithms, a question prompted by a sense of urgency during my tenure at a securities firm. Working within the jurisprudential framework, I applied the comparative method, examining precedents from various countries and analyzing emerging laws and their contextual backgrounds. This led me to study the influence of AI and other emerging technologies on the regulatory framework governing business transactions and investment behavior.


What captivated me was the innovative prospect of integrating traits of human behavior and cognition into AI regulation. Behavioral science, already influential in economics and other fields, holds great promise for future development within jurisprudence.


Contemplating Limits and Purpose

There have been suggestions to pause AI development until certain safeguards are in place. At the same time, deliberation is needed on the extent to which AI should be integrated into our lives. While fully automated cars offer convenience, some people may resist them because they would lose the joy of driving. Automation that ignores human nature and its own purpose may diminish human agency, and substantial automation can lead to over-reliance or insufficient vigilance, with adverse consequences.


Robots and AI are tools for human life and activity. Rather than entrusting everything to them, we need to distinguish what should be delegated, aiming for a symbiotic relationship between people and machines. Achieving this requires weighing standards and regulations together with insights from a wide range of research disciplines and from public opinion, not just legal and engineering perspectives. Engaging behavioral science is a first step in that direction.


Profile

After graduating from the Faculty of Arts and Sciences at Tsuda College and gaining experience at a foreign securities firm, she continued her education at the University of Tsukuba, completing the Master's Program in Advanced Studies of Business Law and then the Doctoral Program in Systems Management and Business Law. She has served as a Professor in the Master's Program in Advanced Studies of Business Law at the University of Tsukuba, specializing in commercial law, corporate law, and financial instruments and exchange law. Her work centers on the intersection of law and technology, with a recent emphasis on incorporating insights from behavioral science into legal analysis. She has published a paper titled "Applying Behavioral Insights to the Design of Securities Regulation" in the Tsukuba Law Journal.


(URL:https://www.fbs.otsuka.tsukuba.ac.jp/en/group/bl/)


Article by Science Communicator at the Bureau of Public Relations

