
ARTIFICIAL INTELLIGENCE AND LEGAL PERSONALITY: A FUTURISTIC ANALYSIS OF THIS RELATIONSHIP

By: Yashi Bajpai*




Introduction


In the 21st century, the pervasiveness of Artificial Intelligence has taken the world by storm. Over the last decade there has been a great deal of concern that the tenets of constitutional law are not being adequately regarded while artificial intelligence technologies are developed. This article discusses the much-debated issue of granting legal personality to AI, the burgeoning AI industry, the socio-legal concerns around this technology, and its scathing potential if left unscrutinized.




According the status of a legal person to a computer


One of the most important issues being widely discussed and debated is the accordance of a “legal personality” to a computer and the rationale behind this idea. The concept of “legal personality” has changed considerably over the years. One of the earliest instances of widening the ambit of legal personality was brought about by an executive mandate, when black slaves were finally recognized as legal persons after decades of being treated as recoverable items of property.


When we talk of according the status of a “person” to an object, various legal analogies could be used to determine whether such a status should be extended to computer systems and, if so, what limitations should be placed upon that recognition. Modern computers can gather, create, synthesize, and transmit vast seas of information even as they become more “human-like”: they are increasingly interactive, effective, and corporeal. To fairly examine this possibility of legal recognition, it is critical to understand that behaviour matters more than appearance, and mental capability matters more than physical traits. Ideally, recognition should be extended to an entity that is capable of behaving as, and exhibiting, a legal personality.


Another critical argument that has been put forth is that a human being who is functionally “brain dead” is still considered human, whereas a computer that unobtrusively displays more intelligent behaviour is not. This comparison between a comatose human being and a computer is cogent and deserves consideration.

Of course, one cannot deny that AI cannot acquire the inherent qualities of a human, be they intentions, feelings, betrayals, desires or interests. Even if an AI machine were to display any of those qualities, it would only be mimicking human traits and nothing beyond. This is the argument presented against granting legal personhood to AI systems.[1]



The collision of AI and constitutional principles


Given the foreseeable pervasiveness of Artificial Intelligence in modern societies, the most pivotal question is how this new technology will help the citizenry strengthen constitutional democracy. Will AI systems be able to uphold the core elements of western, liberal constitutions, i.e., human rights, democracy and the rule of law: the ‘Trinitarian formula’ of our constitutionalist faith? Will there be any accountability for AI if it commits a transgression against these elements of democracy?


It is of enormous importance that the principles of this triad be imbibed into AI, especially given the prevailing issues surrounding Big Data, the sensors of the Internet of Things, and the like. One can easily predict the future of AI: the technology will be infused into the governance of major sectors including health, education, law, security, identification, innovation and business. The absence of a regulated framework within which the technology is intended to operate has already contributed to wreaking havoc and raising undesirable challenges. It has put democratic values in danger and unleashed information carelessly, the Facebook-Cambridge Analytica scandal being the most dangerous wake-up call in this respect.


The case of Cambridge Analytica has been one of the biggest eye-openers in the world. A company whistleblower, Christopher Wylie, revealed that misuse of data and “cheating” by the company may have helped alter the results of both the US elections and the UK’s Brexit vote, and that voters’ minds were being psychologically swayed using specialist communications techniques previously developed by the military to combat terror organizations. Wylie explained how Cambridge Analytica was able to build a complex data-analysis tool to target potential voters during the US campaigns. This included ties to AggregateIQ, or AIQ, a Canadian company which aided in developing the underlying algorithm that Cambridge Analytica used to target Facebook users. Unsurprisingly, both companies denied any involvement.


The need for laying down a framework for the future relationship between technology and democracy cannot be realized without understanding the tremendous concentration of power in the hands of a few internet giants. AI was developed as an add-on to the digital internet economy, but given its recent growth, it is safe to assume that it will soon come to dominate it.


We must discern and shoulder some basic realities: there is a clear demarcation between the theoretical potential of AI, the purpose it is meant to serve, the actual purpose for which its developers create it, and the ways in which it will be biased towards mimicking their interests. We must strive to gauge this potential and how it is slated to be used by a few corporate giants in whose hands sweeping digital powers over internet services are concentrated. Google, Facebook, Microsoft, Apple and Amazon are the “frightful five” who shape our internet experiences. It would be naive of us to ignore the reality that how we use the internet, and what the internet delivers back to us, is largely influenced by the desires and aspirations of these mega-corporations, which are thriving on our data, now more valuable than oil.




Insidious threats posed by Artificial Intelligence



When a self-driving vehicle ran over a pedestrian, it showed that the law on these issues is still uncertain and needs to be codified. In a history of “Google Books”, Scott Rosenberg describes the early attitude of Silicon Valley as engineering-driven and without any respect for the law. Students were taught disruptive innovation across the best business schools of the country, legitimizing disrespect for the law at every step. These heroes and masters of the disruptive internet did not just express their dissent against the Government; they also flagrantly flouted intellectual property rights and tried to manipulate tax-collection procedures.



Fortunately, not all of this notoriety went unnoticed; some of it surfaced, which ultimately led to some landmark decisions by the European Commission. Apple was brought to book and made to pay 13 billion euros in unpaid taxes to Ireland. And where the aim was to mislead the regulators by not divulging the truth, exactly what took place in the Facebook/WhatsApp merger case, the European Commission ended up making Facebook pay 110 million euros.



Throwing light on another recent example, MIT terminated its research collaboration with a Chinese artificial intelligence company, iFlytek, which was accused of supplying technology to China for surveillance of 13 million Turkic Muslims, including Uyghurs and ethnic Kazakhs, in the Xinjiang region. China has recently been in the news for actively oppressing this ethnic minority and perpetrating systematic abuses, including rape, on the pretext of morally cleansing and re-educating them. Though MIT did not state its reasons for calling off the collaboration, there was strong speculation that human rights concerns were the most plausible reason. Maria Zuber, Vice President for Research at MIT, said: "We take very seriously concerns about national security, economic security threats from China and other countries, and human rights issues."



We also witnessed Google issuing an apology because its Vision AI produced racist results. Cases can be prickly, especially when gargantuan powers are bestowed upon corporations that operate in an unregulated and unscrutinized manner. Philip Alston, the UN Special Rapporteur on Extreme Poverty and Human Rights, identified in a report to the UN General Assembly last month that governments are reluctant to regulate tech firms for fear of stifling innovation, while at the same time the private sector is averse to integrating human rights into the design of its systems; it is not their priority. He also drew attention to a speech given by UK Prime Minister Boris Johnson at the UNGA last year, warning that we are slipping into a world of round-the-clock surveillance, the perils of algorithmic decision-making and the difficulty of appealing against computer determinations.



On a positive note, AI is coming to the courts, bridging the gap between law and science. Several European judicial systems have already implemented cyber-justice tools that facilitate access to justice, improve communication between courts and lawyers, and provide direct assistance to judges and court administration. Courts are already experimenting with machine-learning applications and predictive-justice tools. For example, the Superior Court of Los Angeles uses Gina the Avatar, an online assistant, to help residents manage their traffic citations. Also, with the deployment of automation, the judicial system can increasingly improve access to justice for poorer sections of society who face financial constraints in hiring a lawyer or taking time off for a court date. Technology cannot replace judges, but it can certainly help predict the outcome of cases, thereby helping to reduce the enormous number of pending cases.



Coming back to the question of legal personality, the rise of the modern corporation has arguably carved out an easier path for computer systems to gain legal recognition. A corporation is an artificial person: it can purchase or sell property, enter into contracts and commit crimes, but it cannot marry or vote. If one looks behind the curtains of the corporation, it is often a computer that is actually doing the work. A more cogent understanding could be developed by drawing similarities between an artificial person and a computer rather than with a human being. The question that comes up, therefore, is not “if” but “when”.



Among the other arguments presented against granting legal personhood to a computer is the issue of AI attaining social dominance over humans. If AI were to become more intelligent than man and thereby attain human status, it could aid corporations in the hiring and firing of people, thereby becoming a corporate overlord. When Sophia, a humanoid robot, was granted citizenship in Saudi Arabia, a first, many people including feminist scholars protested against the move because it gave Sophia more rights than Saudi women possess, declaring it an attack on their dignity.




Conclusion



Since corporations were brought under the ambit of legal persons, it can safely be implied that computers have almost received de facto recognition, and sooner rather than later they are set to gain de jure recognition. A legal minimum-standard test of personality could possibly be satisfied by a computer system in the near future. The fear looms, and certain experts have warned, that AI learning machines could exploit workers, earn money by using algorithms to price goods and manage investments, and find new ways to automate key businesses. But this only calls all the more urgently for stringent laws to be codified at the earliest. Policymakers will have to ensure that technology does not tamper with the fundamental rights of persons in any way.



Further, it is indisputable that new forms of governance are emerging which bank significantly upon the processing of vast quantities of digital data from all obtainable sources, using predictive analytics to foresee risk, automate decision-making and remove discretion from human decision-makers. In such a world, citizens are becoming increasingly visible to their governments, but not the other way around. The huge tech companies have already invaded our personal spaces and continue to do so for monetary benefit. It is pertinent that AI and constitutional law go hand in hand; one cannot be allowed to undermine the other.


Taking note of the pandemic that has engulfed the world right now, AI-driven online care and education facilities are exploding. It is time we realized that the technologies which are supposed to aid us in difficult times can also be hugely misused, taking advantage of our misery, helplessness, lack of options, and lack of mobility and flexibility in such a scenario. We cannot stress enough how important it is to have the strong backing of laws and regulations to discourage violations, misgivings and loopholes that can be exploited against the interests of consumers. Now is the time to become more conscious than ever.


***


* The author is a student at Rajiv Gandhi National University of Law, Patiala.



[1] Roman Dremliuga, Pavel Kuznetcov & Alexey Mamychev, Criteria for Recognition of AI as a Legal Person, 12(2-3) Journal of Politics and Law (2019).






