Application of Criminal Liability in Crimes Related to Artificial Intelligence

Introduction

Artificial Intelligence is any computer programme that can independently perform a task or set of tasks normally done by humans. Digital assistants, driverless cars and chatbots are some everyday examples of AI. AI has the potential to massively improve automation, reduce human involvement in tedious tasks and even provide specialized assistance to medical personnel. This growing synergy between humanity and artificial intelligence, however, has also opened the door to a rather intriguing question: how do we apply criminal law in instances where an AI system is involved?

Overview

Questions surrounding the criminal liability of Artificial Intelligence first reached the mainstream in 2018, when one of Uber’s self-driving cars struck and killed a 49-year-old woman who was pushing a bicycle across a road in Arizona. Who was liable here? Was it the manufacturer, or was it Uber? Or was it the car’s backup driver? In this case, it was the backup driver who was charged with negligent homicide, on the basis that said driver could have intervened and taken control of the vehicle at the crucial moment.

Should the AI operator be held liable in all cases, or does the answer differ with the circumstances? This question can largely be answered by the three models of criminal liability for AI entities proposed by the renowned legal scholar Gabriel Hallevy.

The Perpetration-via-Another Liability Model

In this model the AI is considered an innocent agent, possessing no human or sentient qualities. The model deems the capabilities of the AI entity insufficient to consider it a perpetrator of an offense; its position is akin to that of a minor or a person of unsound mind. Liability therefore falls on the programmer or user who directed the AI, as a perpetrator acting through an innocent agent. Uber’s self-driving car falls under this category, as it is simply an instrument, albeit a sophisticated one.

The Natural-Probable-Consequence Liability Model

This model is built on the basic principle that a person may be held liable for an offense if that offense is a natural and probable consequence of that person’s conduct.

According to Hallevy, the natural-probable-consequence liability model may be suitable in situations where the AI has committed an offense while the programmer or user had no knowledge of, intention towards, or participation in it. However, for these parties to be liable for the acts of the AI program, a key condition must be met: the programmer or user must have been negligent in some way. That is, the criminal act of the AI program would have been predictable and avoidable but for their negligence. Programmers and creators are not required to foresee every possible offense that could result from their activity, but they are required to be aware of offenses that might arise as a natural and probable consequence of their actions. Thus, if the parties should have been aware of the probability that the specific offense would be committed, they are criminally liable for that offense, even if they were not actually aware of the act.

The Direct Liability Model

This model does not assume any dependence of the AI entity on a specific programmer or user; the focus is on the AI itself. That is, if the AI is shown to possess both the actus reus (the external element) and the mens rea (the internal element) of an offense, it can for all purposes be held directly liable.

Legal Application

Hallevy’s liability models undoubtedly provide a solid foundation to work on. However, there are some practical pitfalls in putting them into practice.

Hidden stakeholders

One complication concerns the number of stakeholders. While it may appear at the outset that the only major stakeholders in an AI program are its creators and users, this is not quite the reality. Hallevy’s models fail to account for the designers of the AI hardware, hardware manufacturers, maintenance engineers and other possible third parties and intermediaries. Problems within the AI could arise from mistakes on the part of any of these parties as well. Therefore, it is pertinent that they too be included as part of the liability chain.

Rogue AI programs

Quite recently, Luda Lee, an AI-powered chatbot, went rogue. Created by a South Korean startup to chat with users over Facebook, it soon started displaying alarming behavior: it made bigoted statements against women, minorities, foreigners and those with disabilities, and it even went so far as to share private user data. Such a situation begs the question of liability. Would it be reasonable to hold the creators liable for something unintended and unpredictable that had clearly gone beyond their control? Then again, would it be safe to let such offenses go unpunished, especially when they are discriminatory in nature and even extend to leaking personal data? This expands into the larger problem of whether autonomous AI programs can truly be punished, given that they cannot respond to punishment the way human beings do. However, punishment in the form of deactivation and reprogramming could serve the purpose to some extent.

Giving AI legal personality

There is considerable debate among legal scholars on whether AI programs should be granted legal personhood. Those in favour say that granting artificially intelligent entities legal personhood would make them accountable under the law, just as corporations are. They further argue that this would prove crucial as a preliminary step in equipping the existing legal system to handle AI-related legal problems in the future. Scholars on the other side argue that while AI personhood might become necessary in the future, bestowing it now would be premature. This argument rests on the fact that the scope of AI is at present far from clear, and that there is little clarity as to what economic gains or benefits could be reaped from such a bold move. Further, there is a strong argument that AI programs are yet to achieve the cognitive and moral capacities required to be considered legal persons, since the science behind Artificial Intelligence is still relatively in its infancy. This opinion is shared by the European Union, which has recently issued three resolutions addressing the growth of AI; interestingly, all three stood firmly in opposition to providing AI programs and entities with any form of legal personality.

Indian Context

In June 2018, NITI Aayog, the public policy think tank of the Government of India, released a policy paper titled ‘National Strategy for Artificial Intelligence’, which examined in detail the importance of AI across a variety of sectors. The 2019 budget also proposed launching a national programme on AI. However, in spite of this strong show of intent on the technological front, there has been no initiative to lay a legal foundation for the challenges that AI might bring. It is very important that India start drafting a legal framework to deal with AI programs.

Other countries are further along. The USA is already preparing draft regulations on driverless vehicles. In Germany, the Road Traffic Act contains provisions on the liability of automated and semi-automated vehicles. In addition, there is the EU resolution on robotics, which addresses AI programs as well. Russia, too, is considering legislation akin to that of the EU, the so-called Grishin Law, a draft that would amend provisions of the Civil Code of the Russian Federation. The draft pushes for strict liability: whether or not the AI is autonomous, it imposes the full burden on the developer, operator or manufacturer. The legislation is also expected to set out the legal conditions under which AI can be used in society. Given that many other large countries have already taken steps to reform their legal systems to better suit AI, it is pertinent that India follow suit. To be truly prepared for the AI revolution, it would be wise to revamp and restructure some of our existing laws.

Considering that AI, and the automation it brings, might disrupt labour markets, it is crucial to restructure our labour laws, which have remained largely unchanged for quite some time.

Data is deeply interconnected with AI. India’s legislation surrounding data, especially personal data, is insufficient at best, though the government has made some effort to rectify this. In 2009, Section 43A was inserted into the Information Technology Act, 2000. It essentially creates a private cause of action: a person who suffers loss because a body corporate was negligent in implementing and maintaining reasonable security practices and procedures while handling sensitive personal data can claim compensation from it. While this provision may look good on paper, it is not necessarily effective in reality, largely because of the slow pace of India’s justice system. As such, it is a must that we create stronger data protection laws, so that a solid framework is in place when AI technologies eventually become widespread in India.

Conclusion

The development of AI programs could prove instrumental in facilitating humanity’s continued advancement. However, to mitigate unforeseen problems, it is of utmost importance that we begin building a proper legal framework for AI. This is not limited to laws pertaining directly to AI; it includes legislation in connected fields such as data protection and intellectual property. Furthermore, to set up a strong AI framework, we must focus on resolving the problems posed by rogue AI programs and by hidden stakeholders who are not accounted for. The ongoing debate on bestowing legal personality on AI programs also needs to be resolved, in order to create some semblance of a solid legal framework for AI.


Mr. Pratap Muthalaly

Winner in the 1st edition of the article writing competition, “लेख-SHASTRA”.
