
Why Do We Need a Legal Framework for AI in India?

With vast amounts of data, much of it personal, collected, maintained and processed every day, inadequate security and ethical concerns are just a few of the drawbacks that come with AI.

The inventions of the printing press and, centuries later, industrialisation were pivotal moments in the never-ending human quest for technological advancement, and they had a massive influence on the way societies thought, functioned and interacted with each other. Sure, computers and digital technology have been available to us for decades now, but we are only now realising their potential and the extent to which we can stretch and enhance their capabilities.

India stepped onto the global technological stage with a bang back in 2015 when it launched the Digital India campaign. Since then, in just over six short years, it is safe to say that India has been witnessing an accelerated state of innovation. From administration, governance and law to communication and entertainment, almost every function in the country is now shaped and enhanced by advanced technology – to the extent that today, words like Artificial Intelligence and Machine Learning have seeped into everyday vocabulary. The Government of Telangana rolled out the country's first AI-assisted process for welfare schemes; the Supreme Court of India launched its own AI-based portal just this year to assist judges with research; and, at the time this article was written, the Finance Ministry was developing the new Income Tax portal as a Mission Mode Project under the National e-Governance Plan, a monumental initiative to connect e-Governance systems throughout the country and create a nationwide network for electronic delivery of government services.

The technology comes with a lot of convenience and advantages, yes; but as with any evolving trend, it also carries its own risks. In an interview with The Hindu last month, Anil Valluri, regional VP of Palo Alto Networks, a multinational cybersecurity company, stated that "ransomware will dominate the cybercrime landscape". A study conducted by ProPublica, a US-based non-profit organisation known for its investigative journalism, found that AI-based pretrial risk assessment systems carried racial and economic biases. With massive amounts of data, especially personal data, collected, maintained and processed every day, inadequate security and ethical problems are just a few examples of the drawbacks that come with AI. In response to such risks, earlier this year, the EU proposed a legal framework for AI, one that focuses on mitigating the risk factors specific to AI-enabled solutions.

Advanced technologies like AI-based programs, self-driving cars and automated processes were devised with the sole purpose of assisting human beings, of enabling and enhancing the scope and reach of human endeavour. There is no doubt that the technology at hand is up to the task. Seeing that it is evolving and improving every day, there is little debate amongst experts as to whether the solutions we devise now will be able to cater to the needs of the world's largest democracy. But with so much at stake, and so much dependent on the correct and rightful use of Artificial Intelligence and Machine Learning, it stands to reason that India, too, should adopt a legal framework on Artificial Intelligence, one that imposes certain obligations on businesses and enterprises across multiple sectors.