India needs a principles-based approach to regulate AI

Last week, amid all the buzz and hoopla around artificial intelligence (AI), an open letter authored by a few notable individuals threw a bucket of cold water on all the excitement. Apparently inspired by a letter from the Future of Life Institute, it raised fears that AI would threaten jobs in the country and hinted at the doom that would befall us if we did not immediately regulate it.

As regular readers of this column will know by now, I am bullish on AI. I believe it will be the transformative technology that creates the next big orbit shift in the way society functions. As with all ‘tech’-tonic changes, it will change the ways in which we work, making many of the jobs that currently exist irrelevant. But in their place will come new jobs, and fresh skills that humankind will have to learn in order to make the most of the opportunities that it offers. Which is why I don’t share their pessimism.

Having said that, there certainly is merit in starting to think about how AI ought to be regulated. There is no doubt that it is on track to becoming an all-pervasive technology that seeps into various aspects of our lives. That being the case, many of the regulatory frameworks we currently rely on will become redundant. And it is never too early to start thinking about how to deal with this.

As it happens, over the past few years, more than a few countries have attempted to do just that. The US Office of Science and Technology Policy issued a Blueprint for an AI Bill of Rights that took a predictably laissez-faire approach. Apart from reiterating the need to protect users from unsafe and ineffective systems, ensure that AI systems are designed so that they don’t discriminate, and take steps to address privacy concerns around notice and user autonomy, the blueprint did not explicitly stipulate what AI companies actually have to do.

The European Commission, on the other hand, has come up with a full-blown legislative proposal that lists in excruciating detail how it is going to regulate “high-risk AI systems”. This includes requiring AI companies to ensure constant iterative evaluation of risks; making sure they only use error-free datasets for training; and imposing on them an obligation to establish audit trails for transparency. It also intends to establish a European AI Board and institute a penalty regime even steeper than that of the General Data Protection Regulation (GDPR), with fines of up to 6% of global turnover for transgressions.

Both these regulatory proposals attempt to fix what we believe we know—based on our current experience—is wrong with algorithmic systems. They seek to prevent the discrimination we’ve seen these systems perpetrate because they were trained on human data, with all its implicit biases. And they attempt to mitigate the privacy harms that could occur when AI systems use our information for purposes other than those for which it was collected, or process it without notice.

These are issues that do need to be addressed, but designing our regulatory strategy to solve issues only after they have become problems will not help us deal with a technology capable of evolving as rapidly as AI. Applying a traditional approach to liability would be just as misplaced.

From what we have seen so far of generative AI, it is capable of unpredictable emergent behaviour that often has no bearing whatsoever on the programming it received. These systems are adaptive, capable of making inferences far beyond what their human developers might have envisioned. They are also autonomous, making decisions that often have no correlation whatsoever with the express intentions of their human creators, and that are often executed beyond their control. If our regulatory solution is to hold the developers of these systems personally liable for this emergent behaviour, they will be forced to shut down further development for fear of the liabilities they would suffer on account of the very emergent behaviour that is the technology’s strength.

What if there is another way? What if we adopt an agile approach to AI regulation that is grounded in a set of cross-cutting principles that describe, at a very high level, what we expect AI systems to do (and not do)? We can apply these principles across all the different ways in which AI is, and will be, deployed—across a wide range of sectors and applications. Sector regulators can then refer to these principles, using them to identify harms at the margin and take appropriate corrective action before the effects become too widespread.

This is the approach that the UK government seems to be taking in its recently published white paper, A Pro-Innovation Approach to AI Regulation. Rather than putting in place a new regulatory framework, it intends to follow an agile and iterative approach designed to learn from actual experience and to continuously adapt. Recognizing that rigid legislation can retard technological innovation, it does not intend to place these principles on a statutory footing. Instead, it is looking to issue them on a non-statutory basis, so that they can be implemented by existing regulators, which will leverage their domain-specific expertise to tailor regulations to the specific contexts in which AI is used.

So far, India has refrained from regulating AI despite the exhortations of a few to do so post-haste. However, when we do eventually start, we would be well advised to follow the UK approach. AI has much to offer us and we should not stifle its potential.

Rahul Matthan is a partner at Trilegal and also has a podcast by the name Ex Machina. His Twitter handle is @matthan.
