One of the many recent mysteries is why hundreds of extremely intelligent and rich people think that a moratorium on further development of artificial intelligence is feasible, or that a six-month hiatus is sufficient for us to figure out what to do about it. Technology development is a ‘prisoner’s dilemma’ with millions of competing participants, making it impossible to get everyone to cooperate. Top-tier competitors are likely to cheat on the moratorium in the expectation that others will do the same, which would render such a moratorium useless and, worse, drive the industry underground.
Yet the call for a moratorium shows that even Silicon Valley’s techno-determinists are worried about the social consequences of artificial intelligence. They just don’t know what to do about it.
One of the biggest challenges of public policy in recent years has been how to govern an area broadly called ‘tech’. This includes trans-national technology platforms, social media, algorithm-driven information delivery and artificial intelligence. The advantages are clear, immediate and popular. The risks are not readily obvious and manifest themselves before we have had time to assess them properly. Countries committed to liberal democracy and the rule of law face a stiffer challenge: How do you mitigate risks without damaging basic freedoms and civil liberties? The answer, in many countries, has been to abridge freedoms in the pursuit of national security and public order. The inability of public policy to catch up with technology is the big story of our times.
The answer is not to slow technological development down, even if that were possible. It is to speed up public policy by throwing the world’s best minds at the problem. Instead of calling for moratoria, tech billionaires would do better to direct massive financial resources into technology policy research. It would appear a no-brainer to suggest that accelerating human intelligence is a good way to manage the artificial variety.
What do we do in the meantime? The place to start is to identify what it is that we should protect. That, in my view, is cognitive autonomy, or the human freedom to think. The reason social media platforms have so much political power is that they can influence what individuals and entire societies believe. Rather than fearing that general artificial intelligence will enslave us, we should be more concerned about some humans using artificial intelligence to accumulate power over others: power that is undeserved, unaccountable and unchallengeable. So protecting the mind from being influenced without its consent, and without societal safeguards, should be our first step.
In recent years, you have surely been irritated by websites asking permission to drop cookies into your browser. After the EU mandated this under the GDPR, the once-free information highway has been riddled with turnstiles and speed breakers, slowing down the flow of information. I am still irritated by these cookie warnings, but I no longer resent them. That is because I realized that whatever their role in protecting privacy, they warn consumers of something that ought to concern them. And they have this effect precisely because they get in the way and irritate us.
This suggests a way forward for technology that uses influence algorithms and artificial intelligence: Present users with clear warnings and obtain consent before delivering those messages, videos or chat sessions. Give users the ability to opt out and settle for a non-algorithmic, non-AI-enhanced digital life. After all, caveat emptor is one of the oldest concomitants of market capitalism. It needs to be made real for the Information Age. And if sellers don’t voluntarily disclose information, market regulators must require them to do so.
Such an obligation to declare the tools being used and obtain consent should be accompanied by penalties for non-consensual, covert or coercive information delivery. Indeed, the entire information supply chain can be secured in this manner. Upstream providers must inform their downstream counterparts of algorithmic or AI-generated content, triggering the requirement to get ultimate end-user consent. Yes, this adds to the compliance burden of all information providers on the web, from individual blogs and websites to massive global platforms. However, the stakes are so consequential for human civilisation that the additional costs are worth it.
It’s a bit like labelling on food products. Displaying health warnings on tobacco and alcohol products strikes a balance between public health and individual choice. Nutritional information allows people to choose how much and what kind of food they wish to consume. We could do the same for information products. Indeed we should do the same for any technology that has the potential to impact cognitive autonomy.
As I wrote in my previous column, we should not allow dire predictions of future apocalypses to get in the way of doing what we can to manage immediate risks. Despite the wonderful achievements and potential of ChatGPT and generative artificial intelligence, many of the dramatic threats attributed to AI are speculative. Fixating on them causes us to ignore the immediate and extant threat: the one to our epistemology. While we do not know the full answer yet, we know where to start.
Nitin Pai is co-founder and director of The Takshashila Institution, an independent centre for research and education in public policy.