At the same time, we have witnessed the rise of artificial intelligence (AI), driven by tools like ChatGPT. AI has captured the imagination of many with its ability to achieve efficiency gains across sectors such as healthcare, education and public service delivery.
With both DPI and AI promising to improve societal outcomes, a critical question arises: Can AI and DPI work together to achieve shared goals? And if so, will their integration amplify risks such as data misuse, privacy violations, exclusion and systemic bias?
Real-world examples have already shown the potential harms of these technologies, from incorrect automated cash transfers in Telangana to biased fraud-detection algorithms in the Netherlands. While AI can enhance DPI to improve public services, realizing these benefits will depend on data quality, robust safeguards, the choice of appropriate AI models and strong accountability mechanisms.
Despite these risks, AI and DPI have already been working together for some time to deliver public services.
There are many processes in these delivery systems that can be automated with AI. For example, cash transfers for pensions or scholarships can be disbursed automatically once beneficiaries meet predetermined eligibility criteria. This requires two components: a platform interoperable with databases to determine eligibility (a key feature of the DPI approach) and an AI model to assess eligibility (as defined by humans) and execute transfers. Automated decision-making systems (ADMSs) perform exactly this role.
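To make this concrete, a rule-based eligibility check of this kind can be sketched in a few lines of Python. The scheme, field names and thresholds below are hypothetical, and a real ADMS would query databases through the DPI layer rather than in-memory records:

```python
from dataclasses import dataclass

# Minimal sketch of a rule-based ADMS. All fields and thresholds
# are illustrative assumptions, not those of any actual scheme.

@dataclass
class Applicant:
    age: int
    annual_income: float
    is_enrolled: bool  # e.g., present in the pension registry

def is_eligible(applicant: Applicant) -> bool:
    """Apply human-defined eligibility rules for a pension transfer."""
    return (
        applicant.age >= 60
        and applicant.annual_income <= 100_000
        and applicant.is_enrolled
    )

def disburse(applicant: Applicant, amount: float) -> None:
    """Placeholder for the payment leg (e.g., a bank-transfer API)."""
    print(f"Transferring {amount} to beneficiary")

if __name__ == "__main__":
    a = Applicant(age=65, annual_income=48_000, is_enrolled=True)
    if is_eligible(a):
        disburse(a, amount=2_000.0)
```

The rules themselves are transparent and human-defined; as the cases discussed below show, the harder problem is the quality of the data they run on.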
Indeed, ADMSs are already used by Indian government agencies. For instance, Telangana’s Samagra Vedika identifies and disburses cash transfers to eligible beneficiaries, while smart-city missions use such models to analyse CCTV data and alert officials to law-and-order situations.
Beyond automation, AI models are also used to predict future outcomes. Unlike rule-based ADMSs, predictive models rely on self-learning algorithms built through statistical techniques like machine learning, deep learning and neural networks.
These models analyse large data-sets to identify patterns, generate insights and predict outcomes. For instance, they can predict extreme weather, enabling better city planning, or traffic congestion, helping urban planners design better mobility infrastructure.
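As a rough illustration, here is a minimal predictive model trained on synthetic data. The features and their relationships are invented for this sketch, not drawn from any real city dataset:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Sketch of a predictive traffic model on synthetic data. A real
# city-scale model would train on records shared through a DPI
# layer such as a data exchange, not on toy arrays.

rng = np.random.default_rng(42)
n = 500
dist_from_peak = rng.uniform(0, 12, n)   # hours away from rush hour
rain_mm = rng.uniform(0, 25, n)          # rainfall in the past hour

# Synthetic "ground truth": congestion eases away from the peak
# and worsens with rain, plus noise.
congestion = 80 - 4.5 * dist_from_peak + 1.2 * rain_mm + rng.normal(0, 5, n)

X = np.column_stack([dist_from_peak, rain_mm])
model = LinearRegression().fit(X, congestion)

# Predict congestion one hour from the peak during heavy rain.
print(model.predict([[1.0, 20.0]]))
```

The model learns whatever patterns the training data contains, which is precisely why the quality of the data shared across a DPI layer matters so much.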
DPI set-ups like the India Urban Data Exchange can enable such applications by facilitating data exchange among city departments, government agencies, private actors and civil society. This interoperability can improve the accuracy of AI models and enhance DPI performance in delivering essential public services.
While the integration of AI and DPI offers substantial benefits, it also brings risks, as outlined earlier. We need robust safeguards and accountability mechanisms to mitigate these risks and ensure ethical and inclusive use of these technologies. But should these technologies be used together at all?
There is no simple answer to that, as the risks are as significant as the rewards. Both AI and DPI rely heavily on the quality and quantity of data: DPI systems enable data sharing across disparate databases, and AI models are trained on that shared data. If these databases are incomplete, biased, unrepresentative or outdated, or contain errors, the AI models trained on them are likely to produce harmful and unintended outcomes.
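A toy example makes this mechanism visible. In the sketch below, two synthetic groups have the same true fraud rate, but one group is over-flagged in the historical labels the model learns from, so the model reproduces that bias. All names and rates here are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of how unrepresentative labels skew a model. Groups,
# features and rates are synthetic, for illustration only.

rng = np.random.default_rng(7)
n = 2000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
income = rng.normal(50, 10, n)

# Both groups share the SAME true fraud rate (5%), but group B is
# over-flagged in the historical labels the model trains on.
true_fraud = rng.random(n) < 0.05
label = true_fraud | ((group == 1) & (rng.random(n) < 0.10))

X = np.column_stack([group, income])
model = LogisticRegression(max_iter=1000).fit(X, label)

probs = model.predict_proba(X)[:, 1]
print("mean predicted risk, group A:", probs[group == 0].mean())
print("mean predicted risk, group B:", probs[group == 1].mean())
# Group B shows higher predicted risk despite equal true rates.
```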
For instance, in Telangana, algorithms incorrectly identified a deceased rickshaw puller as a motor vehicle owner, excluding his widow from welfare benefits. In smart cities like Chandigarh, Nagpur and Indore, maintenance staff are required to wear trackers that double as surveillance tools; pay is automatically deducted if workers deviate from schedules or routes assigned by an ADMS.
Globally, similar challenges exist. For instance, in the Netherlands, a self-learning algorithm used for a childcare benefits programme disproportionately targeted ethnic and racial minorities, flagging non-Dutch nationals as high-risk. The algorithm’s opaque reasoning highlighted the difficulty of explaining decisions made by AI models.
Predictive AI can be risky too. While traffic predictions and weather forecasts are relatively accurate, predicting life outcomes is far more complex. AI cannot account for random shocks, human agency, cultural values and luck.
Yet, predictive models have been used in healthcare, criminal justice and education in the US to determine outcomes such as who should receive better healthcare, who should be released from jail and who is likely to drop out of school.
Reports suggest that similar models have been used in India for policing, such as the Delhi Police's Crime Mapping, Analytics and Predictive System, used to predict crime hotspots. Studies, however, show that these models are often ineffective, given the limitations outlined above.
In sum, integrating AI and DPI requires careful consideration of several factors: the quality and quantity of available data, the risk of unintended outcomes and the limitations of predictive AI. DPI-AI integration should be approached with due caution.
The author is a Senior Associate at Artha Global.