by Arshi Aadil
Jan 15, 2026
5 min read

The blog examines how AI is turning administrative data into faster, more targeted decisions across India’s core public programs in food security, agriculture, disaster risk management, education, and grievance resolution. It also warns that, without clear guardrails on data use, bias, transparency, and human oversight, the same tools could quietly deepen exclusion. Finally, we look at how a risk-based, accountable approach can ensure AI strengthens public policy rather than undermines it.
For decades, governments have struggled with how to turn broad welfare goals into clear rules, such as who should be included, on what criteria, and how much support each person should receive. Policy choices around merit, reservations, and socioeconomic vulnerability have effectively decided who benefits and who does not. Public policy has always been about using imperfect information to make decisions that are as fair and just as possible. The advent of AI will turbocharge this process.
AI can now scan millions of data points and generate predictions in seconds. Technology makes it possible to identify where floods will hit, which geographies in a country are most food insecure, which children are most likely to drop out of school, and who is most susceptible to health hazards. None of this was feasible at this speed before; AI has made such dynamic, real-time analytics practical even at the local level.
We can see this shift in core sectors. In agriculture, the crop insurance scheme PM Fasal Bima Yojana reportedly added coverage of about 10 million hectares and 8.5 million farmers in recent cycles, and its programs now experiment with remote sensing and automated yield assessments. AI models can combine weather, soil, and satellite data to help the government determine which blocks require additional irrigation support, which crops to encourage, and where to focus extension staff.
Our team at MSC, for instance, co-designed the Bihar Krishi platform with the state agriculture department. We built a voice-first, AI-enabled interface that offers local-language audio advisories, voice-based scheme search, and personalized soil-health recommendations. The platform makes AI-driven agricultural advice accessible to more than 750,000 smallholders, and it won a national DigiTech award for its work in farmer empowerment.
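To make the block-prioritization idea concrete, here is a minimal sketch of how weather, soil, and satellite signals could be combined into a single irrigation-support score. The feature names, weights, and data are illustrative assumptions, not any deployed government model.

```python
# Minimal sketch: rank administrative blocks for additional irrigation
# support with a weighted score over weather, soil, and satellite features.
# All feature names, weights, and values are hypothetical.

from dataclasses import dataclass

@dataclass
class BlockFeatures:
    block_id: str
    rainfall_deficit_mm: float   # seasonal shortfall vs. long-term average
    soil_moisture_index: float   # 0 (dry) to 1 (saturated)
    ndvi_anomaly: float          # satellite vegetation index vs. seasonal norm

def irrigation_priority(b: BlockFeatures) -> float:
    """Higher score = stronger case for additional irrigation support."""
    return (
        0.5 * (b.rainfall_deficit_mm / 100.0)  # normalize the rain deficit
        + 0.3 * (1.0 - b.soil_moisture_index)  # drier soil raises priority
        + 0.2 * max(0.0, -b.ndvi_anomaly)      # count stressed vegetation only
    )

blocks = [
    BlockFeatures("BLK-001", rainfall_deficit_mm=120, soil_moisture_index=0.2, ndvi_anomaly=-0.15),
    BlockFeatures("BLK-002", rainfall_deficit_mm=30, soil_moisture_index=0.6, ndvi_anomaly=0.05),
]
for b in sorted(blocks, key=irrigation_priority, reverse=True):
    print(f"{b.block_id}: priority={irrigation_priority(b):.2f}")
```

A production model would learn such weights from ground-truth yield and moisture data rather than hard-code them; the point is only that the inputs described above reduce naturally to a rankable score per block.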

Disaster risk management follows the same path. Between 1995 and 2024, India faced more than 400 extreme weather events and suffered more than 80,000 deaths, and annual disaster-led deaths crossed 3,000 again in 2024–25. In this context, AI-enhanced early warnings, impact forecasting, and evacuation planning are no longer futuristic; they have become essential tools for survival. MSC’s 2025 case study of India’s SACHET public warning system, prepared for the GSMA, shows the importance of multichannel early warning systems. These systems combine cell phone alerts with radio, television, social media, sirens, and other channels to ensure that everyone at risk is notified on time.
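SACHET is built on the Common Alerting Protocol (CAP), and the sketch below illustrates the multichannel fan-out idea in that spirit. The alert fields only loosely mirror CAP elements, and the channel list, dispatch function, and example values are our own assumptions, not SACHET’s actual interfaces.

```python
# Minimal sketch of multichannel alert fan-out in the spirit of a CAP-based
# system such as SACHET. Fields loosely mirror Common Alerting Protocol
# elements; channels, dispatch logic, and data are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Alert:
    event: str       # e.g., "Flood"
    severity: str    # CAP-style: Extreme / Severe / Moderate / Minor
    area: str        # affected geography
    headline: str    # short, actionable instruction
    channels: list[str] = field(default_factory=lambda: [
        "cell_broadcast", "sms", "radio", "tv", "social_media", "siren",
    ])

def dispatch(alert: Alert) -> None:
    # A real system has per-channel gateways, retries, and delivery
    # confirmation; here we only log the fan-out.
    for channel in alert.channels:
        print(f"[{channel}] {alert.severity} {alert.event}, {alert.area}: {alert.headline}")

dispatch(Alert(
    event="Flood",
    severity="Severe",
    area="riverine blocks of District X",
    headline="Move to designated shelters; avoid low-lying areas",
))
```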
Use cases with potential for AI-driven improvement
Beyond these early deployments, India offers significant room for AI to strengthen existing public systems. The food security net under the National Food Security Act covers about 800 million people, each with a fixed monthly grain entitlement at a fixed subsidized price. AI layered on top of this infrastructure could make the system more responsive: it could anticipate where demand will spike due to migration, move stocks accordingly, and flag places where offtake is unusually low.
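As a minimal sketch of the offtake-flagging idea, the snippet below compares each fair-price shop’s current monthly offtake against its own history with a simple z-score. The shop IDs, figures, and the -2.0 threshold are illustrative assumptions.

```python
# Minimal sketch: flag fair-price shops whose monthly grain offtake is
# unusually low relative to their own history, using a z-score.
# Shop IDs, data, and the threshold are hypothetical.

from statistics import mean, stdev

def low_offtake(history: list[float], current: float, z_cut: float = -2.0) -> bool:
    """True if current offtake is far below the shop's usual level."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current < mu  # no variation on record; flag any drop
    return (current - mu) / sigma < z_cut

offtake = {
    "FPS-1042": ([98, 102, 95, 100, 97], 52),  # sharp, suspicious drop
    "FPS-2310": ([80, 85, 78, 82, 81], 79),    # normal month
}
for shop, (history, current) in offtake.items():
    if low_offtake(history, current):
        print(f"{shop}: offtake of {current} is anomalously low; send for field review")
```

In practice, such a flag should only open a human review, in line with the oversight principles discussed below, rather than trigger automatic action against a shop or its beneficiaries.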
Education is another front where the need is obvious. The Annual Status of Education Report 2024 shows that only 23.4% of Class III children in government schools can read a Class II text, and 45.8% of Class VIII students can do basic arithmetic. AI cannot replace teachers, but it can help policymakers see, almost in real time, where learning falters and which interventions work. It can also identify children who consistently struggle or fail to progress in basic skills.
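As a hedged illustration, the sketch below flags students whose assessed level shows no gain over several consecutive assessment rounds. The level scale, the data, and the three-round rule are assumptions for illustration, not ASER methodology.

```python
# Minimal sketch: flag students whose assessed foundational-skill level
# shows no improvement over `window` consecutive assessment rounds.
# The level scale, student IDs, and three-round rule are hypothetical.

def needs_support(levels: list[int], window: int = 3) -> bool:
    """True if any `window` consecutive rounds show no improvement."""
    return any(
        max(levels[i:i + window]) <= levels[i]
        for i in range(len(levels) - window + 1)
    )

students = {
    "S-001": [1, 1, 1, 2],  # stalled for three rounds before improving
    "S-002": [1, 2, 3, 4],  # steady progress
}
for sid, levels in students.items():
    if needs_support(levels):
        print(f"{sid}: no recent progress in basic skills; prioritize remedial support")
```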
Moreover, current developments offer a broader governance opportunity. Grievance portals and citizen feedback systems are being digitized at scale, which gives policymakers a textured, bottom-up view of where the state fails and why. Alongside this, the IndiaAI Mission is a political signal that AI is not a side experiment but part of the state’s core toolkit.
However, these examples also highlight the risks of incorporating AI into public policy. Most datasets in the country reflect entrenched imbalances of caste, gender, and region, alongside uneven state capacity. Models trained on such data can learn that specific communities are at higher risk, less creditworthy, or less deserving of support, and then quietly encode that conclusion into welfare targeting, enforcement, or policing. A biased official can be challenged; a biased model wrapped in technical language is much harder to contest.
AI systems that err at scale, whether through hallucination or plain misclassification, are another source of danger. A 1% error rate in a consumer app is a problem. A 1% misclassification rate in a system that touches 800 million food security beneficiaries or tens of millions of farmers is a failure on a national scale: for the food security net alone, that is roughly 8 million people wrongly classified. When crop loss models underestimate damage or an AI-powered fraud detection system mislabels genuine beneficiaries as “suspicious,” the result is lost food entitlements, unpaid claims, and mistrust.
Cross-cutting risks also affect the overall public information space. Deepfakes can inflame tensions, synthetic news can distort public discussion, and automated micro-targeting can make it easier to manipulate opinion than to engage with it honestly. Together, these tools can reshape the environment in which people discuss, understand, and ultimately decide policies.
The realistic path now is not to keep AI out but to recognize that it will shape policymaking and to put the right guardrails in place. We suggest several practical directions:
Adopt a risk-based framework for AI in government:
– Distinguish clearly between low-stakes uses (for instance, basic predictive analytics and dashboards) and high-stakes decisions such as ration eligibility, disaster evacuation planning, crop loss assessment, or school placement;
– Strengthen requirements for transparency, documentation, testing, and human oversight as the impact on people’s rights and entitlements increases.
Make explainability a core obligation, not a technical afterthought:
– Ensure that when a model influences an individual decision, such as stopping a pension, it offers a clear, accessible explanation of why that decision was made;
– Build simple human appeal routes into every high-stakes AI system, along with logs and review mechanisms, from the design stage (see the sketch after these recommendations).
Protect beneficiary data and set strict limits on reuse:
– Allow program data for defined public purposes only, with no sharing of personal data without explicit consent;
– Include “no training, no resale, no secondary use” clauses and strong audit rights in all AI vendor contracts.
Embed AI within a broader accountability ecosystem:
– Align the use of AI with the Digital Personal Data Protection Act to set boundaries on surveillance, profiling, and secondary use of personal data;
– Equip regulators with the technical capacity to challenge algorithmic systems used in public programs;
– Enable independent researchers and civil society to audit real-world impacts.
Use India’s digital public infrastructure to set the standard:
– Make open standards and APIs, as well as privacy-aware public datasets, the default for AI in public policy;
– Create an internal registry of AI systems and publish information on those that directly affect citizens’ rights and entitlements.
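To ground the explainability and appeal points above, here is a minimal sketch of a decision record that pairs every automated decision with plain-language reasons, an audit trail, and a human appeal route. Every field name and example value here is hypothetical.

```python
# Minimal sketch: a decision record that carries citizen-facing reasons,
# audit metadata, and an appeal route by design. All fields are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    case_id: str
    decision: str         # e.g., "pension_payment_held"
    reasons: list[str]    # plain-language, citizen-facing explanations
    model_version: str    # logged for audit and reproducibility
    appeal_channel: str   # human route, mandatory for high-stakes uses
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def citizen_notice(self) -> str:
        """Render the record as a notice a beneficiary can act on."""
        lines = [f"Decision: {self.decision}", "Why:"]
        lines += [f"  - {r}" for r in self.reasons]
        lines.append(f"To appeal, contact: {self.appeal_channel}")
        return "\n".join(lines)

record = DecisionRecord(
    case_id="PEN-88231",
    decision="pension_payment_held",
    reasons=["Bank account inactive for 12 months", "ID seeding mismatch"],
    model_version="risk-screen-v0.3",
    appeal_channel="Block welfare office helpline",
)
print(record.citizen_notice())
```

The design choice that matters is that the explanation and the appeal route live in the record itself, not in a report generated after the fact, so auditors and reviewers see the same view the citizen does.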
When we look at the bigger picture, AI will not write and govern policies for us. It will only change how we see problems and solutions. The task is to use these tools to make informed and fair decisions. If public institutions can combine the power of AI with clear rules and accountability, they will serve the public interest better without losing sight of the people behind the data.
Based on this agenda, MSC has also co-founded the Alliance for Inclusive AI with BFA Global and Caribou. We are committed to developing practical “small AI” solutions that expand opportunity for underserved communities across the Global South.