India’s Artificial Intelligence (AI) ambitions took a significant leap forward when Union Minister Ashwini Vaishnaw announced that the country would launch an indigenous AI model and establish an AI Safety Institute (AISI) under the Safe and Trusted Pillar of the IndiaAI Mission.
Rather than relying on rigid regulations that may quickly become outdated, governments worldwide are establishing AISIs to address potential AI risks. Since 2023, the U.K., the U.S., Singapore, and Japan, among others, have set up AISIs. These are not just government-backed research and testing institutes but part of the global AISI network, which seeks to facilitate a “common technical understanding of AI risks”. Recently, the U.K.’s AISI unveiled ‘Inspect’, an open-source platform to evaluate models in a range of areas such as core knowledge, reasoning ability, and autonomous capabilities. The U.S.’s AISI convened an inter-departmental taskforce to tackle national security and public safety risks posed by AI. Singapore’s AISI is focusing on content assurance, safe model design, and rigorous testing. Each of these initiatives underscores the need for technical rigour and international collaboration.
India-specific solutions
India must prioritise pressing local concerns. A critical issue is the inaccuracy of AI systems and their potential to perpetuate discrimination in an Indian setting. The Ministry of Electronics and Information Technology (MeitY) announced that the AISI is set to operate on a hub-and-spoke model, collaborating with academic institutions, startups, industry players, and government departments. This will help ensure that India’s unique socioeconomic landscape, linguistic diversity, and technological gaps are addressed.
India’s vibrant startup ecosystem offers valuable lessons. Startups such as Karya are tackling the problem of unrepresentative data by empowering rural communities to create high-quality datasets in Indian languages. Others are advancing multilingual AI development, ensuring inclusivity and accessibility. These efforts highlight how India-specific solutions can address technical challenges while fostering social equity. India’s AISI should build on such initiatives.
The Indian AISI is already seeking to advance indigenous research and development, leveraging Indian datasets. Under the Safe and Trusted pillar, the IndiaAI Mission has selected eight Responsible AI Projects and launched a second round of Expressions of Interest. The second round focuses on critical areas such as watermarking and labelling, ethical AI frameworks, risk assessment and management, and deepfake detection tools.
Simultaneously, India’s AISI should collaborate with global AISIs to understand and mitigate global risks. It should take a leaf from the Bletchley Declaration, signed at the U.K. AI Safety Summit, which focuses on global risks such as cybersecurity attacks and disinformation.
Common global understanding
India’s AISI cannot operate in isolation. To effectively govern AI, it must strike a balance between local relevance and global alignment. This requires adopting international standards while adapting them to India’s context. Interoperability is key, as it enables seamless collaboration and accountability across borders.
A crucial first step is to establish a globally standardised AI safety taxonomy. Today, technical experts, policymakers, social scientists, and legal professionals may use varying terminologies when discussing AI-related concerns. This divergence, together with the inherent complexity of AI systems, creates communication barriers that hinder safety assessments. A standardised taxonomy would enable meaningful multidisciplinary research by ensuring that all stakeholders speak the same language when evaluating AI systems, and would also help clearly attribute responsibilities across the AI supply chain.
Second, India’s AISI should support the creation of an international notification framework for AI model development. This framework would encourage AISIs worldwide to share information about the purpose and potential impact of powerful AI models. Such transparency would enable coordinated governance and help India prepare its digital infrastructure for the safe deployment of advanced AI systems.
India’s leadership within the Global South places it in a unique position to champion inclusive AI governance. Many emerging economies lack the resources and technical expertise to establish their own AISIs. India can lead a collective effort in the Global South to co-develop AI safety frameworks and evaluation metrics to tackle local challenges.
The MeitY-UNESCO collaboration on India’s AI readiness provides a strong foundation by identifying gaps in the ethical development and deployment of AI. Leveraging these insights, India’s AISI can develop comprehensive frameworks and guidelines that promote both safe AI development and deployment. Additionally, through the ongoing projects under the IT Ministry’s IndiaAI Mission, India is focusing on themes such as machine unlearning, synthetic data generation, AI bias mitigation, and privacy-enhancing tools. These can serve as the building blocks of a robust AI safety ecosystem.
India’s AISI should develop indigenous tools and frameworks that embed responsible AI principles by design. At the same time, it must actively engage with the global AISI network to ensure interoperability and collaboration.
Rutuja Pol, Lead, Government Affairs, Ikigai Law; Aarya Pachisia, Associate, Ikigai Law
Published – March 05, 2025 12:15 am IST