The Recommendation on the Ethics of Artificial Intelligence (AI), adopted by the United Nations Educational, Scientific and Cultural Organization (UNESCO) on November 23, 2021, and endorsed by 193 countries, guides UNESCO’s work in AI ethics and governance. UNESCO supports member states through tools like the Readiness Assessment Methodology (RAM). It has also established the Global AI Ethics and Governance Observatory, offering resources and insights for ethical AI. The observatory hosts the AI Ethics and Governance Lab, which focuses on responsible innovation, standards, institutional capacity, generative AI, and neurotechnology.
Scope of Application
Rather than covering every aspect of AI, the Recommendation focuses on the ethical concerns AI raises. It treats AI ethics as a flexible, value-based framework to guide societies in managing AI’s effects on people, communities, and nature. Instead of defining AI, the Recommendation highlights features such as reasoning, learning, and decision-making. It covers ethical issues throughout the AI lifecycle, involving various actors from researchers to end users. AI systems can affect jobs, education, health, rights, and the environment. The Recommendation emphasises the importance of ethics in education, science, culture, and information, while encouraging governments to create legal frameworks and ensure responsible AI development.
Aims and Objectives
The Recommendation aims to promote the ethical and peaceful use of AI for the benefit of humanity and the environment. It offers practical guidance rooted in shared values; emphasises inclusion and sustainability; and encourages global cooperation and shared responsibility among all stakeholders.
The objectives of the Recommendation are to provide a global framework of values and actions to help countries shape AI-related laws and policies in line with international standards. It encourages ethical practices at all stages of the AI lifecycle by individuals, institutions, and companies. It also seeks to uphold human rights, dignity, and equality, including gender equality, and to protect the environment, ecosystems, and cultural diversity. It further promotes inclusive, interdisciplinary dialogue on AI ethics and supports fair access to AI advancements. It emphasises benefit-sharing, especially for low- and middle-income countries (LMICs), including least developed countries (LDCs), landlocked developing countries (LLDCs), and small island developing states (SIDS).
Values and Principles
UNESCO’s Recommendation on the ethics of AI highlights the importance of aligning AI development and use with universally accepted values, human rights, and international laws. AI actors, including governments, developers, businesses, and civil society, are encouraged to ensure that all AI-related laws, practices, and systems support global objectives like the Sustainable Development Goals (SDGs). Since values may sometimes conflict, ethical decisions must be lawful, fair, inclusive, and aimed at building trust. This trust depends on transparency, open engagement with stakeholders, and thorough risk management throughout an AI system’s lifecycle.
A central value emphasised is the protection and promotion of human dignity and rights at every stage of AI development and deployment. AI systems must never harm or dehumanise individuals and must treat everyone equally, regardless of gender, race, ability or background. Special attention should be paid to protecting vulnerable groups such as children, older people, and individuals with disabilities. AI must be used to improve the quality of life without violating fundamental rights. Furthermore, all stakeholders, including governments, the private sector, civil society and academia, must work together to ensure that AI strengthens rather than undermines human rights.
Environmental sustainability is another critical value. AI systems should minimise environmental harm and contribute to the protection of ecosystems. They must comply with environmental laws and reduce their carbon footprint. Equally important is the promotion of inclusiveness and diversity. People from all backgrounds should have the opportunity to participate in AI development, and the digital divide, especially in developing countries, must be addressed through international cooperation. AI should unite rather than divide people, encouraging peaceful coexistence, justice, and mutual respect.
Several guiding principles accompany these values. Proportionality and risk management are key; AI should only be used when necessary and in ways appropriate to its purpose. Risk assessments must be conducted to prevent harm to individuals, communities, and the environment. High-risk applications require human oversight and AI must not be used for mass surveillance or social scoring.
Ensuring the safety and security of AI systems throughout their lifecycle is essential. These systems must be built on secure and privacy-protective frameworks. Equally important is fairness and non-discrimination. AI must promote social justice, with attention to the needs of all groups, particularly marginalised communities. Efforts must focus on inclusive access to AI technologies and addressing existing inequalities.
Sustainability must be integrated into AI policies, with ongoing assessments to ensure alignment with social, economic, and environmental goals. Privacy and data protection are fundamental rights that must be preserved at every stage of the AI lifecycle. Privacy impact assessments, ethical data use, and privacy-by-design practices are required.
AI systems must remain under human control. Legal and ethical responsibility must be clearly assigned, especially in high-stakes decisions. Transparency and explainability are vital for public trust, allowing individuals to understand how AI decisions are made and request explanations. Accountability mechanisms, including audits and ethical reviews, must be established to ensure responsible AI use.
Finally, raising awareness and improving AI literacy are crucial. Public understanding of AI and its impact must be encouraged through accessible education and civic engagement. Inclusive and adaptive governance is also essential. Member states must retain the right to regulate data within their borders, and AI governance should involve multiple stakeholders, from global institutions to marginalised communities, to ensure ethical, equitable, and effective AI development and use.
Areas of Policy Action
UNESCO outlines several key policy areas to operationalise its values and principles on the ethical use of AI. These policy actions are intended to guide member states in aligning AI development with human rights, sustainability, and inclusive governance.
Ethical Impact Assessment is a fundamental requirement. Member states and private sectors should create frameworks to evaluate how AI systems affect human rights, the environment and vulnerable populations. These assessments must involve transparency, public engagement and testing before deployment. Governments should regulate AI with audits and monitoring to ensure its responsible, inclusive, and ethical use across society.
Ethical Governance and Stewardship involves setting up governance systems that are inclusive, transparent, and accountable. These frameworks should protect human rights, encourage innovation, and support international collaboration. Regulatory mechanisms should include impact assessments, independent ethical oversight, data governance, diversity, legal accountability, and support for human involvement in AI decisions.
Data Policy focuses on managing data used in AI systems responsibly. Member states must implement data governance strategies that ensure data quality, security and privacy, while respecting international data protection laws. Ethical data sharing, interoperability, and collaborative platforms should be encouraged to benefit society and uphold human rights.
Development and International Cooperation is key to ensuring ethical AI in global contexts. States and international organisations should include AI ethics in development agendas, particularly in support of lower-income and vulnerable nations such as LMICs, LDCs, LLDCs, and SIDS. This includes fostering research, addressing digital divides, and promoting culturally relevant ethical frameworks.
Environmental and Ecosystem Protection should be embedded in AI policies. Governments and businesses must minimise the environmental impact of AI across its lifecycle, including emissions and energy consumption. AI should support sustainable practices in energy, agriculture, and disaster resilience. Indigenous and local communities must be involved and the precautionary principle should guide high-risk environmental applications.
Gender Equality must be promoted through AI systems. This involves eliminating bias, supporting women’s participation in science, technology, engineering, and mathematics (STEM) fields, including information and communication technologies (ICT), and ensuring safe and inclusive AI environments. Gender-sensitive impact assessments, funding for women-led initiatives, and anti-harassment policies are essential. UNESCO could aid by collecting and sharing best practices.
Culture should be preserved through AI, particularly endangered and indigenous languages. States should support participatory AI projects in the cultural sector, train artists, and ensure diversity in AI-generated content. Issues related to intellectual property in AI must further be addressed.
Education and Research require cross-sector collaboration to ensure equitable AI education and digital literacy. Curricula should include ethics, coding, and media awareness while protecting student data. Ethical AI research must be promoted through interdisciplinary collaboration aligned with international law.
Communication and Information policy should leverage AI to enhance access to information, protect freedom of expression and ensure ethical content moderation. This includes promoting media literacy, providing redress mechanisms, and countering misinformation.
Economy and Labour policy must address AI’s impact on jobs and ensure that workers are equipped with relevant skills. This includes transparency in skill demands, updating curricula and providing reskilling opportunities. Fair labour transitions, market competitiveness, and protection against monopolies are essential, especially in vulnerable economies.
Health and Social Well-Being should be supported by ethical AI. AI must be used in alignment with human rights, with privacy protections, human oversight and safety, especially in mental health. Patients should be involved in system design and AI must avoid bias. Regulation is needed to govern human-robot interaction, prevent manipulation and protect mental health, especially for children and youth, who must be actively involved in shaping AI’s role in their lives.
Monitoring and Evaluation
Member states should monitor and evaluate AI ethics policies using both qualitative and quantitative methods suited to their national contexts. UNESCO would support this by offering tools like Ethical Impact Assessments, readiness evaluations, training, and data sharing. Inclusive monitoring should involve vulnerable groups and use international standards, supported by bodies like ethics commissions and regulatory sandboxes.
India’s AI policy and Strategic Initiatives—A Summary
India is advancing rapidly in the field of AI, underpinned by strong governmental initiatives and international collaborations. A notable development is the AI RAM, a joint effort by UNESCO and the Ministry of Electronics and Information Technology (MeitY), aiming to shape an India-centric AI policy. This initiative seeks to align the country’s AI ecosystem with ethical principles, such as transparency, inclusiveness, and fairness, as outlined in UNESCO’s Global Recommendation on the Ethics of AI. A consultation meeting was held in New Delhi in November 2024, involving government, academia, industry, and civil society, which marked the beginning of five planned AI RAM sessions.
The core of India’s national AI strategy is the INDIAai Mission, supported by over Rs 10,000 crore. A significant component of this mission is the ‘Safe and Trusted AI’ pillar, which focuses on ethical development, self-assessment guidelines, and indigenous governance tools. The mission aims to make AI beneficial across various sectors and ensure responsible deployment.
Key speakers like Tim Curtis of UNESCO emphasised strategic cooperation for responsible AI, while MeitY secretary, Krishnan, stressed awareness, digital skills, data privacy, and the role of the Digital Personal Data Protection (DPDP) Act. Abhishek Singh, additional secretary at MeitY, outlined the mission’s seven core pillars, including compute capacity, data access, skilling, and ethical frameworks. He highlighted India’s unique opportunity to lead in using AI for public good in sectors such as health care, education, and agriculture.
Two projects under the ‘Safe and Trusted AI’ initiative were discussed: the ‘AI Ethical Certification Project’, which ensures fairness in AI systems, and the ‘Privacy Enhancing Strategy Project’, which focuses on privacy-preserving machine learning.
India’s roadmap to a comprehensive AI policy includes consultations and breakout sessions to explore governance, infrastructure, workforce development, and sector-specific AI applications. The insights from these sessions are expected to shape a robust, inclusive, and actionable national AI framework.
To democratise AI, India is making infrastructure accessible to students, startups and innovators, aiming to reduce the dominance of global tech giants. The INDIAai Mission and the creation of AI Centres of Excellence (CoEs) support the broader Viksit Bharat 2047 vision, which seeks to establish India as a global AI leader through economic growth, governance enhancement, and inclusive progress.
A significant milestone is India’s plan to build one of the world’s largest AI compute infrastructures, with 18,693 GPUs. Already, 10,000 GPUs have been deployed in the mission’s first phase. The open GPU marketplace allows affordable access to high-performance computing, especially for startups and researchers. GPUs would be made available at subsidised rates of Rs 100 per hour, compared to global rates of US$ 2.5–3 per hour. Indigenous GPU development is also underway, alongside the construction of five semiconductor plants to ensure a stable AI hardware supply chain.
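The scale of the subsidy can be illustrated with a quick calculation against the rates cited above. The snippet below is a minimal sketch: the exchange rate (Rs 86 per US$) and the workload size are illustrative assumptions, not figures from the source.

```python
# Compare the cost of a GPU workload at the subsidised marketplace rate
# (Rs 100 per GPU-hour, per the text) with a typical global cloud rate
# (midpoint of the cited US$2.5-3 per GPU-hour).
# ASSUMPTION: exchange rate of Rs 86 per US$, chosen for illustration.

SUBSIDISED_RATE_INR = 100.0   # Rs per GPU-hour (IndiAai marketplace figure from the text)
GLOBAL_RATE_USD = 2.75        # US$ per GPU-hour (midpoint of US$2.5-3)
INR_PER_USD = 86.0            # assumed exchange rate


def job_cost_inr(gpu_hours: float, rate_inr_per_hour: float) -> float:
    """Total cost in rupees for a job consuming `gpu_hours` GPU-hours."""
    return gpu_hours * rate_inr_per_hour


gpu_hours = 1_000  # e.g. 10 GPUs running for 100 hours (hypothetical job)
subsidised = job_cost_inr(gpu_hours, SUBSIDISED_RATE_INR)
global_cost = job_cost_inr(gpu_hours, GLOBAL_RATE_USD * INR_PER_USD)

print(f"Subsidised: Rs {subsidised:,.0f}")
print(f"Global:     Rs {global_cost:,.0f}")
print(f"Saving:     {1 - subsidised / global_cost:.0%}")
```

Under these assumptions, the subsidised rate works out to well under half the global rate, which is the gap the marketplace is intended to close for startups and researchers.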
Understanding the role of data in AI, India launched the INDIAai Dataset Platform, offering anonymised datasets across domains like agriculture, climate, and urban planning. This resource would improve the accuracy and inclusiveness of AI applications. AI CoEs in areas such as health care and sustainable cities are complemented by new initiatives like a Rs 500 crore CoE in education and five national CoEs for skilling, developed in collaboration with international partners.
India is also focused on developing its own AI models and language technologies. Projects like BharatGen, the world’s first government-funded multimodal large language model (LLM) program, aim to build foundational models for public service. Other initiatives include Sarvam-1, an LLM supporting 10 Indian languages, and Everest 1.0, which supports 35 languages with plans to expand to 90. Tools like Chitralekha and Digital India Bhashini promote accessibility through AI-powered translation and transcription services.
AI is further being integrated into India’s Digital Public Infrastructure (DPI), which includes Aadhaar, UPI, and DigiLocker. AI-enhanced DPI proved its capability at the Mahakumbh 2025, where it facilitated crowd management and multilingual assistance via real-time AI tools. These solutions set a new standard in AI-driven public event management.
India’s AI workforce is growing rapidly. The government is aligning academic curricula with the National Education Policy (NEP) 2020 and introducing AI, 5G, and semiconductor training at various educational levels. Programs like INDIAai Future Skills promote AI education nationwide, while Ph.D. fellowships and Data and AI Labs in Tier 2 and Tier 3 cities ensure broader access. India now leads globally in AI skill penetration and talent growth, including among women, with a projected need for over one million AI professionals by 2026.
Despite global economic challenges, India’s AI ecosystem, particularly in Generative AI (GenAI), is thriving. GenAI startup funding reached US$ 51 million in Q2 FY2025, with high AI adoption across workplaces. About 70 per cent of employees now use AI tools, a significant rise from 50 per cent the previous year. The AI market is projected to grow at 25–35 per cent compound annual growth rate (CAGR), contributing to job creation and innovation. Over 520 incubators and accelerators are supporting this growth.
India’s approach to AI regulation is pragmatic. Instead of stringent laws, the government funds academic institutions to develop AI safeguards for issues like deepfakes and privacy. This techno-legal model balances innovation with ethical oversight.
Conclusion
In conclusion, India, through collaborative policymaking, investment in infrastructure, development of indigenous models and talent cultivation, is positioning itself as a global leader in ethical and inclusive AI. The AI RAM sessions and UNESCO collaboration are foundational steps in establishing an AI governance model tailored to India’s specific needs and aspirations.
© Spectrum Books Pvt Ltd.
