
UNESCO’s Global Norms on AI Ethics and Governance and India’s AI Policy

The United Nations Educational, Scientific and Cultural Organization (UNESCO)’s 2021 framework, the Recommendation on the Ethics of Artificial Intelligence (AI), adopted on November 23, 2021, and endorsed by 193 countries, guides its work in AI ethics and governance. It supports member states through tools like the Readiness Assessment Methodology (RAM). UNESCO has also established the Global AI Ethics and Governance Observatory, offering resources and insights for ethical AI. The observatory hosts the AI Ethics and Governance Lab, focusing on responsible innovation, standards, institutional capacity, generative AI, and neurotechnology.

Scope of Application

The Recommendation focuses on addressing ethical concerns related to AI within its scope. It treats AI ethics as a flexible, value-based framework to guide societies in managing AI’s effects on people, communities, and nature. Instead of defining AI, the Recommendation highlights features like reasoning, learning, and decision-making. It covers ethical issues throughout the AI lifecycle, involving various actors such as researchers and users. AI systems could affect jobs, education, health, rights, and the environment. The Recommendation emphasises the importance of ethics in education, science, culture, and information, while encouraging governments to create legal frameworks and ensure responsible AI development.

Aims and Objectives

The Recommendation aims to promote the ethical and peaceful use of AI for the benefit of humanity and the environment. It offers practical guidance rooted in shared values, emphasises inclusion and sustainability, and encourages global cooperation and shared responsibility among all stakeholders.

The objectives of the Recommendation are to provide a global framework of values and actions to help countries shape AI-related laws and policies in line with international standards. It encourages ethical practices at all stages of the AI lifecycle by individuals, institutions, and companies. It also seeks to uphold human rights, dignity, and equality (including gender equality), and to protect the environment, ecosystems, and cultural diversity. It further promotes inclusive, interdisciplinary dialogue on AI ethics and supports fair access to AI advancements. It emphasises benefit-sharing, especially for low- and middle-income countries (LMICs), including least developed countries (LDCs), landlocked developing countries (LLDCs), and small island developing states (SIDS).

Values and Principles

All AI actors must uphold values and principles by aligning laws, regulations, and practices with human rights and international standards. These actions should support global goals like the Sustainable Development Goals (SDGs). Since values may conflict, decisions must be lawful, balanced, and inclusive. Trust in AI relies on transparency, stakeholder engagement, and effective risk management throughout its lifecycle.

Values

The Recommendation emphasises that human dignity and human rights must be respected, protected, and promoted throughout the entire lifecycle of AI systems. All individuals have equal worth, regardless of race, gender, origin, or ability, and no one should be harmed or degraded by AI. AI systems must improve people’s quality of life without violating their rights. Vulnerable groups, such as children, older persons, and people with disabilities, must be treated with care and never objectified.

Human rights must guide AI development, with all sectors, including governments, private companies, civil society and academia, working to ensure AI supports rights rather than undermining them. Technologies should be used to advocate for human rights and not to restrict them.

Environmental protection is also a core concern. AI systems should be designed to support ecological sustainability, reduce harm, and minimise their carbon footprint. All actors must follow environmental laws and aim to preserve ecosystems.

The Recommendation also calls for inclusiveness and respect for diversity across AI development and use. Participation in AI processes should be open to all, regardless of background, and people must have the freedom to choose how they interact with AI. No actor should take advantage of the lack of necessary technological infrastructure, skills and education, or legal frameworks, especially in LMICs, LDCs, LLDCs, and SIDS; such gaps must be addressed through international cooperation.

AI should contribute to peaceful, just, and inter-connected societies. It should foster solidarity and respect among individuals and between humans and the environment. The use of AI must not divide people or undermine autonomy and freedom. Instead, AI systems should promote justice, equity, and shared responsibility across society. Peaceful coexistence and mutual care are seen as essential values that AI should uphold throughout its lifecycle.

Principles

The Recommendation sets out the following principles:

Proportionality and Risk Management

AI technologies cannot by themselves ensure the well-being of humans, the environment, or ecosystems. Their use should be limited to what is necessary to achieve valid and lawful goals, with careful consideration of the context in which they are applied. To avoid potential harm to individuals, communities, societies, and the environment, it is essential to carry out thorough risk assessments. These assessments should be paired with preventive measures to minimise negative impacts.

The selection of AI methods must be well justified. This involves ensuring that they are suitable and proportionate to the intended purpose, do not violate human rights or core values and are appropriate for the specific context. The chosen methods must also be grounded in sound science. In high-stakes situations, especially those involving life or death, human oversight must be maintained. Furthermore, AI should never be used for purposes such as social scoring or mass surveillance.

Safety and Security

AI systems should be designed and operated in ways that avoid safety risks and protect against vulnerabilities throughout their lifecycle. Safe and secure AI should be supported by sustainable, privacy-protective data access frameworks that enhance training and validation using quality data.

Fairness and Non-Discrimination

AI actors should promote social justice and safeguard fairness and non-discrimination in compliance with international law. This includes ensuring that AI benefits are accessible to all, considering the needs of various groups, such as different age groups, cultural systems, language groups, persons with disabilities, girls and women, and marginalised or vulnerable people. Member states should work for the promotion of inclusive access to AI systems, tackle digital divides, and ensure equity in access to and participation in the AI system lifecycle.

Sustainability

The development of sustainable societies relies on achieving objectives across human, social, cultural, economic, and environmental dimensions. AI technologies could either benefit or hinder sustainability objectives, depending on their application. Hence, continuous assessment of AI’s impact on sustainability is necessary, considering its implications for the SDGs.

Right to Privacy and Data Protection

Privacy must be respected, protected, and promoted throughout the lifecycle of AI systems. Data should be collected, used, shared, archived, and deleted in ways consistent with international law and relevant legal frameworks. Adequate data protection frameworks and governance mechanisms should be established, ensuring compliance throughout the AI system lifecycle. Algorithmic systems require privacy impact assessments, including societal and ethical considerations, and should adopt a privacy-by-design approach.

Human Oversight and Determination

At every stage of the AI system lifecycle, it should be possible to assign ethical and legal responsibility to an individual or legal entity. While AI could be used to support decision-making for efficiency, humans must always retain the final responsibility and accountability. In particular, decisions involving life and death should not be ceded to AI systems.

Transparency and Explainability

Transparency and explainability of AI systems are essential to ensure the respect, protection, and promotion of human rights and ethical principles. Transparency involves providing meaningful information about the system’s capabilities, limitations, and purpose. Explainability refers to making the outcomes of AI systems intelligible and providing insight into how decisions are made. Individuals should be informed when decisions are made by AI systems and have the opportunity to request explanations. Transparency and explainability contribute to accountability and trust in AI systems.

Responsibility and Accountability

AI actors and member states should respect, protect, and promote human rights and fundamental freedoms, assuming ethical and legal responsibility throughout the AI system lifecycle. Ethical responsibility and liability for decisions based on AI systems should always be attributable to AI actors corresponding to their role. Appropriate oversight, impact assessment, audit, and due diligence mechanisms should be developed to ensure accountability for AI systems and their impact.

Awareness and Literacy

Public awareness and understanding of AI technologies and the value of data should be promoted through open and accessible education, civic engagement, digital skills, and AI ethics training. Learning about the impact of AI systems should be grounded in human rights and fundamental freedoms, considering their impact on individuals and communities.

Multi-stakeholder and Adaptive Governance and Collaboration

Data use must respect international law and national sovereignty. Countries have the right to regulate data within their borders in line with privacy laws and human rights standards. Inclusive governance of AI requires the participation of a wide range of stakeholders. These include governments, global organisations, tech experts, civil society, researchers, educators, media, private companies, and community representatives. Efforts should be made to adopt open standards and ensure collaboration. Governance should also adapt to technological changes and support the participation of marginalised communities, including indigenous peoples (diverse communities and the original inhabitants of specific territories, often with distinct cultures, languages, and traditions) and others whose voices are often underrepresented.

Areas of Policy Action

The policy actions described in the following policy areas operationalise the values and principles set out in this Recommendation.

Ethical Impact Assessment

Member states and businesses should create ethical impact assessment frameworks to evaluate the benefits, risks, and social effects of AI systems, especially on human rights, the environment, and vulnerable groups. These assessments must include transparency, public participation, oversight, and testing before deployment. Governments should regulate AI through audits, explainability, and continuous monitoring, ensuring AI systems respect human rights, reduce inequalities, and support inclusive and ethical use across society.

Ethical Governance and Stewardship

Member states should establish inclusive, transparent, and accountable AI governance frameworks that ensure oversight, human rights protection, and cross-border cooperation. They must promote transparency, prevent harm, and support innovation without burdening smaller actors. This includes impact assessments, regulatory strategies, independent ethics oversight, diversity, inclusive participation, responsible data use, legal accountability, and support for human oversight, fairness, and public trust across the AI system lifecycle.

Data Policy

Member states should develop inclusive data governance strategies ensuring quality, secure, and privacy-respecting AI training data. They must uphold international data protection laws and promote transparency, user control, and oversight. Open data, robust datasets, ethical data sharing, interoperability, and collaborative platforms should be encouraged to foster innovation, human rights, and public benefit.

Development and International Cooperation

Member states and transnational corporations should integrate AI ethics into global forums and ensure that AI use in key development sectors aligns with ethical principles. They should foster international cooperation, research, and innovation, especially involving LMICs, in particular LDCs, LLDCs, and SIDS. They should promote culturally relevant ethical frameworks and bridge technological divides while respecting international law.

Environment and Ecosystems

Member states and businesses should evaluate and reduce the environmental impact of AI systems throughout their lifecycle, including carbon emissions, energy use, and raw material extraction. Compliance with environmental laws is essential. States should incentivise ethical AI solutions for environmental protection, disaster resilience, and sustainability, involving local and indigenous communities. AI should support sustainable energy, agriculture, pollution control, and infrastructure. Resource-efficient AI methods should be prioritised, and the precautionary principle should be applied when environmental risks are high or benefits unproven.

Gender

Member states should ensure that AI contributes to gender equality by protecting women’s rights and safety throughout the AI lifecycle and integrating a gender perspective in ethical impact assessments. They should fund gender-responsive initiatives and develop inclusive digital policies. Furthermore, they should promote women in science, technology, engineering, and mathematics (STEM), including information and communication technologies (ICT) disciplines. They should also close gender gaps in AI access, leadership, and education. States must eliminate gender bias in AI, support female participation and entrepreneurship, enforce anti-harassment policies, and share best practices, with UNESCO assisting through a global repository.

Culture

Member states should promote the use of AI to preserve and enhance cultural heritage, including endangered and indigenous languages, through education, participatory initiatives, and natural language processing (NLP) assessments. They should support AI training for artists, raise awareness among cultural industries, and ensure diverse, inclusive cultural content via algorithmic recommendations. States must address AI’s impact on intellectual property and encourage research in this area. Institutions like museums and libraries should use AI to improve access to and visibility of their collections.

Education and Research

Member states should collaborate with various sectors to ensure inclusive AI literacy and ethical education at all levels, promoting skills like coding, digital literacy, critical thinking, and media awareness. They should address AI’s impact on human rights, ensure equitable access, and promote gender and diversity inclusion. AI in education must enhance learning without compromising data privacy or cognitive development. States should also support ethical AI research, promote interdisciplinary collaboration, and ensure that AI use in science is critically assessed and aligned with human rights and international law.

Communication and Information

Member states should leverage AI to improve access to information, support freedom of expression, and enhance transparency in content moderation. They must promote digital literacy, ensure diverse viewpoints, enable user redress mechanisms, counter misinformation, and support ethical AI use in journalism and media reporting.

Economy and Labour

Member states should assess AI’s impact on labour markets, especially in labour-intensive economies, and update education systems to equip current and future workers with core interdisciplinary and technical skills. Transparency about skill demands and curriculum updates is essential. Collaboration among governments, academia, industry, and civil society should align training with future workforce needs. States must ensure fair transitions for at-risk workers through upskilling, social protections, and financial support. They should fund research on AI’s labour impact, promote competitive markets, prevent monopolies, and ensure that ethical AI standards are respected across borders, especially in vulnerable developing countries.

Health and Social Well-Being

Member states should use AI to enhance public health, align its use with human rights, and promote global solidarity. AI in health must be safe, effective, privacy-respecting, and include human oversight, especially in mental health. Patients should be involved in development processes, and AI applications must avoid bias and misuse of data. States should regulate human-robot interactions to protect mental well-being, prevent manipulation, and promote transparency. Research should address long-term psychological impacts, particularly on children, and youth should be actively engaged in shaping AI’s role in their lives and health futures.

Monitoring and Evaluation

Member states should transparently monitor and evaluate AI ethics policies, using both qualitative and quantitative methods, tailored to their specific contexts. UNESCO would support this by developing methodologies for ethical impact assessment (EIA), readiness assessments, and evaluating policy effectiveness. It would also provide training materials, collect and share data, and promote best practices. Monitoring should include diverse stakeholder participation, especially vulnerable groups, and use standardised indicators aligned with international law. Mechanisms like ethics commissions, observatories, and regulatory sandboxes should be considered to ensure adherence to ethical principles and foster continuous, risk-proportionate evaluations across public and private sectors.

India’s AI Policy and Initiatives

According to the Global AI Ethics and Governance Observatory country profiles, India is in the process of completing the AI RAM. On November 14, 2024, the UNESCO South Asia Regional Office held a meeting on AI safety and ethics at UNESCO House in New Delhi. The event was organised with the Ministry of Electronics and Information Technology (MeitY) and Ikigai Law (a law and public policy firm) as the implementing partner. This meeting was the first of five planned consultation meetings under the AI RAM, a joint effort by UNESCO and MeitY. The goal is to create an India-focused AI policy report that highlights the country’s strengths and growth areas in AI and offers practical advice for using AI responsibly and ethically in different sectors.

This consultation brought together participants from government, academia, industry, and civil society to discuss how India could align its AI ecosystem with the ethical principles outlined in UNESCO’s Global Recommendation on the Ethics of AI. The key values highlighted during the consultation included transparency, inclusiveness, and fairness. India’s AI RAM is meant to identify areas for engagement, especially in building regulatory and institutional frameworks. As India’s AI sector is growing at a fast pace, aligning with these ethical standards would create a safe, trustworthy, and inclusive AI environment that would support the broader goal of making AI beneficial for everyone.

Advancing Responsible AI Governance in India

India is taking a significant step forward in the field of AI through the launch of the INDIAai Mission, an ambitious initiative backed by more than Rs 10,000 crore in funding. A central component of this mission is the Safe and Trusted AI pillar, which reflects the country’s strong commitment to ensuring safety, accountability, and ethical practices in the development and use of AI. The mission seeks to promote indigenous AI frameworks, robust governance tools, and comprehensive self-assessment guidelines. These efforts aim to empower innovators, strengthen the ethical foundation of AI development, and make the benefits of AI accessible across various sectors of society.

Key Insights of the Consultation

To further these goals, the national consultation featured expert presentations and breakout sessions that contributed valuable recommendations towards shaping India’s AI policy landscape.

Tim Curtis, Director of UNESCO’s South Asia Regional Office, emphasised the alignment between UNESCO’s ethical AI initiatives and INDIAai Mission, highlighting their shared objective of promoting responsible AI through strategic cooperation and actionable policy frameworks. He further reaffirmed UNESCO’s commitment to building an inclusive and ethically governed AI ecosystem in collaboration with Indian institutions.

In the keynote address, S. Krishnan, secretary at MeitY, highlighted the importance of creating awareness and equipping individuals with the necessary skills and tools to ensure responsible AI governance. He underscored the need to balance innovation with the principles of transparency, data privacy, and security, in line with evolving global standards. He further emphasised the role of the Digital Personal Data Protection (DPDP) Act in supporting responsible AI practices and noted India’s active participation in international AI platforms that seek to harmonise innovation with effective regulation.

Abhishek Singh, additional secretary at MeitY, elaborated on the government’s vision to foster an AI ecosystem that would promote innovation while being grounded in responsible governance. He offered a detailed overview of the seven core pillars of the INDIAai Mission, which include building compute capacity, ensuring broad access to datasets, developing a skilled AI workforce, and establishing ethical governance frameworks. He acknowledged the contributions of UNESCO in conducting AI readiness assessments and emphasised India’s potential to lead the development and deployment of responsible AI, particularly for the benefit of the Global South (developing or less developed countries). He noted that India has a unique opportunity to demonstrate the use of AI for social good in essential sectors, such as health care, education, and agriculture. He stressed that these advancements must be driven by inclusive, transparent, and secure practices developed through collaboration between academia, industry, and government institutions.

As part of MeitY’s Safe and Trusted AI initiative, two key projects were highlighted. Avinash Agarwal, deputy director general at the Department of Telecommunications (DoT), presented the ‘AI Ethical Certification Project’, aimed at creating tools to ensure fairness in AI. Dr Ranjitha Prasad, assistant professor at IIT Delhi, introduced the ‘Privacy Enhancing Strategy Project’, focused on developing standards for privacy-preserving machine learning.

Roadmap to India’s AI Policy

The consultation also featured in-depth breakout sessions that explored critical areas, such as AI governance, infrastructure development, workforce capacity, and the sector-specific application of AI technologies. These discussions brought together diverse stakeholders and facilitated a collaborative effort to identify current challenges, prioritise opportunities, and build a policy roadmap that supports ethical, inclusive, and impactful AI deployment. The insights and recommendations generated from these sessions would serve as a foundational step in crafting a comprehensive AI policy that ensures the well-being of society while driving technological advancement.

Democratising AI in India

Since the consultation meeting in November 2024, India has been witnessing a transformative shift in the field of AI, moving beyond the domain of global tech giants and elite institutions. Through visionary policies and strategic initiatives, the Indian government is ensuring that students, startups, and innovators have access to cutting-edge AI infrastructure. These efforts are creating a level playing field and encouraging widespread innovation. Central to this transformation is the INDIAai Mission, along with the establishment of AI Centres of Excellence (CoE), which collectively aim to build a self-reliant AI ecosystem. These initiatives align with the broader vision of Viksit Bharat 2047, which aspires to position India as a global AI powerhouse by using AI for economic development, improved governance, and inclusive societal progress.

Building a Robust AI Compute and Semiconductor Ecosystem

India is rapidly scaling its AI computing capabilities to match its expanding digital economy. A major component of this vision is the INDIAai Mission 2024, which includes the creation of one of the world’s largest AI compute infrastructures with 18,693 Graphics Processing Units (GPUs). This infrastructure is designed to support the development of AI models specific to Indian languages and contexts. Already, 10,000 GPUs have been made available in the mission’s first phase, with more to follow soon.

In a unique move, India has launched an open GPU marketplace, enabling access to high-performance computing for startups, researchers, and students. This approach stands in contrast to other countries where such infrastructure is controlled by large corporations. To ensure a stable and secure supply chain, the government has selected ten companies to provide GPUs and has also set a goal to develop indigenous GPU technology within the next three to five years.

A common compute facility would be launched to offer GPU access at highly subsidised rates, just Rs 100 per hour compared to the global cost of US$ 2.5 to US$ 3 per hour. In parallel, India is advancing its semiconductor manufacturing sector with five semiconductor plants under construction. These efforts would not only boost AI development but also enhance India’s position in the global electronics market.
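The scale of this subsidy can be illustrated with simple arithmetic. The sketch below is illustrative only: the exchange rate of roughly Rs 83 per US dollar is an assumption, not a figure from the text; only the Rs 100 per hour and US$ 2.5–3 per hour rates come from the passage above.

```python
# Assumed exchange rate (illustrative, not from the text)
INR_PER_USD = 83.0

subsidised_inr = 100.0                       # Rs per GPU-hour (from the text)
global_usd_low, global_usd_high = 2.5, 3.0   # US$ per GPU-hour (from the text)

# Subsidised rate expressed in US dollars
subsidised_usd = subsidised_inr / INR_PER_USD        # ≈ US$ 1.20 per hour

# Implied discount relative to each end of the global price range
discount_vs_high = 1 - subsidised_usd / global_usd_high  # vs US$ 3.0 → ≈ 60%
discount_vs_low = 1 - subsidised_usd / global_usd_low    # vs US$ 2.5 → ≈ 52%
```

Under this assumed exchange rate, the subsidised rate works out to roughly half to three-fifths below the quoted global cost.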

Driving AI Innovation through Open Data and Centres of Excellence

Understanding the critical role of data in AI development, the government has launched the INDIAai Dataset Platform, which would serve as the country’s largest repository of anonymised, non-personal datasets. This platform would empower researchers and startups by offering seamless access to diverse datasets across sectors, such as agriculture, climate, and urban planning. This is expected to enhance the accuracy, inclusiveness, and reliability of AI applications.

To further accelerate AI research and deployment, centres of excellence (CoE) have been set up in health care, agriculture, and sustainable cities, and a new CoE in education was announced in the 2025 Budget with a Rs 500 crore allocation. Additionally, five National Centres of Excellence for Skilling would be established in collaboration with international partners. These centres aim to equip the youth with industry-relevant skills in AI and manufacturing, thereby supporting the government’s ‘Make for India, Make for the World’ vision.

Developing India’s Own AI Models and Language Technologies

India is actively working on developing its own foundational AI models, including Large Language Models (LLMs) and Small Language Models (SLMs), through initiatives led by the INDIAai platform. One notable initiative is BharatGen, the world’s first government-funded multimodal LLM programme, launched in Delhi in 2024. BharatGen brings together top Indian academic institutions to develop AI models focused on language, speech, and computer vision for public service delivery and citizen engagement.

Projects like Sarvam-1 and Hanooman’s Everest 1.0 exemplify this effort. Sarvam-1 is an LLM with 2 billion parameters that supports ten Indian languages, while Everest 1.0 supports 35 and aims to expand to 90 languages. Tools like Chitralekha, an open-source video transcription platform, and Digital India Bhashini, an AI-powered translation and voice access platform, are also being developed to improve digital accessibility in Indian languages.

Integrating AI with Digital Public Infrastructure

India’s Digital Public Infrastructure (DPI), which includes platforms such as Aadhaar, UPI, and DigiLocker, is being enhanced with AI to deliver smarter, faster, and more inclusive services. This unique model combines public funding with private-sector innovation, offering a foundation on which companies could build application-specific solutions. The global relevance of India’s DPI was highlighted at the G20 Summit, where multiple countries showed interest in adopting similar frameworks.

At the Mahakumbh 2025, AI-powered DPI played a pivotal role in managing logistics at the world’s largest human gathering. Real-time monitoring of railway passenger movement helped in crowd dispersal, while a Bhashini-powered chatbot provided multilingual assistance, real-time translation, and voice-based lost-and-found services. These AI tools, integrated with Indian Railways and UP Police systems, significantly improved the safety and efficiency of the event, setting a new benchmark in AI-driven public event management.

Cultivating AI Talent and Expanding Workforce Capabilities

India’s workforce is central to its AI growth story. With the country adding one Global Capability Centre (GCC) every week, there is a clear demand for skilled professionals in AI and emerging technologies. To meet this demand, the government is overhauling academic curricula in alignment with the National Education Policy (NEP) 2020, introducing AI, 5G, and semiconductor training at the undergraduate and postgraduate levels.

Through the INDIAai Future Skills programme, AI education is being promoted across all levels of higher education. Fellowships are being offered to Ph.D. candidates in AI from top-ranked institutions. Data and AI Labs are being set up in Tier 2 and Tier 3 cities to ensure inclusive access to AI training. A pilot INDIAai Data Lab is already operational at the National Institute of Electronics and Information Technology (NIELIT), Delhi.

According to the Stanford AI Index 2024, India ranks first globally in AI skill penetration. The AI talent pool has grown 263 per cent since 2016, with India now home to 16 per cent of the world’s AI talent. The country also leads in AI skill penetration for women. Industry reports estimate that India’s AI market would reach US$ 28.8 billion by 2025, with over one million AI professionals required by 2026. India is now among the top five fastest-growing AI talent hubs globally, the other four being Singapore, Finland, Ireland, and Canada.

AI Adoption and Industry-Led Innovation

India’s AI ecosystem is evolving from pilot projects to large-scale and production-ready solutions. Despite global economic slowdowns, the Indian Generative AI (GenAI) sector has shown exceptional resilience and growth. A recent Boston Consulting Group (BCG) report noted that 80 per cent of Indian businesses consider AI a strategic priority and nearly 70 per cent plan to increase AI-related investments in 2025.

GenAI startup funding surged to US$ 51 million in the second quarter of FY2025, particularly among B2B (companies that leverage AI to provide Software-as-a-Service (SaaS) solutions to other businesses) and agentic (a broader concept of solving issues with limited supervision) AI startups. The workplace too has seen rapid AI adoption, with 70 per cent of employees using AI tools, a significant jump from 50 per cent the previous year. Small and medium businesses (SMBs) are also benefiting from AI through improved customer engagement and revenue growth.

India’s AI economy is projected to grow at a compound annual growth rate (CAGR) of 25–35 per cent, contributing both to innovation and job creation. The startup ecosystem is thriving, with over 520 active tech incubators and accelerators, including AI-focused initiatives like T-Hub MATH, which supported over 60 startups in early 2024.
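The compound growth claim above can be made concrete with a short sketch. Only the 25–35 per cent CAGR range comes from the text; the base value of 100 (an index, not a rupee or dollar figure) and the five-year horizon are hypothetical, chosen purely for illustration.

```python
def cagr_projection(base: float, rate: float, years: int) -> float:
    """Project a value forward at a constant compound annual growth rate."""
    return base * (1 + rate) ** years

# Hypothetical base of 100 index units over 5 years,
# using the 25% and 35% CAGR bounds quoted in the text.
low = cagr_projection(100, 0.25, 5)    # ≈ 305.2 (roughly tripling)
high = cagr_projection(100, 0.35, 5)   # ≈ 448.4 (more than quadrupling)
```

In other words, sustained growth at the quoted rates would roughly triple to quadruple the AI economy's size within five years.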

A Balanced Approach to AI Regulation

India is taking a pragmatic approach to AI regulation, focusing on enabling innovation while addressing risks. Rather than relying solely on strict legislation, the government is funding universities and IITs to develop AI-based safeguards for issues like deepfakes, privacy breaches, and cybersecurity. This techno-legal approach allows innovation to flourish while ensuring that ethical boundaries are respected. India’s strategy aims to create a regulatory environment where growth and responsibility go hand in hand, securing AI’s role as a force for inclusive and sustainable development.

Way forward

UNESCO and MeitY are working together to implement UNESCO’s Global Recommendation on the Ethics of AI through policies suited to India’s specific AI environment. The AI RAM sessions across India aim to encourage inclusive, ethical, and sustainable AI governance through broad stakeholder participation. India’s strong government support, focus on indigenous AI models, investment in digital infrastructure and talent, and commitment to open data and accessible computing are driving innovation and inclusion. This proactive strategy is boosting India’s digital economy, promoting self-reliance in key technologies, and positioning the country as a global leader in ethical and impactful AI development.

Conclusion

To conclude, all member states and relevant stakeholders should uphold and actively implement the values and principles related to AI as outlined in the Recommendation. Moreover, member states are encouraged to strengthen their efforts through collaboration with national and international organisations, including NGOs, corporations, and scientific bodies, whose work aligns with the goals of this framework. To support these efforts, key instruments include developing a UNESCO Ethical Impact Assessment methodology and forming national commissions for AI ethics.

Member states should understand this Recommendation as a whole, with its foundational values and principles treated as complementary and interrelated. Nothing in it should be interpreted as replacing, altering, or otherwise prejudicing states’ obligations or rights under international law. This Recommendation should not be taken as an approval for any state, other political, economic or social actor, group or person to engage in any activity or perform any act contrary to human rights, fundamental freedoms, human dignity, and concern for the environment and ecosystems, both living and non-living.

© Spectrum Books Pvt Ltd.

 
