
Fairer AI: An Introduction to Ethics and Justice
This introduction to ethics and justice explores the ethical and technical challenges of AI-driven design, covering crucial topics. Participants will gain a deep understanding of bias, including algorithmic bias, and its impact on AI systems.
The course delves into the “Black Box Phenomenon,” highlighting the opacity of AI decision-making and the challenges this poses for oversight. It introduces the concepts of interpretability and explainability in AI, emphasizing their importance for understanding and communicating AI decision-making processes.
The program also examines power dynamics and ownership in AI development and deployment, discussing ethical solutions, principles, regulations, and frameworks that guide responsible AI development. Participants will explore equitable AI development practices through social justice, regulation, and intersectional lenses.

Ethical Challenges
Identify key ethical concerns in the development and deployment of AI systems, including bias, harm, and lack of accountability.
Algorithmic Bias
Explain how bias enters AI systems, with a focus on algorithmic discrimination and its social consequences.
Black Box Phenomenon
Understand the implications of opaque AI systems and the challenges they pose for transparency, control, and trust.
Fairness & Explainability
Engage with foundational principles of fairness, interpretability, and explainability in AI development.
Power and Ownership
Critically examine how power dynamics, labor, and ownership structures influence the direction and impact of AI technologies.
Ethical Frameworks
Navigate major ethical frameworks and regulatory approaches that guide responsible and justice-oriented AI development.
Target Audience
This module is designed for mid- to advanced-level professionals and decision-makers across a wide range of fields, including but not limited to: designers and engineers; policymakers and legal experts; researchers and academics; consultants, strategists, and innovation leads; educators and facilitators; and activists and community organizers.
Whether working in technology, education, governance, social impact, design, or corporate leadership, participants are invited to explore the ethical dimensions of AI through a multidisciplinary and intersectional lens.
No prior technical knowledge of AI or formal background in ethics or social justice is required. The module begins with foundational concepts and builds a shared vocabulary to support deeper analysis and collaboration.

Acquired Competences
Ethical Risk Analysis in AI Systems
Identify and articulate key ethical risks in AI design and deployment—such as algorithmic discrimination, social harm, lack of consent, or transparency failures. Develop a critical lens to detect potential injustices embedded in AI technologies and the systems surrounding them.
Understanding and Communicating Algorithmic Bias
Explain how bias is introduced into AI systems through data, design choices, and deployment context. Translate these insights into accessible language for different audiences, and connect them to real-world examples of exclusion, harm, or inequality.
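As one illustrative, hypothetical example of connecting bias to measurable outcomes (the course itself does not prescribe a specific metric or tool), a widely used check is the demographic parity gap: the difference in positive-decision rates between groups. The group names and outcome data below are invented for illustration.

```python
# Hypothetical sketch: measuring one common fairness signal, the
# demographic parity gap, over a model's decisions. All names and
# data here are illustrative, not taken from the course materials.

def selection_rate(decisions):
    """Fraction of positive decisions (e.g. 'approve' = 1) in a group."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in selection rates across groups.

    A gap near 0 means groups receive positive decisions at similar
    rates; a large gap is one signal (not proof) of algorithmic bias.
    """
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Illustrative outcomes: 1 = positive decision, 0 = negative decision.
outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 6/8 = 0.75 selected
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2/8 = 0.25 selected
}

print(f"Demographic parity gap: {demographic_parity_gap(outcomes):.2f}")
# Here the gap is 0.75 - 0.25 = 0.50, i.e. group_a is selected twice as often.
```

A single number like this is deliberately simple: it makes the concept communicable to non-technical audiences, but as the course emphasizes, bias can also enter through data collection, design choices, and deployment context that no one metric captures.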
Evaluating Opaque Systems & Black Box Models
Critically assess the limits of opaque or “black box” AI models, focusing on how their lack of interpretability undermines user trust, accountability, and responsible governance. Build arguments for transparent and traceable systems in technical, organizational, or policy settings.
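To make the critique of opacity concrete, one simple auditing idea (a hypothetical sketch, not a technique the course mandates) is to probe a black-box scoring function from the outside: perturb one input at a time and observe how the score shifts. The stand-in model and feature names below are invented for illustration.

```python
# Hypothetical sketch: probing an opaque model by perturbing one input
# feature at a time. The model below is a stand-in whose internals the
# auditor is assumed NOT to know; only its inputs and outputs are visible.

def black_box_model(features):
    """Opaque scoring function (internals hidden from the auditor)."""
    income, age, postcode_risk = features
    return 0.6 * income + 0.1 * age - 0.3 * postcode_risk

def feature_sensitivity(model, features, delta=1.0):
    """Estimate each feature's local influence on the model's score
    by nudging it and recording the change in output."""
    base = model(features)
    influence = []
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] += delta
        influence.append(model(perturbed) - base)
    return influence

applicant = [3.0, 40.0, 2.0]  # illustrative input: income, age, postcode risk
print(feature_sensitivity(black_box_model, applicant))
# Larger magnitudes flag the features the opaque model leans on most,
# e.g. a strong negative weight on postcode could indicate proxy bias.
```

Such external probing can surface suspicious dependencies, but it cannot fully substitute for interpretable-by-design systems, which is precisely the argument for transparency and traceability that this competence develops.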
Mapping Power, Labor & Ownership in AI
Critically examine how power dynamics, labor conditions, and ownership structures shape the direction and impact of AI technologies, and identify who benefits from, and who bears the risks of, AI systems.
Applying Ethical Frameworks & Regulation
Navigate key ethical frameworks, justice theories, and regulatory models to inform decision-making about AI. Apply these tools to evaluate existing systems or co-design more accountable, equitable, and socially informed approaches to AI governance.
This course is especially valuable for those shaping or influencing how AI is developed, deployed, or regulated—whether in public, private, or civil society contexts.
