
WM9QD-15 Ethical Artificial Intelligence Implementation

Department
WMG
Level
Taught Postgraduate Level
Module leader
Awinder Kaur
Credit value
15
Module duration
4 weeks
Assessment
100% coursework
Study location
University of Warwick main campus, Coventry

Introductory description

This module explores the ethical, legal, and societal challenges surrounding the development and deployment of Artificial Intelligence (AI). With a strong applied focus, students will critically engage with real-world case studies, governance frameworks, and risk mitigation strategies, while exploring issues such as bias, fairness, transparency, accountability, and privacy. Students will examine AI ethics across industries—healthcare, automotive, and financial services—and learn to implement principles such as Privacy by Design and Human-in-the-Loop design.

The module promotes hands-on application of ethical AI frameworks and industry tooling, equipping students to make responsible decisions as future AI practitioners.

Module aims

The Ethical Artificial Intelligence Implementation module aims to:

  1. Equip students with a critical understanding of ethical principles, risks, and responsibilities in AI.

  2. Explore the application of legal frameworks, industry governance standards, and contractual risk strategies.

  3. Enable practical skills in ethical risk auditing, transparency techniques, and fairness assessments.

  4. Encourage inclusive dialogue, collaboration, and ethical deliberation through interactive, role-based debates.

  5. Provide domain-specific insight into ethical challenges in AI across key sectors.

Outline syllabus

This module outline is indicative only, showing the sort of topics that may be covered; actual sessions held may differ.

1: Foundations of AI Ethics
Overview of ethical theories (utilitarianism, deontology, virtue ethics)
Historical AI ethics failures and lessons learned
Ethical principles for trustworthy AI (European Union (EU); Institute of Electrical and Electronics Engineers (IEEE); Organisation for Economic Co-operation and Development (OECD))

2: Bias, Fairness, Transparency and Privacy
Algorithmic bias, fairness metrics, and lifecycle fairness auditing
Practical explainability: SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), model cards (see the illustrative sketch following this outline)
Privacy by Design (PbD), managing Personally Identifiable Information (PII), data residency and cross-border risks

3: Governance, Regulation and Responsibility
UK, EU (AI Act), US policy landscape: comparative analysis
Governance frameworks (e.g. NIST, OECD, ISO standards)
Tooling risk scoring and supplier indemnity strategies
Responsibility and accountability in autonomous systems

4: Ethical AI in Practice: Sectoral Insights & Debates
Case studies: healthcare, autonomous driving, credit scoring
Human-in-the-Loop (HITL) design and AI literacy
Role-based ethical deliberation and scenario analysis
Ethical audits of AI use cases
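
As a concrete pointer to the practical explainability work introduced in week 2, the sketch below shows SHAP being used to attribute a classifier's predictions to its input features. It is a minimal illustration under stated assumptions: the synthetic data, feature count, and choice of a scikit-learn gradient-boosting model are placeholders, not part of the syllabus.

    # Minimal sketch of the week 2 explainability topic: SHAP feature attributions
    # for a tree-based classifier. Data and model are synthetic placeholders.
    import numpy as np
    import shap
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 5))                    # 400 rows, 5 anonymous features
    y = (X[:, 0] + 0.8 * X[:, 3] + rng.normal(scale=0.5, size=400) > 0).astype(int)

    model = GradientBoostingClassifier(random_state=0).fit(X, y)

    # TreeExplainer decomposes each prediction into additive per-feature contributions
    # (SHapley Additive exPlanations), which can then be reviewed in a fairness audit.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X[:10])      # attributions for the first 10 rows

    print("Per-feature SHAP attributions, first row:", np.round(np.asarray(shap_values)[0], 3))

A tree-based model is used here only because its SHAP values can be computed exactly and efficiently; LIME offers a model-agnostic alternative covered in the same session.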

Learning outcomes

By the end of the module, students should be able to:

  • Critically evaluate ethical theories and apply them to the development and governance of AI systems.
  • Analyse and assess AI systems for bias, fairness, transparency, and privacy using appropriate methodologies and tools.
  • Formulate and justify ethical guidelines and governance strategies that align with regulatory and legal requirements across jurisdictions.
  • Appraise the societal and industry-specific impacts of AI through evidence-based ethical reasoning.
  • Synthesise diverse stakeholder perspectives to address complex ethical dilemmas in AI through collaborative debate and communication.

Indicative reading list

Dubber, M.D., Pasquale, F. & Das, S. 2020, The Oxford handbook of ethics of AI, 1st edn, Oxford University Press, New York, NY.

Díaz-Rodríguez, N., Del Ser, J., Coeckelbergh, M., López de Prado, M., Herrera-Viedma, E. & Herrera, F. 2023, "Connecting the dots in trustworthy Artificial Intelligence: From AI principles, ethics, and key requirements to responsible AI systems and regulation", Information Fusion, vol. 99, article 101896.

Russell, S.J. 2019, Human compatible: artificial intelligence and the problem of control, Allen Lane, UK.

International

The topics covered are in high international demand.

Subject specific skills

Ethical Reasoning
Regulatory Knowledge
Bias Detection and Mitigation
Privacy Protection

Transferable skills

Critical Thinking
Communication
Collaboration
Problem-Solving

Study time

Type Required
Lectures 10 sessions of 1 hour (7%)
Seminars 20 sessions of 1 hour (13%)
Online learning (independent) 30 sessions of 1 hour (20%)
Private study 30 hours (20%)
Assessment 60 hours (40%)
Total 150 hours

Private study description

Private study will include preparing for lectures and seminars, reviewing lecture notes, and engaging with required readings and multimedia resources.

Costs

No further costs have been identified for this module.

You must pass all assessment components to pass the module.

Assessment group A

Assessment component: Group Assessment
Weighting: 30%
Study time: 18 hours
Eligible for self-certification: No

Stakeholder role-based debate on a real-world ethical AI dilemma, demonstrating collaborative ethical reasoning, societal impact reflection, and communication across diverse perspectives.

A peer-marking process will be adopted for this assessment.

Reassessment component: Individual Presentation with Group Reflection
Eligible for self-certification: Yes (extension)

This assessment involves analysing a real-world case study that raises ethical considerations in AI development and deployment. Students are expected to critically engage with the principles of collaborative reasoning by exploring how multi-stakeholder debate might inform or challenge ethical decisions in the chosen case study. Students will prepare and submit a recorded individual presentation.

Assessment component: Individual Ethical AI Audit Report
Weighting: 70%
Study time: 42 hours
Eligible for self-certification: Yes (extension)

An in-depth ethical audit of a selected AI system, addressing bias, fairness, transparency, regulatory alignment, privacy, and sectoral context. It includes evidence of tool use (e.g., SHAP, AI Fairness 360) and engagement with governance frameworks (see the illustrative sketch below).

The reassessment component is the same.
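
As an indication of the kind of tool-supported evidence such an audit might contain, the sketch below computes two widely used group-fairness measures, statistical parity difference and disparate impact, from model predictions and a binary protected attribute. The data are synthetic placeholders and the group coding is an assumption; libraries such as AI Fairness 360 provide equivalent implementations of these metrics.

    # Minimal sketch of two group-fairness metrics for an ethical AI audit.
    # Predictions and the protected attribute are synthetic placeholders.
    import numpy as np

    def statistical_parity_difference(y_pred, protected):
        """P(favourable | unprivileged) - P(favourable | privileged); 0 indicates parity."""
        return y_pred[protected == 0].mean() - y_pred[protected == 1].mean()

    def disparate_impact(y_pred, protected):
        """Ratio of favourable-outcome rates; values below roughly 0.8 are commonly flagged."""
        return y_pred[protected == 0].mean() / y_pred[protected == 1].mean()

    rng = np.random.default_rng(1)
    protected = rng.integers(0, 2, size=1000)        # 1 = privileged group (assumed coding)
    # Synthetic predictions that slightly favour the privileged group.
    y_pred = (rng.random(1000) < np.where(protected == 1, 0.55, 0.45)).astype(int)

    print("Statistical parity difference:", round(statistical_parity_difference(y_pred, protected), 3))
    print("Disparate impact:", round(disparate_impact(y_pred, protected), 3))

In a full audit, figures like these would be reported alongside the regulatory, privacy, and governance analysis described above.
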
Feedback on assessment

Written feedback will be provided for the group assessment and the individual audit report.

There is currently no information about the courses for which this module is core or optional.