PiE MODEL: PUZZLE-SOLVING IN ETHICS FOR AI INNOVATION

The PiE model enhances AI technologies by identifying and addressing ethics risks and opportunities in building and using AI systems.
The PiE model equips organizations to build and use ethical AI through rigorous expert analyses, tailored ethics strategies, and hands-on practitioner trainings.
The PiE model combines ethics puzzle-solving methods with innovation and domain expertise to solve the critical ethics challenges AI creates.

WHO DO WE WORK WITH?

Each stakeholder in AI development and deployment has a responsibility to ensure that ethical risks are addressed in the right way, at the right time.

Our PiE Model for ethical puzzle-solving is designed to help CREATORS, LEADERS, and INVESTORS integrate ethics into their organizations. We work with organizations BUILDING or DEPLOYING AI technologies to ensure that (1) ethical risks are addressed proactively and (2) ethical opportunities are seized to make technology better for society.

The Toolkit for Responsible AI Innovation in Law Enforcement is a practical, end-to-end resource developed with INTERPOL and UNICRI to help law enforcement agencies move from principle to practice in AI adoption.

AI is already reshaping policing—but most agencies lack the structures to deploy it responsibly. This Toolkit addresses that gap. It provides concrete frameworks, decision tools, and implementation pathways to help agencies assess readiness, manage risk, and integrate AI systems in ways that are operationally sound, legally grounded, and ethically defensible.

Built on human rights and core policing principles, the Toolkit is designed for real-world use across the full lifecycle of AI systems—from early exploration and procurement to deployment, oversight, and governance. It does not assume clarity or consensus; it is built for high-stakes, ambiguous environments where existing approaches fall short.

Structured as a modular set of resources—including principles, organizational roadmaps, and applied assessment tools—the Toolkit enables agencies to institutionalize responsible AI innovation rather than treat it as a one-off exercise.

The Dynamics of AI Principles is developed by AI Ethics Lab to visualize the emergence of AI ethics principles around the world and across sectors. This toolbox helps make sense of early trends, tracking and systematizing the bewildering and growing number of AI Principles out there.

With this interactive toolbox, you can:
I. use the Map to sort, locate, and visualize AI principles by
   a. country and region,
   b. time of publication,
   c. type of publishing organization;
II. search documents or see the full list and find their summaries;
III. compare documents and their key points;
IV. visualize and compare the distribution of core principles; and
V. use the Box to systematize principles and evaluate technologies.