PiE MODEL: PUZZLE-SOLVING IN ETHICS FOR AI INNOVATION

The PiE model enhances AI technologies by identifying and addressing ethics risks and opportunities in building and using AI systems.
It equips organizations to build and use ethical AI through rigorous expert analyses, tailored ethics strategies, and hands-on practitioner training.
The model combines ethics puzzle-solving methods with innovation and domain expertise to find solutions to the critical ethics challenges AI creates.

WHO DO WE WORK WITH?

Each stakeholder in AI development and deployment has a responsibility to ensure that ethical risks are addressed in the right way, at the right time.

Our PiE Model for ethical puzzle-solving is designed to help CREATORS, LEADERS, and INVESTORS integrate ethics into their organizations. We work with organizations BUILDING or DEPLOYING AI technologies to ensure that (1) ethical risks are addressed proactively and (2) ethical opportunities are seized to make technology better for society.

AI Ethics Lab co-presents: A Conversation on AI & Morality with Jamie Metzl & Cansu Canca

At a time when much of the public conversation focuses on the many potential dangers of AI, Jamie Metzl argues that, when developed and deployed wisely, AI can be a force for good: it can protect us from some modern societal harms, help shape a modern moral compass for humanity, and herald a more ethically enriching future for ourselves, our communities, our species, and our planet. As technological progress accelerates, Jamie’s new book “The AI Ten Commandments” challenges us to find new ways to use AI to expand human wisdom at new scales and, perhaps paradoxically, to enrich our humanity.

Prof. Cansu Canca, Founder and Director of AI Ethics Lab and Research Associate Professor in Philosophy at Northeastern University, will join Jamie to challenge him. MIT Prof. Manolis Kellis will help moderate the discussion.

Sign up HERE.

The Toolkit for Responsible AI Innovation in Law Enforcement, developed with INTERPOL and UNICRI, is a practical resource designed to move agencies from principle to practice in AI adoption.

AI is already reshaping policing—but most agencies lack the structures to deploy it responsibly. This Toolkit closes that gap with concrete frameworks, decision tools, and implementation pathways that support readiness assessment, risk management, and the integration of AI systems in legally grounded and ethically defensible ways.

Built for real-world use, it spans the full lifecycle of AI systems—from early exploration and procurement to deployment, oversight, and governance. Structured as a modular set of resources, it enables agencies to institutionalize responsible AI innovation, particularly in high-stakes and ambiguous contexts where existing approaches fall short.

The Dynamics of AI Principles is a toolbox developed by AI Ethics Lab to visualize the emergence of AI ethics principles around the world and across sectors. It helps make sense of early trends by tracking and systematizing the bewildering and growing number of AI principles being published.

With this interactive toolbox, you can:
I. use the Map to sort, locate, and visualize AI principles by
   a. country and region,
   b. time of publication,
   c. type of publishing organization,
II. search documents or see the full list and find their summaries,
III. compare documents and their key points,
IV. visualize and compare the distribution of core principles, and
V. use the Box to systematize principles and evaluate technologies.