ABOUT

AI ETHICS LAB

AI Ethics Lab aims to detect and solve ethical issues that arise in building and using AI systems, thereby enhancing technology development.

A pioneer in the field, AI Ethics Lab has been active since 2017, when it developed its PiE (puzzle-solving in ethics) Model, a unique approach in the industry for integrating ethics solutions into the innovation process.

What makes the Lab different? We put ethics into action at all stages of the innovation process. We don’t spend our time on debates, rules, codes, and approvals—we work with you to design solutions and build ethical systems.

The Lab has two areas of focus: CONSULTING and RESEARCH.

Consulting

In consulting, the Lab uses the PiE Model to fully integrate ethics into the development and deployment of AI technologies. The model’s components are customized to meet the needs of each organization. We help organizations create ethical technologies through

  • assessing organizational ethical readiness and risks, and designing a roadmap for integrating ethics into operations (see our ROADMAP service);
  • analyzing projects and products to find solutions for complex ethical problems (see our ANALYSIS services);
  • constructing organizational strategy to determine the organization’s ethical stance and operating procedures for recurrent and future ethical risks (see our STRATEGY services);
  • training the creators to be the “first responders” to ethical issues (see our TRAINING services).

Get in touch with us to discuss how we can strengthen your organization against ethics risks!

LEARN MORE                              CONTACT US

Research

In research, the Lab functions as an independent center where multidisciplinary teams of philosophers, computer scientists, legal scholars, and other experts focus on analyzing ethical issues related to AI systems. Our teams work on various projects ranging from research ethics in AI to global guidelines in AI ethics.

  • Join our mailing list to stay informed about our projects, and contact us to discuss your research.