During a recent Iron Mountain Executive Exchange event, Cansu Canca, Founder + Director of AI Ethics Lab, spoke about responsible AI and ethics by design. Dr. Canca shared her perspective on how organizations should build ethics into their AI systems. There was so much to discuss that she later sat down with Iron Mountain to continue the conversation.
AI Ethics Lab collaborated with Intenseye to evaluate and enhance the ethical aspects of their AI systems for workplace safety. Intenseye’s vision is to use AI systems to reduce harm to and increase the well-being of workers.
AI Ethics Lab utilized its tool, the Box, to evaluate Intenseye’s systems and offer guidance. Read our report here.
AI Ethics Lab’s toolbox Dynamics of AI Principles is extensively featured in Stanford Human-Centered AI’s AI Index Report 2021.
This 4th edition of the report provides a comprehensive analysis of the field of AI.
AI Ethics Lab’s toolbox is used for the analyses in Chapter 5.1: AI Principles and Frameworks.
Article available online! 📄
“In any given set of AI principles, one finds a wide range of concepts like privacy, transparency, fairness, and autonomy. Such a list mixes core principles that have intrinsic values with instrumental principles whose function is to protect these intrinsic values. […] Understanding these categories and their relation to each other is the key to operationalizing AI principles that can inform both developers and organizations.”
(forthcoming in the December issue of Communications of the ACM, Computing Ethics column)
University of California, Berkeley, Haas School of Business released its Playbook on Mitigating Bias in AI. The playbook, authored by Genevieve Smith and Ishita Rustagi, drew on interviews with various experts in the field, including the Lab’s director, Dr. Canca.
As AI Ethics Lab, we supported the Coronathon Turkey initiative by offering ethics consultation to the projects that won grants and evaluating them from an ethical perspective.
Thanks to privacy-by-design technology, population-wide mandatory use of digital contact tracing apps (DCT) can be both more efficient and more respectful of privacy than conventional manual contact tracing, and considerably less intrusive than current lockdowns. However counterintuitive it may seem, mandatory privacy-by-design DCT is therefore the only ethical option for fighting COVID-19.
As coronavirus cases increase worldwide, institutions keep their communities informed with frequent updates—but only up to a point. They share minimal information, such as the number of cases, but omit individuals’ names and identifying details.
Many institutions are legally obligated to protect individual privacy, but is this restriction on transparency ethically justified?
Following the Lab’s workshop on “Black Mirror and Philosophy,” Dr. Ihle and Dr. Canca’s chapter on ‘White Christmas’ has been published. The chapter focuses on the tension between privacy and access to information.
About the book:
“A philosophical look at the twisted, high-tech near-future of the sci-fi anthology series Black Mirror, offering a glimpse of the darkest reflections of the human condition in digital technology.”
Does the solution to our ethical problems in AI lie in using the Universal Declaration of Human Rights instead of ethical theories? In a short post published in the “Articles & Insights” section of the United Nations University Centre for Policy Research, the Lab’s director Cansu Canca explains why the UDHR cannot be the answer.
A New Model for AI Ethics in R&D
In this special issue of Forbes AI on building ethical AI, the Lab’s director Cansu Canca describes AI Ethics Lab’s approach to integrating ethics into AI research and development.
A User-Focused Transdisciplinary Research Agenda for AI-Enabled Health Tech Governance
“A new working paper from participants in the AI-Health Working Group out of the Berkman Klein Center for Internet & Society at Harvard University, the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School, and the AI Ethics Lab sets forth a research agenda for stakeholders to proactively collaborate and design AI technologies that work with users to improve their health and wellbeing.”
In their January 2019 issue, IBM Industrious magazine asked women in AI ethics one hard question:
“What conversations about AI are we not having — but should?”
Drawing on Lighthouse3 CEO Mia Dand’s list of “100 Brilliant Women in AI Ethics to Follow in 2019 and Beyond,” Industrious compiled answers from female leaders in the area, including Francesca Rossi, AI Ethics Global Leader at IBM; Meredith Broussard, NYU professor and data journalist; Karina Vold, AI researcher at the Centre for the Future of Intelligence at Cambridge University; and AI Ethics Lab’s director and moral philosopher Cansu Canca.
Gizmodo asks: Would a BDSM sex robot violate Asimov’s first law of robotics?
A number of scholars and researchers answered this question, including Ryan Calo, professor at the University of Washington School of Law and co-director of the Tech Policy Lab; Patrick Lin, professor of philosophy and director of the Ethics + Emerging Sciences Group at California Polytechnic State University; and AI Ethics Lab’s Cansu Canca.
Here is Cansu’s short answer.
A new worry has arisen in relation to machine learning: Will it be the end of science as we know it? The quick answer is, no, it will not. And here is why.
Let’s start by recapping what the problem seems to be. Using machine learning, we are increasingly able to make better predictions than we can with the tools of the traditional scientific method, so to speak. However, these predictions do not come with causal explanations. In fact, the more complex the algorithms become (as we move deeper into deep neural networks), the better the predictions and the worse the explainability. Thus, “if prediction is […] the primary goal of science,” as some argue, then the pillar of the scientific method—understanding of phenomena—becomes superfluous, and machine learning seems to be a better tool for science than the scientific method.
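The prediction-versus-explanation trade-off can be illustrated with a minimal sketch (not from the original post; it assumes scikit-learn and synthetic data, purely for illustration). A linear model recovers coefficients that read as an explanation of the data-generating process, while a small neural network makes comparable predictions through hundreds of weights with no direct causal reading:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

# Synthetic data with a known generating process: y = 3*x1 - 2*x2 + noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Interpretable model: the fitted coefficients directly expose the
# underlying relationship (close to [3, -2]) -- an "explanation".
lin = LinearRegression().fit(X, y)
print("linear coefficients:", lin.coef_)

# Opaque model: it can predict just as well, but its knowledge is
# distributed over many weights with no direct causal reading.
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(X, y)
n_weights = sum(w.size for w in mlp.coefs_)
print("neural net weight count:", n_weights)
```

The point of the sketch is only that the second model’s accuracy does not come packaged with the first model’s understanding.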