Iron Mountain interviews Cansu Canca

During a recent Iron Mountain Executive Exchange event, Cansu Canca, Founder + Director of AI Ethics Lab, spoke about responsible AI and ethics by design. Dr. Canca shared her perspective on how organizations should build ethics into their AI systems. There was so much to discuss that she later sat down with Iron Mountain to carry on the conversation.

Article: ‘Operationalizing AI Ethics Principles’ published in Communications of the ACM


Article available online! 📄
“In any given set of AI principles, one finds a wide range of concepts like privacy, transparency, fairness, and autonomy. Such a list mixes core principles that have intrinsic values with instrumental principles whose function is to protect these intrinsic values. […] Understanding these categories and their relation to each other is the key to operationalizing AI principles that can inform both developers and organizations.”
(forthcoming in the Computing Ethics column of the December issue of Communications of the ACM)

Ethics Evaluation: Coronathon Turkey

As AI Ethics Lab, we supported the Coronathon Turkey initiative by providing ethics consultation to the projects that won grants and evaluating them from an ethics perspective.

Book Chapter: Black Mirror and Philosophy – Dark Reflections

Following the Lab’s workshop on “Black Mirror and Philosophy”, Dr. Ihle and Dr. Canca’s chapter on ‘White Christmas’ has been published. The chapter focuses on the tension between privacy and access to information.

About the book:

“A philosophical look at the twisted, high-tech near-future of the sci-fi anthology series Black Mirror, offering a glimpse of the darkest reflections of the human condition in digital technology.”

Research Agenda for Designing AI-Health Coaches – by AI Ethics Lab & Harvard Law School

A User-Focused Transdisciplinary Research Agenda for AI-Enabled Health Tech Governance

“A new working paper from participants in the AI-Health Working Group out of the Berkman Klein Center for Internet & Society at Harvard University, the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School, and the AI Ethics Lab sets forth a research agenda for stakeholders to proactively collaborate and design AI technologies that work with users to improve their health and wellbeing.”

Responding to IBM’s Question: What we don’t talk about when we talk about AI

In its January 2019 issue, IBM’s Industrious magazine asked women in AI ethics one hard question:

“What conversations about AI are we not having — but should?”

Industrious posed the question to women featured in “100 Brilliant Women in AI Ethics to Follow in 2019 and beyond”, the list compiled by Lighthouse3 CEO Mia Dand. The compiled answers came from female leaders in the field including Francesca Rossi, AI Ethics Global Leader at IBM; Meredith Broussard, NYU professor and data journalist; Karina Vold, AI researcher at the Centre for the Future of Intelligence at Cambridge University; and AI Ethics Lab’s Director and moral philosopher Cansu Canca.

Gizmodo Asked Us: Would a BDSM Sex Robot Violate Asimov’s First Law of Robotics?


A number of scholars and researchers answered this question, including Ryan Calo, professor at the University of Washington School of Law and co-director of the Tech Policy Lab; Patrick Lin, professor of philosophy and director of the Ethics + Emerging Sciences Group at California Polytechnic State University; and AI Ethics Lab’s Cansu Canca.

Here is Cansu’s short answer.

Machine Learning as the Enemy of Science? Not Really.

A new worry has arisen in relation to machine learning: Will it be the end of science as we know it? The quick answer is no, it will not. And here is why.

Let’s start by recapping what the problem seems to be. Using machine learning, we are increasingly able to make better predictions than we can with the tools of the traditional scientific method, so to speak. However, these predictions do not come with causal explanations. In fact, the more complex the algorithms become (as we move deeper into deep neural networks), the better the predictions get and the worse their explainability becomes. And thus, if prediction is “[…] the primary goal of science” as some argue, then the pillar of the scientific method, the understanding of phenomena, becomes superfluous, and machine learning seems to be a better tool for science than the scientific method itself.
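This prediction-versus-explanation trade-off is easy to see in a toy experiment. Below is a minimal sketch, assuming NumPy and scikit-learn are available; the synthetic data and model choices are illustrative assumptions of ours, not taken from the original post. A linear model yields coefficients a scientist can read off and reason about, while a small neural network typically predicts the same nonlinear target better yet offers no comparably inspectable account of why.

```python
# Minimal sketch: interpretable vs. opaque models on the same task.
# Assumes NumPy and scikit-learn; all names and data here are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic data with a mildly nonlinear ground truth.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 - X[:, 2] + rng.normal(0, 0.1, 2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable model: each coefficient has a direct, human-readable meaning.
linear = LinearRegression().fit(X_train, y_train)
print("linear R^2:", round(linear.score(X_test, y_test), 3))
print("linear coefficients:", linear.coef_)  # directly inspectable

# Opaque model: typically predicts better on this nonlinear target, but its
# thousands of weights provide no comparable explanation of its predictions.
nn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                  random_state=0).fit(X_train, y_train)
print("network R^2:", round(nn.score(X_test, y_test), 3))
```

On data like this, the network’s R² tends to exceed the linear model’s precisely because the ground truth is nonlinear, which is the worry in miniature: the better-predicting model is the one whose inner workings resist explanation.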