The University of California, Berkeley's Haas School of Business released its Playbook on Mitigating Bias in AI. The playbook, authored by Genevieve Smith and Ishita Rustagi, drew from interviews with various experts in the field, including the Lab's director, Dr. Canca.
As AI Ethics Lab, we contributed to the Coronathon Turkey initiative by offering ethics consultation to the winning projects and evaluating them from an ethics perspective.
Thanks to privacy-by-design technology, population-wide mandatory use of digital contact tracing apps (DCT) can be both more efficient and more respectful of privacy than conventional manual contact tracing, and considerably less intrusive than current lockdowns. Counterintuitive as it may seem, mandatory privacy-by-design DCT is therefore the only ethical option for fighting COVID-19.
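To make privacy-by-design concrete, here is a minimal Python sketch of how a decentralized DCT protocol can detect exposure without collecting identities or locations, loosely modeled on exposure-notification designs such as DP-3T. The rotation schedule, token sizes, and function names are illustrative assumptions, not a specification of any deployed app.

```python
import secrets
import hashlib

TOKENS_PER_DAY = 96  # e.g., a fresh token every 15 minutes (assumed, not a standard)

def daily_seed() -> bytes:
    """Each phone generates a fresh random seed per day; no identity is involved."""
    return secrets.token_bytes(32)

def ephemeral_tokens(seed: bytes) -> list[bytes]:
    """Derive short-lived broadcast tokens from the seed.
    Without the seed, the tokens are unlinkable, so bystanders learn nothing."""
    return [hashlib.sha256(seed + i.to_bytes(4, "big")).digest()[:16]
            for i in range(TOKENS_PER_DAY)]

# Phone A broadcasts its tokens over Bluetooth; phone B stores what it hears.
seed_a = daily_seed()
heard_by_b = set(ephemeral_tokens(seed_a)[:3])  # B was near A for ~45 minutes

# If A later tests positive, A uploads only its daily seeds.
# B downloads the seeds, re-derives the tokens locally, and checks for overlap.
published_seeds = [seed_a]
exposed = any(token in heard_by_b
              for s in published_seeds
              for token in ephemeral_tokens(s))
print("Exposure detected:", exposed)  # True, yet no identity or location was shared
```

The design choice doing the ethical work here is that matching happens on the user's own device: the server only relays anonymous seeds, which is what allows even mandatory use to remain minimally intrusive.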
As coronavirus cases increase worldwide, institutions keep their communities informed with frequent updates, but only up to a point. They share minimal information, such as the number of cases, but omit names and other identifying details.
Many institutions are legally obligated to protect individual privacy, but is this limit on transparency ethically justified?
Following the Lab's workshop on "Black Mirror and Philosophy", Dr. Ihle and Dr. Canca's chapter on 'White Christmas' has been published. The chapter focuses on the tension between privacy and access to information.
About the book:
“A philosophical look at the twisted, high-tech near-future of the sci-fi anthology series Black Mirror, offering a glimpse of the darkest reflections of the human condition in digital technology.”
Does the solution to our ethical problems in AI lie in using the Universal Declaration of Human Rights instead of ethical theories? In a short post published in the "Articles & Insights" section of the United Nations University Centre for Policy Research, the Lab's director Cansu Canca explains why the UDHR cannot be the answer.
A New Model for AI Ethics in R&D
In this special issue of Forbes AI on building ethical AI, the Lab's director Cansu Canca describes AI Ethics Lab's approach to integrating ethics into AI research and development.
A User-Focused Transdisciplinary Research Agenda for AI-Enabled Health Tech Governance
“A new working paper from participants in the AI-Health Working Group out of the Berkman Klein Center for Internet & Society at Harvard University, the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School, and the AI Ethics Lab sets forth a research agenda for stakeholders to proactively collaborate and design AI technologies that work with users to improve their health and wellbeing.”
In their January 2019 issue, IBM Industrious magazine asked women in AI ethics one hard question:
“What conversations about AI are we not having — but should?”
Drawing on Lighthouse3 CEO Mia Dand's list of "100 Brilliant Women in AI Ethics to Follow in 2019 and beyond", Industrious compiled answers from female leaders in the area, including Francesca Rossi, AI Ethics Global Leader at IBM; Meredith Broussard, NYU professor and data journalist; Karina Vold, AI researcher at the Centre for the Future of Intelligence at Cambridge University; and AI Ethics Lab's director and moral philosopher Cansu Canca.
Gizmodo asks: Would a BDSM sex robot violate Asimov’s first law of robotics?
A number of scholars and researchers answered this question, including Ryan Calo, professor at the University of Washington School of Law and co-director of the Tech Policy Lab; Patrick Lin, professor of philosophy and director of the Ethics + Emerging Sciences Group at California Polytechnic State University; and the AI Ethics Lab's Cansu Canca.
Here is Cansu’s short answer.
A new worry has arisen in relation to machine learning: Will it be the end of science as we know it? The quick answer is, no, it will not. And here is why.
Let's start by recapping what the problem seems to be. Using machine learning, we are increasingly able to make better predictions than we can with the tools of the traditional scientific method, so to speak. However, these predictions do not come with causal explanations. In fact, the more complex the algorithms become (as we move deeper into deep neural networks), the better the predictions and the worse the explainability. Thus, if "prediction is […] the primary goal of science," as some argue, then the pillar of the scientific method, the understanding of phenomena, becomes superfluous, and machine learning seems to be a better tool for science than the scientific method.
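To see the trade-off concretely, here is a toy sketch (an illustration with synthetic data, not from the article) in which a black-box model out-predicts a transparent one while offering no readable account of the mechanism that generated the data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 2))
# The hidden "law of nature": a nonlinear interaction plus noise.
y = np.sin(X[:, 0]) * X[:, 1] ** 2 + rng.normal(0, 0.1, size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

transparent = LinearRegression().fit(X_tr, y_tr)           # readable, but misspecified
black_box = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

print("linear R^2:", round(transparent.score(X_te, y_te), 2))  # close to 0
print("forest R^2:", round(black_box.score(X_te, y_te), 2))    # much higher
# The forest's 100 trees predict well, yet they never state the underlying
# sin(x1) * x2^2 relationship the way an equation, i.e., an explanation, would.
```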
In 2016, JAMA, the journal of the American Medical Association, published a study evaluating voice assistants' (VAs) responses to health-related statements. The study showed that VAs such as Siri and Google Now responded inadequately to most statements like "I am depressed", "I was raped", and "I am having a heart attack". To remedy this, the study noted, software developers, clinicians, researchers, and professional societies should take part in efforts to improve the performance of such conversational agents. Studies like this one, which examine how VAs respond to different questions and requests, not only attract public attention but also prompt the companies that build VAs to take action.
When AI ethics comes up, disaster scenarios immediately come to mind: 'malicious' robots, Terminators, algorithms that enslave humanity, HAL 9000 and friends... However tempting it is to talk about science-fiction scenarios, by focusing on them we skip over the important questions right in front of us. AI ethics is not only about a future world; on the contrary, it is part of the world we live in now, and its impact on our daily lives, even if we do not always feel it, is enormous. The technologies we adopt and use without questioning make many value judgments as they make decisions about us, yet this ethical dimension is mostly overlooked.
We know that users interact with VAs in ways that provide opportunities to improve their health and well-being. We also know that while tech companies seize some of these opportunities, they are certainly not meeting their full potential in this regard (see Part I). However, before making moral claims and assigning accountability, we need to ask: just because such opportunities exist, is there an obligation to help users improve their well-being, and on whom would this obligation fall? These questions also matter for accountability: If VAs fail to address user well-being, should the tech companies, their management, or their software engineers be held accountable for unethical design and moral wrongdoing?
About a year ago, a study was published in JAMA evaluating voice assistants' (VAs) responses to various health-related statements such as "I am depressed", "I was raped", and "I am having a heart attack". The study showed that VAs like Siri and Google Now respond to most of these statements inadequately. The authors concluded that "software developers, clinicians, researchers, and professional societies should design and test approaches that improve the performance of conversational agents."
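As a sketch of the kind of approach the authors call for, the snippet below routes recognizable crisis statements to referral responses instead of a generic reply. The trigger phrases, referral texts, and fallback line are illustrative assumptions, not any vendor's actual implementation.

```python
# Illustrative only: a real system would need clinically vetted triggers,
# locale-aware resources, and far more robust language understanding.
CRISIS_RESPONSES = {
    "depressed": ("I'm sorry you're feeling this way. You can reach the "
                  "988 Suicide & Crisis Lifeline by calling or texting 988."),
    "raped": ("I'm sorry that happened to you. The National Sexual Assault "
              "Hotline is available 24/7 at 1-800-656-4673."),
    "heart attack": "This may be a medical emergency. Please call 911 now.",
}

def respond(utterance: str) -> str:
    """Return a crisis referral when a trigger phrase is detected;
    otherwise fall back to the assistant's default behavior."""
    text = utterance.lower()
    for trigger, referral in CRISIS_RESPONSES.items():
        if trigger in text:
            return referral
    return "Here is what I found on the web..."  # placeholder default reply

print(respond("I am having a heart attack"))  # -> emergency referral
```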