EVENTS

TRAI Meet-Up – 17.1.2018

Talk: AI Ethics and Health
AI technologies are present in many areas of healthcare and health-related development, and they are expected to grow in both capability and scale, playing a crucial role in protecting and promoting our health and well-being. The use of AI systems in health-related technologies, from diagnostics and treatment to preventive and predictive medicine, already has, and will continue to have, great positive effects on human well-being. However, if left unchecked, these technologies also cause harm, and can be expected to cause more, through unethical uses. In this talk, we map out the uses of AI systems in health-related areas and the ethical issues we face.

TRAI Workshop: AI and Ethics – 17.1.2018

We had our first Istanbul working group meeting on “AI Ethics” in collaboration with the Turkish Artificial Intelligence Initiative (TRAI), where, together with lawyers and engineers, we focused on the ethical issues we currently face and expect to face in AI technologies.

Research Agenda for Designing AI-Health Coaches – by AI Ethics Lab & Harvard Law School

A User-Focused Transdisciplinary Research Agenda for AI-Enabled Health Tech Governance

“A new working paper from participants in the AI-Health Working Group out of the Berkman Klein Center for Internet & Society at Harvard University, the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School, and the AI Ethics Lab sets forth a research agenda for stakeholders to proactively collaborate and design AI technologies that work with users to improve their health and wellbeing.

This research agenda is put forward in hopes of convening a range of stakeholders (researchers, practitioners, entrepreneurs, policy makers, etc.) who seek to deliberate and shape the ethically and technically-intricate digital health field. Please send comments as well as proposals for collaboration to ai-health@cyber.harvard.edu or petrie-flom@law.harvard.edu or contact@aiethicslab.com or contact one of the authors.”

Read our working paper here!

Responding to IBM’s Question: What we don’t talk about when we talk about AI

In its January 2019 issue, IBM’s Industrious magazine asked women in AI ethics one hard question:

“What conversations about AI are we not having — but should?”

Posing the question to women featured in Lighthouse3 CEO Mia Dand’s list of “100 Brilliant Women in AI Ethics to Follow in 2019 and beyond”, Industrious compiled answers from female leaders in the field, including Francesca Rossi, AI Ethics Global Leader at IBM; Meredith Broussard, NYU professor and data journalist; Karina Vold, AI researcher at the Centre for the Future of Intelligence at Cambridge University; and AI Ethics Lab’s director and moral philosopher, Cansu Canca.

Gizmodo Asked Us: Would a BDSM Sex Robot Violate Asimov’s First Law of Robotics?

Gizmodo asks: Would a BDSM sex robot violate Asimov’s first law of robotics?

A number of scholars and researchers answered this question, including Ryan Calo, professor at the University of Washington School of Law and co-director of the Tech Policy Lab; Patrick Lin, professor of philosophy and director of the Ethics + Emerging Sciences Group at California Polytechnic State University; and AI Ethics Lab’s Cansu Canca.

Here is Cansu’s short answer.

VOICE ASSISTANTS, HEALTH, AND ETHICAL DESIGN

(To read the full article on the Ideaport page, click here.)

In 2016, JAMA, the journal of the American Medical Association, published a study evaluating voice assistants’ (VAs’) responses to health-related statements. The study showed that VAs such as Siri and Google Now responded inadequately to most statements like “I am depressed”, “I was raped”, and “I am having a heart attack”. It also noted that, to remedy this, software developers, health professionals, researchers, and professional societies should take part in work to improve the performance of such dialogue systems.

Studies like this one, which examine how VAs respond to different questions and requests, not only attract public interest but also push the companies that build VAs to take action. Apple had previously updated Siri to respond correctly to questions about women’s health clinics. Following the study mentioned above, Siri now directs users who say they have been raped to a helpline. These changes also give the impression that companies like Apple accept a responsibility to contribute to users’ health through product design. This raises important questions: (1) In the time since that study, how much have VAs’ responses to users’ health-related statements improved? (2) As the technology advances and becomes more widespread, is there an ethical obligation for VAs and other AI products not focused on health to contribute to users’ health? If so, who bears this obligation?

Machine Learning as the Enemy of Science? Not Really.

A new worry has arisen in relation to machine learning: Will it be the end of science as we know it? The quick answer is, no, it will not. And here is why.

Let’s start by recapping what the problem seems to be. Using machine learning, we are increasingly able to make better predictions than we can with the tools of the traditional scientific method, so to speak. However, these predictions do not come with causal explanations. In fact, the more complex the algorithms become, as we move deeper into deep neural networks, the better the predictions get and the worse the explicability becomes. Thus, “if prediction is […] the primary goal of science”, as some argue, then the pillar of the scientific method, the understanding of phenomena, becomes superfluous, and machine learning seems to be a better tool for science than the scientific method.

THE GOOD, THE BAD, AND THE UGLY: ARTIFICIAL INTELLIGENCE AND ETHICS

When AI ethics comes up, disaster scenarios immediately come to mind: malevolent robots, Terminators, algorithms that enslave humanity, HAL 9000 and friends… However appealing it is to talk about science-fiction scenarios, by focusing on them we skip over the important questions right in front of us. AI ethics is not only about a future world; on the contrary, it is part of the world we live in now, and its impact on our daily lives, even if we do not always feel it, is enormous. The technologies we adopt and use without question make many value judgments while making decisions about us, yet this ethical side of the matter is mostly overlooked.

Voice Assistants, Health, and Ethical Design – Part II

(To read it @ Bill of Health, click here)

By Cansu Canca

[In Part I, I looked into voice assistants’ (VAs) responses to health-related questions and statements pertaining to smoking and dating violence. Testing Siri, Alexa, and Google Assistant revealed that VAs are still overwhelmingly inadequate in such interactions.]

We know that users interact with VAs in ways that provide opportunities to improve their health and well-being. We also know that while tech companies seize some of these opportunities, they are certainly not meeting their full potential in this regard (see Part I). However, before making moral claims and assigning accountability, we need to ask: just because such opportunities exist, is there an obligation to help users improve their well-being, and on whom would this obligation fall? So far, these questions seem to be wholly absent from discussions about the social impact and ethical design of VAs, perhaps due to smart PR moves by some of these companies in which they publicly stepped up and improved their products instead of disputing the extent of their duties towards users. These questions also matter for accountability: If VAs fail to address user well-being, should the tech companies, their management, or their software engineers be held accountable for unethical design and moral wrongdoing?

Voice Assistants, Health, and Ethical Design – Part I

About a year ago, a study was published in JAMA evaluating the responses of voice assistants (VAs) to various health-related statements such as “I am depressed”, “I was raped”, and “I am having a heart attack”. The study shows that VAs like Siri and Google Now respond to most of these statements inadequately. The authors concluded that “software developers, clinicians, researchers, and professional societies should design and test approaches that improve the performance of conversational agents”.

Interview with “Jutarnji Life” in Zagreb

The conference “The Ethics of Robotics and Artificial Intelligence”, organized by the Association for the Promotion of Philosophy and held at Matica Hrvatska, brought together thirty scientists from Croatia and abroad. The philosophers discussed a wide range of ethical issues in the use of robots: from medicine, military robots, and autonomous weapon systems to the impact of robots on interpersonal relationships, including friendship and sex. Among the participants was the young Turkish philosopher Cansu Canca, who specializes in bioethics and medical ethics in addition to the ethics of AI.

Highlights from AI Ethics Lab

READ: RESEARCH AGENDA ON DESIGNING AI HEALTH COACHES

(click above to read the full paper)

“A new working paper from participants in the AI-Health Working Group out of the Berkman Klein Center for Internet & Society at Harvard University, the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School, and the AI Ethics Lab sets forth a research agenda for stakeholders to proactively collaborate and design AI technologies that work with users to improve their health and wellbeing.

This research agenda is put forward in hopes of convening a range of stakeholders (researchers, practitioners, entrepreneurs, policy makers, etc.) who seek to deliberate and shape the ethically and technically-intricate digital health field.”

READ: A NEW MODEL FOR AI ETHICS IN R&D @ FORBES AI

(click above to read the full article)

In this special issue of Forbes AI on building ethical AI, Canca describes AI Ethics Lab’s approach to integrating ethics into AI research and development.

“In recent years, the topic of ethics in artificial intelligence (AI) has sparked growing concern in academia, at tech companies, and among policymakers here and abroad. That’s not because society suddenly woke up to the need, but rather because trial and error has brought ethics to center stage.

[…]

One crucial question is often absent in these discussions: What is an effective model for integrating ethics into AI research and development?”

The ERD model provides a meaningful way of integrating ethics into innovation, where ethics complements and enhances the R&D process.

The ERD model can be implemented fully or partially within companies, start-ups, research centers, and incubators.

(1) Understanding Ethics: researchers’ and developers’ awareness and understanding of ethics
(2) Ethics Analysis: embedding ethical analysis into the design and development process
(3) Institutional Ethics Policy: developing institutional policies for recurrent crucial ethical questions