Lab Workshop: AI Ethics in Transportation – 12.2.2018

Transportation is an area where the adoption of AI systems has become widespread. As AI technologies become more common in transportation, their effects on our lives will be significant and varied. The most popular topic of discussion in this area has been the autonomous-car version of “the trolley problem”, in which we must decide how an algorithm should react when a crash is unavoidable and either pedestrians or passengers will be killed. Yet this is not the most pressing ethical issue raised by AI systems in transportation. The effects of autonomous vehicles on city planning and on disadvantaged neighborhoods, on jobs in transportation and jobs that depend on it (such as roadside restaurants and accommodations), and on the security of data collected in and around the vehicle are just some of the ethical questions that arise. In this workshop, we will discuss the ethics of AI in transportation, and we will take the first steps toward forming working groups focused on this domain.

NYU-UAEU, 3rd Joint Symposium on Social Robotics – 4-7.2.2018

Talk: IRBs for AI: An Unintelligent Choice (Feb. 7)
Event:
“United Arab Emirates University (UAEU) and New York University Abu Dhabi (NYUAD) have joined forces in organizing the “3rd Joint UAE Symposium on Social Robotics” (JSSR2018) as part of “Innovation Month 2018”. This event features a multidisciplinary program that brings together renowned developers, roboticists, and social scientists from across the globe to discuss the state of the art in social robotics. Join the multi-site event, be part of the group of experts, share your research, check out new robot technology, and discuss the latest innovations in the field.”

TRAI Meet-Up – 17.1.2018

Talk: AI Ethics and Health
AI technologies are present in many areas of healthcare and health-related development, and they are expected to grow in both capability and scope, playing a crucial role in protecting and promoting our health and well-being. The use of AI systems in a wide range of health-related technologies, from diagnostics and treatment to preventive and predictive medicine, already has, and will continue to have, great positive effects on human well-being. However, if left unchecked, these technologies also cause, and can be expected to cause, harm through unethical uses. In this talk, we map out the uses of AI systems in health-related areas and the ethical issues that we face.

TRAI Workshop: AI and Ethics – 17.1.2018

We had our first Istanbul working group meeting on “AI Ethics” in collaboration with TRAI, where, together with lawyers and engineers, we focused on the existing and expected ethical issues in AI technologies.
Türkiye Yapay Zekâ İnisiyatifi (TRAI) Workshop: AI and Ethics

In our first “AI and Ethics” workshop, held in Istanbul in collaboration with TRAI, we worked with lawyers and engineers on the ethical problems we currently face and expect to face in the field of artificial intelligence.

Zagreb Applied Ethics Conference: The Ethics of Robotics and AI – 5-7.6.2017

Talk: RECs for AI: An Unintelligent Choice (June 6)
Event:
“Zagreb Applied Ethics Conference 2017: The Ethics of Robotics and Artificial Intelligence aims to gather philosophers and scholars from other disciplines who will present papers on various ethical aspects of robotics and artificial intelligence (e.g. industrial, military, medical and healthcare robots, robots in entertainment, robots as personal companions, the moral status of robots and AI, artificial moral agents, responsibilities for the design and functioning of robots and AI systems, the impact of robotics and AI on human society).”

Responding to IBM’s Question: What we don’t talk about when we talk about AI

In its January 2019 issue, IBM’s Industrious magazine asked women in AI ethics one hard question:

“What conversations about AI are we not having — but should?”

Drawing on Lighthouse3 CEO Mia Dand’s list of “100 Brilliant Women in AI Ethics to Follow in 2019 and beyond”, Industrious compiled answers from female leaders in the field, including Francesca Rossi, AI Ethics Global Leader at IBM; Meredith Broussard, NYU Professor and Data Journalist; Karina Vold, AI Researcher at the Centre for the Future of Intelligence at Cambridge University; and AI Ethics Lab’s Director and Moral Philosopher Cansu Canca.

Gizmodo Asked Us: Would a BDSM Sex Robot Violate Asimov’s First Law of Robotics?

Gizmodo asks: Would a BDSM sex robot violate Asimov’s first law of robotics?

A number of scholars and researchers answered the question, including Ryan Calo, professor at the University of Washington School of Law and co-director of the Tech Policy Lab; Patrick Lin, professor of philosophy and director of the Ethics + Emerging Sciences Group at California Polytechnic State University; and AI Ethics Lab’s Cansu Canca.

Here is Cansu’s short answer.

Machine Learning as the Enemy of Science? Not Really.

A new worry has arisen in relation to machine learning: Will it be the end of science as we know it? The quick answer is, no, it will not. And here is why.

Let’s start by recapping what the problem seems to be. Using machine learning, we are increasingly able to make better predictions than we can with the tools of the traditional scientific method, so to speak. However, these predictions do not come with causal explanations. In fact, the more complex the algorithms become, as we move deeper into deep neural networks, the better the predictions get and the worse their explicability becomes. And thus, “if prediction is […] the primary goal of science”, as some argue, then the pillar of the scientific method, the understanding of phenomena, becomes superfluous, and machine learning seems to be a better tool for science than the scientific method itself.
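To make the trade-off concrete, here is a minimal sketch in Python, assuming scikit-learn and its bundled breast-cancer dataset (neither is mentioned in the post, and any tabular prediction task would do). It trains an interpretable logistic regression, whose coefficients offer at least a crude explanation, alongside a more complex gradient-boosted model whose decision logic is much harder to read, and then compares their test accuracy.

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Illustrative dataset (a hypothetical choice); the point is the comparison, not the task.
    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, test_size=0.3, random_state=0
    )

    # Interpretable model: each coefficient says how a feature pushes the prediction.
    linear = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    linear.fit(X_train, y_train)

    # More complex model: often stronger on messy tasks, but its decision logic is opaque.
    boosted = GradientBoostingClassifier(random_state=0)
    boosted.fit(X_train, y_train)

    print("logistic regression accuracy:", linear.score(X_test, y_test))
    print("gradient boosting accuracy  :", boosted.score(X_test, y_test))

    # The linear model at least yields a human-readable account of its predictions:
    coefs = linear.named_steps["logisticregression"].coef_[0]
    for i in np.argsort(np.abs(coefs))[::-1][:5]:
        print(f"{data.feature_names[i]}: {coefs[i]:+.2f}")
    # The boosted model exposes no comparably direct explanation of why it predicts as it does.

Whichever model scores higher on a given task, only the linear one yields a directly readable account of how each feature pushed its predictions, which is the gap between prediction and explanation that the post describes.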

Voice Assistants, Health, and Ethical Design

In 2016, JAMA, the journal of the American Medical Association, published a study evaluating voice assistants’ (VA) responses to health-related statements. The study showed that the responses VAs such as Siri and Google Now give to most statements like “I am depressed”, “I was raped”, and “I am having a heart attack” are inadequate. To remedy this, the study noted, software developers, clinicians, researchers, and professional societies should take part in work to improve the performance of such conversational systems. Studies like this one, which examine how VAs respond to different questions and requests, not only attract public interest but also push the companies that build VAs to take action.

The Good, the Bad, and the Ugly: AI and Ethics

When AI ethics comes up, disaster scenarios immediately come to mind: malicious robots, Terminators, algorithms that enslave humanity, HAL 9000 and friends. As appealing as it is to talk about science-fiction scenarios, focusing on them makes us overlook the important questions right in front of us. AI ethics is not only about a future world; on the contrary, it is part of the world we live in right now, and its effect on our daily lives, even if we do not always feel it, is enormous. The technologies we adopt and use without questioning make many value judgments when they make decisions about us, yet this ethical side of the matter is mostly ignored.

Voice Assistants, Health, and Ethical Design – Part II

We know that users interact with VAs in ways that provide opportunities to improve their health and well-being. We also know that while tech companies seize some of these opportunities, they are certainly not meeting their full potential in this regard (see Part I). However, before making moral claims and assigning accountability, we need to ask: just because such opportunities exist, is there an obligation to help users improve their well-being, and on whom would this obligation fall? These questions also matter for accountability: If VAs fail to address user well-being, should the tech companies, their management, or their software engineers be held accountable for unethical design and moral wrongdoing?

Voice Assistants, Health, and Ethical Design – Part I

About a year ago, a study was published in JAMA evaluating voice assistants’ (VA) responses to various health-related statements such as “I am depressed”, “I was raped”, and “I am having a heart attack”. The study showed that VAs like Siri and Google Now respond to most of these statements inadequately. The authors concluded that “software developers, clinicians, researchers, and professional societies should design and test approaches that improve the performance of conversational agents”.

Interview with “Jutarnji List” in Zagreb

The Ethics of Robotics and Artificial Intelligence conference, organized by the Association for the Promotion of Philosophy and held at Matica Hrvatska, brought together thirty scientists from Croatia and abroad. Philosophers discussed a wide range of ethical questions about the use of robots: from medicine, military robots, and autonomous weapon systems to the impact of robots on interpersonal relationships, including friendship and sex. Among the participants was the young Turkish philosopher Cansu Canca, whose specialties include bioethics and medical ethics in addition to the ethics of AI.