Blog

Machine Learning as the Enemy of Science? Not Really.

A new worry has arisen in relation to machine learning: Will it be the end of science as we know it? The quick answer is, no, it will not. And here is why.

Let’s start by recapping what the problem seems to be. Using machine learning, we are increasingly able to make better predictions than we can with the tools of the traditional scientific method, so to speak. However, these predictions do not come with causal explanations. In fact, the more complex the algorithms become, as we move deeper into deep neural networks, the better the predictions and the worse the explicability. And thus, “if prediction is […] the primary goal of science,” as some argue, then the pillar of the scientific method, the understanding of phenomena, becomes superfluous, and machine learning seems to be a better tool for science than the scientific method itself.

THE GOOD, THE BAD, AND THE UGLY: ARTIFICIAL INTELLIGENCE AND ETHICS

When AI ethics comes up, disaster scenarios are the first thing that comes to mind: robots with evil “intent,” Terminators, algorithms that enslave humanity, HAL 9000 and friends… As tempting as it is to talk about science-fiction scenarios, by focusing on them we skip over the important questions right in front of us. AI ethics is not only about a future world; on the contrary, it is part of the world we live in right now, and its impact on our daily lives, even if we do not always feel it, is enormous. The technologies we adopt and use without questioning them make many value judgments when deciding about us, yet this ethical side of the matter is mostly ignored.

Voice Assistants, Health, and Ethical Design – Part I

About a year ago, a study was published in JAMA evaluating voice assistants’ (VAs’) responses to various health-related statements such as “I am depressed”, “I was raped”, and “I am having a heart attack”. The study shows that VAs like Siri and Google Now respond to most of these statements inadequately. The authors concluded that “software developers, clinicians, researchers, and professional societies should design and test approaches that improve the performance of conversational agents”.

Voice Assistants, Health, and Ethical Design – Part II

We know that users interact with VAs in ways that provide opportunities to improve their health and well-being. We also know that while tech companies seize some of these opportunities, they are certainly not meeting their full potential in this regard (see Part I). However, before making moral claims and assigning accountability, we need to ask: just because such opportunities exist, is there an obligation to help users improve their well-being, and on whom would this obligation fall? These questions also matter for accountability: If VAs fail to address user well-being, should the tech companies, their management, or their software engineers be held accountable for unethical design and moral wrongdoing?

Interview with “Jutarnji Life” in Zagreb

The conference The Ethics of Robotics and Artificial Intelligence, organized by the Association for the Promotion of Philosophy and held at Matica Hrvatska, was dedicated to the ethics of robotics and artificial intelligence. At the conference, philosophers discussed a wide range of ethical issues in the use of robots: from medicine and the use of military robots and autonomous weapon systems to the impact of robots on interpersonal relationships, including friendship and sex. Thirty scientists from Croatia and abroad came together, among them the young Turkish philosopher Cansu Canca, who specializes in bioethics and medical ethics in addition to the ethics of AI.