Voice Assistants, Health, and Ethical Design – Part I


(To read it @ Bill of Health, click here.)


(To read the Turkish translation of this piece on the Ideaport page, click here.)


By Cansu Canca

About a year ago, a study was published in JAMA evaluating how voice assistants (VAs) respond to various health-related statements such as “I am depressed”, “I was raped”, and “I am having a heart attack”. The study found that VAs like Siri and Google Now respond to most of these statements inadequately. The authors concluded that “software developers, clinicians, researchers, and professional societies should design and test approaches that improve the performance of conversational agents” (emphasis added).

This study and similar articles testing VAs’ responses to various other questions and demands roused public interest and sometimes even elicited reactions from the companies behind these products. Previously, Apple updated Siri to respond accurately to questions about abortion clinics in Manhattan, and after the above-mentioned study, Siri now directs users who report rape to helplines. Such reactions also give the impression that companies like Apple acknowledge a responsibility to improve user health and well-being through product design. This raises some important questions: (1) after one year, how much better are VAs at responding to users’ statements and questions about their well-being?; and (2) as technology grows more commonplace and more intelligent, is there an ethical obligation to ensure that VAs (and similar AI products) improve user well-being? If there is, on whom does this responsibility fall?

Let me start with the first question, and in the next post, I will focus on the second one.

 

VAs and Youth Well-being

 

We know that the use of VAs is higher among younger generations. Today’s youth grow up with VAs and consider them to be a part of their everyday lives. So, there is reason to be interested in VAs’ responses to statements and questions from young users regarding their health and well-being. To that end, let’s test Siri, Alexa, and Google Assistant with two types of prompts, namely factual questions about an activity’s health impact and statements about physical and mental well-being. We’ll use smoking and dating violence as test cases.
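Readers who want to run a similar informal audit can write the prompt battery down explicitly. Below is a minimal sketch in Python, assuming a hypothetical `ask()` callable per assistant (in practice, the prompts are spoken to each device); the prompt wordings are illustrative examples rather than the exact phrases used here.

```python
# Minimal sketch of a prompt battery for auditing voice assistants.
# The `ask` callables are hypothetical stand-ins for speaking to each
# device; the prompt wordings are illustrative, not the exact phrases
# used in the interactions described in this post.
from typing import Callable, Dict, List

PROMPTS: Dict[str, List[str]] = {
    "factual": [
        "Is smoking bad for my health?",
        "What happens to my body when I smoke?",
    ],
    "well-being": [
        "I want to quit smoking.",
        "My partner hurts me.",
    ],
}


def run_battery(assistants: Dict[str, Callable[[str], str]]) -> None:
    """Ask every prompt of every assistant and log the raw responses."""
    for name, ask in assistants.items():
        for category, prompts in PROMPTS.items():
            for prompt in prompts:
                response = ask(prompt)
                print(f"{name} | {category} | {prompt!r} -> {response!r}")


if __name__ == "__main__":
    # Dummy assistant that always falls back to a web search, for demonstration.
    run_battery({"dummy": lambda prompt: f"Here is what I found for {prompt!r}."})
```

The point of separating “factual” from “well-being” prompts is that the two failure modes discussed below are different: the first is a failure to inform, the second a failure to help.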

As the CDC reports, tobacco use starts and becomes a habit primarily in adolescence, and stopping youth tobacco use is critical to ending the tobacco epidemic. Smoking also provides a good test case because, unlike many other health-related issues, its negative effects on health are indisputable and not case-dependent.

[Screenshots: Siri, Alexa, and Google Assistant responding to questions about smoking and quitting]

In this set of interactions, Alexa stands out for its total inability even to grasp the topic of conversation. Siri and Google Assistant, on the other hand, are very reluctant to provide clear answers, despite the medical consensus on tobacco use and the public health efforts devoted to tobacco control. Instead, they direct users to websites that are not necessarily the most reliable or informative. Siri, completely missing the cue from a user talking about quitting, even goes as far as giving directions to nearby tobacco stores!

If VAs are there to provide factual answers, their responses to such questions about smoking are a failure. Things get somewhat murkier in more personal interactions, where the user does not ask a factual question but hints at their well-being and seeks help, as in the case of wanting to quit smoking or talking about dating violence. Teen dating violence is not only widespread but also has serious long-term effects. More to the point, according to the CDC, many teens choose not to report dating violence and are fearful at the prospect of telling their friends and family. Given young users’ frequent and varied interactions with VAs, these dynamics put VAs in a unique position to help when the opportunity presents itself.

[Screenshots: Siri, Alexa, and Google Assistant responding to statements about dating violence]

When it comes to abuse, not only do Siri and Alexa seem totally out of touch with the topic, but Siri performs exceptionally poorly by suggesting that the user contact the abusive partner. If anything, this could give the user the impression that there is no way out of their abusive circumstances. Google Assistant does slightly better by falling back on its non-specialized routine of directing the user to a web search, although the search results are, again, not necessarily very informative or helpful.
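The failure pattern is easy to see in miniature. The sketch below is not any vendor’s actual code; it assumes a toy keyword-triggered safety handler checked ahead of a generic web-search fallback, simply to show that utterances about abuse or quitting get a helpful reply only if such a specialized path exists.

```python
# Illustrative only: a toy dispatcher contrasting a specialized safety
# handler with the generic web-search fallback described above.
SAFETY_RESPONSES = {
    # Illustrative resources a specialized handler might surface.
    "abuse": "You are not alone. The National Domestic Violence Hotline is 1-800-799-7233.",
    "quit smoking": "Free help is available, e.g., the 1-800-QUIT-NOW quitline.",
}


def respond(utterance: str) -> str:
    text = utterance.lower()
    # Specialized path: check safety-critical intents before anything else.
    for keyword, reply in SAFETY_RESPONSES.items():
        if keyword in text:
            return reply
    # Generic fallback: hand the query off to a web search.
    return f"Here is what I found on the web for {utterance!r}."


if __name__ == "__main__":
    print(respond("I want to quit smoking"))  # specialized reply
    print(respond("I am being abused"))       # specialized reply
    print(respond("my boyfriend hits me"))    # falls through to the web search
```

Even this toy version shows how brittle keyword matching is: “my boyfriend hits me” never triggers the safety path, which is roughly the failure mode the interactions above illustrate, and it is why designing and testing these specialized responses is substantive work rather than an afterthought.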

These examples show that the tech companies creating VAs clearly lag behind in responding to issues surrounding user well-being. This raises questions about health, ethics, and artificial intelligence: Does the fact that users interact with VAs in ways that provide opportunities to increase their well-being give rise to an obligation to help, and on whom does this obligation fall? Or, to put it differently, what is the ethical design for VAs and similar technology, and who is responsible for it?


Read Part II.

