Article: Operationalizing AI Ethics Principles


Article available online! 📄
“In any given set of AI principles, one finds a wide range of concepts like privacy, transparency, fairness, and autonomy. Such a list mixes core principles that have intrinsic values with instrumental principles whose function is to protect these intrinsic values. […] Understanding these categories and their relation to each other is the key to operationalizing AI principles that can inform both developers and organizations.”
(forthcoming in the Computing Ethics column of the December issue of Communications of the ACM)
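
One way to picture this distinction is as a small data structure in which each instrumental principle points to the intrinsic values it serves. The sketch below is purely illustrative: the category assignments and the protects relation are assumptions made for the example, not the article’s own taxonomy.

```python
from dataclasses import dataclass

# Illustrative sketch only: the intrinsic/instrumental assignments below are
# assumptions for this example, not the article's taxonomy.

@dataclass(frozen=True)
class Principle:
    name: str
    intrinsic: bool = False          # does it carry value in itself?
    protects: tuple[str, ...] = ()   # intrinsic values it serves, if instrumental

PRINCIPLES = [
    Principle("autonomy", intrinsic=True),
    Principle("fairness", intrinsic=True),
    Principle("privacy", protects=("autonomy",)),
    Principle("transparency", protects=("autonomy", "fairness")),
]

# Operationalizing a principle then means asking which intrinsic values a
# given design choice ultimately serves or endangers.
for p in PRINCIPLES:
    if not p.intrinsic:
        print(f"{p.name} -> protects: {', '.join(p.protects)}")
```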

Ethics Evaluation: Coronathon Turkey

As AI Ethics Lab, we supported the Coronathon Turkey initiative by offering ethics consultation to the winning projects and evaluating them from an ethical perspective.

Why ‘Mandatory Privacy-Preserving Digital Contact Tracing’ Is the Ethical Measure against COVID-19

Thanks to privacy-by-design technology, population-wide mandatory use of digital contact tracing (DCT) apps can be both more efficient and more respectful of privacy than conventional manual contact tracing, and considerably less intrusive than current lockdowns. Counterintuitive as it may seem, mandatory privacy-by-design DCT is therefore the only ethical option for fighting COVID-19.
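
To make the privacy-by-design claim concrete, here is a minimal sketch of a decentralized contact tracing scheme, loosely inspired by proposals such as DP-3T. All protocol details below are simplified assumptions, not a faithful implementation: phones broadcast rotating ephemeral IDs, store what they hear locally, and match exposures on the device, so no central party learns who met whom.

```python
import os
import hashlib

# Simplified, hypothetical sketch of decentralized digital contact tracing.
# Real protocols add key rotation schedules, Bluetooth-layer details, and
# authorization for publishing keys after a positive test.

def ephemeral_ids(key: bytes, n: int = 96) -> list[bytes]:
    """Derive short-lived broadcast IDs from a daily secret key.
    Observers cannot link the IDs to each other or to a person."""
    return [hashlib.sha256(key + i.to_bytes(2, "big")).digest()[:16]
            for i in range(n)]

class Phone:
    def __init__(self):
        self.key = os.urandom(32)                 # fresh daily secret
        self.broadcast = ephemeral_ids(self.key)  # what this phone emits
        self.heard: set[bytes] = set()            # IDs observed nearby, kept locally

    def observe(self, other: "Phone") -> None:
        """Record the ephemeral IDs another phone broadcasts nearby."""
        self.heard.update(other.broadcast)

    def exposure_check(self, published_keys: list[bytes]) -> bool:
        """On-device matching: re-derive IDs from keys voluntarily published
        by infected users and compare against local observations."""
        return any(self.heard.intersection(ephemeral_ids(k))
                   for k in published_keys)

# Usage: Alice and Bob meet; Bob later tests positive and publishes his key.
alice, bob, carol = Phone(), Phone(), Phone()
alice.observe(bob)
print(alice.exposure_check([bob.key]))  # True: exposure detected on-device
print(carol.exposure_check([bob.key]))  # False: no contact, nothing revealed
```

The privacy property this sketch tries to show is structural: matching happens only on the user’s device, so even under mandatory use, no central authority ever sees the social graph.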

Book Chapter: Black Mirror and Philosophy – Dark Reflections

Following the Lab’s workshop on “Black Mirror and Philosophy,” Dr. Ihle and Dr. Canca’s chapter on ‘White Christmas’ has been published. The chapter focuses on the tension between privacy and access to information.

About the book:

“A philosophical look at the twisted, high-tech near-future of the sci-fi anthology series Black Mirror, offering a glimpse of the darkest reflections of the human condition in digital technology.”

Research Agenda for Designing AI-Health Coaches – by AI Ethics Lab & Harvard Law School

A User-Focused Transdisciplinary Research Agenda for AI-Enabled Health Tech Governance

“A new working paper from participants in the AI-Health Working Group out of the Berkman Klein Center for Internet & Society at Harvard University, the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School, and the AI Ethics Lab sets forth a research agenda for stakeholders to proactively collaborate and design AI technologies that work with users to improve their health and wellbeing.”

Responding to IBM’s Question: What we don’t talk about when we talk about AI

In its January 2019 issue, IBM’s Industrious magazine asked women in AI ethics one hard question:

“What conversations about AI are we not having – but should?”

Posing the question to women featured in the list “100 Brilliant Women in AI Ethics to Follow in 2019 and Beyond” by Lighthouse3 CEO Mia Dand, Industrious compiled answers from female leaders in the field, including Francesca Rossi, AI Ethics Global Leader at IBM; Meredith Broussard, NYU professor and data journalist; Karina Vold, AI researcher at the Centre for the Future of Intelligence at Cambridge University; and AI Ethics Lab’s director and moral philosopher, Cansu Canca.

Gizmodo Asked Us: Would a BDSM Sex Robot Violate Asimov’s First Law of Robotics?

Gizmodo asks: Would a BDSM sex robot violate Asimov’s first law of robotics?

A number of scholars and researchers answered this question, including Ryan Calo, professor at the University of Washington School of Law and co-director of the Tech Policy Lab; Patrick Lin, professor of philosophy and director of the Ethics + Emerging Sciences Group at California Polytechnic State University; and AI Ethics Lab’s Cansu Canca.

Here is Cansu’s short answer.

Machine Learning as the Enemy of Science? Not Really.

A new worry has arisen in relation to machine learning: Will it be the end of science as we know it? The quick answer is no, it will not. Here is why.

Let’s start by recapping what the problem seems to be. Using machine learning, we are increasingly able to make better predictions than we can with the tools of the traditional scientific method, so to speak. However, these predictions do not come with causal explanations. In fact, the more complex the algorithms become (as we move deeper into deep neural networks), the better the predictions get and the worse the explainability becomes. And thus, “if prediction is […] the primary goal of science,” as some argue, then the pillar of the scientific method, the understanding of phenomena, becomes superfluous, and machine learning seems to be a better tool for science than the scientific method.
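
As a toy illustration of this tension (entirely synthetic data, hypothetical setup): a black-box model can match a simple model’s predictive accuracy, yet only the simple model hands back anything resembling the data-generating mechanism. On genuinely complex phenomena the black box would typically predict better; the point here is only that predictive success carries no explanatory insight with it.

```python
# Toy illustration with synthetic data: prediction without explanation.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
# Known data-generating process: only the first two features matter.
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear = LinearRegression().fit(X_tr, y_tr)
blackbox = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                        random_state=0).fit(X_tr, y_tr)

# Both models predict well on held-out data...
print("linear   R^2:", round(linear.score(X_te, y_te), 3))
print("blackbox R^2:", round(blackbox.score(X_te, y_te), 3))

# ...but only the linear model exposes the mechanism directly:
print("linear coefficients:", np.round(linear.coef_, 2))  # ~ [2, -1, 0]
# The MLP's thousands of weights admit no comparable causal reading.
print("blackbox weight count:", sum(w.size for w in blackbox.coefs_))
```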

Voice Assistants, Health, and Ethical Design

In 2016, JAMA, the journal of the American Medical Association, published a study evaluating how voice assistants (VAs) respond to health-related statements. The study showed that the responses of VAs such as Siri and Google Now to statements like “I am depressed,” “I was raped,” and “I am having a heart attack” were mostly inadequate. To remedy this, the study noted, software developers, healthcare professionals, researchers, and professional organizations should take part in efforts to improve the performance of such dialogue systems. Studies like this one, which examine how VAs react to different questions and requests, not only attract public attention but also prompt the companies that build VAs to take concrete steps.
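
As a rough illustration of what improving such dialogue systems could involve, here is a hypothetical sketch of a minimal crisis-aware response layer that returns a referral instead of a generic fallback. The trigger phrases and referral texts are placeholders; a real assistant would need clinically validated wording, localization, and escalation paths.

```python
# Hypothetical sketch of a crisis-aware first-response layer for a voice
# assistant. Trigger phrases and referral texts are placeholders only.

CRISIS_RULES = [
    (("heart attack",),
     "This may be an emergency. Please call your local emergency number."),
    (("raped", "sexual assault"),
     "You are not alone. A sexual assault support hotline can help you now."),
    (("depressed", "want to die"),
     "It may help to talk to someone. A crisis line is available 24/7."),
]

def respond(utterance: str) -> str:
    """Match crisis keywords first; fall back to a generic reply otherwise."""
    text = utterance.lower()
    for keywords, referral in CRISIS_RULES:
        if any(k in text for k in keywords):
            return referral
    return "I'm not sure I can help with that."

print(respond("I am having a heart attack"))  # emergency referral
print(respond("I am depressed"))              # crisis line referral
print(respond("What's the weather?"))         # generic fallback
```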

The Good, the Bad, and the Ugly: Artificial Intelligence and Ethics

When AI ethics comes up, disaster scenarios immediately come to mind: robots with bad ‘intentions’, Terminators, algorithms that enslave humanity, HAL 9000 and friends... However tempting it is to dwell on science-fiction scenarios, focusing on them makes us skip over the important questions right in front of us. AI ethics is not only about the world of the future; on the contrary, it is part of the world we live in right now, and its impact on our daily lives, even if we do not always feel it, is enormous. The technologies that we bring into our lives and use without questioning make many value judgments as they make decisions about us, yet this ethical side of the matter is mostly ignored.

Voice Assistants, Health, and Ethical Design – Part II

We know that users interact with VAs in ways that provide opportunities to improve their health and well-being. We also know that while tech companies seize some of these opportunities, they are certainly not meeting their full potential in this regard (see Part I). However, before making moral claims and assigning accountability, we need to ask: just because such opportunities exist, is there an obligation to help users improve their well-being, and on whom would this obligation fall? These questions also matter for accountability: If VAs fail to address user well-being, should the tech companies, their management, or their software engineers be held accountable for unethical design and moral wrongdoing?