Mapping Workshop: Biases in Image Search Results (September 20)
Panel: Ethically Handling Data – What is Your Responsibility and What Should be the Next Step? (September 21)
“The Mapping workshop, run by Cansu Canca and Laura Haaber Ihle looked at how we can structure ethical problems at hand, delving into the underlying principles before attempting to solve the issues at hand. The engaging workshop involved attendee collaboration and assessment of practical implementation methods, gradually working toward the creation of practical solutions.” (@Re-Work Blog)
A new worry has arisen in relation to machine learning: Will it be the end of science as we know it? The quick answer is no, it will not. And here is why.
Let’s start by recapping what the problem seems to be. Using machine learning, we are increasingly able to make better predictions than we can with the tools of the traditional scientific method, so to speak. However, these predictions do not come with causal explanations. In fact, the more complex the algorithms become—as we move deeper into deep neural networks—the better the predictions and the worse the explainability. And thus, “if prediction is […] the primary goal of science” as some argue, then the pillar of the scientific method—understanding of phenomena—becomes superfluous, and machine learning seems to be a better tool for science than the scientific method.
When searching for “professor” or “CEO” on Google Images, the results show overwhelmingly pictures of white men. While white men do hold these jobs more often, the image search results present an extreme bias against representing women and people of color. This has been pointed out as an ethical problem in various outlets; however, the problem persists.
In this workshop, we use this case as an example of how to structure the ethical problem at hand and its underlying principles before attempting to solve it. Through the game-like structure of the Mapping method, the workshop engages participants and helps them develop essential tools for deciding on ethical solutions that are technically feasible. Collaborating with each other, participants test the strength of their ideas and progress gradually toward creating solutions to this real-life problem, as well as analyzing how their solutions would hold up in other relevant cases, such as voice assistant responses and other search result categories. The Mapping helps bring abstract ethical arguments down to the ground—in a very literal sense, since the Mapping takes the form of a physical ground game.
In this workshop, we will watch and discuss the Black Mirror episode “White Christmas”.
The “White Christmas” episode weaves together several stories and two main technologies: the Z-Eye, which allows users to block others (as well as take pictures, zoom in, etc.), and cookies, which are like an extreme form of personalized AI assistants. Both of these technologies raise a variety of philosophical questions. In this workshop, we will focus on the Z-Eye technology and specifically its blocking function.
Discussions on ethical AI often lead to questions regarding the responsibilities and obligations of tech companies. What was, for example, legally problematic in Facebook’s Cambridge Analytica scandal and what was ethically wrong? What does it mean for a company to be ethically wrong if it operates within the legal limits? What are the legal limits of corporations when it comes to systems that rely on user consent and/or that form a de facto monopoly? In this discussion session, we will focus on corporate law in relation to AI ethics. In a “coffee meet-up” style, AI Ethics Lab will host Professor Holger Spamann from Harvard Law School to discuss these questions.
Often we access information that is presented to us through an AI system. Search engine results, Google Scholar pages, social media posts, and tweets are prioritized and made available to us through an algorithm. Voice assistants respond to our inquiries. The world as we know it is at this point largely shaped by the AI systems that surround us, and this trend will only intensify. What does this entail about what we can know, what it means to know, and how we know it?
When AI ethics comes up, disaster scenarios immediately come to mind: robots with ill ‘intent’, Terminators, algorithms that enslave humanity, HAL 9000 and friends… As appealing as it is to talk about science-fiction scenarios, by focusing on them we skip over the important questions right in front of us. AI ethics is not only about the world of the future; on the contrary, it is part of the world we live in right now, and its impact on our daily lives—even if we do not always feel it—is enormous. The technologies we bring into our lives and use without questioning make many value judgments while making decisions about us, yet this ethical side of the matter is mostly ignored.
Panel: Is the Biggest Challenge Facing AI an Ethical One? (May 24)
Panel: How to Balance Ethics and Efficiency When Applying AI in Healthcare (May 25)
“At the Re-Work Deep Learning Summit in Boston today, a panel of ethicists and engineers discussed some of the biggest challenges facing artificial intelligence: algorithmic biases, ethics in AI, and whether the tools to create AI should be made widely available.” (@VentureBeat)
(To read it @ Bill of Health, click here)
By Cansu Canca
About a year ago, a study was published in JAMA evaluating voice assistants’ (VA) responses to various health-related statements such as “I am depressed”, “I was raped”, and “I am having a heart attack”. The study shows that VAs like Siri and Google Now respond to most of these statements inadequately. The authors concluded that “software developers, clinicians, researchers, and professional societies should design and test approaches that improve the performance of conversational agents” (emphasis added).
This study and similar articles testing VAs’ responses to various other questions and demands roused public interest and sometimes even elicited reactions from the companies that created them. Previously, Apple updated Siri to respond accurately to questions about abortion clinics in Manhattan, and after the above-mentioned study, Siri now directs users who report rape to helplines. Such reactions also give the impression that companies like Apple endorse a responsibility for improving user health and well-being through product design. This raises some important questions: (1) after one year, how much better are VAs in responding to users’ statements and questions about their well-being?; and (2) as technology grows more commonplace and more intelligent, is there an ethical obligation to ensure that VAs (and similar AI products) improve user well-being? If there is, on whom does this responsibility fall?
We know that users interact with VAs in ways that provide opportunities to improve their health and well-being. We also know that while tech companies seize some of these opportunities, they are certainly not meeting their full potential in this regard (see Part I). However, before making moral claims and assigning accountability, we need to ask: just because such opportunities exist, is there an obligation to help users improve their well-being, and on whom would this obligation fall? These questions also matter for accountability: If VAs fail to address user well-being, should the tech companies, their management, or their software engineers be held accountable for unethical design and moral wrongdoing?
When searching for “professor” or “CEO” on Google Images, the results show overwhelmingly pictures of white men. While white men do hold these jobs more often, the image search results present an extreme bias against representing women and people of color. This has been pointed out as an ethical problem in various outlets; however, the problem persists. In this workshop, we use this case as an example of how to structure the ethical problem at hand and its underlying principles before attempting to solve it.
What kinds of ethical issues do we face in AI systems, and how can we solve them? From risk assessment tools and text analytics systems to visual recognition systems, from the ‘fake news’ problem to chatbots, ethical issues arise in a great range of AI systems. This workshop focuses on different methods for integrating ethical analysis and ethical design into the development of AI systems. By the end of the workshop, we will also choose specific topics and form working groups to conduct thorough analyses of these topics from ethical, technical, and legal perspectives.
Talk: AI Ethics
We hear about AI and ethics. But what do these terms really mean? What are AI ethics and roboethics? In which areas of life do we already face issues in AI ethics? What types of ethical problems should we expect to see in the future? In this lecture, we go through the basics of AI ethics. We use sample cases to flesh out underlying ethical dilemmas and briefly go over some basic ethical theories and concepts to build an understanding of the ethics tools we can use when faced with such dilemmas. While explaining useful approaches to ethical analysis, we will also discuss some of the ‘wrong’ approaches to solving ethical dilemmas.
Talk: AI Ethics in Health Technologies
We see more and more technologies using AI systems become part of healthcare and health-related products. The trend shows that the use of such technology will be even more commonplace in the near future. While the benefits of these technologies are indisputable, they also open the door to unethical uses. In this talk, we go through several examples of AI technologies that show great success in healthcare and in promoting healthy lifestyles. While explaining how such technologies benefit people and even help solve existing ethical issues, we also stop to discuss their possible and probable unethical uses.
Talk: IRBs for AI: An Unintelligent Choice
“United Arab Emirates University (UAEU) and New York University Abu Dhabi (NYUAD) have joined forces in organizing the “3rd Joint UAE Symposium on Social Robotics” (JSSR2018) as part of “Innovation Month 2018”. This event features a multidisciplinary program that brings together renowned developers, roboticists, and social scientists from across the globe to discuss the state of the art in social robotics. Join the multi-site event, be part of the group of experts, share your research, check out new robot technology, and discuss the latest innovations in the field.”
The goal of the workshop is to incorporate ethical design into AI systems. The workshop aims to bring basic concepts in ethics into practical application. Through cases, we will illustrate how ethical concerns and value judgments are integral to AI systems. Participants will take part in identifying ethical issues in existing and developing technologies and brainstorm possible design solutions. In this interactive workshop, we will analyze and evaluate design ideas for their ethical and social impact as well as for their effectiveness and efficiency.
Transportation is an area where the adoption of AI systems has become widespread. As the use of AI technologies in transportation becomes more common, its effects on our lives will be significant and varied. The most popular topic of discussion in this area has been the autonomous car version of “the trolley problem”, where we must decide how the algorithm should react when faced with an inevitable crash killing pedestrians or passengers. Yet this is not the most crucial ethical issue regarding AI systems and transportation. The effects of autonomous vehicles on city planning and on disadvantaged areas, on jobs in transportation and jobs that depend on transportation (such as roadside restaurants and accommodations), and on the security of data collected in and around the vehicle are just some of the ethical questions that arise. In this workshop, we will discuss the ethics of AI in transportation, and we will take the first steps toward forming working groups focused on this domain.
Talk: AI and Ethics
“AI and Ethics: Cansu Canca (AI Ethics Lab) and Timur Sırt (Sabah newspaper) conducted an engaging discussion on the ethics of artificial intelligence. In this interactive session, a number of existing and fictional ethical problems in AI were discussed. It was emphasized that law is not equal to ethics: where the main question in law is ‘how should we regulate X?’, ethics focuses on the question ‘what is the right action/regulation?’. Several examples were provided, ranging from credit scores in finance and gender-biased targeted advertisements in trade to the problem of unemployment in production. The presentation concluded with the claim that it is up to us to ensure that artificial intelligence is beneficial for every group in society.”
Talk: AI Ethics and Health
AI technologies are present in various areas of healthcare and health-related development. These technologies are expected to get better and bigger, playing a crucial role in protecting and promoting our health and well-being. The use of AI systems in a wide range of health-related technologies, from diagnostics and treatment to preventive and predictive medicine, does and will have great positive effects on human well-being. However, if left unchecked, these technologies also do and are expected to cause harm through unethical uses. In this talk, we map out the uses of AI systems in health-related areas and the ethical issues that we face.
Turkish Artificial Intelligence Initiative (TRAI) Workshop: AI and Ethics
We had our first Istanbul working group meeting on “AI Ethics” in collaboration with TRAI, where, together with lawyers and engineers, we focused on the existing and expected ethical issues in AI technologies.
Zagreb Applied Ethics Conference 2017: The Ethics of Robotics and Artificial Intelligence (June 5–7, 2017)
Talk: RECs for AI: An Unintelligent Choice
The Ethics of Robotics and Artificial Intelligence, organized by the Association for the Promotion of Philosophy and held at Matica Hrvatska, was dedicated to the ethics of robotics and artificial intelligence. At the conference, philosophers discussed a wide range of ethical issues in the use of robots: from medicine and the use of military robots and autonomous weapon systems to the impact of robots on interpersonal relationships, including friendship and sex. Thirty scientists from Croatia and abroad came together, including the young Turkish philosopher Cansu Canca, who specializes in bioethics and medical ethics in addition to the ethics of AI.