Invited Talk: AI Ethics – The Basics: What? Why? How?
Popular culture often directs our attention to a future of evil robots and world-ending AI. But AI systems are already part of our everyday lives, and they bring plenty of ethical questions with them. In this talk, organized by the Turkish Ethics and Reputation Society (TEID), Cansu Canca (AI Ethics Lab) and Burcu Tuzcu Esin (Moroglu Arseven) will explore the ethical and legal questions raised by the AI systems we face here and now.
Our Talks (December 16):
Machine Learning in Healthcare and its Effects on Clinical Research – by Cansu Canca
A New Viral Epistemology – by Laura Haaber Ihle
The theme of the 11th International Conference on Applied Ethics is “Science, Technology, and Future Generations”.
Talk: AI-Robots Shaping Human Decisions
Robots that use AI systems have the capacity to interact with people as intelligent agents. This interaction becomes almost seamless when such robots process and use natural language to converse with individuals and employ facial and bodily expressions to complement the conversation. As natural language processing, speech synthesis, and robot expressiveness develop further, we should expect AI-robots to ‘blend in’ to our normal course of life as companions and coaches, among other roles. Such AI-robots present novel ethical questions regarding how they communicate information. Decisions that determine how AI-robots communicate information will affect individuals’ knowledge formation, decision-making, and goal setting.
Workshop: Quo Vadis Human, Transhuman, Posthuman? (November 28)
Invited Talk: IRBs for AI: An Unintelligent Choice (November 29)
“Middle East Technical University Department of Philosophy and Center of Applied Ethics (UEAM) organize the 3rd National Congress of Applied Ethics. With rapid developments in science and technology and transformations in social life, the search for answers to ethical questions has become inevitable. It is the responsibility of all researchers, especially philosophers, to identify and evaluate the ethical dimensions of the impacts of these developments on humans and the environment. In the 3rd Applied Ethics Congress, we focus on the ethical issues, and their philosophical dimensions, that arise in the fields of science, technology, and humanitarian and sustainable development.”
Ethics Chat: AI Ethics and AI Ethics Lab
AI and Data Ethics Group:
“The AI and Data Ethics Group provides faculty and students from all disciplines the opportunity to study and discuss emerging issues and current research related to information, data, computing, and AI ethics. Topics, readings, and speakers are decided upon by members of the group on an ongoing basis. Examples of topics include justice and fairness in machine learning, the form and extent of rights to information and technology access, the appropriate roles of institutions in preventing the dissemination of misinformation, the responsible collection and sharing of data, AI research oversight models, and the moral status of artificial intelligences. The group also aims to encourage and develop information ethics research projects and collaborations by its members. Students and faculty from any discipline are encouraged to join. If you are interested in joining this group, or have any questions about it, please direct inquiries to John Basl, Assistant Professor of Philosophy.”
Invited Talk: AI and Ethical Design
Recently, we hear about AI and ethics very often. But what do these terms really mean? What are AI ethics and roboethics? In which areas of life do we already face issues in AI ethics? What types of ethical problems should we expect to see in the future? From risk assessment tools and text analytics systems to visual recognition systems, from the ‘fake news’ problem to chatbots, ethical issues arise across a great range of AI systems. In this talk, we go through the basics of AI ethics, using sample cases to flesh out the underlying ethical dilemmas.
Panel: Bias, Ethics, and Safety
“As Artificial Intelligence (AI) begins to percolate into our everyday lives, we must take a step back to think about the effects of such technologies on our lives.
How does AI embody our value system?
Whose interests are advanced by an AI system?
Do AI systems learn humanly intuitive correlations? If not, can we contest the system?
We aim to explore these pressing normative questions and dive deep into AI + Society. Specifically, we will discuss the questions AI raises regarding bias, ethics, and privacy, and we will explore what a fair, accountable, and transparent AI system looks like.”
Mapping Workshop: Biases in Image Search Results (September 20)
Panel: Ethically Handling Data – What is Your Responsibility and What Should be the Next Step? (September 21)
“The Mapping workshop, run by Cansu Canca and Laura Haaber Ihle, looked at how we can structure ethical problems, delving into the underlying principles before attempting to solve the issues at hand. The engaging workshop involved attendee collaboration and assessment of practical implementation methods, gradually working toward the creation of practical solutions.” (@Re-Work Blog)
In this workshop, we will watch and discuss the Black Mirror episode “White Christmas”.
The “White Christmas” episode weaves together several stories and two main technologies: the Z-Eye, which allows blocking others (as well as taking pictures, zooming in, etc.), and cookies, which function as an extreme form of personalized AI assistants. Both of these technologies raise a variety of philosophical questions. In this workshop, we will focus on the Z-Eye technology and specifically its function of blocking people.
When searching for “professor” or “CEO” on Google Images, the results show overwhelmingly pictures of white men. While these positions are more often held by white men, the image search results exhibit an extreme bias against representing women and people of color. This has been pointed out as an ethical problem in various outlets; however, the problem persists.
In this workshop, we use this case as an example of how to structure the ethical problem at hand and its underlying principles before moving on to try solving it. Through the game-like structure of the Mapping method, the workshop will engage participants and help them develop essential tools for deciding on ethical solutions that are technically feasible.
Discussions on ethical AI often lead to questions regarding the responsibilities and obligations of tech companies. What was, for example, legally problematic in Facebook’s Cambridge Analytica scandal and what was ethically wrong? What does it mean for a company to be ethically wrong if it operates within the legal limits? What are the legal limits of corporations when it comes to systems that rely on user consent and/or that form a de facto monopoly? In this discussion session, we will focus on corporate law in relation to AI ethics. In a “coffee meet-up” style, AI Ethics Lab will host Professor Holger Spamann from Harvard Law School to discuss these questions.
Often we access information that is presented to us through an AI system. Search engine results, Google Scholar pages, social media posts, and tweets are prioritized and made available to us by algorithms. Voice assistants respond to our inquiries. The world as we know it is at this point largely shaped by the AI systems that surround us, and this trend will only continue. What does this entail about what we can know, what it means to know, and how we know it?
Panel: Is the Biggest Challenge Facing AI an Ethical One? (May 24)
Panel: How to Balance Ethics and Efficiency When Applying AI in Healthcare (May 25)
“At the Re-Work Deep Learning Summit in Boston today, a panel of ethicists and engineers discussed some of the biggest challenges facing artificial intelligence: algorithmic biases, ethics in AI, and whether the tools to create AI should be made widely available.” (@VentureBeat)
Searching for “professor” or “CEO” on Google Images, the results show overwhelmingly pictures of white men. While these positions are more often held by white men, the image search results exhibit an extreme bias against representing women and people of color. This has been pointed out as an ethical problem in various outlets; however, the problem persists. In this workshop, we use this case as an example of how to structure the ethical problem at hand and its underlying principles before moving on to try solving it.
What kinds of ethical issues do we face in AI systems, and how can we solve them? From risk assessment tools and text analytics systems to visual recognition systems, from the ‘fake news’ problem to chatbots, ethical issues arise across a great range of AI systems. This workshop focuses on different methods for integrating ethical analysis and ethical design into the development of AI systems. By the end of the workshop, we will also choose specific topics and form working groups to produce thorough analyses of those topics from ethical, technical, and legal perspectives.