Human-AI Interaction in the Presence of Ambiguity: From Deliberation-based Labeling to Ambiguity-aware AI
Ambiguity, the quality of being open to more than one interpretation, permeates our lives. It comes in different forms, including linguistic and visual ambiguity, arises for various reasons, and gives rise to disagreements among human observers that can be hard or impossible to resolve. As artificial intelligence (AI) is increasingly infused into complex domains of human decision making, it is crucial that the underlying AI mechanisms also support a notion of ambiguity. Yet existing AI approaches typically assume that there is a single correct answer for any given input, lacking mechanisms to incorporate diverse human perspectives across the AI pipeline, including data labeling, model development, and user interface design.
This dissertation aims to shed light on the question of how humans and AI can be effective partners in the presence of ambiguous problems.
Mike’s PhD supervisors were Edith Law and Kate Larson.
Mike Schaekermann’s research is located at the intersection of human-computer interaction, machine learning, and medicine. He currently works as an Applied Scientist at Amazon Web Services (AWS).