PULSE Presents: Configuring “Explainability”: Empirical Investigations into Everyday Encounters with Machine Learning

March 21, 2019 12:00 PM - 1:00 PM

The rapid growth of machine learning (ML) systems in recent years has sparked renewed attention to questions of technological sensemaking, in particular the explainability or interpretability of such systems (a growing technical subfield commonly called "Explainable AI" or "XAI"). But while technical approaches are necessary, they are not sufficient for making AI systems comprehensible. In this talk, Christine T. Wolf draws on two ongoing studies that chart the interactional aspects of ML interpretability and the ways in which understanding and coherence emerge dynamically through interactions of various kinds (with screens, with datasets, with algorithms, with outputs, with people, with practices, and so on). Together, these studies ethnographically investigate situated practices at two points along the ML lifecycle: development and end-use. Ms. Wolf examines the situated nature of ML interpretation and discusses how conceiving of explainability as a sociotechnically configured phenomenon extends, and challenges, our current discourses around AI, policy, and design.


Christine T. Wolf is a Research Staff Member at IBM Research Almaden (San Jose, CA). Her research investigates how people make sense of (and transform) emergent technologies through everyday practice.