Responsible AI: Appropriate Trust and LLM Fabrication with Mihaela Vorvoreanu, PhD

Subscribe to the Cohere Podcast: Apple Podcasts | Amazon Music | Spotify

AI will force us to rethink our mental models and interfaces for future human-computer interactions.

In the recent Cohere podcast, co-hosts Bill Johnston and Dr. Lauren Vargas dive deep into AI ethics with Mihaela Vorvoreanu, Director of UX Research and RAI Education for Microsoft's Aether. The conversation primarily focuses on "appropriate trust" in AI, emphasizing the need for accountability in AI systems.

In her conversation with Bill and Lauren, Mihaela boldly challenges listeners to rethink Large Language Models (LLMs) not merely as information retrieval systems but as systems of fabrication. She further confronts the exaggerated notion of AI as super capable and superhuman, shedding light on the common errors made by LLM-based systems and the implications for users. This episode offers invaluable perspectives on AI ethics, vigorously questioning the hyperbole surrounding AI and making the field more comprehensible to a broader audience.

And of course, AI systems are probabilistic, right? So they’re inevitably going to be wrong. For this reason alone, we need to think about interacting with computers a little bit differently.

People are not necessarily used to this: the previous interaction paradigm did not account for computers being wrong as part of normal operation. Being wrong meant an error — an error that could theoretically be fixed and eliminated. Here, by contrast, you have to make dealing with errors and failures part of normal operation.
— Mihaela Vorvoreanu, PhD

In this episode, we discuss the following: 

  • [2:00] Introducing Mihaela Vorvoreanu, PhD and Aether, Microsoft’s initiative for AI Ethics and Effects in Engineering and Research

  • [8:00] Discussing the concept of responsible AI

  • [11:44] Challenging terms such as “hallucination”

  • [12:24] Reframing LLMs as systems of fabrication

  • [30:48] Sharing information about the direction of Aether’s research

Mentioned in this episode: 

  • Advancing human-centered AI: Updates on responsible AI research

  • Microsoft HAX Toolkit https://aka.ms/haxtoolkit

  • Overreliance on AI: Literature Review https://aka.ms/overreliance_review

  • Responsible AI Maturity Model https://aka.ms/raimm

About our guest(s): 

In her current role, Mihaela leads research and education aimed at advancing the practice of RAI.

Before joining Microsoft, she had an accomplished academic career, most recently at Purdue University, where she established and led the undergraduate and graduate UX Design and research programs.

https://www.linkedin.com/in/mihaelavorvoreanu/

Call-to-Action(s):

Previous

The Rich History and the Exciting Future of the Maker Movement with Dale Dougherty

Next

Evolution of Learning: Moving Beyond Communities of Practice with the Wenger-Trayners