In this episode, Ruben Amarasingham discusses the latest advancements at Pieces, focusing on their innovative system to classify, detect, and prevent hallucinatory errors in clinical summarization. The conversation delves into the development and impact of their proprietary Safe Read platform, the challenges faced, the October publication they released to the public, and the future of AI in healthcare.
Section Summaries
Introduction [00:00:01]
Adam Carewe welcomes Ruben Amarasingham back to the podcast, noting his status as a returning guest and setting the stage for a discussion of the latest developments at Pieces.
Overview of Pieces' Whitepaper [00:01:40]
Adam introduces the main topic of the episode: a whitepaper by Pieces titled "A System to Classify, Detect, and Prevent Hallucinatory Error in Clinical Summarization." He provides a high-level summary of the paper, emphasizing the importance of addressing hallucinations in AI-generated clinical summaries.
Ruben's Insights on the Whitepaper [00:04:20]
Ruben elaborates on the motivation behind the whitepaper, explaining the significance of creating accurate and reliable clinical summaries. He discusses the challenges of hallucinations in large language models and the framework Pieces developed to classify and mitigate these errors.
Development of the Classification Framework [00:07:36]
Ruben shares the history and development process of the classification framework, highlighting the importance of inter-rater reliability and the rigorous testing involved. He introduces the Safe Read platform, which helps measure and manage hallucinations in clinical summaries.
Safe Read Platform and Its Impact [00:11:23]
The conversation turns to the Safe Read platform, an internal tool Pieces uses to detect and address hallucinations. Ruben explains how the platform works, describes its components, and outlines the continuous improvement process driven by adversarial AI and knowledge graphs.
Challenges and Evolution of Safe Read [00:13:38]
Ruben discusses the challenges faced in developing and iterating the Safe Read platform. He highlights the importance of human intervention in improving the system and the role of adversarial AI in detecting hallucinations.
Impact of Safe Read on Clinical Summarization [00:18:46]
Ruben talks about the positive impact of the Safe Read platform on reducing hallucinations in clinical summaries. He explains how the system has evolved to address less severe hallucinations and improve the overall accuracy and reliability of summaries.
Industry Standards and Regulatory Challenges [00:22:19]
The discussion touches on the lack of industry benchmarks for AI systems in healthcare and the role of regulatory bodies. Ruben shares insights on Pieces' interactions with the Texas Attorney General and the importance of transparency and reporting in AI development.
Future Directions for Pieces [00:33:15]
Ruben outlines the future goals for Pieces, including expanding the Safe Read platform to other forms of AI documentation and patient interactions. He discusses the potential for collaboration with other AI companies and the broader impact on healthcare.
Closing Thoughts and Nerd Alert Questions [00:42:05]
Adam wraps up the conversation with some light-hearted "nerd alert" questions, asking Ruben about his favorite ways to relax, recent books he's read, and skills he wants to master. Ruben shares his love for reading, spending time with family, and his interest in art and AI-driven creative projects.