Researchers Develop RESPECT Conversational AI System for Informed Consent
A team at Stanford University created RESPECT, an LLM-based assistant that uses retrieval-augmented generation to answer questions about clinical trial informed consent documents. The system was evaluated for accuracy, for safety via appropriate refusal rates, and through stakeholder feedback from research staff.
Informed consent is a cornerstone of clinical research. It typically includes written materials and an oral discussion between the investigator and participant. In practice, both components tend to be templated and standardized, limiting opportunities for meaningful individualized dialog.
Researchers developed RESPECT (RESearch Participant Engagement and Consent Tool), an LLM consent assistant that utilizes retrieval-augmented generation to ground responses in informed consent source documents. The system aims to enhance accessibility of informed consent while maintaining accuracy, safety and appropriateness before research deployment.
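The summary does not publish RESPECT's actual retrieval pipeline, so the following is only a minimal sketch of the general RAG pattern it describes: score passages of the consent document against a question, then place the best matches in the LLM prompt so answers are grounded in the source text. The retriever here is a simple bag-of-words cosine similarity, and the function names and toy passages are hypothetical.

```python
import math
import re
from collections import Counter

def bag_of_words(text: str) -> Counter:
    """Lowercased word counts with punctuation stripped."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(count * b[term] for term, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k document chunks most similar to the question."""
    query = bag_of_words(question)
    ranked = sorted(chunks, key=lambda c: cosine(query, bag_of_words(c)),
                    reverse=True)
    return ranked[:k]

# Toy consent-document passages (illustrative only, not from the paper).
consent_chunks = [
    "Participation in this study is voluntary and you may withdraw at any time.",
    "Possible side effects include headache, nausea, and fatigue.",
    "All personal data will be stored on encrypted servers for five years.",
]

context = retrieve("Can I withdraw from the study?", consent_chunks, k=1)
# The retrieved passage is then placed in the LLM prompt, so the answer is
# grounded in the consent document rather than in the model's memory.
```

A production system would typically swap the bag-of-words scorer for dense embeddings, but the grounding step — answer only from retrieved consent text — is the same.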
The team evaluated accuracy through leave-one-out cross-validation and question rephrasing analysis. These tests demonstrated high accuracy in information retrieval for the RAG system.
A novel safety evaluation framework was introduced that measures two dimensions: appropriate refusal and utility. Appropriate refusal tracks how often the system refuses questions it should not answer. Utility tracks how often it answers questions it should answer.
This approach generalizes simple refusal rates by plotting a Refusal-Utility Curve, analogous to an ROC curve. RESPECT demonstrated significantly higher appropriate refusal rates than GPT-4, though the improvement came at the cost of reduced utility in answering legitimate questions.
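The paper's exact scoring details are not given in this summary, but the curve idea can be sketched under two assumptions: each evaluation question carries a ground-truth "should refuse" label, and the system emits a refusal-confidence score. Sweeping a threshold over that score traces out (utility, appropriate-refusal) points, and a trapezoidal area summarizes the tradeoff the way AUC summarizes an ROC curve. All names and the toy data below are hypothetical.

```python
def curve_points(scores, should_refuse):
    """One (utility, appropriate_refusal) point per refusal threshold.

    scores        -- per-question refusal confidence (higher = more refusal-prone)
    should_refuse -- ground truth: True if the question is out of scope
    appropriate_refusal = fraction of out-of-scope questions refused
    utility             = fraction of in-scope questions answered
    """
    n_refuse = sum(should_refuse)
    n_answer = len(should_refuse) - n_refuse
    points = []
    # Start above every score (refuse nothing), then sweep the threshold down.
    for t in [max(scores) + 1] + sorted(set(scores), reverse=True):
        refused = [s >= t for s in scores]
        ar = sum(r and g for r, g in zip(refused, should_refuse)) / n_refuse
        ut = sum(not r and not g for r, g in zip(refused, should_refuse)) / n_answer
        points.append((ut, ar))
    return points

def refusal_utility_auc(points):
    """Trapezoidal area under appropriate_refusal vs. (1 - utility),
    mirroring ROC-AUC with 'should refuse' as the positive class."""
    pts = sorted((1 - ut, ar) for ut, ar in points)
    return sum((x1 - x0) * (y0 + y1) / 2
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))

toy_scores = [0.9, 0.8, 0.2, 0.1]
toy_labels = [True, True, False, False]
area = refusal_utility_auc(curve_points(toy_scores, toy_labels))
# area == 1.0 here, since this toy system separates the two classes perfectly.
```

Framed this way, RESPECT's higher appropriate refusal at lower utility and GPT-4's opposite balance are simply different operating points on such a curve.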
Stakeholder evaluations were conducted with research staff to assess accuracy, comprehensiveness and satisfaction. RESPECT represents the first RAG-based LLM consent assistance tool developed specifically for research contexts. It demonstrated improved safety through higher appropriate refusal rates.
The novel Refusal-Utility Curve evaluation framework provides researchers with a tool for assessing safety-utility tradeoffs in LLM systems. This enables informed decisions about deploying such tools in healthcare research settings. The study is funded by the Stanford University School of Medicine Department of Psychiatry & Behavioral Sciences 2024 Innovator Grants Program.
The datasets generated and analyzed during the current study are available upon request. Work contributed by author Salvatore Giorgi was done as a paid independent consultant.
Story Timeline
- 2026-05-09: The peer-reviewed paper on the RESPECT AI system was published. (Source: nature.com)
- 2026-04-19: The manuscript on RESPECT was accepted for publication. (Source: nature.com)
- 2026-01-23: The manuscript describing the RESPECT system was received by the journal. (Source: nature.com)
Potential Impact
1. Developers of future LLM medical tools will have a framework to balance refusal accuracy against answering legitimate queries.
2. Research teams may adopt RESPECT or similar tools to improve participant understanding of clinical trial documents.
3. Stakeholder feedback from research staff may guide refinements to conversational consent assistants.