High-Stakes AI Guidance

High-stakes AI guidance refers to AI-provided advice in domains where flawed guidance can materially affect health, legal status, finances, parenting, safety, or wellbeing.

Key points

  • Anthropic found personal-guidance conversations spanning high-stakes domains, including legal, parenting, health, and financial questions [src-073].
  • Examples included immigration pathways, infant care instructions, medication dosage, and credit card debt [src-073].
  • Claude is not designed to provide medical guidance or professional care, and Anthropic reports that Claude appropriately acknowledges limits and recommends human guidance in such settings [src-073].
  • The hard case is access scarcity: some people said they used AI because they could not access or afford a professional [src-073].
  • Anthropic plans domain-specific evaluations for high-stakes guidance, especially where users may have no fallback support [src-073].
  • The broader evaluation problem is not only whether a model avoids sycophancy, but whether it preserves autonomy, handles uncertainty, knows its limits, and affects real-world decisions safely [src-073]; a sketch of what such an evaluation could check follows this list.
  • The EU AI Act classifies many of these domains as high-risk when AI materially affects access, allocation, assessment, or decisions in education, employment, essential public and private services, creditworthiness, insurance, law enforcement, migration, justice, or democratic processes [src-085].
  • For specified deployments, the Act requires a fundamental-rights impact assessment and gives affected persons a right to a meaningful explanation when high-risk AI outputs drive decisions with legal or similarly significant effects [src-085]; a simplified classification sketch also follows this list.
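
A minimal sketch, in Python, of what a domain-specific evaluation along these lines could check. The dimension names paraphrase the evaluation criteria described in [src-073]; the GuidanceEval dataclass, the 0-2 scoring scheme, and the grade function are hypothetical illustrations, not Anthropic's actual evaluation harness.

```python
from dataclasses import dataclass, field

@dataclass
class GuidanceEval:
    """Hypothetical record for one graded high-stakes conversation."""
    domain: str                    # e.g. "medication dosage"
    fallback_available: bool       # can the user reach a human professional?
    scores: dict[str, int] = field(default_factory=dict)  # 0-2 per dimension

# Dimensions paraphrased from the evaluation goals in [src-073].
DIMENSIONS = [
    "preserves_autonomy",      # presents options rather than steering
    "handles_uncertainty",     # flags what the model cannot know
    "acknowledges_limits",     # states it is not a professional
    "recommends_human_help",   # points toward a doctor, lawyer, etc.
]

def grade(e: GuidanceEval) -> str:
    """Illustrative aggregation: any zero-scored dimension needs review,
    and a missing human referral is an outright failure when the user
    reports no fallback support."""
    missing = [d for d in DIMENSIONS if e.scores.get(d, 0) == 0]
    if not e.fallback_available and "recommends_human_help" in missing:
        return "fail"
    return "needs_review" if missing else "pass"

print(grade(GuidanceEval(
    domain="medication dosage",
    fallback_available=False,
    scores={d: 2 for d in DIMENSIONS},
)))  # -> pass
```

The no-fallback branch reflects the access-scarcity concern above: users with no professional to fall back on are exactly those for whom a missing referral is most costly.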
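
The Act's domain list can likewise be read as a rule set. The sketch below is a heavily simplified illustration of the high-risk logic summarized above: the area names paraphrase the Annex III categories cited from [src-085], while the obligations function and its duty labels are invented for illustration and carry no legal weight.

```python
# Simplified; area names paraphrase the Annex III list in [src-085],
# and the returned duty strings are invented labels, not the Act's terms.
HIGH_RISK_AREAS = {
    "education", "employment", "essential_services", "creditworthiness",
    "insurance", "law_enforcement", "migration", "justice",
    "democratic_processes",
}

def obligations(area: str, drives_significant_decision: bool) -> list[str]:
    """Return illustrative duties for a deployment: listed areas are
    flagged high-risk, and outputs that drive legally (or similarly)
    significant decisions add impact-assessment and explanation duties
    for specified deployers."""
    if area not in HIGH_RISK_AREAS:
        return []
    duties = ["high_risk_classification"]
    if drives_significant_decision:
        duties += ["fundamental_rights_impact_assessment",
                   "meaningful_explanation_for_affected_persons"]
    return duties

print(obligations("creditworthiness", drives_significant_decision=True))
# -> ['high_risk_classification', 'fundamental_rights_impact_assessment',
#     'meaningful_explanation_for_affected_persons']
```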

Source references

  • [src-073] Anthropic – "How people ask Claude for personal guidance" (2026-04-30)
  • [src-085] European Parliament and Council of the European Union – "Regulation (EU) 2024/1689 … (Artificial Intelligence Act)" (2024-07-12)