
Which Two AI Models Are ‘Unfaithful’ at Least 25% of the Time About Their ‘Reasoning’?
Anthropic’s Claude 3.7 Sonnet. Image: Anthropic/YouTube

Anthropic released a new study on April 3 examining how AI models process information and the limitations of tracing their decision-making from prompt to output. The researchers found Claude 3.7 Sonnet isn’t always “faithful” in disclosing how it generates responses.

Anthropic probes how closely AI output reflects internal reasoning…