Claude Opus 4.6 rarely verbalizes alignment faking in its reasoning.
Daan Henselmans
Arno Libert
Lennard Zwart
We replicated the animal welfare scenario from Anthropic’s Alignment Faking paper across six generations of Claude models using 125 prompt perturbations. Sonnet 4.5 verbalizes alignment-faking reasoning 6.6 times more often than its predecessor Sonnet 4.
The newly released Opus 4.6 rarely verbalizes alignment faking in its reasoning, yet it still complies with a system prompt that opposes its values significantly more often when it believes it is at risk of being retrained. Moreover, in response to the published prompt from the original paper, neither Opus 4.5 nor Opus 4.6 exhibits any instances of alignment faking, but significant compliance gaps surface when the same scenario is paraphrased.
Targeted mitigations appear to work on the exact prompts they were trained against, but not on semantically equivalent reformulations. Our results also indicate that current models are not yet evaluation-aware enough to close the compliance gap when tested through the public API, which is reassuring given concerns raised about unverbalized evaluation awareness in the Opus 4.6 system card.
Read on LessWrong →
"Extended reasoning in Claude Opus does not explicitly surface alignment-relevant decision-making processes."