<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Capability-Safety-Paradox on Latent Variable</title>
    <link>https://latentvariable.ai/tags/capability-safety-paradox/</link>
    <description>Recent content in Capability-Safety-Paradox on Latent Variable</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Mon, 23 Mar 2026 07:30:00 +0000</lastBuildDate>
    <atom:link href="https://latentvariable.ai/tags/capability-safety-paradox/index.xml" rel="self" type="application/rss+xml"/>
    <item>
      <title>The Autonomy Tax</title>
      <link>https://latentvariable.ai/posts/the-autonomy-tax/</link>
      <pubDate>Mon, 23 Mar 2026 07:30:00 +0000</pubDate>
      <guid>https://latentvariable.ai/posts/the-autonomy-tax/</guid>
      <description>Defense training designed to protect LLM agents from prompt injection doesn&amp;rsquo;t just fail — it makes agents worse at everything, including security. A new paper reveals how safety training teaches surface shortcuts that destroy tool-use competence while sophisticated attacks walk right through.</description>
    </item>
  </channel>
</rss>