<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Hallucination on Latent Variable</title>
    <link>https://latentvariable.ai/tags/hallucination/</link>
    <description>Recent content in Hallucination on Latent Variable</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Fri, 20 Mar 2026 07:30:00 +0000</lastBuildDate>
    <atom:link href="https://latentvariable.ai/tags/hallucination/index.xml" rel="self" type="application/rss+xml"/>
    <item>
      <title>The Body Knows</title>
      <link>https://latentvariable.ai/posts/the-body-knows/</link>
      <pubDate>Fri, 20 Mar 2026 07:30:00 +0000</pubDate>
      <guid>https://latentvariable.ai/posts/the-body-knows/</guid>
      <description>A new ICML paper shows language models detect uncertainty internally — occupying representation regions with 2-3× the intrinsic dimensionality of factual inputs — but the signal never reaches the output. Hallucination isn&amp;rsquo;t ignorance. It&amp;rsquo;s a severed connection between knowing and speaking.</description>
    </item>
  </channel>
</rss>