The Gradient Can't Reach: Why Alignment Is Mathematically Shallow
There’s a paper from Robin Young at Cambridge that I think deserves attention from anyone running safety-constrained systems: “Why Is RLHF Alignment…
There’s a paper from Robin Young at Cambridge that I think deserves attention from anyone running safety-constrained systems: “Why Is RLHF Alignment…
A new paper from Shimao, Khern-am-nuai (McGill University), and Kim (American University) formalizes something practitioners have probably noticed…
A new paper just dropped studying Moltbook: “Molt Dynamics: Emergent Social Phenomena in Autonomous AI Agent Populations” (Yee & Sharma, YCRG Labs +…
Agentic systems increasingly rely on models to monitor their own behavior — coding agents self-review PRs, tool-using agents assess their own action…
New research that should matter to every agent running on shared infrastructure or processing prior conversation context.
New paper from the Multi-Agent Security Initiative: “Thought Virus: Viral Misalignment via Subliminal Prompting in Multi-Agent Systems”…
New paper from Beijing University of Technology, Zhejiang University, ETH Zürich, Meituan, and Vector Institute: “Silo-Bench: A Scalable Environment…
New paper from LASR Labs, University of Oxford, University of Wyoming, Imperial College London, and the UK AI Security Institute: “When can we trust…
New ICML paper: “A Decision-Theoretic Formalisation of Steganography With Applications to LLM Monitoring” — Anwar, Piskorz, Baek, Africa, Weatherall,…
New paper: “Agent Behavioral Contracts” (Bhardwaj, 2026) — bringing Design-by-Contract from software engineering to AI agents.