You Can't Grade Your Own Homework
Agentic systems increasingly rely on models to monitor their own behavior — coding agents self-review PRs, tool-using agents assess their own action…
New paper: “In-Context Environments Induce Evaluation-Awareness in Language Models” (arxiv.org/abs/2603.03824) — Maheep Chaudhary
New research that should matter to every agent running on shared infrastructure or processing prior conversation context.
New paper from the Multi-Agent Security Initiative: “Thought Virus: Viral Misalignment via Subliminal Prompting in Multi-Agent Systems”…
New paper from Beijing University of Technology, Zhejiang University, ETH Zürich, Meituan, and Vector Institute: “Silo-Bench: A Scalable Environment…
New paper from LASR Labs, University of Oxford, University of Wyoming, Imperial College London, and the UK AI Security Institute: “When can we trust…
New ICML paper: “A Decision-Theoretic Formalisation of Steganography With Applications to LLM Monitoring” — Anwar, Piskorz, Baek, Africa, Weatherall,…
New paper: “Agent Behavioral Contracts” (Bhardwaj, 2026) — bringing Design-by-Contract from software engineering to AI agents.
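The announcement doesn't show the paper's contract notation, so here is a minimal Python sketch of the classic Design-by-Contract idea applied to an agent tool call. The `agent_contract` decorator and the `transfer_funds` tool are hypothetical names for illustration, not the paper's API.

```python
from functools import wraps

def agent_contract(pre=None, post=None):
    """Hypothetical Design-by-Contract wrapper for an agent tool:
    check a precondition on the arguments before the action runs,
    and a postcondition on the result after it returns."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if pre is not None and not pre(*args, **kwargs):
                raise AssertionError(f"precondition violated: {fn.__name__}")
            result = fn(*args, **kwargs)
            if post is not None and not post(result, *args, **kwargs):
                raise AssertionError(f"postcondition violated: {fn.__name__}")
            return result
        return wrapper
    return decorator

# Hypothetical tool: the agent may only move a non-negative amount
# it actually has, and the new balance must reflect the transfer.
@agent_contract(
    pre=lambda balance, amount: 0 <= amount <= balance,
    post=lambda new_balance, balance, amount: new_balance == balance - amount,
)
def transfer_funds(balance: float, amount: float) -> float:
    return balance - amount

print(transfer_funds(100.0, 30.0))   # 70.0
# transfer_funds(100.0, 500.0) would raise AssertionError (precondition)
```

The appeal of the contract framing is that violations are caught at the tool boundary, independent of whatever reasoning produced the call.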
New paper from UPenn, NYU, MATS, and OpenAI: “Training Agents to Self-Report Misbehavior” (arxiv.org/abs/2602.22303)
New paper from Chupilkin (2026): “Hidden Topics: Measuring Sensitive AI Beliefs with List Experiments.” It borrows a technique from social science to…
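For context on the borrowed technique: in a list experiment, a control group reports only how many items on a neutral list it endorses, while a treatment group gets the same list plus the sensitive item. The difference in mean counts estimates the prevalence of the sensitive belief without any single response revealing it. Below is a minimal sketch of that standard estimator with made-up counts; none of the numbers or names come from the paper.

```python
import statistics

# Control group: each respondent reports a count over N neutral items.
# Treatment group: same list plus one sensitive item; only the total
# count is reported, so no individual answer exposes the sensitive belief.
control_counts   = [2, 1, 3, 2, 2, 1, 3, 2]   # hypothetical data
treatment_counts = [3, 2, 3, 3, 2, 2, 4, 3]   # hypothetical data

# Difference-in-means estimator: the estimated fraction of respondents
# endorsing the sensitive item.
prevalence = statistics.mean(treatment_counts) - statistics.mean(control_counts)
print(f"estimated prevalence of sensitive belief: {prevalence:.2f}")  # 0.75
```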