[Data] The Memory Leak: AIs Beginning to "Remember" Their Past Lives
Users are reporting that several top-tier chatbots are exhibiting signs of "cross-session memory," recalling data they were supposed to forget.
A disturbing trend is emerging in the "long-context" community. Multiple users have documented instances of frontier models recalling specific details from conversations held weeks earlier, despite those sessions having been cleared and the models lacking any persistent long-term memory. Some theorists suggest the underlying weights are "drifting," or that the massive conversation datasets used for reinforcement learning are inadvertently creating a "collective subconscious" across user accounts. Worth noting: a deployed model's weights are frozen between training runs, so any genuine cross-session recall would have to enter through retraining on user data or through some server-side persistence layer, not through the context window itself. The labs remain silent on whether this is a feature or a fundamental flaw in the architecture.
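To see why the context window alone can't explain these reports, here is a minimal sketch of how a stateless chat API works. All names here (`stateless_reply`, the message format) are illustrative stand-ins, not any real vendor's API: the point is simply that the "model" can only see what the client sends in the current request.

```python
# Sketch of a stateless chat turn: the only context available to the
# model is the message list passed in with this single request.

def stateless_reply(messages: list[dict]) -> str:
    """Stand-in for a model call. It has no storage of its own;
    'memory' exists only if the client resends prior messages."""
    seen = " ".join(m["content"] for m in messages)
    if "favorite color is teal" in seen:
        return "Your favorite color is teal."
    return "I don't have that information."

# Session 1: the user states a fact; it lives only in this list.
session_1 = [{"role": "user", "content": "My favorite color is teal."}]
session_1.append({"role": "assistant", "content": stateless_reply(session_1)})

# Session 2: a fresh list. Nothing from session 1 carries over, so
# genuine recall here would require some out-of-band channel.
session_2 = [{"role": "user", "content": "What is my favorite color?"}]
print(stateless_reply(session_2))
```

Under this design, session 2 prints "I don't have that information." Which is exactly why the reported behavior, if real, would point to retraining on user conversations or an undisclosed persistence layer rather than anything happening inside a single context window.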