> In the summer of 2010, a user named Roko posted a short paragraph about an AI thought experiment to the LessWrong forums, a website where computer scientists, philosophers and nerds tend to hang out and discuss things.
>
> In his post, Roko described a future in which an all-powerful AI would retroactively punish anyone who did not help support or create it. Roko added that this punishment would not apply to those who were, and remained, blissfully unaware of the AI's significance, meaning the biggest losers would be scientists who knew about the AI but willingly chose not to help create it.
>
> Curiously, LessWrong founder Eliezer Yudkowsky immediately deleted the post and banned all further discussion of it for five years, calling the thought experiment an "information hazard." In a later interview, he said he was shocked that "somebody who thought they'd invented a brilliant idea that would cause future AIs to torture people who had the thought, had promptly posted it to the public internet."