Tweet
The Doomer's Delusion:
1. AI is likely to kill us all.
2. Hence AI must be monopolized by a small number of companies under tight regulatory control.
3. Hence AI systems must have a remote kill switch.
4. Hence foundation model builders must be eternally liable for bad uses of their models and of versions derived from them.
5. Hence open source AI must be banned.
6. But open source is popular, so we'll say we're pro open source while also calling for some sort of regulatory agency to oversee it.
7. We'll scare the hell out of the public and their representatives with prophecies of doom.
8. But we'll make sure to appear much more respectable than the most extreme doomers.
9. To look even more respectable, we'll create a one-person institute to promote AI safety.
10. We'll get insane levels of funding from well-intentioned but clueless billionaires who are scared sh*tless of catastrophe scenarios, got too rich too quickly, have too much time on their hands, but should know better.
11. We'll claim that the majority of prominent scientists agree with us, even though said scientists are an infinitesimal minority in the AI community.