5 Comments

I think "kind vs wicked learning environments" (which I picked up in an avalanche safety course of all places) is a useful mental model here


Wow, that's such a good framing of my argument; I wish I'd thought of that. I'm tempted to edit my article to include it.


I am not particularly alarmed by the most strident AI doomer scenarios, though I do think there is something to be concerned about. With respect to your specific response here, I think there is a counter-example.

I'm thinking of Covid-19: how rapidly it spread globally, and how much worse it could have been if it were as lethal as the Spanish flu and/or as resistant to both the immune system and medication as HIV.

This infection avoided your ramping-up concerns by recruiting the most sophisticated manufacturing system we know of: the human body. Furthermore, there are justifiable (IMHO) concerns that the pandemic started with a lab leak, and it is not beyond the bounds of possibility that it was a leak from a gain-of-function experiment. Regardless of whether that was the case, if some future self-aware AI wished to rid the world of its human overlords, this would seem a promising way for it to proceed.


Thanks for the thoughtful comment!

This is a good example of a MORE plausible doom scenario: pandemics are relatively common and can be devastating. But if you're aiming for human extinction, you've got to get the balance exactly right between lethality and transmissibility, and both virology and bacterial cultivation (I've only ever done yeast and E. coli work) are subject to all the same real-world lab problems as any other experimental science. Let's say you're breeding your pandemic: you've got to test it, right? And once you release it, it's completely out of your control. Who knows how it's going to evolve? Humans aren't clones like bananas, so a one-size-fits-all blight is unlikely to work.

I suppose you could just serially release random viruses until one works, but I wouldn't expect success on the first try either. As with the "release all the nukes" doom scenario, I absolutely agree that you could do catastrophic damage. But killing everyone (while preserving your AI self)? That's a far harder problem. You'd have to already have your world-domination infrastructure up and running before you released your mega-pandemic, because if you succeed too well, your entire human workforce disappears, and with it your ability to maintain and expand your systems.

In my opinion, scenarios like these are far more likely to happen by accident first, as we inevitably hand over control of systems to autonomous agents. While scary, such an accident would likely also serve as a warning bell.


Hello, how is the weather, Dylan?
