I am not particularly alarmed by the most strident of the AI doomer scenarios, though I do think there is something to be concerned about. With respect to your specific response here, I think there is a counter-example.
I'm thinking of Covid-19: how rapidly it spread globally, and how much worse it could have been if it were as lethal as the Spanish flu and/or as resistant to both the immune system and medication as HIV.
This infection avoided your ramping-up concerns by recruiting the most sophisticated manufacturing system we know of: the human body. Furthermore, there are justifiable (IMHO) concerns that the pandemic started with a lab leak, and it is not beyond the bounds of possibility that it was a leak from a gain-of-function experiment. Regardless of whether this was the case, if some future self-aware AI wished to rid the world of its human overlords, this would seem to be a promising way for it to proceed.
Thanks for the thoughtful comment!

This is a good example of a MORE plausible doom scenario - pandemics are relatively common and can be devastating. But if you're aiming for human extinction, you've got to get the balance exactly right between lethality and transmissibility, and both virology and bacterial culture (I've only ever done yeast and E. coli work) are subject to all the same real-world lab problems as any other experimental science. Let's say you're breeding your pandemic - you've got to test it, right? And once you release it, it's completely out of your control. Who knows how it's going to evolve? Humans aren't clones like bananas, so a one-size-fits-all blight is unlikely to work.
I suppose you could just serially release random viruses until one works, but I wouldn't expect success on the first try either. Like the "release all the nukes" doom scenario, I absolutely agree that you can do catastrophic damage. But killing everyone (while preserving your AI self)? That's a far harder problem. You'd have to already have your world-domination infrastructure up and running before you released your mega-pandemic, because if you succeed too well, your entire human workforce disappears - and with it, your ability to maintain and expand your systems.
In my opinion, scenarios like these are way more likely to happen by accident first, as we inevitably hand over control of systems to autonomous agents - and an accident like that, while scary, would likely also serve as a warning bell.
There's another aspect of reality that the hypothetical super virus runs up against: It needs somewhere to replicate, and that's at odds with being good at killing.
So, imagine someone does create a virus that is both very infectious and very deadly. It gets released and kills a bunch of people. Other people, having heard of those deaths, stay away, and the local outbreak dies out: viruses must replicate inside cells, inside a host, so if they can't spread to a new host they're doomed. But what if some people still interact with the infected enough to get infected themselves, and spread it? We still run into the same trade-off: the faster it kills people, the harder it will be for it to spread, and the more deadly it is overall, the more motivated people will be to figure it out and stop its spread. Note that earlier human beings who had no conception of germs still came up with isolation and quarantine to stop the spread of disease, and earlier humans who didn't understand anything about the immune system still noticed that immunity existed and could be exploited to stop the spread of disease, too.
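To put a rough shape on that trade-off, here's a minimal toy sketch (Python; `run_outbreak`, the rates, and the "caution" term are all numbers I've invented purely for illustration, not a fitted epidemiological model). Deadlier means hosts are removed from circulation faster, and mounting deaths push the survivors to isolate:

```python
# Toy SIR-style outbreak (illustrative only - every number here is made up).
# Two effects work against a deadlier virus: dead hosts can't transmit, and
# mounting deaths push the survivors to isolate.

def run_outbreak(r0=3.0, death_rate=0.001, days=730, population=1_000_000):
    recovery_rate = 0.1                 # ~10-day infectious period
    beta0 = r0 * recovery_rate          # baseline transmission rate per day
    s, i = population - 1.0, 1.0
    dead, ever_infected = 0.0, 1.0
    for _ in range(days):
        # Behavioural response: quarantine/avoidance grows with the death toll.
        caution = 1.0 / (1.0 + 100.0 * dead / population)
        new_infections = beta0 * caution * s * i / population
        deaths = death_rate * i
        recoveries = recovery_rate * i
        s -= new_infections
        i += new_infections - deaths - recoveries
        dead += deaths
        ever_infected += new_infections
    return ever_infected / population, dead / population

for dr in (0.001, 0.05, 0.3):           # mild, nasty, "kills within days"
    attack, died = run_outbreak(death_rate=dr)
    print(f"death rate {dr}/day: {attack:.1%} ever infected, {died:.2%} dead")
```

The direction of the result is baked in by construction: crank the death rate up high enough and the virus removes its own hosts before they can pass it on, so the fraction of the population it ever reaches shrinks rather than grows.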
Immunity exists because we have immune systems, which constantly adapt to fight off a new virus more effectively. How they do that is largely based on randomness: the adaptive immune response makes a bunch of random antibodies, then makes more of the ones that work. This system is so good at figuring out how to fight a new thing that, once that response has developed, a second exposure to the same pathogen often produces few or no symptoms at all. So our hypothetical super-deadly virus is going to have to keep up with that. But because the immune system produces random antibodies, our virus can't be engineered up front to evade every future immune response. If it's going to keep up, it's going to have to evolve via random mutations too - and if it can't mutate and evolve, it's certainly going to be trounced by human immune systems in short order.
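That "make random antibodies, amplify the ones that work" loop is essentially a search process, and a toy sketch gives a feel for why it's so hard to pre-empt (Python; the bit-string "antibodies", the mutation rate, and everything else here are made-up stand-ins of mine, not real immunology):

```python
import random

random.seed(0)

# Toy clonal selection (not a real immunology model): random "antibodies"
# (bit strings) are scored against a fixed "pathogen" shape, and the best
# binders are cloned with small random mutations.

PATHOGEN = [random.randint(0, 1) for _ in range(32)]

def affinity(antibody):
    # How well this antibody "fits" the pathogen: number of matching bits.
    return sum(a == p for a, p in zip(antibody, PATHOGEN))

# Start from a pool of completely random antibodies.
pool = [[random.randint(0, 1) for _ in range(32)] for _ in range(200)]

for generation in range(15):
    pool.sort(key=affinity, reverse=True)
    print(f"generation {generation:2d}: best affinity {affinity(pool[0])}/32")
    survivors = pool[:20]                                 # keep the best binders
    pool = [
        [bit ^ (random.random() < 0.02) for bit in parent]  # clone + mutate
        for parent in survivors
        for _ in range(10)                                # ten clones per survivor
    ]
```

The pool starts out knowing nothing about the pathogen, yet the best affinity climbs generation after generation. That's the problem a pre-designed virus faces: it has to outrun a search process that adapts to whatever it does.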
So, in what way is it going to mutate? Well, not towards being more lethal - that reduces its transmission, and thus its survival. No matter what the Evil AI Entity wanted when designing the supervirus, the virus is still subject to natural selection, and since its reproduction is at odds with its lethality, it's going to evolve towards reduced lethality. Meanwhile, human immune cells will keep adapting towards clearing infections more efficiently, because that's what promotes their survival. This perfectly designed virus may start out very transmissible and lethal, but it will not stay that way.
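The same selection pressure can be shown with one more toy sketch (Python again; both strains and all their parameters are invented for illustration - this is natural selection doing arithmetic, not a forecast). Two strains transmit equally well per day, but the deadlier one removes its own hosts much faster:

```python
# Toy two-strain competition over a shared pool of susceptible hosts.
# Same daily transmission rate for both strains; the deadlier strain
# removes its hosts (death, or obvious illness and isolation) much faster.
# All numbers are invented for illustration.

def strain_competition(days=100, population=1_000_000):
    deadly = {"i": 100.0, "beta": 0.3, "removal": 0.25}  # hosts gone in ~4 days
    mild   = {"i": 100.0, "beta": 0.3, "removal": 0.10}  # ~10 days of spreading
    susceptible = population - 200.0
    for day in range(days):
        for strain in (deadly, mild):
            new = strain["beta"] * susceptible * strain["i"] / population
            strain["i"] += new - strain["removal"] * strain["i"]
            susceptible -= new
        if day % 20 == 0:
            total = deadly["i"] + mild["i"]
            print(f"day {day:3d}: deadly strain is {deadly['i'] / total:.2%} "
                  f"of current infections")

strain_competition()
```

Both strains can spread at the start, but the one that keeps its hosts alive and circulating wins the competition for susceptible hosts - which is exactly the pressure pushing pathogens towards lower lethality.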
Wherever COVID came from, this is exactly the pattern we saw with it: its lethality (in the absence of treatment) dropped, while its infectivity increased. Definitive symptoms that allowed COVID infections to be identified (and let hosts know they were likely infected and should self-isolate), like loss of taste and smell, were selected against. (This is why most COVID infections no longer cause loss of taste or smell.) It's what we saw with the Spanish Flu before it, and it's even what we've seen with HIV.
It seems odd to take comfort in *time* as a constraint on a computer's actions, when *doing things fast* is in many ways the primary strength of computers.
I think "kind vs wicked learning environments" (which I picked up in an avalanche safety course of all places) is a useful mental model here
Wow, that's such a good framing of my argument - I wish I'd thought of that. I'm tempted to edit my article to include it.