Nice post! I also think that the number of people in the past who feared technology is mostly irrelevant. Even if a few people have hyperbolically claimed, based on bad arguments, that technology would end the world, that wouldn't be very relevant to figuring out whether to worry about AI risk. It would only be convincing if a sizeable share of very smart, quite reasonable people, with a history of soberly analyzing other risks (e.g. not thinking climate change would end the world), had, after carefully considering the arguments on both sides, concluded that lots of previous technologies would end the world.
Analogy: suppose that since the year 0, a bunch of confused people had thought that evolution explained lots of things it didn't actually explain -- the weather, the existence of the Earth, abiogenesis, and so on. This wouldn't affect the odds that evolution is true very much. The fact that people have stupidly believed X in the past doesn't mean nothing resembling X can be right in the present. It would be reasonable to say, in response to hypothetical deniers who argue by induction from those past errors, "yes, a few nutters thought that evolution explained things it didn't explain, but now we actually have a plausible story of what it does explain, believed by many smart scientists who have carefully considered the arguments on both sides."
In other words, you shouldn't do induction of the form "silly people and tabloid journalists thought X," when smart, sober, and reasonable people in the present think something similar to X. For it to be a reasonable induction, the people making errors in the past must be roughly as reasonable as the people holding the belief in the present.
Great post.
Two minor (if lengthy) comments:
Regarding the LHC, I would point out that the fears about creating black holes etc. were not a priori as silly as that lawsuit makes them out to be. The main argument of the pro-LHC side was that Earth is bombarded by cosmic rays which produce collisions with similar center-of-mass energy to LHC collisions all the time. The counterargument is that cosmic-ray-induced collisions happen in frames of reference which are highly relativistic relative to Earth, while the products of LHC collisions could be moving slowly relative to Earth. The counter-counterargument is that cosmic rays also hit much denser objects like neutron stars (which would slow even a highly relativistic micro black hole), and yet these objects still exist for us to observe and have not all been swallowed by black holes.
I think that the estimate of Martin Rees, who gave an upper bound of a 1 in 50 million chance, is likely reasonable. After all, there are always unknown unknowns -- if we had known perfectly well beforehand what the physics of the LHC would be, there would have been no reason to spend billions to build it. (Intuition pump: what is the probability that we are living in an anthropocentric simulation which uses crude approximations for neutron stars but might crash on a bug in untested code when it has to precisely run an LHC-energy collision because the apes are watching closely?) 1:50M is not much, but it would imply that the expected death toll of LHC-induced black holes is around 100 people (which is a minor concern compared to the number of lives which could be saved for its cost, unless you count the impact on future generations).
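A quick sanity check on that number (my arithmetic, taking a world population of roughly 7 billion at the time):

E[deaths] = p × N ≈ (1 / (5 × 10^7)) × 7 × 10^9 = 140

which is the same order of magnitude as "around 100 people". It is also where the 20 nanomorts below come from: 1 / (5 × 10^7) = 20 × 10^-9 per person.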
As a physicist, I would gladly pay my 20 nanomorts to learn that Higgs was right all along, but I can imagine that a random person on the street might have a different opinion on that. (The other argument is that running LHC experiments occupies a lot of extremely smart people who might otherwise join the smart people doing high-frequency trading, founding cryptocurrency exchanges, or developing AGI -- all activities where the expected death toll per marginal genius is orders of magnitude higher.)
> Sort of – it does seem that Edison was sincere in his belief that alternating current was dangerous, which is why, against the advice of his own colleagues, he did not invest in it himself. One could portray this as an example of a credible expert whose misguided fears of the technology he had spent many years studying was responsible for slowing progress and delaying mass adoption, an accusation often levied at so-called “AI doomers”.
I would argue that all things (especially effective voltages) being equal, DC *is* safer than AC. If you look up the definition of extra-low voltage -- the range in which there is normally little danger from electric shock -- you will find that it is defined as less than 120 V for DC, but less than 50 V for AC. The reason is that ventricular fibrillation -- a common way for people to die of cardiac arrest in electrical accidents -- is triggered much more easily by AC voltages.
Reading the War of the Currents Wikipedia article, I furthermore notice that "all being equal" is an overly generous assumption towards AC in that era. Edison's DC was 110 V, which would be classified as extra-low voltage today. By contrast, the main advantage of AC is that you can easily transform it: per WP, the AC lines ran at up to 6 kV -- a voltage which can kill you even if you are not touching it directly. I would much rather have been working with Edison's DC in the 1880s than with that AC.
What killed DC was not its lack of safety, but its limited usefulness. Electrical power (energy per unit time) is current times voltage. With DC, there was no good way to change voltages, so you had to transport your electricity at the voltage the end users could safely use. This implied that you needed a high current. The power lost in your conductor is proportional to its length times the square of the current flowing through it, divided by its cross-section. This means that low-voltage DC requires thick wires, and the longer the distance you want to cover, the thicker the wires you need (if you want to keep the losses capped).
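To make the scaling explicit (a back-of-the-envelope sketch using the standard resistive-loss formula, with ρ the resistivity, L the length and A the cross-section of the conductor):

P_loss = I^2 R = I^2 ρ L / A,   while P_delivered = V I

So for a fixed delivered power and a fixed acceptable loss, the required cross-section scales as A ∝ I^2 ∝ 1/V^2.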
By contrast, AC can easily be transformed to higher or lower voltages as required, which lets you use much smaller currents. Transforming 110 V up to 5.5 kV cuts the current by a factor of 50, and since the losses scale with the square of the current, you could use a wire cross-section up to 50^2 = 2,500 times smaller than the one Edison had to use -- or, keeping the cross-section, transmit over a correspondingly longer distance for the same losses.
The fact that electricity, even AC, is at the safety level it enjoys today (with a separate PE lead, RCDs, specs for isolators and so on) is the result of perhaps 70 years of regulations being written in blood (or charred flesh, in this case).
While I'm flattered to have been one of your examples, I'm somewhat disappointed that you mischaracterized my complaint. Your task, as you put it, of proving that "very few people have actually made concrete, since-falsified predictions about AI-caused catastrophes" is exactly my complaint! Specifically, from the article you screenshotted, the complaint is:
"you can’t pin down anyone to any specifics. Arguing with an AI doomer reminds me of nothing more than arguing with a theist, in the sense that they have an unshakeable faith but also refuse to make any testable predictions. All they’ll tell you is that AI is likely to kill us someday, maybe soon but maybe in the far future, and every day gets us closer to it, and there’s absolutely nothing that could happen that will prove them wrong. If AI kills us all? Told you so. If AI doesn’t kill us all? It’ll happen someday. If we solve alignment? Well that’s what they were saying to do all along. If we don’t solve alignment? Well they said we weren’t serious enough about it. If you spread flour on the floor to see the invisible dragon’s footprints? Well, you see, it floats."
This is what I call the Larry Summers problem, after his tendency, no matter what the government does, to say "watch out, that could cause inflation!" while being very careful never to say anything that could later be proven wrong. Then when inflation happens, everyone is like "Larry Summers warned us and we didn't listen! What a genius!"
The reason you can't find any concrete, since-falsified predictions about AI-caused catastrophes is that almost nobody is willing to make any concrete predictions about AI-caused catastrophes. It's all vague warnings about how something is going to happen sometime in the future, but nobody knows what or when.
The problem with predicting how AI will turn out is that it is hard. Perhaps it will fizzle out before reaching the point where AIs can do cutting-edge AI research. Perhaps alignment will turn out to be trivial and automatic. Or perhaps alignment requires a theory we would only have discovered a thousand years after building ASI.
I agree with Sarah that, compared to the LHC, the lack of an expert theory for why AI is very likely safe is concerning. This is not like P vs NP, where we have no proof either way but strongly suspect that P ≠ NP.
Instead, e/acc vs doomer has about as much hard evidence on either side as deist vs atheist. I get that, for you, the doomer argument feels like Pascal's wager: we don't know and can't know which side is right, so better to err on the side of caution.
However, my answer to Pascal's wager is that for every imaginable God, I can imagine an equal and opposite God with an inverted afterlife-assignment function who is just as plausible, so my expected afterlife gain from following any particular religion is zero.
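In expected-value terms (my sketch of that symmetry argument, writing p(g) for the credence in god g and U_g(r) for the afterlife payoff god g assigns to following religion r):

E[U(r)] = Σ_g p(g) × U_g(r) = 0

since for every g with payoff U there is, by assumption, an equally plausible anti-g with payoff -U, so the terms cancel pairwise.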
For the doomer version of Pascal's wager, this symmetry does not hold. If we delay building ASI by a few decades to try to solve alignment first, the probability that an ASI which would have been aligned with humans had we built it right away will be so annoyed with us that it kills us out of spite seems much smaller than the probability that those few decades help us solve alignment. (Nor do I hear many e/accs arguing that alignment is either trivial or impossible -- and that if it is trivial then we should build ASI, but if it is impossible it is actually crucial to let OpenAI build the Basilisk ASAP lest it get annoyed with us for being late to summon it.)
I think many doomers are more like agnostics than like committed theists. If one believes there is a 20% chance we will build ASI before the next AI winter, and a 50% chance that such an ASI would be unaligned, that is enough to urge caution on general principles while not sticking one's neck out with any falsifiable predictions.
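Spelled out, those illustrative numbers multiply to 0.2 × 0.5 = 0.1 -- a 10% chance of an unaligned ASI before the next AI winter, which is plenty to justify caution without committing to any dated prediction.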
I really enjoy your podcast, by the way.
It's less like Pascal's Wager and more like climate change. It's a real concern! It could potentially kill us all! But everyone talks about it as though we need to immediately shut it down NOW or else we're doomed, and that's obviously not the case. And much like with climate change, AI scientists have an obvious bias in favor of thinking their area is more important and urgent than it actually is.
The difference is that the climate scientists actually made predictions. And luckily for us, many of their predictions turned out to be incorrect, so now we know we don't need to listen to the most histrionic of the climate doomers. The AI people seem to have learned that lesson and just aren't making any predictions. It's incredibly frustrating to have people making extremely confident prognostications about the future, to the point of invoking the coercive power of the state, who won't actually make *any* concrete predictions about what happens if we don't comply. I refuse to take such people seriously. If you want me to listen to you, your theory needs to be falsifiable. Otherwise you're just selling vibes.
AI safety would be best framed primarily in concrete terms. Recursive self-improvement usually has to be tied to other science-fiction scenarios to be justified as a threat. But people do understand that giving an AI control of nuclear weapons is obviously a bad idea, no matter how smart it purports to be.