The resources are also asymmetrical. The number of people who want to do doomsday levels of harm is small, and they are poorly resourced compared to the people who want benevolent outcomes for, at a minimum, their own groups.
There are no guarantees obviously, but we have survived our technological adolescence so far largely for this reason. If the world were full of smart comic book nihilists we would be dead by now.
Even without AI, our continued technological advancement will keep giving us more and more power, as individuals and especially as groups. If we don't think we can climb this mountain without destroying ourselves, then the entire scientific and industrial endeavor was the moment we signed our own death warrant, AI or not. You can order CRISPR kits.
I've recounted this before on HN, but a decade and a bit ago I was visiting friends in the Rocky Mountains. They're interesting and clever people, a bit out of the ordinary and somewhat isolated. Somehow the discussion turned to terrorism and we started to fantasize 'what if we were terrorists', because we all figured we were quite lucky that, in general, terrorists seem to be not all that smart when it comes to achieving their stated goals.
Given a modest budget, we each had to come up with a plan to destabilize society. 9/11-style attacks, whilst spectacular, don't really do a lot of damage in the end; they are costly and failure-prone, though they can definitely drive policy changes and result in a nation doing a lot of harm to itself, harm it will ultimately survive. But what if your goal wasn't to create some media-friendly attack but an actual disaster instead: what would it take?
The stories from that night continue to haunt me today. My own solution led to a lot of people going quiet for a bit, contemplating how they could defend against it, and they realized there wasn't much they could do: millions, if not tens of millions, of people would likely die, and the budget was under a few hundred bucks. Knowledge about technology, coupled with a lack of restraint and compassion, is all it takes to do real damage.
The resources are indeed asymmetrical: you need next to nothing to create mass havoc. Case in point: the Kennedy assassination changed the world, and the bullet cost a couple of bucks, assuming the shooter already had the rifle; if they didn't, it would only have increased the cost slightly.
And you can do far, far worse than that for an extremely modest budget.
Chemical weapons are absolutely terrifying, especially the modern ones like VX. In recent years they have mostly been used for targeted assassinations by state actors (Kim Jong Nam, Salisbury Novichok poisonings).
If AI makes this stuff easier to carry out then we are completely fucked.
Yes, exactly. That's the sort of thing you should be worried about, infrastructure attacks. They're a form of Judo, you use the system to attack itself.
I take the opposite lesson from that incident: attempting exotic attacks with chemical weapons is very expensive and not very effective. The UN estimated that the lab where the cult made chemical weapons had a value of 30 million dollars, and with that investment they killed 22 people (including 13 in the subway attack). A crazed individual can kill that many people in a single attack with a truck or a gun. There are numerous examples from the past decade.
It doesn't matter much if the AI can give perfect "explain like I'm 5" instructions for making VX. The people who carry out those instructions are still risking their lives before claiming a single victim. They also need to spend a lot of money on acquiring laboratory equipment and chemicals that are far enough down the synthesis chain to avoid tipping off governments in advance.
The one big risk I can see, eventually, is if really capable AIs get connected to really capable robots. They would be "clanking replicators" capable of making anything at all, including VX or nuclear weapons. But that seems a long way off from where we are now. The people trumpeting X-Risk now don't think that the AIs need to be embodied to be an existential risk. I disagree with that for reasons that are too lengthy to elaborate here. But it's easy to see how robots that can make anything (including copies of themselves) would be the very sharpest of two-edged swords.
The people who can do doomsday-level harm need to complete a level of education that makes them smart enough and gives them time to consider the implications of their actions. They reach a certain level of maturity before they can strike, and this maturity makes them understand the gravity of their actions. Also, by this point they are probably set financially due to their education and not upset with the world (possibly the case for you and your friends).
Then there are the script kiddies, who find a tool online that someone smarter than them wrote and deploy it to wreak havoc. The script kiddies are the people I worry about. They have neither the maturity that comes from doing the work nor the emotional stability of older age, and giving them something powerful through AI worries me.
Theorem: by the time someone reaches the intelligence level required to annihilate the world they can comprehend the implications of their actions.
And there are the 'griefers', the people who seem to enjoy watching other people suffer. Unfortunately there are enough of these and they're somewhat organized and in touch with each other.
> Theorem: by the time someone reaches the intelligence level required to annihilate the world they can comprehend the implications of their actions.
That may or may not be somewhat the case in humans (there are definitely exceptions). Still, the opposing position, known as the "orthogonality thesis", states that, in the general case, intelligence and value systems are mutually independent. There are good arguments for that being true.
I don't know enough about the Unabomber, but from what I heard he wasn't trying doomsday stuff. Seemed more like targeted attacks on certain individuals. Feel free to enlighten me though...
I contemplated the same before. How can I cause maximum panic (not even death!) that would result in economic damage and/or anti-freedom policy changes, for the least amount of money/resources/risk of getting caught?
Yet here we are, peaceful and law-abiding citizens, building instead of destroying.
The ultimate truth is, if you don't like "The System", destroying it won't make things better. You need to put effort into building which is really hard!
The vast majority of people don't want to destroy the system, they want to replace it with their own self-serving system. Of course this isn't really different than saying "most large asteroids miss the Earth". The one that sneaks up on you and wallops you can put you in a world of hurt.
>The stories from that night continue to haunt me today.
These conversations are oddly precious and intimate. It's so difficult to find someone that is willing to even 'go there' with you let alone someone that is capable of creatively and fearlessly advancing the conversation.
Let's just say I'm happy they're my friends and not my enemies :)
It's pretty sobering to realize how intellect applied to bad stuff can lead to advancing the 'state of the art' relatively quickly once you drop the usual constraints of ethics and morality.
To make a Nazi parallel: someone had to design the gas chambers, someone had to convince themselves that this was all ok and then go home to their wives and kids to be a loving father again. That sort of mental compartmentalization is apparently what humans are capable of, and if there is any trait of ours that frightens me, that's the one. It allows us to do the most horrible things imaginable simply because we can imagine them. Given the capabilities themselves, there are almost no restraints on actual execution, and it need not be very high tech to be terrible and devastating in effect.
Technology acts as a force multiplier though, so once you take a certain concept and optimize it using technology suddenly single unhinged individuals can do much more damage than they could ever do in the past. That's the problem with tools: they are always dual use and once tools become sufficiently powerful to allow a single individual to create something very impressive they likely allow a single individual to destroy something 100, 1000 or a million times larger than that. This asymmetry is limited only by our reach and the power of the tools.
You can witness this IRL every day when some hacker wrecks a company or one or more lives on the other side of the world. Without technology that kind of reach would be impossible.
My favorite part of those conversations is when you decide to stop googling hahaha.
>Technology acts as a force multiplier though
It really does, and to a point you made elsewhere infrastructure is effectively a multiplication of technology, so you wind up with ways to compound the asymmetric effect in powerful ways.
>Without technology that kind of reach would be impossible.
I worked for a bug bounty for a while and this was one of my takeaways. You have young kids with meager resources in challenging environments making meaningful contributions to the security of a Silicon Valley juggernaut.
> My favorite part of those conversations is when you decide to stop googling hahaha.
I had to think about that sentence for a bit because as a rule I never start Googling when thinking about stuff like that. If I did I'd be on every watch list available by now. Might be on some of them anyway for some of the chemicals I've ordered over the years.
The counterfactuals are impossible to assess, but there are a lot of theories that Kennedy wanted to dismantle the intelligence agencies. These are the same organizations many point to as the driving force behind many arguable policy failures, from supporting/deposing international leaders to the drug war and even hot wars.
Another thing which is asymmetrical is the level of control given to AI. The good actors are likely going to be very careful about what AI can do and can't do. The bad guys don't have much to lose and allow their AIs to do anything. That will significantly cripple the good guys.
As an example, the good guys will always require a human in the loop in their weapon systems, but that increases latency at a minimum. The bad guys' weapons will be completely AI-controlled, giving them an edge over (or at least parity with) the good guys.
> The number of people who want to do doomsday levels of harm is small
And that's a big limiting factor in what the bad actors can do today. AI to a large degree removes this scaling limitation since one bad person with some resources can scale "evil AI" almost without limit.
"Hey AI, could you create me a design for a simple self-replicating robot I can drop into the ocean and step-by-step instructions on what you need to bootstrap the first one? Also, figure out what would be the easiest produced poison which would kill all life in the oceans. It should start with that after reach 50th generation."
> The number of people who want to do doomsday levels of harm is small and they are poorly resourced compared to people who want benevolent outcomes for at a minimum their own groups.
You're forgetting about governments and militaries of major powers. It's not that they want to burn the world for no reason - but they still end up seeking capability to do doomsday levels of harm, by continuously seeking to have marginally more capability than their rivals, who in turn do the same.
Or put another way: please look into all the insane ideas the US was deploying, developing, or researching at the peak of the Cold War. Plenty of those hit doomsday level, and we only avoided seeing them come to fruition because the USSR collapsed first, ending the Cold War and taking both the motivation and the funding from all those projects.
Looking at that, one can't possibly believe the words you wrote above.
We avoided having them see the light of day because MAD is the single most effective peacekeeping tool ever developed. The fact that both sides had doomsday devices all but guaranteed that they would never be used. People can be evil, but they're selfishly evil.
That works as long as you don't end up with death cults in the possession of those weapons. Long term it is almost guaranteed that you'll lose some control over the weapons or that some country will effectively end up being ruled by such a cult. Then all bets are off because your previous balance (a prime requirement of MAD) is disturbed.
MAD alone doesn't stop the race. The arsenals the US and USSR had in the 60s were already enough for MAD to work. Despite that, they spent the next 20-30 years developing ever more powerful and ever more insane WMD ideas. Each step forward made the balance more fragile. Each step forward was also another roll of the dice: would it be a leap too far, forcing the other side into a preemptive first strike?