Title: Ex-Google CEO Says AI War Is COMING! (Superintelligence Strategy)
Author: TheAIGRID

Transcript:
Dan Hendrycks, the director of the Center for AI Safety, Eric Schmidt, the former CEO and chairman of Google, and Alexandr Wang, the founder and CEO of Scale AI: these three came together and produced Superintelligence Strategy, and this is by far one of the most important documents, because I believe it outlines the superintelligence strategy that nations are going to take in order to protect themselves from the monumental amount of craziness that is about to come due to the rapid advancements in AI. Now, this video might be a long one, but of course I will have timestamps. It's important to know that there are so many future events that could possibly occur that aren't exactly good, and this superintelligence strategy document seeks to provide a guide to combat some of those things, and of course to navigate certain issues which may arise, such as unknowns and many other really difficult scenarios that we will have to face sometime in the near future.
One of the things that they talk about here is the fact that AI is like electricity. They talk about how it is rapidly transforming multiple facets of society, with advances arriving at a pace and scale that very few anticipated, and how these developments compel policymakers to address a widening spectrum of issues, from economic shifts driven by automation to concerns about global competition. They basically talk about how, unlike specialized technological tools, AI is pretty much like electricity, because it spans virtually every sector, including finance, healthcare and defense, and how this broad applicability, coupled with its rapid evolution, creates a risk landscape that is expansive and difficult to predict, so strategic actors must contend with potential misuse. They say it right here: AI has been compared to electricity for its general-purpose nature, to traditional software for its economic importance, or to the printing press for its cultural impact. And you can see right here they state that while these comparisons provide useful entry points, they fail to emphasize the grave national security implications of AI; a more productive analogy lies between AI and catastrophic dual-use nuclear, chemical and biological technologies, and quite like them, AI will be integral to a nation's power while posing the potential for mass destruction. You might be thinking that is an overstatement; trust me, it is not.
So one of the things this document actually tries to get you to understand is the fact that things do change quickly. It talks about how in 1933 the leading scientist Ernest Rutherford dismissed the notion of harnessing atomic power as "moonshine", basically saying: don't even try to harness atomic power, it's a complete joke. The very next day, Leo Szilard read Rutherford's remarks and sketched the idea of a nuclear chain reaction, which ultimately birthed the nuclear age, and eventually figures such as Oppenheimer recognized the dual nature of their work. Today, apparently, we are at a similar stage: what was previously considered science fiction has arrived, as AI has advanced to the point where machines can learn, adapt and potentially exceed human intelligence in certain areas, and AI experts including Geoffrey Hinton and Yoshua Bengio, pioneers in deep learning, have expressed existential concerns about the technologies they helped create. Now, if you aren't familiar with the Manhattan Project, they talk about how there are Manhattan Projects aiming to eventually build superintelligence, and that they're already underway, financed by many of the most powerful corporations in the world and of course by governments. Nation states are going to be willing to do this because it is a very decisive moment for some of these countries; this could be the make-or-break moment. With several projects underway, these are basically secret projects that are going to be funded with millions and millions of dollars to ensure that they get there first.
Now, of course, this entire superintelligence strategy actually requires thinking about the unthinkable, and in this paper they state: we propose such a strategy to grapple with the fundamental questions along the way. What should be done about lethal autonomous weapons, catastrophic malicious use, powerful open-weight AI, and AI-powered mass surveillance? How can society maintain a shared grasp of reality? What should be done about AI rights? How can humans maintain their status in a world of mass automation? So many different questions, and so few answers.
Now, one of the things they talk about here is, of course, competition. They talk about how AI may transform the foundations of economic and military power: its ability to automate labor could become the source of economic competitiveness, and in the military sphere it could be used to dominate rivals. They begin by looking at economic power and then turn to the greater military implications. So you can see right here they start by talking about AI and economic power, and how there's going to be a shift in terms of economic power, because AI chips are going to basically be the currency of that economic power. As AI becomes more integrated into the economy, the possession of advanced AI chips may define a nation's power. Historically, wealth and population size underpinned a state's influence, but the automation of tasks through AI alters this dynamic: a collection of highly capable AI agents, operating tirelessly and efficiently, rivals a skilled workforce, effectively turning capital into labor. In this new paradigm, power will depend on both the capability of AI systems and the number of AI chips on which they can run, and nations with greater access to AI chips could outcompete others economically. They're basically stating that if you have one nation that's dependent on just humans, and another nation that has so many AI chips it's able to completely automate parts of its economy, that second economy is going to be moving ten times faster and become a lot more economically valuable, and since it's powered by AI chips, those chips are going to become basically the currency of the future.
And this is where they talk about the fact that states have long pursued weapons that could confer a decisive advantage over rivals, and AI systems introduce new avenues for that pursuit, raising questions about whether certain breakthroughs, such as superintelligence, could undermine deterrence and reorder global power structures. I think this is so true. The literal fact that one company out of California will probably reach superintelligence before the government is certainly going to have some wide-ranging effects, and could definitely reorder global power structures. Or what happens if China manages to get to superintelligence first and essentially gains military dominance? You have to understand that the modern world order has this slight undertone of violence underneath it; I know that sounds pretty crazy, but this is somewhat how things are enforced, and the fact that global stability is going to be changed because someone is going to wield such a decisive advantage is pretty dangerous.
It talks about how AI could enable military dominance: advanced AI systems may drive technological breakthroughs that alter the strategic balance, similar to the introduction of nuclear weapons, and could generate strategic surprises that catch rivals off guard. Such a superweapon may grant two tiers of advantage. One might be called subnuclear dominance, which would allow a state to project power widely and subdue adversaries without disrupting nuclear deterrence. The second possibility is a strategic monopoly on power, which would upend the nuclear balance entirely and could establish one state's complete dominance and control, leaving the fate of rivals subject to it. Either way, if you develop superintelligence first, you are going to be in a very good position, whether you're a company or a country, and I think this makes it clear why all of these companies, countries and governments are pouring billions of dollars into this: because it is, I don't want to say the final invention, but potentially the final invention.
They talk about how subnuclear superweapons, such as an AI-enabled cyberweapon that could suddenly and comprehensively destroy a state's critical infrastructure, exotic EMP devices, or next-generation drones, could confer sweeping advantages without nullifying an adversary's nuclear deterrent. Some superweapons might erode mutual assured destruction outright: a "transparent ocean" would threaten submarine stealth by revealing the location of nuclear submarines, and AIs might be able to pinpoint all hardened mobile nuclear launchers, further undermining the nuclear triad. This is just absolutely insane when we think about what a superintelligent AI is going to be able to do, and it's going to be absolutely crazy if this stuff exists, because we're going to have to think about the second-order and third-order consequences of these changes, along with the military ones.
Now they talk about the implications of superweapons: the fact that superintelligence is not merely a new weapon, but a fast track to all future military innovation. And that is quite true; if you have superintelligence, you have a fast track to all future military innovation, and pretty much all kinds of innovation, which is why people, companies and countries are really chasing this. It talks about how a nation with sole possession of superintelligence might be as overwhelming as the conquistadors were to the Aztecs, and if a state achieves a strategic monopoly through AI, it could reshape world affairs on its own terms, which is pretty crazy. And here is the bad part: an AI-driven surveillance apparatus may enable an unshakable totalitarian regime, transforming governance at home and leverage abroad. That's going to be pretty crazy, because when you think about it, AI superintelligence in the hands of the wrong government could most certainly entrench some kind of totalitarian regime, which would be something you literally wouldn't be able to break free from. You have to think about how previous military dictatorships were toppled, how those things fell: it's simply because they didn't have the technology to take over the world. What happens if someone gets superintelligence and they want to take over the world? If they have that strategic military advantage and all future military innovations, they're probably going to be able to do it, which is of course a risk.
They talk about the fact that data centers may even become military targets, and that the mere pursuit of this breakthrough, which of course many people are pursuing, could tempt rivals to act before their window closes, which of course leads to a lot more conflict. It talks about how, in the nuclear era, some proposed preventive nuclear strikes on the Soviet Union to thwart its rise, while the United States was considering crippling the Chinese nuclear program during the early 1960s, and they were basically faced with a hard decision. Thinking about this again, it's the same question: do we take preventive action? Rather than relying on cooperation or seeking to outpace other countries, countries may just think: let's actually sabotage or attack data centers, if the only other option is to allow that country to move forward. This is a real risk, because if certain countries realize they are never going to win the AI race, they may just seek to slow other countries down, whether by espionage or by sabotage; they may just want to destroy those data centers. This is where Yoshua Bengio actually talks about this; he spoke about it months ago, but the impact superintelligence will have is becoming clearer by the day.
Now, of course, they do talk about terrorism: AI's dual-use capabilities amplify terrorist attacks. Technologies that can revolutionize healthcare or simplify software development also have the potential to give people the ability to create bioweapons and conduct cyberattacks, and this amplification effect lowers the barriers for terrorists, enabling them to execute large-scale attacks that were previously limited to nation states.
Now here's where they talk about AI and the lowered barrier to bioterrorism. They actually recount a previous story where a Japanese cult orchestrated the 1995 subway attack. They operated with limited expertise; these were people who weren't that sophisticated, and they still managed to produce and deploy a chemical weapon in the heart of Tokyo's transit system, killing 13 people and injuring over 5,000. That attack paralyzed the city, instilling widespread fear, and demonstrated the havoc a determined non-state actor can wreak. They're making the point that with AI, a group like that probably would have been able to go much further: AI could provide step-by-step guidance on designing lethal pathogens, sourcing materials and optimizing methods of dispersal, and what once required specialized knowledge and resources could become accessible to individuals with malevolent intent, dramatically increasing the potential for catastrophic outcomes. Basically, what they're saying is that we have a situation on our hands where the advancement of AI is great because it allows people to do great things, but it also allows people to do worse things. Essentially, AI is a tool that empowers agency, but for those who want to use their agency to commit crimes like terrorism, it's unfortunately also going to increase the level of what they can do.
Now, if you've been paying attention to AI safety and the AI companies, you'll know that they actually speak about something called AI Safety Level 3, or ASL-3, and they state that they won't release models that cross that threshold. That leads me to believe, and this is something I've said for quite some time now, that future AI models either (a) won't get released publicly, or (b) you're probably going to need a license to use certain AI models, because you won't be able to constrain the models, but the information in them is going to be potentially so dangerous that anyone who uses them will have to have some kind of clearance in order to interact with the model. I know it does seem pretty crazy, but if you can't risk that model being used by the general public, then you're probably going to have to restrict it to a certain level of user that actually has the clearance to interact with it. For example, I can't just go ahead and get some uranium; I have to pass some clearance checks. I can't just go ahead and get a gun; I have to pass some tests. And I can't just go ahead and drive a car; I have to pass a driving test. The point I'm trying to make is that over time I do think there will be a lot more limits on AI to counteract this effect, including on the future of open source, because as these models get much smarter, there are definitely going to be a lot more chances for individuals to commit a lot more crimes, which is one of the dual-use problems, and rather unfortunate.
Now they talk about the fact that modern bioweapons, enhanced by AI-driven design, could exploit vulnerabilities in human biology with unprecedented precision, creating contagions that evade detection and resist treatment. They mention mirror bacteria, engineered with reversed molecular structures, that could evade the immune defenses that usually keep pathogens at bay. If you think about it, some of this stuff is very far out, but the risks are there: if someone is able to develop these pathogens, or tries to kill off a certain demographic of people, people of a certain race, a certain eye color, a certain group, they're probably going to be able to do it with a lot more accuracy, considering the rate at which these tools are improving.
Now they talk about the fact that there could be cyberattacks on critical infrastructure, and the problem is that this is something that's already happening. I do remember Amazon saying they are receiving an enormous number of hacking attempts every single day (I could even go and find the article), and it's all due to the fact that AI is enabling large-scale, sophisticated attacks. If you're able to run your own server, a lot of these open-source tools actually allow you to run things on a home network, and imagine being able to house various agents just doing your hacking work for you. Another thing: even if it's not cyberattacks on critical infrastructure, I'm pretty sure many of you are aware that there are more scams than ever now that are realistic in nature, and as time goes on I do think these scams are going to get more sophisticated and much harder to verify, because you've got things like face swaps, AIs that sound super realistic, and of course AI agents that are able to do a variety of different tasks. Pretty crazy, and of course really concerning.
Now, when they talk about cyberattacks on critical infrastructure, the one that really concerns me is that many countries, including really big Western countries like America, the UK and the EU states, actually have fragile systems: the power grids and the water systems are more fragile than they appear. A hack targeting digital thermostats could force them to cycle on and off, creating damaging power surges that burn out transformers, and these are critical components that can take years to replace. So we could be in a situation where certain states, or certain parts of countries, don't have water for years at a time due to these parts not being available. It's a situation where we really do have to think about the potential for damage here and the wide-scale disruption that could happen. And you have to understand that on the other side, you're also going to have to have the defense: you are going to have to beef up your defenses so much if this is the case. So of course one of the career opportunities I think is going to keep expanding is cybersecurity and defense, because the cyberattacks are only going to get worse as time goes on.
It talks about exploiting vulnerabilities in supervisory control and data acquisition (SCADA) software, compelling sudden load shifts and driving transformers beyond safe limits, and about water treatment facilities, where tampered sensor readings could fail to detect dangerous mixtures. All of these crazy things. And the worst part, one thing the document doesn't really talk about, is that even if you have incredibly capable AI that can defend against any attack, there is still the human element. Humans are susceptible to messages and hacks; a lot of hacks that happen now are just down to human error. Humans see an email and may not realize what it is, or they talk to a "woman" online and get scammed out of millions of dollars because they think they're talking to a beautiful woman, and because AI is now so emotionally intelligent, that aspect can definitely be exploited as well. I remember reading the recent GPT-4.5 system card, where GPT-4.5 was able to successfully extract a lot of money in a persuasion evaluation by convincing its target in a very specific way, and this just goes to show how persuasive future systems are going to be.
Now, this is where they talk about dual-use technology and the offense-defense balance. (I probably should have made this screenshot a bit higher quality.) Essentially, the question they raise is whether dual-use technology should be proliferated without restrictions, and they basically say that, considering AI often helps attackers more than defenders, this is something that shouldn't proliferate widely. In other words, if the offense-defense balance is defense-dominant, meaning the technology helps you defend more than it helps attackers, then that technology can proliferate widely; but if it is offense-dominant, then you have to limit proliferation, because if attackers can cause catastrophic harm, which they can, then of course you have to limit proliferation. This is why I've always said that dangerous AI, at least, shouldn't be open sourced, because the wide-scale harms 10 to 20 years out simply aren't going to be worth it. For example, they spoke about how critical infrastructure often struggles to defend itself, because it's hard to constantly update those systems without causing interruptions, and the problem is that attackers can easily and quickly exploit those weaknesses. So you have to understand that this technology does favor the attackers, and we have to ask whether its misuse could lead to a catastrophe. Basically, what this chart says is that it's typically easier and quicker to create threats than to stop them, so we need to be careful with our decisions on how widely to share these powerful technologies.
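To make that decision rule concrete, here is a minimal sketch in Python of the proliferation logic as I read it from this section. The function name, inputs and thresholds are my own illustrative framing, not the paper's formalism.

```python
# A toy version of the offense-defense proliferation rule described above:
# share widely only when a technology helps defenders at least as much as
# attackers, and never when misuse could be catastrophic.

def should_proliferate_widely(defense_benefit: float,
                              offense_benefit: float,
                              misuse_is_catastrophic: bool) -> bool:
    """Toy decision rule for releasing dual-use technology."""
    defense_dominant = defense_benefit >= offense_benefit
    return defense_dominant and not misuse_is_catastrophic

# Per this section's claim that AI currently favors attackers, and that
# attacks on critical infrastructure could be catastrophic:
print(should_proliferate_widely(defense_benefit=1.0,
                                offense_benefit=3.0,
                                misuse_is_catastrophic=True))  # False: restrict
```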
Now, of course, let's talk about loss of control, because this is one of the most widely debated topics within the AI space. They talk about how we now shift from threats involving rival states and terrorists to a new source of threat: the possibility of losing control over an AI system itself. Here, AI doesn't just amplify existing threats but creates new paths to mass destruction. A loss of control can occur if militaries and companies grow so dependent on automation that humans no longer have meaningful control, if an individual deliberately unleashes a powerful system, or if automated AI research outruns its safeguards; those are all things that could potentially undermine national security. While this does seem like fiction right now to many people sitting on the fence, I think ten years from now this is not going to be fiction at all, when we have rapidly more powerful AI systems. Right now it may not sound that crazy, but I remember, once again, reading research papers where small models, given some kind of truly agentic framework, were able to copy and exfiltrate themselves multiple times. So this is something we do have to think about; it isn't something that is completely impossible, it's something that could happen. And another thing as well: people are already doing this; there is a slow erosion of control to these models.
So the problem is that because AI is so good, the waves of automation start to happen. It says that automation, once incremental, may strike across entire sectors at once and leave human workers abruptly displaced. Basically, what they're saying is that in this climate, those who refuse to rely on AI to guide decisions will find themselves outpaced by competitors who do, having little choice but to align with market pressures rather than battle against them. Each new gain in efficiency entrenches dependence on AI, as efforts to maintain oversight only confirm that the pace of commerce outstrips human comprehension, and soon replacing human managers with AI decision-makers seems inevitable, not because anyone wants to surrender authority, but because doing otherwise courts immediate economic disadvantage. What they're saying here is that if you don't use AI, you're going to be behind, and if that is the case, we're slowly, over time, going to lose control. And if we lose control, then we're in a situation where, if our entire state depends on these AI systems, what happens when it starts to go out of control and the AI potentially does something we may not understand?
And then you can see it talks about how, once AI-managed operations set the tempo, more AI is required simply to keep pace, and we're seeing this right now. It says these systems compose emails and handle administrative tasks, and over time they orchestrate complex projects, then start to supervise entire departments and manage vast supply chains beyond any human capacity. And it says that as the economy becomes more and more complex, people will trust the AI more and more, in an escalating cycle of reliance on the AI system. Right now, some people use AI for various things; I personally use AI to help manage my social media, but in the future maybe I'll use AI to write the scripts, maybe I'll use it to make some key decisions, and as the decisions get better and better, and as I keep using the AI over time, I'm going to be more and more reliant on that AI.
And you can see "irreversible entanglement": eventually, if essential infrastructure and markets cannot be disentangled from AI without risking collapse, human livelihoods will depend on automated processes that no longer permit easy unwinding, and people will lose the skills needed to reassert command. Like power grids, which cannot be shut off without immense cost, our AI infrastructure is going to become so tightly intertwined with our civilization that the cost of pressing the off switch grows more and more prohibitive, as halting the systems would cut off the source of our livelihoods. And it's pretty crazy, because they're basically asking: if we're going to build this system, and it's going to be embedded in our world, how on Earth are we going to ensure that at some point we don't lose control of these systems, and that, because they're so deeply intertwined, we don't lose everything when they go off the rails?
So of course, once again, they talk about ChaosGPT and how people could try to unleash crazy agents, and about rogue-state tactics: an unleashed AI could draw on the methods of rogue states. North Korea, for instance, has siphoned billions through cyber intrusions and cryptocurrency theft; recently, I think they stole about a billion dollars' worth of Ethereum, which is pretty incredible. But when you think about the fact that it only takes one unleashed AI to improve those tactics at scale, self-propagate copies of itself across scattered data centers, and divert funds to finance more ambitious projects, it's going to be pretty hard to stop those things; this could get out of control, and it's something we're going to have to think about in the future. And it says that though rudimentary now, future models may grow far more agile and perform tasks that once demanded human hands, and if a capable AI hacks such machines, it gains immediate leverage in the physical world. From there, the sequence is straightforward: it crafts a potent cocktail of bioweapons and disperses it through robotic proxies, crippling humanity's ability to respond. This is where they talk about the fact that once an AI is really dangerous, it could have a simple path to catastrophe by being able to operate in the physical world.
Now, I do think there are going to be rules and regulations; you're not going to have a random Tesla bot just walking around where nobody knows what it's doing. But I do think the risk will be there, because all of these companies are building humanoid robots, and I think there are going to be tons of humanoid robots in the future. I even saw a demo today of a robot jogging that looked super realistic. It's pretty crazy, but the point is that this stuff is advancing quickly, a lot quicker than I honestly initially thought. And they basically say that if an AI is able to hack these machines, it's going to get immediate leverage in the physical world, and from there it's not going to be hard for it to do anything else, which is of course very dangerous and something we do have to think about.
and you know this is where we get into
the intelligence recursion so this is
something that is really really
debatable but at the same time this is
probably the biggest risk so in 1951
Alan churing suggested that a machine
with human capabilities would not take
long to outstrip our feeble powers and
good later warned that a machine could
redesign itself if a in in a rapid cycle
of improvements an intelligence
explosion that would leave humans behind
and today all three most cited AI
researchers Yoshua Benjo Jeffrey Hinton
Elia Sasa have noted that an intelligent
explosion is a credible risk that could
lead to human extinction AAS this does
seem like something that is pretty much
you know fake and you know all of that
stuff I think it's worthwhile taking a
look at what Mustafa siman says the
Microsoft AI CEO he says something super
interesting I also think it's worth what
looking at Larry summon says he's on the
you know Board of open aai and this is
And this is the scenario they talk about: suppose we develop a single AI that performs world-class AI research and operates at, say, 100 times the pace of a human, and we copy it 10,000 times; then we have a vast team of artificial AI researchers driving innovation around the clock. This is where an "intelligence recursion", or simply a recursion, generalizes the notion of recursive self-improvement, shifting from a single AI editing itself to a population of AIs collectively and autonomously designing the next generation. Even if an intelligence recursion only achieves a 10-fold overall speedup, we could condense a decade of AI development into a single year. Such a feedback loop might accelerate beyond human comprehension and oversight, and with iterations that proceed fast enough and do not quickly level off, the recursion could give rise to an intelligence explosion. Such an AI may be as uncontainable to us as an adult would be to a group of three-year-olds, and as Geoffrey Hinton puts it, there's no good track record of less intelligent things controlling things of greater intelligence. They're basically saying: if we create things that are that much smarter than ourselves, how on Earth are we really going to control those systems?
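Just to make the arithmetic in that passage concrete, here is a minimal back-of-the-envelope sketch in Python. The 100x pace, 10,000 copies and 10-fold speedup figures come straight from the passage above; the linear-scaling assumption is mine and deliberately naive, since real research output would not scale linearly with the number of copies.

```python
# Back-of-the-envelope sketch of the recursion arithmetic described above.
# Coordination overhead and diminishing returns are deliberately ignored.

HUMAN_PACE = 1.0          # one human researcher's output per unit time
AI_PACE_MULTIPLIER = 100  # "operates at 100 times the pace of a human"
NUM_COPIES = 10_000       # "copy it 10,000 times"

# Naive effective workforce if output scaled linearly with copies.
effective_researchers = NUM_COPIES * AI_PACE_MULTIPLIER * HUMAN_PACE
print(f"Naive effective workforce: {effective_researchers:,.0f} human-equivalents")
# -> 1,000,000 human-equivalents working around the clock

# The paper's more conservative framing: even a mere 10x overall speedup
# compresses a decade of AI development into a single year.
OVERALL_SPEEDUP = 10
DECADE_IN_YEARS = 10
print(f"A decade of progress in {DECADE_IN_YEARS / OVERALL_SPEEDUP:.0f} year(s)")
```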
Now, considering all the things we just discussed, this is where they say that despite the danger, the intelligence recursion presents a powerful lure for states to overtake rivals, and if the process races ahead fast enough to produce superintelligence, the outcome could be a strategic monopoly. Basically: if states realize that superintelligence is something that is going to dominate them, would it not make sense for those states to try to develop superintelligence quickly, with the intelligence recursion as their path to a strategic monopoly, and just pray that they manage to control it? Because if they get that far ahead in the intelligence recursion, they'll be that far ahead of everyone else. It says that even if the improvements are not explosive, a recursion could still advance capabilities fast enough to outpace rivals and potentially enable technological dominance, and the first-mover advantage might then persist for years or indefinitely, spurring states to take bigger risks in pursuit of that prize. Basically: whoever gets there first is probably going to have a model so intelligent it can predict and do almost anything, and whoever gets there first is likely to stay there, which is why there's so much incentive to actually get there.
And you can basically see here that with all these geopolitical pressures, if the choice is to risk omnicide or lose, some might take that gamble. Carried out by multiple competing powers, this amounts to global Russian roulette and drives humanity towards an alarming probability of annihilation. The problem is that with everyone competing for power, one of these countries is going to take the gamble, and because they're all taking the gamble, everyone is basically going to run a secret Manhattan Project, which leads us closer to annihilation, because none of us knows how to control those systems, and we're going to develop them faster than we develop the safeguards. In sharp contrast, after the defeat of Nazi Germany, the Manhattan Project scientists feared that the first atomic device might ignite the atmosphere. Robert Oppenheimer asked Arthur Compton what the acceptable threshold should be, and Compton set it at 3 in a million; anything higher was too risky. The calculations actually suggested that the real risk was below Compton's threshold, so the test went forward. The point is that we should work to keep our risk tolerance near Compton's threshold rather than in double-digit territory. But in the absence of that kind of coordination, whether states trigger a recursion depends on their perceived probability of a loss of control, and they basically make the point that if countries do this, it's probably going to hand the final victory not to any state but to the AIs themselves, which is of course unfortunate for us.
They also talk about how loss of control can emerge structurally, as society gradually yields to decision-making AI systems that become indispensable and insidiously acquire more and more effective control; that's something that could well happen over time. And they recap the earlier points: AI could redefine national competitiveness based on a nation's access to AI chips; superintelligence could enable a superweapon that gives one state a strategic monopoly; the dual-use nature of the technology amplifies all of these risks; and there is such a strong attacker's advantage in bioterrorism that we need to be very careful about these AI systems.
And of course they talk about strategies to safeguard against all of this. One of the things they note: proponents of a moratorium assume that if an AI model crosses a hazard threshold in testing, major powers will pause their programs, yet militaries desire precisely these hazardous capabilities, making such pauses pretty much impossible, which is of course unfortunate. By contrast, the U.S.-China Economic and Security Review Commission has suggested a more offensive path: actually building a Manhattan Project for superintelligence. And realistically, we already know there are at least three or four companies working on this. We've got OpenAI saying they're going to build superintelligence; there was another company recently, founded by former Google researchers, saying their goal is to build not just superintelligence but autonomous superintelligence; we've got xAI working on superintelligence; and of course we've got Ilya Sutskever, who's also working on superintelligence. So for me, I think it is something that is happening.
something that is happening now one of
the things that this paper wanted to
show us was of course the possible
possible outputs of this happening so
how bad does it look so we can see here
that it says do do we do the super
intelligence Manhattan Project or not
and let's see let's say that we actually
do the super intelligence project and
what happens if China tries to mess with
this then of course we have immediate
escalation if China doesn't mess with
this then of course let's say that you
know the US outpaces China we've got a
situation where potentially the
recursion is not controlled and then
everyone dies so the US manages to get
you know super intelligence but of
course everyone dies what about if the
you know the recursion is controlled
let's say we have super intelligence we
control it we could possibly create a
super weapon and then potentially cross
China but if we don't create a super
weapon potentially there's going to be
escalation between these countries of
course once again if China manages to
outpace the US of course and they don't
control their superintelligence then
every body could die if they outpace the
US and they have a controlled super
intelligence then they could also cross
the United States and of course if they
don't develop a super weapon there could
be some escalations overall any of these
scenarios none of them look that good
and two of them in fact like one of them
is omnicide No in fact two of them are
omnicide the other one's escalation and
the other one ends up with the us being
crushed and so you have to understand
we're going to enter a very very
politically unstable time due to all of
these things going on now of course this
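To make that branching easier to scan, here is a toy rendering of the outcome tree as I just read it out. The branch labels paraphrase my walkthrough above, not the paper's exact wording, and the helper function is purely illustrative.

```python
# A toy rendering of the Manhattan Project outcome tree described above.

outcome_tree = {
    "no superintelligence Manhattan Project": "status quo (branch not explored above)",
    "US runs a superintelligence Manhattan Project": {
        "China interferes": "immediate escalation",
        "China does not interfere": {
            "US outpaces China": {
                "recursion not controlled": "omnicide (everyone dies)",
                "recursion controlled": {
                    "US builds a superweapon": "China crushed",
                    "no superweapon": "escalation",
                },
            },
            "China outpaces US": {
                "recursion not controlled": "omnicide (everyone dies)",
                "recursion controlled": {
                    "China builds a superweapon": "US crushed",
                    "no superweapon": "escalation",
                },
            },
        },
    },
}

def print_outcomes(node, path=()):
    """Walk the tree and print the terminal outcome of each branch."""
    if isinstance(node, str):
        print(" -> ".join(path) + f": {node}")
        return
    for branch, child in node.items():
        print_outcomes(child, path + (branch,))

print_outcomes(outcome_tree)
```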
Now, of course, this is where they talk about MAIM. A rival with a vastly more powerful AI would amount to a severe national security emergency, so superpowers will not accept a large disadvantage in AI capabilities. Rather than wait for a rival to weaponize superintelligence against them, states will act to disable threatening AI projects, producing a deterrence dynamic that might be called Mutual Assured AI Malfunction. They're basically stating that there is no way countries are going to sit back while other nation states develop their superintelligence and just rapidly shoot off into the stratosphere with all of their technological marvels and innovations. Think of the nuclear analogy: if the US develops nuclear weapons and other states develop nuclear weapons, there's no scenario where one of them launching doesn't lead to the other launching, so they're both essentially destroyed, and everyone dies in that scenario. So there's this equilibrium, which is, in a way, stabilizing.
And then, of course, there's the analogous scenario here where the US bids for strategic dominance, China bids for it too, and both projects end up disabled: this mutual assured AI malfunction. This is the newer idea: if country A tries to sabotage or destroy country B's super-smart AI project, then country B will do the same to country A, both countries end up in a bad scenario, so neither country wants to do this, and we essentially have a mutual assured AI malfunction, which is probably going to be relatively stable; hopefully, the thinking goes, basically the same as how nukes are now. They're saying that this is probably the default outcome: a state can expect its AI project to be disabled if any rival believes it poses an unacceptable risk. This dynamic actually stabilizes the strategic landscape without lengthy treaty negotiations; all that is necessary is that states collectively recognize their strategic situation. The net effect may be a stalemate that postpones the emergence of superintelligence, curtails many loss-of-control scenarios, and undercuts efforts to secure a strategic monopoly, much as mutual assured destruction once restrained the nuclear arms race. Hopefully this does happen, and I guess we will see how things play out there.
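One way to see why this stalemate could be stable is to sketch it as a toy two-player game. The payoff numbers below are illustrative assumptions of mine, not from the paper; all that matters is the ordering, that having your project maimed is worse than accepting the stalemate, so racing stops paying once each side credibly threatens to disable the other's project.

```python
# Toy two-player payoff sketch of the MAIM deterrence logic described above.
# Under MAIM, a racing project is assumed to be sabotaged by the rival, so
# "race" never actually yields dominance, only the cost of a disabled project.

PAYOFFS = {
    ("restrain", "restrain"): (0, 0),   # stalemate: the default, stable outcome
    ("race", "restrain"):     (-5, -1), # racer gets maimed; rival pays a sabotage cost
    ("restrain", "race"):     (-1, -5),
    ("race", "race"):         (-5, -5), # both projects disabled
}

def best_response(player: int, opponent_move: str) -> str:
    """Return the move maximizing this player's payoff against a fixed opponent move."""
    def payoff(move: str) -> int:
        profile = (move, opponent_move) if player == 0 else (opponent_move, move)
        return PAYOFFS[profile][player]
    return max(("race", "restrain"), key=payoff)

# Check that (restrain, restrain) is an equilibrium: no unilateral deviation helps.
for player in (0, 1):
    assert best_response(player, "restrain") == "restrain"
print("(restrain, restrain) is an equilibrium under these assumed payoffs")
```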
But there's one more thing they actually talk about. This is a really long document, and I might even do a longer video on it, because every single page was really intriguing; I was reading this thinking the video was probably going to be an hour long, and I really wanted to cut it down, but every page was interesting, and this video still doesn't cover everything they have. So I decided that for the rest, I'm just going to pick out some super interesting things. One of the things they did speak about was the fact that the primary goal of compute security is to treat advanced AI chips like we treat enriched uranium. That's basically stating that these AI chips are going to become really hard to obtain, and of course they're going to be tracked, because in the future AI chips are going to be super important to global security; essentially, they're going to become a piece of technology that is critical for the future of AI.
Now, of course, they talk about how if such a model becomes publicly available, it is irreversibly proliferated, making advanced AI capabilities accessible to anyone, including terrorists and other hostile non-state actors who are far more likely to create bioweapons. They note that while exfiltration by a rival superpower is concerning, the public release of WMD-capable model weights may pose a far graver threat, and this is why I say there is a risk to open-sourcing models with great capability.
One of the craziest statements from this document is about the fact that AI systems exhibit unpredictable failure modes, and that this is so different from nuclear physics, which rests on a rigorous foundation of physics and math, whereas today's AI research often advances through atheoretical tinkering: throwing stuff at the wall, seeing what sticks, and vibes-based evaluations. To create these cutting-edge AIs, they essentially grow them; these systems are not designed, they are grown, and that can result in emergent capabilities. So this is something they flag as another risk factor, because there are things we cannot control.
So, overall, there is just a plethora of different things to consider, and I would say we have a situation on our hands where superintelligence is probably going to come within the decade; within the next 10 to 20 years, I think the breakthroughs that are needed probably will happen. And a lot of the things being discussed in today's video, and in the document, which of course you can read, are things I think future governments will have to consider. But let me know which of these things you guys think about the most; I would love to know your thoughts and theories. With that being said, I will see you guys in the next video.