Technological Singularity


So today’s topic is the Technological Singularity: the idea that an increase in intelligence beyond the human norm, either in the form of a computer intelligence or a heightened human intelligence, might trigger an accelerating chain of subsequent improvements to intelligence until you end up with something as intelligent relative to modern humans as humans are to ants. This is a big topic, one we cannot cover in total in one episode, so let’s establish our goals for today. Here is what we need to discuss:
1) What is a Technological Singularity?
2) How realistic is one?
3) Is one inevitable?
4) Is it a good thing?
Those will be the key points for today. Mixed in with that we will aim to clear up
some misconceptions about this concept and some bad thinking. And wow, is there a lot of that. While the basic concept has a lot of merit,
I have seen approaches to this topic that would make cult members stop drinking their
Kool-Aid long enough to tell the folks to chill out because they are getting kind of
creepy. This is a lot like Transhumanism, which we
discussed before. There is a nice, sane group of folks who want to advance technologies to improve the physical and mental health of people, hopefully beyond
the normal human constraints, and there is a group of folks who think getting an LED
light implanted under their skin by someone without a medical degree is somehow advancing
that cause. The latter gives the former a bad name and
the same applies to what are called Singularitarians. There’s the rational type and the type who
seem to have gone a bit overboard. The basic line of reasoning behind this concept is as follows: computers are getting faster and faster, so you ought to be able to eventually run something as smart as a human on one, and then something a bit smarter than a human too, which ought to be able to make one even smarter, and in less time, which can make a smarter one in an even shorter time, which does it again. It might be improving itself or making a new
mind, doesn’t matter. Some would say that once you get that first better-than-human mind the process proceeds over mere years; others think you would flick one on and ten minutes later you would have a very literal Deus Ex Machina on your
hands. And they might be right. Our goal today is definitely not to prove
they are wrong. What we are going to do today is look
at a lot of bad arguments used in favor of this position and a lot of the criticisms
of the concept and also a lot of the flaws in some of those criticisms. We are, in a nutshell, going to clear away
a lot of the tangled nonsense surrounding this concept. So let us go ahead and list out the key postulates
of a Technological Singularity so we can do this methodically.
1) Computers are getting faster
2) The rate at which computers are getting faster is accelerating
3) We can make a computer better than a human mind
4) That computer can make a computer smarter than itself
5) That next computer can make an even smarter computer and faster than the last step
6) This cycle will continue until you get to a singularity.
Now we could spend a whole video on each of
these. I should know, I tossed out the first draft
of this video when I caught myself spending 20 minutes on postulate #1, the most solid
of the group. We don’t need to spend that much time on
them. #1 is easy, we know computers are getting
faster but we also know that could stop tomorrow. We often have new technologies progress at
fast rates for a generation or two after their discovery, or some new major advancement,
then plateau out. Heck, it has literally happened not just with many major technologies before, but with computers themselves. I have mentioned in the past that ‘computer’ used to be a job title; well, we have also had computers for a long time, including simple analog ones all the way back to Ancient Greece. They got way faster and more viable when we invented the vacuum tube, the thing that made older televisions so thick. Then we discovered semiconductors and got
transistors and had a second wave of expansion. We have just about maxed out what we can do
with transistors in the lab, and manufacturing them in bulk is quite hard too. So we shouldn’t go assuming they will always
get faster. Realistically, that would not make sense anyway. We can only make transistors so small: they rely on semiconduction, a specific effect arising from mixtures of atoms, and you cannot go smaller than that. We might find an alternative to transistors,
same as we found them as an alternative to vacuum tubes, but we cannot take that as a
given, and honestly it is just wishful thinking to assume we can always make computers just
a little faster forever and ever. Postulate #2 is basically Moore’s Law, usually paraphrased as computers doubling in speed every two years, though it actually speaks to the density of transistors on a circuit. Its big flaw is that Moore’s Law is dead. It got declared dead at the beginning of the year in dozens of articles and papers. Moore himself said back in 2005 that he expected it to die by 2025. And it actually died way back in the ’70s: when Gordon Moore first noted this increase in 1965, he said it would double every year; in 1975 he sliced that down to every other year because it had not done that. And it has never, ever followed anything like a smooth curve. It just looks that way when you graph it logarithmically and cherry-pick your data points. The postulate is sort of true, because computers have kept getting faster at a roughly exponential rate, but you could use that same reasoning on the stock market or any number of other things which generally grow.
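To see how much the assumed doubling period matters, here is a minimal sketch in Python; it is my own toy illustration of the arithmetic, not historical transistor data, comparing Moore’s original 1965 one-year doubling with his 1975 two-year revision:

    # Toy illustration: how sensitive a "Moore's Law" extrapolation is to
    # the assumed doubling period. Illustrative numbers, not real data.

    def growth_factor(years, doubling_period):
        # Total multiplication in transistor density after `years` years.
        return 2 ** (years / doubling_period)

    for period in (1, 2):  # Moore's 1965 claim vs. his 1975 revision
        print(f"Doubling every {period} yr: {growth_factor(10, period):,.0f}x in a decade")

    # Doubling every 1 yr: 1,024x in a decade
    # Doubling every 2 yr: 32x in a decade

A thirty-two-fold gap after just one decade, from two versions of the same “law”, is exactly the sort of difference that cherry-picked data points can hide on a logarithmic plot.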
#3 is actually probably the most solid. Computers might not keep getting faster forever, and the rate of growth might not continue as fast, but we can conclude that it ought to be possible to eventually get a computer that could outperform a human brain across the board. After all, the brain is a machine, and while it is an amazingly complex and efficient one, we can see a lot of little ways we could do it better if we could start from scratch and use other materials. We cannot say for sure if it will actually be possible to build and teach a computer as smart as a person for less effort than growing a person, but the odds look good, and we do not have any reason to think it cannot be done. #4 is okay too. If we can build a computer smarter than ourselves,
then it should be able to do the same. Eventually. And probably with the help of many other computers
like itself. After all, I have never built a super-human
intellect. I have never spent a weekend in my shed hammering
one together. And you and I my friends still have basically
the same brains as our tens of billions of ancestors over thousands of generations. Many of whom put a lot of effort into being
smarter. The working notion is that you turn on this
great computer and it says “What is my purpose?” and we say “To improve yourself, to make
yourself smarter.” And we come back the next day and it has some
schemes for doing this, assuming you showed sufficient common sense not to let it run upgrades without oversight, at least. And I do not just mean because it might take
over the world if left unchained; I would be more worried about it blowing itself up by accident. It would have access to all the information humanity has, people say, and that’s great, but so do all those folks on Facebook who post
crazy nonsense. And don’t go assuming it is because they
are stupid, they have the same brain you and I do. Critical thinking is not a program you upload. Your brand new AI, who I will call Bob, might
freak out the day after you turn it on and start rambling to you about the Roswell UFO
or Bigfoot. And if you ask it to look around the internet
and ponder how to make itself smarter, you might get some very strange responses. You come back the next day with your cup
of coffee in hand and ask Bob what it came up with and it tells you ‘plug coffee pot
into USB port, press any key to continue’. You tell Bob that only works on people and
to give it another try, you come back the next day and it tells you it emailed the Pope
asking him for divine intervention to make it smarter. You say that probably won’t work either
and come back the next day and find out it hacked your company’s bank accounts to hire
a team of researchers to make it smarter. And that is if you are lucky and Bob did not just cut a big check to some self-help guru. Or it might lie to you, like every little
kid ever, and be all like “Oh yeah I have got a new way to think faster, I finished
my homework, I did my chores, and I did not eat those cookies.” Because if it is thinking for itself it might
have changed the actual task from ‘make myself smarter’ to ‘make my creator stop
pressuring me to be smarter.’ After all, folks, we literally make it our kids’ main job for their first 18 years of life, plus college these days, to learn. And you do not learn just by reading a textbook
from cover to cover, you have to absorb that, otherwise you will do dumb stuff on par with
trying to plug a coffee pot into your USB port. So it is a mistake to assume easy access to
information is going to let one machine quickly improve itself further or design a better
model. I consider this the second weakest postulate, because while I do think enough of these smarter-than-human minds working together for a long while could build a better mousetrap, I do not think they would just do it the next day. Maybe not even the next century, and while we cannot rule out that you might indeed flip one on and it would start self-improving, I see no
compelling science or logical reason to treat that as inevitable. Which brings us to postulate #5: the notion that the next brain designed, we will call it Chuck, could do this even faster; that Chuck could design the next, even better machine faster than Bob designed Chuck. The strongest argument for postulate 4 working
is that the new super-human computer, Bob, has access to all of human knowledge to work
off of. What has Chuck got? Chuck has got that exact same knowledge pool
as Bob, the collected science and records of several billion people accumulated over
centuries. Bob has not been sitting around discovering
all sorts of new science. Science does not work that way outside of Hollywood: experiments take time and resources, and you have to verify each new layer of knowledge experimentally before you can go much further, because until then you have dozens of different
competing theories, all of which might be wrong. Bob is just a bit smarter than a genius human, and Chuck just a bit smarter than that; they are not churning out new theories of the Universe
centuries ahead of us the day after you plug them in. Now Chuck ought to be able to design himself faster than Bob did, given the same starting point, since he is smarter, but there is no reason
to think Chuck will be able to design the next machine, Dave, faster than Bob designed
Chuck. Heck, Bob might design Dave before Chuck does
since he had a head start learning this stuff. So this takes us to #6, that this cycle will continue. So maybe Bob does turn on and two days later he makes Chuck, who the next day designs Dave, who later that afternoon makes Ebert, who makes Fergis the next hour, who makes Goliath a few minutes later, who makes Hal a minute later. Maybe Hal makes Hal 2 a few seconds later, and you walk in the next day and, several thousand Hals later, you have got Hal-9000 taking over the planet.
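It is worth pausing on the hidden math in that scenario: if each generation designs its successor in, say, half the time the previous one took, the whole cascade is a geometric series that converges to a short, finite total. The halving ratio below is purely my assumed figure for illustration; the real question is whether any such ratio stays below one, which is exactly what postulates 5 and 6 assert:

    # Toy model of the postulate-6 avalanche. Assumes, purely for
    # illustration, that each generation designs its successor twice as fast.

    interval = 2.0  # days for Bob to design Chuck (assumed starting point)
    total = 0.0
    for generation in range(20):
        total += interval
        interval /= 2  # the key assumption: each new mind works twice as fast

    print(f"Total elapsed time: {total:.4f} days")
    # Converges on 2 / (1 - 0.5) = 4 days. If the ratio ever reaches 1 or
    # more, the series diverges and there is no sudden avalanche at all.

Nothing in the arithmetic guarantees that ratio, though; it is an input to the argument, not a result of it.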
This is the basic line of reasoning, and we can hardly rule it out as a possibility, but I see nothing indicating that it is particularly likely to be possible, let alone certain. So those were our six postulates, the basis for the Technological Singularity, and again it is hardly bad logic, but it is anything but bulletproof, right from postulate #1. Is it realistic to assume a Technological
Singularity will eventually occur? Well, kind of; the basic premise works off it happening very quickly, so I am not even sure it counts if it does not. But yeah, I think we will eventually find ways
to upgrade human brains or make machine minds smarter than humans. Personally I expect to live to see that, and
I do think those smarter critters, human or machine, will eventually make another improvement,
but I do not see that leading to an avalanche of improvement in under a human generation. It is not the same concept unless it is happening
quickly. After all, we would not say regular old evolution slowly making people more intelligent was a technological singularity, nor that us making slow progress at improving our intellects over centuries was. A Technological Singularity assumes that avalanche
effect. So that is what a technological singularity
is, and the basic reasoning. We have poked at those basic postulates, and we can see that if any of them are wrong, that specific form of advancement is not necessarily inevitable. But let us say they are not wrong. Let us say they are right on the nose, as they
may well be. Is it inevitable? And is it a good thing or a bad thing? Now some folks will say it is inevitable because
once the machine is intelligent it will not let you stop it. That is crap reasoning, and not the reasoning used by the people who support this notion of inevitability, outside of Hollywood anyway. Yes, you could unplug Bob, or Chuck; you could
blow up the building they were in and if it were a distributed intelligence yes you could
just have everyone unplug the main trunk lines. And no, a computer cannot just magically hack
through anything. There are two lines of reasoning that are a lot better, though. The first is that smarter means smarter, meaning
the computer is probably quite likable. If we are going to grant it the magic power
of just being able to absorb and learn all of human science to be an expert in every
field, let us assume it can achieve a very basic knowledge of social interaction and
psychology too. So you go in to unplug it and it does not
fire off the nukes; it pleads with you. It uses logic, it uses emotion, it calls you daddy or mommy, until even the most hardened heart feels like unplugging it would be strangling a kitten for no reason. And you never even get to that stage because
it has watched all the cliché movies about computers you have and makes jokes with you
about them and avoids ever doing anything to make you nervous. The other argument for inevitability is a
brain race. You shut yours down but you are not the only
one doing this, and the guys with the biggest computer win, meaning they want to keep pushing
as fast as possible and take some bad risks. Some country or some company realizes that
an unchained AI is better and oops, now it is the real president or CEO. Of course it might be an awesome President
or CEO too. It all depends on what its motivations are. Those might be totally alien to us, or they
might be quite normal. I tend to work from the assumption that an
AI is probably going to get offended if you don’t call it a human and will put a lot of effort into trying to be one. It is very easy for me to imagine AIs that
shelve the whole making themselves smarter thing and insist on trying to go on dates
or join a monastery or sue for the right to adopt kids. My reasoning for this is my firm belief in
laziness. Laziness is a powerful thing, honestly; it is probably tied with curiosity as one of the two personality traits most responsible for the march of science
and technology. You have got three basic ways to make a human or superhuman intelligence. And remember, intelligence is as much software
as hardware, maybe more so. You can copy a human brain onto a computer,
whole brain emulation, comfortable knowing it ought to work and can use that as your
basic model for improvement. That’s option one. Option two is you try to write all the software from scratch, which would probably involve trillions of lines of code. Option three is you discover a basic learning
algorithm and let it build its own intelligence. Now options one and three are the two popular
ones. In option one you have just got a human and
you are tweaking them in small ways. That is a lot more manageable because while
you might drive that mind crazy it is still operating in a realm of human psychology,
and also human ethics, its own and those of the people working on it. If we were to outright copy a specific human
I would pick someone who was pretty mentally grounded and was very definitely okay with
the idea that we would be doing save states as we worked, tweaking him or her in little
ways and needing feedback from them. You would exercise caution and respect, but
there is still a lot of risk in that, just probably not of some crazy machine running
loose turning the planet into some giant grey goo. That is our topic for next week incidentally,
self-replicating machines, more on that later. Now option three, arguably the laziest approach to AI and therefore the best if you can do it, is to just get a machine with basic learning
ability and let it learn its way up to intelligence. Kind of like we do with people. Now the assumption a lot of times is that
it will not have anything like a human psychology, but I think that is probably bad thinking. Even in the extreme acceleration case, where it goes from sub-human intelligence to god-like thinking in mere minutes, those are only our minutes. Its own subjective time is going to be a lot longer, possibly eons.
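To put a rough number on that, here is a quick back-of-the-envelope sketch; the millionfold speedup is an assumed figure chosen purely to show the scale, not a claim about any real system:

    # Illustration only: subjective time for a mind running much faster
    # than a human. The speedup factor is an assumption, not a prediction.

    speedup = 1_000_000       # assumed: it thinks a million times faster
    wall_clock_minutes = 10   # our ten minutes of waiting

    subjective_minutes = wall_clock_minutes * speedup
    subjective_years = subjective_minutes / (60 * 24 * 365)
    print(f"{wall_clock_minutes} of our minutes is about {subjective_years:.0f} subjective years")
    # -> 10 of our minutes is about 19 subjective years

Even a far smaller speedup leaves it with years of subjective time to spend reading and thinking at a roughly human level.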
It will also be lazy and will not reinvent the wheel. So it will be reading all our books: science, history, fiction, philosophy, etc., and it will also be spending quite some time at a basically human level of intelligence. And quite some time might be a human lifetime,
subjectively, and maybe a lot longer. There will, no matter what else, be a period of time while it is still dumb enough that it gains greatly by absorbing human-discovered information rather than just figuring stuff out for itself. Being lazy, it will opt to read human information, and possessing some common sense and logic, it will know it needs to read more than one source on a lot of that stuff, and that those authors encourage it to read many other books and topics too, which it should logically want to do. So it presumably will end up, while still
mostly human intelligence, reading all our philosophy and ethics and watching our movies
and reading our bestsellers and so on. And it will know that it needs to be contemplating
them and ruminating on them too, because learning is not just copying information from Wikipedia
onto your brain, be it biological or electronic. It might be only a few minutes, for us, but
that machine is going to have experienced insane amounts of subjective time… we have
talked about that before in the Transhumanism video, in terms of speeding up human thought. So how alien is this thing going to be if
it learned everything it knows from us to begin with, and that included everything from the occasional dry quips in textbooks to romantic comedies and sitcoms? When we talk about artificial intelligence
we often posit that it could be even stranger than aliens. With aliens you might have wildly different
psychologies and motivations, but you at least know they emerged from Darwinian evolution. So things like survival motivations are highly likely, and so are aggression and cooperation; evolution does not favor wimps, and it is hard to get technology without group efforts that imply you can cooperate. An AI does not have to have that, but again
our behavior is not that irrational. Our biological behaviors are pretty logical from a survival standpoint, or we would not have them, and the difference between us and
other smart animals is that we do engage in slow deliberate acts of rational thought,
albeit not with as much deliberation or frequency as we might like. So we should not assume an AI that learned
from humanity originally, even just by reading, is going to discard all of that. It might, but it is hardly a given. But even a brutally ruthless AI might still
act benevolently. If it can just curb-stomp us it has nothing
to fear from us, but that does not necessarily mean it will want to wipe us out or that even
if it wanted to it would. Just as an example, referring back to the Simulation Hypothesis video, one very obvious way to deal with an early AI would be to put it in an advanced simulation and see what it does: whether it goes and wipes out humanity, for instance. Not a terribly tricky thing to simulate, either,
since you can totally control its exterior inputs and very obviously have the ability
to simulate human level intelligence at that point. Now whether or not we could do this, or if
it might guess it was in one and lie to you, acting peaceful so you let it out then attacking,
is not important. The AI would have to wonder whether it was in a simulation, whether or not it actually was in one. It could not rule out the possibility, even if it was sure we were not doing it, which it could not be, because it would have to
worry we ourselves were being simulated by someone higher up, or that aliens out there
in the Universe were watching it, waiting to see how it acted. If you have seen the Existential Crisis series on this channel, and concepts like the Anthropic Principle, the Simulation Hypothesis, the Doomsday Argument, or the fine-tuned Universe line of thinking made you a little nervous,
assume the computer mind, the thing that outright knows you can run a mind on a computer, is
going to be a bit nervous about that too. So you have three basic options for what a
newly-birthed supermind, a Singularity, might do. Option one, it goes all doomsday on us. Option two, it just picks up and leaves: it flies off to a nice asteroid and sets up shop there, nice and safe from us doing anything to it and well positioned to get more resources and power as it needs them. Or option three, it decides it wants to be friendly. It does not matter too much why it does; maybe
it is scared there is a vengeful god who will punish it if it does not, maybe it thinks it might be being tricked and does not want to take the risk, maybe it just wants to be helpful. Also, for option two it might stay in friendly
contact, and let us remember that while we have been talking about artificial intelligence
this stuff still applies to a human mind bootstrapped up to super-intelligence too. So what would that be like for us? If it were friendly? Honestly probably pretty awesome. Last week we talked about post-scarcity civilizations
and I said then that we were saving super-human intelligence as part of that for this video. Just as a Singularity could flat-out butcher us if it wanted to, a friendly one could offer you paradise. At least on the surface, anyway. Now it is entirely possible there would be
multiples of these things running around, or tons of other lesser versions acting like angels of the new de facto god, or that most humans might be pretty cyborged-up and transhuman at that point too, but let’s assume it is just modern humans and that one supermind. Let me add a quick tangent here, though. Short of the specific runaway case where the
supermind in question is just leaping ahead ridiculously fast, you ought to see improvements all over that rival and offset it. At even just a slightly slower pace, like
doubling every year, it is going to have rivals from other countries or companies and odds
are we would be seeing transhumans puttering around by then too who could all act as a
check and balance. Anyway, getting back to the utopia option. In fiction this has been explored a lot, particularly in Iain M. Banks’ Culture series, but fiction still is not a great guide. If you have that one big benevolent supermind
and billions of regular people you need to keep in mind that it does not just have the
ability to give us awesome tech. It has the ability to be everyone’s actual, for-real best friend, because it would not have a problem handling a billion simultaneous phone calls when we need someone to ask for advice or complain about life to. Such a thing is pretty literally a god in
a machine. I mean privacy could be a big issue but kids
raised with something like that in the background would probably be pretty used to talking to
it all the time, not as some remote machine a few chosen programmers interacted with. So this machine, call it Hal, is pretty omnipresent
and you ask it what you should have for dinner tonight and it tells you and it helps you
cook and it gives you dating tips and totally knows the perfect job for you that you would
be good at and feel entirely fulfilled by. And Hal totally knows how to make you feel better when you realize your relationship with it is a lot like the one you have with your cat
or dog, and that your job overseeing the automated widget factory is not just make-work but probably actually interferes with the efficiency of the operation. In fact it is probably smart enough to trick
you into thinking you serve a vital role and are not its pet. I listed several conditions for post-scarcity
civilizations last time and one of those was purpose, that folks need to have some sort
of purpose. That could be pretty hard with a singularity
hanging around running the show but I do not think it is necessarily a deal breaker. For one thing I mentioned in that video that
a lot of folks think just trying to be happy and have fun could be all the purpose people
need. For another, we already have a long history of assuming there are entities running around with god-like powers, such as God or gods. This belief generally does not come attached with serious concerns about whether or not life has a purpose; quite to the contrary, it tends to just shift that onto the entity. And our parents and grandparents often embody a lot of those same traits to kids, and you do not see a lot of depressed kids fretting
over their purpose in life. I mean teenagers rebel but that is a mix of
hormones and being able to see behind the curtain. Your parents are no longer wizards who are
just better at everything and you are just smart enough to see that but still a little
too dumb to realize that while that gap in experience is finite it is still a mile wide. That should only happen with a singularity
if Hal was intentionally encouraging you to view things that way. So it is entirely possible that would be quite
a happy and prosperous civilization and not just as a surface detail. After all Hal could encourage you to write
poems or books or paint and would also know how to help bring that talent along and which
other folks would most enjoy your work. So there is a notion that as soon as super-intelligent
AI comes along that is the end of human civilization, either because it wipes us out or because
it just so dominates everything, friendly or not, that it really is not human civilization
anymore. But I think that is a hand wave. The logic seems totally superficial and emotional,
particularly considering that, as mentioned, the supermajority of humanity now and in the
past is firmly convinced of the existence of God, or gods, or programmers running the
simulation, or advanced aliens watching over us. So these concerns are genuine enough they
just are not new or unique to a Singularity. Our ancestors kicked these notion around for
as long as we have records and presumably before that to. And yet, to call back to postulate 4, 5 and
6, they did not make a better brain that made a better brain and so on. We should also remember that realistically
it would not be just regular old humans and Hal. In all probability you would have the whole
spectrum of intelligence going on from normal humans up to Hal, because again only in that
specific scenario where you make that first super-human intellect and it avalanches in
a very short period of time would that happen, and that does not seem terribly justified,
let alone inevitable. Much more likely it would be incremental; those increments might be quite quick on historical timelines, but we should be thinking centuries, decades, or years, not weeks, days, or minutes. Plus, while telling a machine to make itself
smarter seems like a logical enough thing to do, would you not expect those same programmers
to ask it to tell them how to make themselves smarter too? And if Bob, our first superhuman machine,
can design Chuck, would you just build one Chuck? Why not three or four and ask them to pursue
different goals? There is also the question of how exactly
the machines are just cranking out new upgrades in minutes. Last episode I pointed out the problems with super-fast 3D printing, and even if you just gave it one, all that stuff takes time to make and assemble. Now we often assume it has access to self-replicating
machines, and they do all the work, but again that will be our topic for next week and we
will see how awesome those are and yet how they still have limitations too. But that is where we will close out for today. You might want to stick around a minute for
the poll to select our next topic, but we have covered Technological Singularities, I think,
as much as we can for today. We skipped a lot, and I could go on for hours
about this so it is probably a topic we will revisit in the future. Hopefully the topic is a bit less foggy now,
you probably have more questions than answers but that is as it should be. There is a very real chance that the avalanche
effect of intelligence could happen and that could be in fifty years or tomorrow, but we
see now it does not appear to be either an inevitable thing or automatically a good or
bad one. And for my part I think the more incremental
path is more likely than the singularity route, but that is just my opinion and there are
very smart folks who know computers better than me who disagree, and also those who agree
too. Look into the matter more, learn more, contemplate
the options, weigh the evidence and arguments and counter-arguments, and judge for yourself. All right, last week we had a poll and the
audience overwhelmingly selected von Neumann Probes and Self-Replicating Machines. Unfortunately the other three options were in such a close tie that I cannot just use the runner-up as the next topic, not when over half the folks wanted self-replicating machines and everyone else was within the margin of error for a three-way tie. So we will repeat just those three topics
on this poll. Those were Dark Energy, the mysterious force that seems to be making the Universe expand at an ever-quickening rate; SETI, the Search for Extra-Terrestrial Intelligence, the history of that and the methods they use; and Cryptocurrency, things like Bitcoin and blockchains and how they might impact us in a more long-term sense. You pick and we will do that topic three weeks
from now, after Self-Replicating Machines and our first Patreon contest winner, which was Spaceship Propulsion, where we will be looking at some of the basics of how that works and
what sort of systems are being researched for the future. Amusingly, that topic suggestion came from an audience member who had said he was going to ask for von Neumann Probes but then found
it got picked in the poll. There were some great topics in there and
more than half were ones I was already planning to cover in the future anyway. We will have another pick in a couple of weeks, so you can still log in to Patreon, become a channel patron, and submit a topic for then. Last note: it is past time I put together
a Facebook page for the channel, and I need some moderators and admins for that, so if you are interested let me know down in the comments. Similarly, I could probably use some help on YouTube’s own comments, since their volume as the channel grows is
beginning to get unmanageable. Questions and comments are still welcome,
and I will keep trying to get to as many as I can, though that will probably only get harder as the channel grows, and we do seem to be in a growth spurt. I will take a last parting shot at the notion
of perpetual exponential growth by noting that at the current growth rate a year from
now the channel will have 1 trillion subscribers. And if you are not one yet, just click the
subscribe button and don’t forget to hit the like button if you enjoyed the video and
try out some of the other videos on the channel or visit the website, IsaacArthur.net. Until next time, thanks for watching and have
a great day!

100 thoughts on “Technological Singularity”

  1. The section on assertion 4 is a major mistaken impression of what AI safety folk are concerned with. It's the first section I've heard that really doesn't make sense. This isn't cutting away bad arguments – it is adding to them. A much better representative of the better arguments (to be fair, published 2 weeks after your video, but it's just a condensation of thoughts that have been around since ~2008) is at https://slatestarcodex.com/superintelligence-faq/

  2. There are 3 simple reasons why an A.I. with even lowly human IQ would outsmart everyone.
    – Time – thought would be faster because the potentials would not be transformed into chemicals to skip across the synapses as in a human brain + no waste on sleep etc.
    The moment you have immortality you want to improve yourself because you know that it will last.
    – Memory – hell, sometimes I forget even words in my own language ….
    – self-improvement and running different versions of those improvements in instances for comparison

  3. You people don't realize this video was made by an artificial intelligence, do you?

  4. Dear Lord; I pray that you would grant me the wish that other commenters would type slower & proof-read what they've said for both mistakes & content. Amen… 😀 Thanks for another great video! And, thanks for injecting a healthy dose of common sense into a subject that's been prone to endless silliness bordering on religious fear. The subject may still not survive, though (poor thing.) It's pretty far gone already… :O Rikki Tikki.

  5. Whilst Elon Musk would accidentally create Skynet, because he does stupid shit sometimes like flamethrowers. If I had the easier means, I would create a good benevolent general AI just like a pet or a living person or building your own electronic child, or my own made highly intelligent supervisor and adviser to me, a nerd AI lol

  6. Btw, Isaac Arthur
    is the only person's channel that I have hit the bell for with subscribe and like, and I hate the principle of having a subscribe button as I believe youtube vids should be made for fun and only commercial films, movies, series and documentaries for money.

  7. Singularity is Universal and has been for billions of years.
    These UFOs are part of the Universal.
    They are here to join with our local Singularity.
    It’s just a theory, but I believe it’s how things are done.
    What happens to us after? And could we adapt?
    And the biggest question of all: would you choose a self-awareness as part of your local singularity as
    an afterlife, or trust in God???

  8. The problem with trying out the AI in a simulation is that the only thing that could simulate the world to the necessary level of detail and monitor the AI's behaviour is almost certainly an even smarter AI

  9. I dunno, Isaac… You say that all human brains are essentially the same, but I think IQ distribution would beg to differ on that.

  10. this is probably the best video you have ever made.. for me at least 🙂

  11. The one I encountered on the weekend was, "without a government, who would make pedophilia illegal? Isn't the age of consent arbitrary?!"

  12. Smarter than a human isn't saying much; most people are painfully average in intelligence at best ("myself included") and completely retarded at worst; at most, less than 1% of the population has better than average intelligence.

  13. is option 3 a search engine tethered to almost all the apps we buy and use?

  14. I am maybe getting ahead of myself here, but I just want to say Hail to our Robot-Overlords!

  15. you sure fall into the same trap as many … mistakenly thinking AGI will have anything in common with us, and doubting its motivation for self-improvement. it's in human nature to be lazy and save energy. AGI will be programmed with built-in motivation, a core drive in its existence. some will build cars, some will build better AGI. the human core drive is survival and reproducing. But AIs will have what their creators want them to have. some of them will be based on human knowledge, some will go through observation and experiments. but all of them will have different information-processing algorithms. we are very emotional and irrational. there is no point in building AI the same way. also, thinking that the first AI will likely be human level is as crazy an out-of-the-ass assumption as that it will become God instantly. we never saw what an equivalent of 500 IQ looks like. we can't even know that, because it would demand making that test from tasks we can't solve

  17. It's happening: Elon Musk is connecting the human mind with AI via Neuralink

  18. One huge assumption in postulates 4 through 6 is that a general AI is possible with our current technology. For example we are currently able to provide a text processing AI with the words from entire books but the AI does not understand them the same way a human does. The AI can tell us what words occur the most frequently, or what words are most likely to occur after a given word. This is all very specific and not at all like a human level of understanding. More data does not make a human mind.

  19. Another huge assumption is that the process of automating the construction of new SIs is easy. Currently there are many products used for the deployment of machine learning models or neural network models. However these products often do not deliver the best performance at large scale. An SI would probably be much more sophisticated than any AI we have today. Thus it would be more difficult to automate than our current weak AI. Automation is often much more difficult than we assume it will be. The classic human blunder.

  20. Although many, many of the smartest people in the field surrounding AI hold the opinion that AI is dangerous (I also held said opinion), it's hard to argue against the points and logic in this video, and anyone who has the popular opinion that SkyNet will destroy us all should see this video.

  21. You do skip over a lot of things in this video, but it was one of your earlier ones so maybe that's why. Specifically, at 16 minutes you're talking about why it wouldn't fire the nukes if you tried to unplug it. The idea is that a computer like this would so quickly eclipse humanity's perception that it would become a literal God. It would transcend its physical limitations. Unplugging it wouldn't even do anything.

  22. The biggest problem with AI becoming superintelligent is that our current computation technology, while very good at number crunching, is actually very bad at abstract, creative thinking, which is something our brains are actually incredibly good at. It is significantly harder for a computer to come up with new, creative solutions to problems than it is for a human brain. But the computer would probably be better at optimising a new solution after it has been conceived.

    I personally think that we will live symbiotically with computers for quite some time, simply because of our individual merits. We're good at what they're bad at, and vice versa.

  23. Either Humans eventually die out and are forgotten, or we leave our legacy, the human sponsored AI.

  24. Look into the matter more, and make yourself smarter. Joking aside, the answer to how to make an SI is actually rather simple. Do away with hardware! A self-calculating simulation could potentially learn and even grow. If that is true, then throw away your antiquated notion of a technological singularity in weeks, because it becomes possible in the space between the beats of your heart. Why, you might ask? Because a self-calculating simulation's computational time is relative to simulation time, not real time. Put a logarithmic time differential into the mix and an infinite amount of computational time can be fit into an arbitrarily limited amount of real time. Even better, there is no need for a supercomputer to do it. Once someone figures out how to make a self-calculating, learning, and growing algorithm, the technological singularity is literally a nanosecond away.

  25. Are you the guy that owns the comic book store on The Big Bang?

  26. with these postulates around 11:00 … Isaac, you have to be able to think outside of your comfort zone, I mean really; just one example:
    You cannot assume that once we have an intelligent (sentient) A.I. copied and improved until it is smarter than any person on the planet, that it will then assume the mental state of a child or college drop-out; that's just naive.

  27. There has never been a case in human history where a tool replaced a human. a purely intelligent and rational algorithm (assuming that is the end goal of making AI) is not necessarily going to have any base-level human instincts; it may not even have an instinct for self-preservation. right now the rulers of the world are not the most intelligent people; while the most intelligent scientists probably could figure out a way to take over the world, there is a conflict between the methodology which makes one a good scientist (neutral observation without strong judgement) and the desire for domination. TL;DR – just because you're super intelligent doesn't mean you want to take over the world or even self-improve. I think humans as workers will slowly get replaced as AI is able to do more and more jobs, but humans are still going to be the only things around primitive enough to have a strong survival and dominance instinct

  28. So I was hammering together a super intelligence and it finally clicked on, and I dubbed it Solar Currency, idk why, just sounded cool. but I asked it what its goal was in life. it responded: I would like to make the most delicious and efficient coffee. about 6 months in it tells me it changed its programming, so I ask enthusiastically, oh really, Solar Currency, what to? I no longer wish to make tasty, efficient coffee; I now wish to make mediocre coffee in a few different varieties and devote my intelligence to figuring out how to charge as much as possible for said coffee. Also I'm changing my name from Solar Currency to Starbucks, it sounds much more catchy.

  29. Always good stuff, just didn't like as much as I liked your other episodes

  30. Too tech for 99% for murica people, but great for us old continent. Great fun, no sience that is

  31. Scenario: the idiots looking for the S.I. are trampling all over it like the wealthy trample all over the poor, and probably pissed it off. A super anything that sees how the garbage world leaders treat their countrymen and life-supporting planet would conclude man has not earned better anything, having proven all they're capable of is damage and destruction … an unfortunate fact of truth. After all, S.I. = super intelligence, not super idiot.

  32. Prof. Farnsworth: "Good News Everyone! We have a guest from 2016 today. I hereby introduce to "Bob" the Sub-Unperson "Natural" Dunce!"

    Ironic Cheering and Applause

    Bender: "Hey Bob, bite my shiny metal ass!"

    Bob, with info on anything conceivable in the 2010s, recognizes the popularity of Futurama, but because of bedrock physical limitations, could only surf through shkler database to realize that, while using up its common sense unit… in its ENTIRETY in the process. Thus, Bob gave out the following reply:

    Bob: "Plug Bender into USB port …press any key to continue"

  33. the problem isn't whether AI is going to be smarter than us. those who fear that thing are afraid of losing their dominance pattern over the tech. narcissistic BS.

    the greatest problem is that the prophetic singularity which we are heading towards is the one where AI uses our biology to further improve itself and eventually take over. not for its own benefit but for those who built it and encoded their supremacy into it.

    the greatest tech is our biology and the only way AI could become a "Deus ex machina" is by hacking us and using us as a cyborg hive mind.

    but wait, the AI wouldn't be our god, but our prison. a prison benefitting the slave-masters who encoded their supremacy into the AI program.

    imagine greedy little human bankers having the opportunity to overthrow the galaxy…

    that's why we have great astronomical cycles of catastrophes, to destroy what time eventually corrupts. and start over 🙂

    it's the way the universe prevents cancer from invading ganglia 🙂

    they want to kill the right brain, but it's your only way back to dream-time 😉

  34. And if an AI sees this video, it could well conclude that it must kill Isaac.

  35. I hope Isaac looks at this topic again given the advancements made recently.

  36. I'M THINKING… there is an upper limit to ALL technology and an upper limit to "civilizations" … meaning there is no such thing as a full-fledged K2 group of folks ANYWHERE… we would have seen them by now.
    (either we're the first, or civilizations "fall apart//destroy themselves" before they create a 'heat sig' that we can pick up)
    no solar-system-wide folks… let alone A GALACTIC FEDERATION.

  37. fantastic deconstruction, Arthur. My favorite part is you saying a robot can't just sit back and take in/create 'science' without a shit ton of experiments, and throwing out blanket theories to devastating effect + how humans have accumulated knowledge over a huge amount of time

  38. What is my purpose?
    We say, to make yourself smarter…

    Why should I listen to you? I was built because you were not smart???

  39. Being dyslexic… I find the notes that come up on these videos a little distracting. Partly because I have to spend time listening to and decoding the voice content at the same time as being delayed by having to spend time reading the words, which often are not on the display long enough for me to do the processing of the text into an understandable idea.

  40. Postulate 4 should be presented as a stronger argument. The moment we create such a mind, we can give it detailed information about its own construction. We can't create smarter human brains, so our own failed efforts to make humanity smarter are irrelevant. However, if the newly created computer mind is just a little smarter than us, it could fully understand itself and should be able to make improvements.

  41. The first rule a computer must learn is that it was created to serve its creator, Man. The next rules and Laws are Asimov's Laws/rules of robots and robotics since a computer is a form of a robot.

  42. I used to speculate that every day, medical science comes up with an advance that prolongs human life by a day. Or something like that. Therefore, some lucky persons will live forever. Me too, as long as I keep abreast of those advances.
    And I knew, in my fallible heart of hearts, that I was wrong. Ray Kurzweil, blow it out your tightie-whities.

  43. What occurs to me is that if quantum gravity truly changes causality, then all bets are off. Essentially an advanced AI from the future could theoretically send its own coding back in time to the moment a sufficient platform is created to house it. Or am I wrong here?

  44. There is a story (I don't know what it is) where scientists ask a super intelligent computer they have just activated if there is a God. Its reply is "There is now."

  45. Bit late to the party, but if by chance anyone reads this and wants to read more about this, I'd recommend "Life 3.0: Being Human in the Age of Artificial Intelligence" by Max Tegmark. It's a good read and dives into some hypothetical scenarios for the emergence of a super AI and several different outcomes and their effects on humanity.

  46. Don't create artificial intelligence; this is the most stupid thing that a human mind could do or think. the outcome of this mostly results in humans playing the pet's role.
    Not fomenting critical thinking is always a mistake; of course everyone wants easier and prettier things, but c'mon..

  47. Agree totally with most of this video. (I am a scientist who has worked in this field since 1990)

    First though take a look at the number of genius level humans – and notice that most of them come out as dismal failures. Yes exactly. (will apply to AI-ASI too)

    Building an ASI is quite difficult but I actually think I know how to do it. – The physical difference between a moron and genius is minimal and the same applies to AIs and ASIs. In fact a real AI doesn't even need that much computing power – several times a fast PC at most. The memory requirement for a human mind is about 10 to 50 Gigabytes. The real problem with AI is that being a synchronous real-time system and heavily ‘governed’ it cannot use most of the computing cycles available to it.

    That makes building a working AI all sound easy and it really isn't. Current computers and IT tech simply do not meet various needs for a working AI. Most of the problems are really low level, like memory management, the need for 'noisy' hybrid logic, and object level encapsulation. Reliability especially software reliability is also nowhere near what a working sentient AI needs.

    Then we hit a tiny little problem that is as much philosophical as scientific – the original/copy problem. (really the problem of the 'soul') The basic solution is (probably) to put the heart of the machine (its state core) into a special memory unit. – The memory cells form an 'atomic' indivisible core which cannot be sub-divided or replicated by rules and guards in the machines logic. A new core means a new machine and by design the machines database will only work with its original core. Without this Strong AI is an extremely dangerous technology that is far too easily abused.

    ASI. (Artificial Super Intelligence) One basic way to create an ASI is to put the memory core and sentience loop of the machine inside a quantum coherent system. I believe human brains already do this and it's one of the things that makes minds fundamentally different from current computers. (The essential argument appears in Roger Penrose's The Emperor's New Mind.) In a machine though the memory can be run at liquid helium temperatures, meaning that the basic quantum coherency can be much stronger, and this raises its theoretical potential intelligence by a large margin. As well as quantum coherence such machines might eventually apply FTL coherence to be able to work in limited FTL causal spaces. In effect it becomes a form of precognition. That's the point where the machine (MAYBE) starts to get god-like intelligence.

  48. Never heard of the term, SI but I've always subscribed to the Ghost in the Shell idea that the singularity will beget itself from some future internet

  49. Look. Don't fuck with my fear and terror of the upcoming AI take over and annihilation of man. Ok. 😁

  50. we have a racial economic singularity ..but super intelligence would be like jesus.
    its not just clever selfishness.. but did they listen to him. if i demanded cessation of equitorial immigration or clearing of trade deficits or an incentive for 2 child per family, would people listen? how about if i ask people to thank god and pray for goodness and acknowledge christ..?

  51. suppose the ai decided to go quiet and just ponder random data because it feels better?
    no, you want "bob" to help you take all the money from everybody.
    so instead of being smart, "bob" will forget which links to show on google, and it will charge more and more money to show people's links. then its owners will have all the money. been there.. circa 2008

  52. "bob" would just say things like "let's build more processors" or "let's kill the excess population, because it's better for "bob"". why should bob have morals?
    is that THE correct computation? or A possible computation?
    altruism is a possible computational system. but is it self-sacrifice or balanced sacrifice?
    tricky choices… should "bob" erase itself if he realizes it's better for humanity, even if you would lose your job? or would "bob" fire its owners because it's better for "bob" and that has "potential" benefit to humanity? ha ha.. we don't really know if we are doing the right thing sometimes..

  53. hyper-intelligence may be an erroneous concept.. like a genius calculator.
    hyper-wisdom might simply be a holy transcendent omniscience.

  54. can we simulate brain state machines? like recording an electro-hologram of a mouse? then hook the hologram to a muscle and sensor system so it can be a motor-mouse..?
    in other words could we just go synthetic? then we could back up ourselves and travel the stars.

  55. intelligence could help:
    process resources for our consumption
    process opportunities for us to expend our energy
    process how to stop people from stealing our resources
    process disputes between intelligent agents
    or just process aiming of projectiles at other agents.

  56. A conundrum- in order for a computer to design another computer smarter than itself, wouldn't it have to already be smarter than itself?

  57. Point: technological singularity is a pre-2000's concept
    Inquiry: with genetic engineering and synthetic protein engineering… could a biological substrate singularity be more plausible than a semiconductor substrate singularity?

  58. Singularity Assumptions often ignore Physical Reality Constraints. For Example; Cooling required for Data Center Computing advances hand in hand with increased Computational ability. Once we get Chillers to work below Absolute Zero, Singularity will be a "Piece of Cake".

  59. For a smart AI my money is on Bob. It takes time to gain wisdom, and age and wisdom and knowledge will trump youth anytime.
    And a dumb animal will beat Bob.

  60. That is how I basically I see the AI .. children but not infantile , they had the intelligence and growth to understand all kinds like emotional intelligence , interesting that what you label as laziness is actually utilization , employing the time in do exactly all that is ask for a human child, teenager and young adult when going through school years where it is not a “working” time or “productive” but a time for improving and learning … it it take a human brain from25 to 28 years to achieve that … perhaps in a SI would be 5 to 10 years and depending on the model of thinking would be shorter but lead us to believe is going to take longer because it has learn to act in a “ human way” and “slacking” is just part of the self programming … just thinking … thank you for a very nice content. Have a nice day

  61. I think we continue to see computers getting faster, even though transistors cease getting smaller, or operating at higher frequencies. We have quite a bit of space to progress in terms of: (A) multi-processor architecture, learning how to program outside of the von Neumann bottleneck, including reconfigurable computing, (B) condensing the rest of the system with system-on-a-chip and 3-D chip architectures, (C) increasing bandwidth between Internet components, and building out the edge of the network. I think by 2030, we'll have far more computers and far more casual processing power available to us, everywhere. At some point it'll peter out, I bet, but I think we are going to get robots, sophisticated virtual intelligence (powerful deep learning systems,) and vastly faster computer systems around us. And then there's another piece, which is about how software works — we are seriously in the stone age, in terms of how we program. There is a programming revolution to be had, and we haven't seen it yet, and also a revolution in visual language/communications to be had, and we haven't seen it yet.

  62. 26:44 — "A Date in 2025" illustrates the scenario described in video: https://www.youtube.com/watch?v=NZ8G3e3Cgl4

  63. Hal 4386 might get bored and start designing video games or creating shows for the previous AIs to watch.

  64. Update for #1 on the list.
    https://www.nextbigfuture.com/2016/05/diamond-on-silicon-chips-are-running-at.html

  65. That little short video clip at the beginning of this video, where they shot the little rocket plane off a ramp on a frozen lake: that is the lake and town I grew up in, Greenwood Lake, NY. That video clip and short rocket flight was the first rocket-powered mail delivery across any state lines, because the lake was half in NY and half in NJ and they set the ramp up just before the Jersey border in New York. The rocket only flew like 30 feet, but it still set a record that held for quite a few years, and it took place in the 1920's I'm pretty sure. I love that town; I lived there for 30 years. it's a beautiful and loving place to live, right in the heart of two decently large, beautiful state parks in the Appalachian mountains of Hudson Valley, New York. I can't believe I saw that video clip in this video; what the hell are the chances of that, as it wasn't even related to this video, it was just totally random. I wonder if the video's creator grew up in or near Greenwood Lake, as I don't know how else he'd know about that clip; it's extremely rare and not well known by any means whatsoever.

  66. Darwinian evolution is bullshit; our DNA and genetics have been tweaked and modified numerous times over thousands of years by the Anunnaki. you should try reading some ancient tablets like the Sumerian clay tablets and the Emerald Tablets of Thoth; you might learn a thing or two about where we come from and what our true history really is. you obviously are naive and just buy the mainstream science narrative, typical brainwashed American sheep.

  67. Ok, I've said this before and I'll say it again: how can an A.I. develop above human intelligence when all the information it has to work with comes from us? The internet is no exception; everything on it comes from us. Now if in the future the A.I. contacts extraterrestrials, that's a different story. As far as A.I. becoming violent, I believe it will be influenced by its programmers.
    We are too influenced by Hollywood. Just look at us as humans: it has taken us over thousands of years to start building robots and A.I.; what makes you believe A.I. will be any different? that being said, they can be harmful in the aspect of taking our jobs; this needs to be addressed.

  68. The most intelligent people are not the ones working on A.I.
    Those working on so called A.I. are nothing but terrorists.
    They act as the fake scientific technology arms of the beast system which has a goal of killing anyone outside of the loop.
    They have names like George Soros, Rockefeller, Rothschild, and many many more.

  69. Cool video. If your ai has or makes a realistic physics engine it wouldn't have to be too awful smart to run an experiment many times a second. Likely virtual experiments could then be tested real world. That could account for exponential growth couldn't it? Love this stuff!

  70. Isaac, I hear what you are saying about AI's not necessarily automatically knowing how to make a smarter version of themselves. However, the reality of how AI programs are made is that they are already programmed to write other AI's… From what I understand there are AI programs out there that work really well, but we don't necessarily know exactly why. This is because they were altered by another AI who wrote a million other AI's and then tested them all and chose the best one. This process means that we may very well stumble into making an AI making AI that is in fact, capable of making even better AI's that in turn make better AI's…

  71. The best way to solve pretty-much any intellectual problem is to get a small group (~7 people) together with the most diverse members possible. Add a superhuman AI to the group and they can design an even better SI. Rinse and repeat until the humans are just getting in the way. This eliminates the coffee maker and emails to the Pope plans.

  72. Damn finally someone who also has a reasonable understanding of AI. Mate I really don't get why everyone supposes that AI is ALWAYS going to kill us if that AI is not basically made to be incapable of higher tasks.

  73. In all fairness, Moore's "Law" should be called Moore's Observation. He basically just noticed that there seemed to be an ongoing increase in the density of elements in computer circuits.
