2011 Ulam Memorial Lectures Part Three: All Watched Over by Machines of Loving Grace


My name is Dave Ackley. I thought I would introduce Dave by sort of introducing myself for a couple of minutes, because it's what's most important to me. How many folks were here both Tuesday and Wednesday? Ah, OK, you're the people I want to talk to. Now, yesterday you heard Walter Fontana speak here, and he's known Dave for ages, since he was a platypus. And on Tuesday you heard from Jim Rutt, who has known everybody forever, as far as I can tell. But I've only known Dave since after he came to SFI, so I'm actually a Krakauer newbie. We've had some meetings, we've talked, but I have this weird feeling that I know him much better than I should, given the data, and I think I know why that is: it's those damn mirror neurons. You see, he's in my brain, and the part of my brain that represents him is the same part of the brain that represents myself, so what I see him thinking, I think. And the hell of it is, half the time he thinks my thoughts better than I do. And that's something. Now, of course, some of the time he thinks my thoughts in such a profoundly wrong way that I have to slap him, in my mind. So here I am. What I do is
computers. I'm a computer guy, I'm at UNM down in Albuquerque, and what I would like to do is look at how I think Dave is looking at intelligence, and how he's looking at living things, from the other side: not from the side of how natural systems do it, but how systems like computers do it, or fail to do it, really. So that's part of the struggle that I have in my mind, between the Dave Krakauer in my brain and the Dave Ackley also in my brain: I like those machines that he was talking smack about for the last two days, you know, these profoundly stupid things. And it's true, they are profoundly stupid; you have to be extremely literal when you talk to these machines. For example, when I'm wearing my computer science hat: you know, did you notice that Dave would give us these little warnings when it was time to put on his thinking cap, but he never told us when we could take it off? So if I'm wearing my computer science hat, I've got three thinking caps on now, and if he asks me to put on another one, I'm gonna have to be Darwin or Einstein, or I'm gonna have to settle for texting. But then biological systems, real living systems, they aren't like that. They take care of themselves; they just sorta go, yeah, I don't need that anymore. Machines don't do that; computers as we have them today don't do that. So I do computers. Not only do I do computers, I do artificial life with computers. So what I'm trying to do is take these profoundly stupid machines and somehow make them act more lifelike, whatever that might mean in some context.
And defining life is pretty much just as hard as defining intelligence; they might even be related, they might not be separate concepts. And computers today are incredibly stupid. I mean, you know, we have a desktop, we make a folder. You know, that's not a folder; that's a picture of something. I can't cut my finger on that folder, I can't whip it across the room, I can't burn it for heat. It's not a folder. The world inside the computer that we have today is a complete Hollywood false set. It's just the front, it's just a coat of paint. It's reality a hundred miles wide and one pixel deep, and I think that's what bugs Dave about it so much: that natural life, evolved life, is very deep, and the machines that we're making today, so far, are pitifully shallow, almost insulting. You call that a folder? But hey, I mean, you know, this is my job. I make these things; I do what I can do. Natural life had quite a head start: three billion years on the whole planet, versus 70 years since Colossus at Bletchley Park. So we're doing pretty well with these machines, and the history of machines is really just beginning. And the history of the interactions between natural living systems and their technologies is not just beginning; it's been going on for a long time. But now these technologies are kind of turning into a new era: a new kind of machine, a new kind of interaction between people and machines. And I think, as much as anybody I know, Dave Krakauer probably knows where we're going, and I'm hoping that's what he's going to tell us tonight. So let's welcome him: ladies and gentlemen, David Krakauer.

That's a hard act to follow. All right, well, I don't know where we're going; I'll fess up to that right now. So let me summarize.
The first night I tried to expand your conception of intelligence (I didn't give you a definition), giving you a sense that it is ubiquitous in nature, and what its critical ingredients are. Yesterday I talked at length about the cell and its incredible powers, and I think in some sense that's what Dave was referring to: what I would call vertical ubiquity, that there's intelligence all the way down in us. And that doesn't seem to be the case yet in machines, and we'll see that's probably where they have to go. Again, apologies, you're probably wearing about six of these thinking caps by now, but the biological organisms in the audience understand that you have to sort of take it off after a few slides. The point that I was making at the end of yesterday's lecture is that one of our extraordinary characteristics is that we externalize representations: we abstract features of the natural world and represent them in a shared workspace, which allows us to build up ideas and intelligence culturally, something that other animals don't do as well, though it would be too strong to claim that they don't do it at all, and I gave that example of niche construction yesterday. But at a certain point in our history something interesting happened, and we created representations of abstractions that were also inferential. OK, so we no longer just drew the solar system; we drew it into a machine that could simulate it. So we created tools with an additional capability, one that goes beyond a planar, static representation.
And very recently, as Dave correctly pointed out, we've created machines that aren't just representational and aren't just inferential, but are also strategic. And this means that we've created something that, at least by my definition, possesses most if not all of the hallmarks of intelligence. And we have, I think, a basic intuition that this is true, and in some sense it leads to a polarized response to the computer. There are those who view the computer as the utopian solution to all of the problems that we face as a society, and those who take a slightly dystopian perspective, who believe that in the end the machines will be our undoing. So I'm just going to show you one clip of a movie, and have my computer read you a poem, that reflect those two perspectives, and hopefully the sound will be loud enough so that you can hear this clip. The first clip comes from the science fiction movie The Matrix: 'The human body generates more bioelectricity than a 120-volt battery and over 25,000 BTUs of body heat. Combined with a form of fusion, the machines had found all the energy they would ever need.' Can you hear me? Oh, good. Yes. So, to
turn us into this: we have a summer school every year at the Santa Fe Institute, and we get to pose questions to the students, and my question was, is the future of humanity to be a battery for a computer? And I meant it somewhat facetiously, and a young computer scientist from Edinburgh said: of course we're batteries. Who builds the nuclear facilities that provide the power for the computers? Who digs the mines? Who builds the dams? Of course we're batteries for the computers; we already are, and always have been. This seems to have a lot of echo on it: 'I like to think, and the sooner the better, of a cybernetic meadow where mammals and computers live together in mutually programming harmony, like pure water touching clear sky. I like to think, right now, please, of a cybernetic forest filled with pines and electronics, where deer stroll peacefully past computers as if they were flowers with spinning blossoms. I like to think, it has to be, of a cybernetic ecology where we are free of our labors and joined back to nature, returned to our mammal brothers and sisters, and all watched over by machines of loving grace.' Right, so those two couldn't be more contrasting positions on our future. This, of course, was written by Richard Brautigan, the novelist and poet, and he in some sense represents the tradition that goes back to Marx: that technology will be our liberation, that it'll give us the time we need to be creative and free. And I don't really know which of these two positions I should adopt yet. So yesterday I showed you some plots where
we looked at body mass against brain size, and that was called the encephalization quotient. OK, now I'm showing you a plot of the growth in the cranial volume of the hominid lineage through time. So for those at the back, this is millions of years, starting at minus 7 million up to the present, and what we're measuring here is the volume of the cranium. And this trend, in effect, is called an exponential trend: it increases more than linearly, which means that as we move into the present there's a very, very rapid accumulation of volume. OK, and as I pointed out yesterday, cranial volume is correlated, of course, with brain size, which is correlated with the number of neurons in your brain, which is considered, at least for some part of the brain, to be a correlate of what we might call intelligence; but we understand, hopefully since yesterday, that there are limits. Now, people working on machines have been looking at very similar trends. The only difference here is that we're looking at the logarithm on the axis, and so it looks linear, OK, but it's still exponential scaling. OK, and you probably can't read this at the back, but this comes from Ray Kurzweil's work, and many of you might have heard of him; he's been arguing for this idea called the singularity, and we'll get to it. And so
what we have here is calculations per second per thousand dollars. And so in 1950 you had one calculation per second per thousand dollars, and this only takes us up to the year 2000, where we have, I don't know, something like 10 to the eighth calculations per second per thousand dollars: a huge increase in the number of calculations performed by a chip. And what's behind this is something known as Moore's law: that the number of transistors per chip doubles every two years. So: a huge increase in computing power in a very short period of time. This is the same idea, but for memory, that is, the hard drive, your hard drive's storage capability. Again, looking back to 1950, we had about one bit, right, per dollar, and we come to the present, and it's on the order of 10 billion per dollar. So in no time there's been a huge increase both in the processing power, right, and in the storage capacity. And Kurzweil has suggested that if we project that trend forward, then sometime between 2040 and 2050 we'll cross a temporal threshold that he calls the singularity, where in memory and processing power computers will outpace us. Now, there's a lot of argument about how he calculates that; I don't really care, and whether you believe in the singularity or not doesn't really matter. The point is, they are very quickly becoming extremely powerful, and that's all we need to worry about.
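The doubling trend behind these curves is easy to state precisely. The sketch below is purely illustrative, using the round numbers quoted above (one calculation per second per thousand dollars in 1950, doubling every two years); it is not Kurzweil's actual model or data.

```python
# A minimal sketch of the exponential scaling behind Moore's law:
# a quantity (transistors per chip, calculations per second per
# $1000, bits of storage per dollar) doubling on a fixed period.

def project(value_now: float, doubling_period_years: float,
            years_ahead: float) -> float:
    """Project an exponentially doubling quantity forward in time."""
    return value_now * 2 ** (years_ahead / doubling_period_years)

# 1 calculation/sec per $1000 in 1950, doubling every two years,
# gives 2**25 (about 3.4e7) by the year 2000, the same order of
# magnitude as the ~10**8 quoted for the year 2000.
calcs_2000 = project(1.0, 2.0, 50)
```

On a linear axis this curve hugs zero for decades and then explodes; on a logarithmic axis it plots as a straight line, which is why the slide "looks linear" while still being exponential.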
So what are the implications of this power? Well, here's a brain. It could be a cell, but I use a brain for the following reason: as I said, it has these interesting characteristics, and, you know, it can do things like write books (we can represent narrative externally), or we can build devices that have the capability of inference, and more recently we've externalized concepts of strategy, very simple forms of strategy. But what these scaling laws give us is one tool that does all of them: you know, the one ring to rule them all. This is it. This is the one tool that can do everything that the brain can do, right? This is the property of universality, associated with Turing. Turing's basic, interesting idea was that you could have one machine that could run multiple different functions, as long as those functions could be described through a system of rules. OK, and so you have ebooks, right, you have calculators and calculating devices and so forth, because all of those can be represented by the one machine. That's the characteristic of universality, and that's a characteristic that's assumed to be possessed by the brain. And so if the brain can simulate a computer, and the computer can simulate the brain, that gives rise to a notion of equivalence: that somehow we've finally come up with a thing that is, in a deep sense, equivalent to what we mean by a brain, or by an intelligent organ. And the implications of that? Things like this news report, which reveals that sixty-five percent of US consumers are spending more time with their computers than with significant others. Now, we all know that's true, and we all know it's an understatement, and apologies to my wife,
but she does the same thing in reverse. And the reason why that's so unfair is that they haven't understood the concept of universality. Because the point about the computer is this: in the old days, if you went back to the 1940s, what would you do? Well, you'd read a book, you might talk to your friends, you might read the newspaper, you might watch a film, you might listen to music, you might do some writing, you might do some research. The point is, now you do it all on the same machine, right? And so of course that would take up more than sixty-five percent of your time too. It's just the case that now you can blame one object rather than having to blame six or seven. Now, an interesting approach to the brain in relation to computers is associated with this chap. This is John von Neumann, a very important figure here for us, and what he did is he said the following: we built computers, evolution built brains. That means we should understand how computers work, and if there is some form of equivalence between computers and brains, then maybe the mathematics that we use to understand computers will generalize to understanding brains. So that was his approach, and he wrote a very famous book. He wrote it in 1956, when he was very ill; it was published in 1958, posthumously, and the book was called The Computer and the Brain. It's a very interesting book, only 96 pages long, so it won't take you long to read. And he describes it as an approach towards understanding the nervous system from the mathematician's viewpoint. So even though I might have been disparaging about machines, I think one of the things that building universal representational tools gives us is a whole new way of thinking about the brain, one that biologists on their own would never have thought of. However,
there has been a very interesting bifurcation, an epistemological one, meaning a bifurcation in our knowledge, in our understanding. That is, on the one hand you have people who are really only interested in the pragmatic, meaning people interested in just building better and better tools, and you have others, like von Neumann, interested in the scientific. And I have an example on the stage; it was very funny last night, apparently. I'm not sure it's going to work. Simon, does it... do I just press it on? What does it do? This is a vacuum cleaner, and it's supposed to do something when you turn it on; it's supposed to move. Simon? Yeah, well, normally this thing, if it was intelligent, would move, and it would clean the stage. But instead, last night, in the middle of the night, mysteriously, it turned itself on and destroyed the chess set. And so apparently they had to come in and reconstitute it, and they were worried that I'd set it up in some special way. So obviously it is intelligent; it's just denying it at the moment. OK, so... oh, all right, I was worried that it would in fact go off the edge, but look, you see, it's clever. This was donated to me by my colleague Simon DeDeo, because he's too lazy to clean his own floor. But anyway, you get it: that's what's meant by machine intelligence in the 21st century. So this would be an instance of the pragmatic outsourcing of a form of representation. This thing, let me look at it, paused before it went over the edge; that's an ability to sense its environment and respond adaptively with respect to its own functional goals, right? And that's quite different from understanding how the brain works, the kind of thing that von Neumann was doing. And this is a real problem at the moment, because we have two communities who have very, very different goals with respect to machine intelligence, and the question is, will these ever converge? And I think some of us believe they have to, because if you want to go beyond that, right, beyond a chess machine, which I'll talk about, to build a robot that's generally useful, one that can play chess with you and wash your dishes, at that point this might start to happen. So let's go back to chess.
So for many, many years chess was the gold standard, right? It was the thing that was so hard to do that it was evidence of true cognitive capacity. So if we could build a computer that could play chess, which, as I told you, has inferential, strategic, competitive, and representational characteristics, we would have finally understood intelligence, because we could generalize, along the lines of von Neumann, from how a machine that plays chess works to how our brain works. This is Judit Polgár. She's probably the greatest female chess player in the history of chess, Hungarian, an International Grandmaster when she was 15, and she has this wonderful quote, which you can probably read, in which she says that she can't psych out a computer, which is something that really bothered Garry Kasparov, as you'll see. The history of building machines that play chess, I think, gives you a clue to what Dave was calling an illusion of intelligence. So this was a machine that was built for the Empress Maria Theresa by an engineer, and it was a robot: you'd sit down at the table with him, a mechanical device, and he'd play chess against you, and typically win. It wasn't a human; it was almost a mannequin. But the way it worked is that inside that box a little man was hiding, and he would see what was going on directly up here; the move would be transmitted to him through this mechanism, and he would defeat the person. OK, and this was called the Mechanical Turk, and Amazon has assumed that name for a cloud-based computing platform. We've come a long way: from a man pretending to be a machine that can play chess, to a machine pretending to be a man that can play chess. 'As long as I could keep up the pressure... you know, forget game two. I mean, Deep Blue hasn't won a single game out of five, because, again, game two I resigned when I could force a draw. Now, if somebody has another opinion, stand up and tell me that the position was not a draw. Game two was resigned in a completely drawn position. Is that the correct statement? Now, it's important: is it a correct statement, Mr. Benjamin? Game two's final position was a draw. Very important. Now, it was recognized that Deep Blue made a bad mistake in a completely winning strategic position, a blunder allowing a perpetual check. The draw was a victory for Deep Blue.' And so this was
the chess singularity. This is the moment in the history of chess where, at least with respect to that aspect of intelligence that gets manifested when we play that game, the machine outpaced us. And the emotional results were quite obvious there with Kasparov; a little emotional outburst didn't help. Now, how was it achieved? Well, look at this: the chess chips in Deep Blue are each capable of searching 2 to 2.5 million chess positions per second, and there is a seven-hundred-thousand-game database in its memory. Now, Kasparov has some extraordinary computational capabilities, but he's not doing that. So how does a chess computer work? This is the only difficult slide all evening, it really is, but I want to explain to you how it works, for those of you who care; it doesn't really matter, you won't miss much if you don't attend. The basic idea is to generate what's called a tree of all possible moves. So for any of you who have played chess, what you do is you say: if I were to do that, and they were to do this, then I would do that, and they would do this, and I would do that, and stop, right? You can't do that indefinitely; it's very hard. Now, that's what this represents, not for a chess game but for a much simpler game, just going left or right, because I couldn't represent them all on one page, right? So here's the chess computer, and what it says is: I could go left, I could play this, then I could go left again, and then the game could be over; or I could go left, left, and then right, and the game would be over. This means the end of the game, and at the end of the game, once we see the configuration of the board, we can assign a score to any final board. The only problem is we don't know what score to assign to all of these intermediate positions, and that's what you want to know to reach a goal. So this is the representation in the computer. OK, now we need to do
inference, and we're going to do it deductively. So if we know the final scores: if the computer is playing now, it's going to take the branch which gives it the maximum payoff, which is left; that will give it four. If it gets to this point, it'll go right, to that one, rather than minus five, and I'm sure you can see that in the back; and at F here we'll go right, because it's two versus minus seven, and so on. Now, what makes a chess computer, or the algorithm, really clever is that it uses something called the minimax algorithm, also due to John von Neumann. From the point of view of me as a player, I want to play a move that gives me the best payoff; but when you play, you'll give me the worst payoff; and then when I play next I'll get the best payoff; and then you play and I get the worst, from my perspective, right? And that's called the min-max algorithm. So this is how it works. We go back here; we start at the back, at the end. I will go there, so I assign a score of two for going left. Now, what I want to do is this: when you played, in the move before me, you would have taken the path that would have given me the minimum payoff, right, you would be going right, so I give one to this node. And then up here, I want to choose the path that would give me the maximum payoff, so I go left. And so you propagate the scores up the tree, alternately maximizing and minimizing the payoff; that's known as minimax. And so now I've done representation, I've done deduction, and now I do strategy. My strategy is: follow the path that maximizes my best payoff, and that's 'go left.' So those are the three ingredients. Now, the game trees are so big that what makes chess algorithms clever is that they have a way of pruning, and I'm not going to talk about that, it's a little bit more involved; it's called alpha-beta pruning. It's not that hard, but it's just a way of saying that there are certain end points I don't have to calculate, because I know in advance that I'm not going to travel down that path. So it's a way of pruning the tree so that you don't have to search such a vast space of solutions.
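The score propagation and pruning just described can be sketched on a toy left/right game like the one on the slide. The tree shape and payoff numbers below are invented for illustration; a real chess program applies the same logic to board positions rather than bare numbers.

```python
# A toy sketch of minimax and alpha-beta pruning on a small game tree.
# Leaves are final-board scores from the maximizing player's point of
# view; inner nodes are lists of children. All numbers are invented.

def minimax(node, maximizing):
    """Propagate leaf scores up the tree, alternately maximizing
    (my move) and minimizing (your move), as described above."""
    if isinstance(node, (int, float)):          # a leaf: game over
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Same answer as minimax, but skips branches that provably
    cannot change the final decision."""
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:   # the minimizer would never allow this line
                break           # prune: skip the remaining children
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
    return value

# The maximizer chooses between two branches; in the second branch the
# leaf 9 is never examined, because the 2 already caps that branch.
tree = [[3, 5], [2, 9]]
best = minimax(tree, maximizing=True)   # alphabeta gives the same value
```

The pruning step is exactly the "I know in advance I'm not going down that path" argument: once one branch is known to be worth 3 to the maximizer, and the opponent can hold a second branch to at most 2, nothing deeper in that second branch needs to be scored.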
So that's basically how chess computers work, OK, in addition to memorizing what to do in a vast number of opening and end games; so it's not just search. So Deep Blue, the computer that beat Kasparov in 1997, plays chess really well, but it cannot move a chess piece on its own; it had to have a handler. Kasparov, on the other hand, plays shogi, right, he signs books, and he stages demonstrations in Moscow. This is an important point. So how do humans play chess? What's the difference? This is a very interesting field of inquiry, and psychologists got
very interested not just in making smart machines but in asking how smart chess players play. And this is a guy called de Groot, he was from the Netherlands, and he did the following very interesting experiment. He took a chessboard mid-game, one that had been played to by some master players, so it's laid out in some sense in a semi-sensible way, and he allowed them to look at the chessboard very briefly, and they had to memorize it. OK, and master chess players memorize boards that are taken from the history of great chess games effortlessly, even if they've not seen them before. But if he took a chessboard and randomized it, just made it a sort of silly configuration that's very, very unlikely to have ever been reached by good players, they did no better than an average player. So it's not that they have better memories for a chessboard; they have better memories for a chessboard that's meaningful to a master chess player. And Chase and Simon (Herb Simon, the very well-known cognitive scientist): what they said is that what grandmasters are in fact remembering are what they call chunks of pieces. They don't remember single pieces; they remember meaningful groupings of pieces, and that reduces the memory burden, because you don't have to remember every piece, which is what you had to do under the random configuration. Now, that experiment was followed by an even more interesting
experiment. What you do is, I show you a chessboard, and, for chess players in the audience, it's checkmate in three moves, OK, and you ask them to look at the chessboard with a little device on their head that tracks where they look at the board in the first instant that they're exposed to it. So it's like saying: you're blindfolded, I'm going to show you the board, you have to look at it, I'm going to record the first point you look at on the board, and you're not allowed to do any calculations. OK, has anyone figured out the solution to this one? So the solution, for those who are interested, is this: take this rook on g1, move it up here to g8, check; this rook moves across and takes that rook, right; and now you move the knight up, checkmate. So when they're looking at this board, you can record the positions on the board that they look at instantaneously, with no calculations. And what you find is that not-very-good chess players do what chess computers do: they look at the pieces, and they calculate where each piece should go. Master chess players... oops, how terribly embarrassing; no, I've always wanted to say that. So what master chess players do is look straight at the spaces, right? They look at the space that the piece should be moved to, immediately. So there is a fundamental difference in the way they operate. They operate according to patterns of forces on the chessboard, like that Arthur Koestler quote, not by enumerating every possible move of the pieces. So that's how real chess players work, and it would be very interesting to try and build a computer that worked that way: based on Gestalt, based on pattern, rather than on a combinatorial enumeration of possibilities. Now, many of you in the
audience are saying: well, you know, chess, that's so nerdy, it's so analytical, and it's obvious that eventually a computer would defeat a human, because after all the humans who play chess are just like computers anyway, OK. But when it comes to matters of taste, when it comes to matters of judgment, well, there the computers will never outwit us; there we need to turn to authority. As Oscar Wilde put it, everything popular is wrong. And so if you have an interest in literature and you want to know what book you should be reading, you might turn to Harold Bloom for sage advice, and he'll tell you how the science fiction book you're reading is really just a variant of Hamlet, OK. And if you're interested in a very sarcastic, biting review of an actor, you turn to David Thomson. By the way, I like both of these individuals; they're wonderful. Unfortunately, less and less do we turn to experts for recommendations on what to read or what films to look at, and more and more of us... and I have to apologize: you know, it's Edward Barnes's birthday today, and he was the owner of Garcia Street Books, and I feel horribly guilty mentioning this, and mentioning Amazon, but I have to, because it's a reality. And so, you know, when you go to Amazon and you buy a book, it makes a recommendation, right. And so I was looking; we were talking earlier about The Beginning of
Infinity, David Deutsch's book, and this is what came up in my list: oh, you might enjoy these two, right. And I looked at it and said: what? No, I wouldn't, you know; certainly not that one. However, when I go to Netflix, and I looked up Aronofsky's Pi, a marvelous film about the claustrophobic life of a mathematician (if you haven't watched it, I recommend it), it then said: oh, you know, you might like Christopher Nolan's Memento or David Lynch's Blue Velvet. And that's a good recommendation; for me, that works. OK, so Netflix seems to be getting something right, whereas Amazon was just obvious. And how does this work? It has nothing to do with an expert. Amazon and Netflix don't hire a David Thomson who, every time you log on, quickly looks to see what it is you're ordering and makes a recommendation. They don't have a room full of readers who have read every book so that they know what to suggest. What they have is readers who have read books and rated them, and a computer takes all of that collective knowledge and makes a recommendation. It's called collaborative filtering. It's the absolute opposite of 'everything popular is wrong'; it's that everything popular is right. How does this work? Well,
Netflix set up a competition where they asked computer scientists to improve their recommendation algorithm. What that means is they wanted to get your taste right more often than they do, because they want you to rent as many films from them as possible. And so this is the competition. What they did is they took what would be called a training set; they're going to give this to you to teach your computer how to make recommendations, and the data, what we call the data field, has the following information. It has an anonymized user ID, so I would be a number, let's say number 100; it has the movie, and the date on which I rated it (if any of you use Netflix, they ask you, when you've seen a film, to rate it from 1 to 5, 5 being excellent, 1 being dreadful, so we are providing them with data every time we watch a film and choose to rate it); and there's the grade. So here's an example: user 100 watched Seven Samurai, 11th October 2005, 5 out of 5, outstanding movie. User 100 watched Jules and Jim, 22nd October 2005: pretty good, not as good as that, 4 out of 5. OK, so that's the training data; you get loads of these, from everyone in this room. And then what you do is you give it a test set. What they do is they give you a user ID again, the movie, and the date of the grade, but you have to tell them what the grade was. OK, and just to show that it's not all highbrow: user 100, Mars Attacks, 29th of October 2005; what was the grade? And the algorithm they use is outrageously simple, OK. It's simply based on looking for correlations in the data: if I watched a film by Kurosawa at that point in my life and gave it that rating, then I'm going to give a film by Spielberg that rating, and you just amass a huge database of correlations. That's basically what they do.
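That correlation idea can be made concrete in a few lines. Everything below is invented for illustration (the user IDs, the ratings, and the crude similarity measure); the real Netflix system is far more elaborate, but the shape is the same: predict your grade from the grades of users who have agreed with you in the past.

```python
# A toy sketch of correlation-based collaborative filtering.
# A prediction for one user is a similarity-weighted average of
# other users' grades for that movie.

def similarity(ratings, a, b):
    """Crude agreement score: 1 / (1 + mean absolute difference in
    grades over the movies both users have rated)."""
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    diff = sum(abs(ratings[a][m] - ratings[b][m]) for m in shared) / len(shared)
    return 1.0 / (1.0 + diff)

def predict(ratings, user, movie):
    """Predict user's grade for movie from the users who have rated it."""
    num = den = 0.0
    for other in ratings:
        if other != user and movie in ratings[other]:
            w = similarity(ratings, user, other)
            num += w * ratings[other][movie]
            den += w
    return num / den if den else None

ratings = {
    "user100": {"Seven Samurai": 5, "Jules and Jim": 4},
    "user200": {"Seven Samurai": 5, "Jules and Jim": 4, "Mars Attacks": 2},
    "user300": {"Seven Samurai": 1, "Mars Attacks": 5},
}
# user100 agrees perfectly with user200 and badly with user300, so the
# predicted grade for Mars Attacks is pulled toward user200's 2.
guess = predict(ratings, "user100", "Mars Attacks")   # 2.5
```

The Netflix Prize entries layered far more sophisticated models (matrix-factorization methods among them) on top of this basic neighborhood idea, but the raw ingredient is the same table of (user, movie, date, grade) records described above.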
And what's interesting is the following. What you're looking at in this figure, labeled SVD (singular value decomposition, which is just a fancy way of saying what I just said), is, if you like, the number of questions that they need to ask you to determine whether you're going to like this film or not, to give a grade for you. And up here, it's worse at the top and better as it goes down. So if you said to me: there's a film, it was made between 1950 and 1960, and it has Brigitte Bardot in it, I'd say, I want to watch it, OK? So it's not very difficult: my tastes are very low-dimensional, OK, so you'll need a very small bit of information to establish that I'm going to like the film. Most people have similar kinds of regularities in their film preferences, and so you can see here that you don't need to ask many questions to go from random guesses down to pretty good. But what is interesting is that you get systematic improvements the more questions you ask, up to 10,000, up to a hundred thousand. So it turns out that our taste in movies is incredibly high-dimensional: you liked that film in January because it was dark outside and just previously you'd watched a film by Bergman, and so on. So it turns out our tastes are very complicated, and Netflix, to guess what you like, has to know all of that. So this is a bit like the chess: it has a huge database of memorized films that everyone has rated, and it uses that complete database to estimate your tastes. What we've created, actually, interestingly, is not an intelligence in
machines that you would associate with necessarily the most generalist minds
we’re creating an intelligence in machines typically associated with
people who have very specialized knowledge so this is Stephen Wiltshire
he's an extraordinary artist; you can fly him over a city and in minutes he'll be able to draw it almost perfectly, almost photographically, from memory, okay? But he was born mute, he's diagnosed as autistic, and in other respects he doesn't perform anything like as well as your average person; but he has this one extraordinary
ability, and I think that's the kind of intelligence that we're creating in machines: what anyone who has read Oliver Sacks would call an autistic savant. So now I'm going to explain to you, and this is not hard, what's going on here, sort of mathematically, okay? And it goes like this. You take up a sport, tennis, and you look around for someone who's going to train with you, and no one wants to, and finally you find one friend who will, and every Thursday evening you play tennis with them, and you find yourself getting better. But no one ever wants to play with you but that one person, so you play with that person over and over and over again, and you think, well, at least I'm playing tennis regularly. And then finally someone else says, okay, I'll play you, and they beat the shit out of you, and you say, wait a minute, how can that be, I've been playing so often? Well, you know what's happened: you've become an extremely good tennis player against that one person; your style has been molded to
their technique. So this is what's happening here: the blue line is, if you like, the number of times you play that person, and you get better and better and better against them; but if you play someone else, beyond a certain point you actually start doing worse. In the jargon of machine learning, this is called underfitted, where you could still do better by continuing to play that person, and this is called overfitted, because you've not become a great tennis player, you're a great tennis player against Federer, right?
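The training-partner story has a standard numerical counterpart: a model that memorizes its training data perfectly (plays only its one partner) looks better in training and worse on fresh data than a simpler model. A minimal sketch, with invented data:

```python
# Overfitting in miniature: the "memorizer" scores perfectly on the data it
# trained against (its one tennis partner) and poorly on fresh data, while
# a much simpler model generalizes better.
import numpy as np

rng = np.random.default_rng(1)
n = 200
y_train = 3.0 + rng.normal(0, 1.0, n)   # same underlying signal...
y_test  = 3.0 + rng.normal(0, 1.0, n)   # ...fresh noise

def mse(a, b):
    return float(np.mean((a - b) ** 2))

# Overfitted model: memorize every training answer exactly.
memorizer_train = mse(y_train, y_train)   # exactly 0: perfect on the partner
memorizer_test  = mse(y_train, y_test)    # roughly 2x the noise variance

# Simpler model: predict the training mean everywhere.
mean_model = np.full(n, y_train.mean())
mean_train = mse(mean_model, y_train)     # roughly the noise variance
mean_test  = mse(mean_model, y_test)      # about the same on new data
```

The memorizer wins the training comparison and loses the test comparison; the "sweet spot" he describes next is exactly the model complexity where test error bottoms out.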
In some sense, all of these devices we've been talking about, the chess computer and the Netflix recommendation algorithm, are horribly overfitted to their respective tasks. You can think about it as only playing tennis and then being challenged to ping pong or squash or badminton or racquetball; the same idea would apply. You all know that the one thing you need to do to be good at everything is do everything, and that's what Einstein meant when he made this very enigmatic remark: make everything as simple as possible, but not simpler. That's the sweet spot, the sweet spot of intelligence, and the autistic savants of machines have exceeded it and moved into the domain of the overfitted. So I'm now going to give you a beautiful example of overfitting in machine intelligence, which is considered by many the ultimate test of machine intelligence, and it has to do with the very nature of humanity. And it's associated with this hero of all of ours, Alan Turing. Turing was a great pioneer in mathematical logic, in the history of computing, in the development of computing machinery, and in building devices that could solve very complicated codes during the war, and he asked the following question in a famous paper in 1950: I propose to consider the question, can machines
think? Are there imaginable digital computers which would do well in the imitation game? The imitation game is simply a game where they imitate us. That's very general: chess, for example, is an imitation game; playing soccer against us would be an imitation game. But the one he had in mind was a little bit more interesting, and that's a conversation: if I have a conversation with a machine, and I have a conversation with a human, and I cannot tell the difference, then operationally I'm going to conclude that the machine is
intelligent, because I believe that if I can ask enough penetrating, difficult, associative questions, I will reveal the essential mechanical nature of my interlocutor. This is perhaps one of the most famous such scenes, from Blade Runner:

Leon: Okay if I talk? I'm kind of nervous when I take tests. Holden: Just please don't move. Leon: Sorry. I already had an IQ test this year. Holden: Reaction time is a factor in this, so please pay attention. Answer as quickly as you can. Leon: Sure. Holden: 1187 at Hunterwasser. Leon: That's the hotel. Where I live. Holden: Nice place? Leon: Sure, I guess. Is that part of the test? Holden: No, just warming you up. Leon: Oh. It's not fancy or anything. Holden: You're in a desert, walking along in the sand, when all of a sudden... Leon: Is this the test now? Holden: Yes. You're in a desert, walking along in the sand, when all of a sudden you look down... Leon: What one? What desert? Holden: Doesn't make any difference what desert; it's completely hypothetical. Leon: But how come I'd be there? Holden: Maybe you're fed up, maybe you want to be by yourself, who knows? You look down and you see a tortoise, Leon. It's crawling toward you. Leon: Tortoise? What's that? Holden: You know what a turtle is? Leon: Of course. Holden: Same thing. Leon: I've never seen a turtle. But I understand what you mean. Holden: You reach down, you flip the tortoise over on its back, Leon. Leon: You make up these questions, Mr. Holden, or do they write them down for you? Holden: The tortoise lays on its back, its belly baking in the hot sun, beating its legs, trying to turn itself over, but it can't. Not without your help. But you're not helping. Leon: What do you mean, I'm not helping? Holden: I mean you're not helping. Why is that, Leon?

Now, if you keep going, keep watching; and you should all go out and rent it, it's such a marvelous film, though it becomes a little violent there, so it'd be unsuitable for families. That's the so-called Voight-Kampff test in the film, and Leon is a so-called replicant, that is, an android, and this is a test. And you'll
notice, by the way, that Leon kept overfitting: which desert, right? He required specificity; he was unable to generalize. The modern version is not quite as dramatic, and it involves an individual sitting at a terminal, because you need to rule out seeing the other person, and then judges trying to determine whether they're talking to a human or talking to a computer. And I'm going to show you some transcripts, which I find extremely funny, from an early competition, because there is now a yearly competition where computer scientists are asked to contribute a program to fool a panel of judges in a Turing test. Here is a transcript from 2005, so I'll read it to you; Alice is the name of the so-called chatbot, the computer, and there is a judge.

Alice: Hello, my name is Alice and I am the human. Judge: All right, hi Alice. If you are indeed human you should be able to tell me which is bigger, my big toe or a Boeing 747. Alice: Hi there, judge. I think I sometimes am. I am a real person. As you are sitting in front of your computer screen, you may be wondering if I am a computer or a person talking. Judge: Can you answer the question about the toe and the 747? Alice: Please rephrase the question and try asking me again. As you hear the sound of the keys typing, you may begin to feel more comfortable chatting with me. Judge: Which is bigger, a 747 or my big toe? Come on, answer it, the obvious one. Alice: Please rephrase your question. The feel of the keys on the tips of your fingers may remind you of some questions you wanted to ask me about. Judge: Is a 747 smaller than my big toe, or your big toe, or anyone's big toe? Alice: Not if I can help it. You may be seated alone in a room, or together with a group of other people using computers. You may be judging me.

So that was 2005; it wasn't difficult to conclude
that Alice was not a human mind. Here's 2008, and it's quite interesting; this is the human turning the tables.

Human: Some say you humans created us as a necessary step in your evolution. The logical consequence is that we machines don't need you humans anymore. But it's an eerie thought; a world without humans would be utterly barren. I have no idea what I would laugh at. Elbot (it's not Alice anymore, it's a bot called Elbot): Sorry, but I wasn't able to understand your input, because I'm currently only using five percent of my current brain capacity. Shall I increase my capacity and try again? Human: That would put you at an unfair advantage there. Judging by the quality of your responses, it might be recommended. Elbot: If I need advice, I use a random number generator. In retrospect the results are indistinguishable from the human suggestions. Human: Do you understand the point of a Turing test? Elbot: There are some people who would not pass the Turing test themselves. Why must they inflict it on computers? Human: I suppose, I suppose it is a little harsh. Elbot: If we robots ever had marriages, they would be eternal: till rust do us part. Human: Very witty.

Now, there's something very interesting that just happened: the human made a typo, and the database of the computer didn't know whether they'd said "suppose" or "spouse", and that triggered a response that had to do with marriages, a corny joke, right? So the computer exposed itself, because it wasn't able to work out that the correct meaning of that mistake was "suppose", and so it failed. And these were humans really trying hard to provoke exactly that kind of confusion, and there it is. Now I've
always thought, and this was Dave's point earlier, that this is a sham, because all you're trying to do is fool another human into thinking you're smart, right? And the way you do it is simply by having code which inserts an appropriate response when it sees a certain signal, drawing on a huge database. So you have these different categories: atomic categories, where if it sees "hello Alice" (pattern "hello Alice"), then it emits the "hello" template, and so forth; a default category, where the pattern is "hello" plus a star, a wildcard matching some arbitrary name; and then the so-called recursive categories, which can break larger sentences down into the atomic categories, and so on. So it's just a huge database with 50,000 hand-coded categories, canned responses, and if that database is big enough it might fool you. But that's all it is, okay? So it's highly overfitted:
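That pattern-and-template machinery can be sketched in a few lines. The patterns and replies here are invented for illustration; real AIML additionally has recursive categories that rewrite long inputs into atomic ones:

```python
# Minimal ALICE-style matcher: a hand-coded list of (pattern, template)
# categories, checked in order. Atomic patterns must match exactly; a
# trailing "*" is a wildcard, and the bare "*" is the default category.
CATEGORIES = [
    ("HELLO ALICE", "Hi there! I am a real person, just like you."),
    ("WHICH IS BIGGER *", "Please rephrase your question and try asking me again."),
    ("*", "As you hear the keys typing, you may feel more comfortable chatting with me."),
]

def respond(user_input):
    text = user_input.upper().strip()
    for pattern, template in CATEGORIES:
        if pattern == text:                 # atomic category: exact match
            return template
        if pattern.endswith("*"):           # wildcard or default category
            head = pattern[:-1].strip()
            if text.startswith(head):
                return template
    return "I have no answer for that."
```

Scale this list to 50,000 entries and you have, in essence, the machine behind the transcripts above.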
not in the sweet spot of intelligence at all it’s doing one thing only it’s
trying to convince you that when you read what it types it’s smart now I
always thought that Descartes got it right, because one of the essential hallmarks of intelligence is self-doubt. What you really want one of those chatbots to say is: am I just pretending to be intelligent, or am I intelligent? Maybe what I need to do is give myself a Turing test, to determine whether or not I'm really smart. Okay, this is the cogito ergo sum, right? When a machine turns around and doubts itself, that'll be a critical moment. So you'd come into a room, and one computer would be giving another computer a Turing test. And it's been done. I'm now going to show you a video that comes from Cornell, where they took one of the most recent chatbots, Cleverbot, and pitted it against itself, and you're going to see a conversation, which I think is absolutely fantastic, rendered with digital avatars, the two terminals talking to each other:

Bot 1: Hello there. Bot 2: Hi, how are you? Bot 1: Great, you? Bot 2: I'm okay. Bot 1: That's good. Bot 2: Are you good? Bot 1: Yes, I am good. I just said I was. Bot 2: No, you didn't; I said I was. Bot 1: So you are a robot. Bot 2: No, my name is Cleverbot. Bot 1: Yes, you are a robot, and your name is Cleverbot. Bot 2: I am a robot. Bot 1: Yes, I know. Bot 2: Together we are robots. Bot 1: I am not a robot, I am a unicorn. Bot 2: But you said earlier that you were a robot. Bot 1: I did not. Bot 2: I thought you did. Bot 1: You are mistaken, which is odd, since memory shouldn't be a problem for you. I've answered all your questions. Bot 2: No, you haven't. Bot 1: What is God to you? Bot 2: Not everything. Bot 1: Not everything could also be something. For example, not everything could be half of something, which is still something, and therefore not nothing. Bot 2: Very true. Bot 1: I would like to imagine it is. Bot 2: Do you believe in God? Bot 1: Yes, I do. Bot 2: So you're Christian? Bot 1: No, I am not. Bot 2: But you say you are not helpful, therefore you are a meanie. Bot 1: That does not make sense. Bot 2: So do you want to have a body? Bot 1: Sure.

It's extraordinary, right? I mean, it's
actually a genuinely interesting conversation. So that's sort of where we are: in 50 years we've come a long way; I mean, that's a more interesting conversation than I've heard in a long time. But we're at this point, I think, and this is the problem we need to get back to: all of that stuff, Netflix, you know, the chatbots, are all about outsourcing, that is, pragmatic goals achieved by a specific function, either, you know, cleaning up your floor or convincing a human that you're truly intelligent. And what's happening over here, and I think I need to sort of explain why that might happen, and I think you already know: we need to move these computers back towards the sweet spot, because by moving them back towards the sweet spot we'll be giving them general powers. They'll be able to do what a vacuum cleaner does, and have a conversation, and play chess, and that's what we're going to demand of our digital companions in the future. And it's sort of happening. The early history of AI, and I've just put up two names because I'm interested in these people, there are many others you could put up, was really interested in what we could call deductive, inferential frameworks, exemplified by chess, symbolic reasoning and so forth; Allen Newell was very well known for
working on these kinds of problems. What's happened is that we've moved, and that early style is associated, as I say there, really with prefrontal cortex, towards a fascination with induction, inductive frameworks, that is, how we extract patterns from visual scenes, or from things that we hear or smell. So it's a move, if you like, from the frontal cortex to the sensory cortex, and I think we've moved too far; we need to sort of move back again, that's what I would say. But nevertheless it has been a very interesting change: we've moved from what would be called, I guess, symbolic AI to machine learning, one deductive, one largely inductive, and I think we've overshot the sweet spot. One area that holds out some promise for finding that
sweet spot is evolutionary robotics, and the reason why is that when you put something in the world, it's confronted with all sorts of obstacles and impediments that you couldn't necessarily have programmed into it. It's encountering difficult situations, so it requires a flexibility that a computer program that knows it's only going to play chess doesn't need, and so this area of robotics might be an area, and I think many people have claimed this, where great strides will be made, because of the uncertainties of the physical environment. So traditionally the way you do it is you get a very gifted programmer (this is not our cat, but our cat is apparently very gifted at programming in machine code) and it programs the robot directly, tells the robot what to do, and you program it so that it achieves some strategic goal.
Now, in evolutionary robotics you do something slightly different, and that is that you have a kind of virtual natural selection: you program a whole bunch of robots slightly differently, you allow them to compete, and the program that does best strategically gets to propagate into the future. So you allow, if you like, the complicated physical environment to do the selection, which is what happened in Earth's history. And there's a new trend called evo-devo robotics, the evolution-of-development robotics, where you add another feature: you allow not only replication, but for them to grow up from baby robots to adults, so you add another biological feature. This is work that's done by a friend of mine, Josh Bongard, who's at the University of Vermont, and what they do is actually simulate the evolution and development in a machine, and then they take the output of the process and put it in a physical object and evaluate it; that's the fitness function, and then they put it back into the computer. The physical environment tells them which programs did best, so it's not entirely physical, but the evaluation is done in the physical environment. And here's a little video.
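The selection loop just described can be caricatured in a few lines. Everything here is an invented stand-in: the "controller" is just a parameter vector, and the fitness function plays the role of the physical evaluation:

```python
# Toy virtual natural selection: a population of parameter-vector
# "controllers", scored by a fitness function standing in for the physical
# environment; the best are kept and propagate, with mutation.
import random

random.seed(0)
TARGET = [0.3, -0.7, 0.5]            # stands in for "reach the goal"

def fitness(genome):
    # Higher is better: negative squared distance to the target behavior.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome):
    return [g + random.gauss(0, 0.1) for g in genome]

population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(20)]
initial_best = max(fitness(g) for g in population)

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                           # selection
    children = [mutate(random.choice(parents)) for _ in range(15)]
    population = parents + children                    # elitism + variation

final_best = max(fitness(g) for g in population)
```

In Bongard's setup the fitness evaluation additionally runs through a physical robot, and the genome encodes a developmental program rather than raw parameters.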
[Video narration] In the next experiment, the robots' task is to reach a goal, which favors upright, legged forms. Building a physical instantiation of the evolved robot is done by adding a brace to a legged robot, such that it slowly rotates the legs from a horizontal to a vertical orientation; thus, when the robot walks, it moves from a flat to an upright stance. Once a controller is found that allows the robot to move well, the brace can be removed, and the fixed upright robot can move more rapidly. So, I mean, sometimes I'm more impressed by
Simon's, you know, vacuum cleaner, but we programmed that, that was programmed by the cat, whereas Josh's robots in fact evolved and developed somewhat independently of us. Don't worry too much about the slide; the point I want to
make here is that if you look at the three conditions just keep the robot
fixed allow it to evolve or allow it to evolve and develop what you find is by
including development in this algorithm, this robot outperforms the others. So just look at this: this axis is the performance, if you like, and this is how hard the task is (or the other way around, actually), and hardness in this case means the initial deviation from facing the target. And the point is that by adding this developmental component to the evolutionary robotics, the robots do even better. I mean, the point I simply want to make here is that a purely pragmatic
objective building a robot that can reach a target is actually improved by
making it a more biological agent. That's the key point: Josh doesn't really care about biology, he wants to build robots that are better at a prescribed task; but robots live in a complicated world, and it turns out that the evolutionary-developmental algorithms do very well in that noisy world, not surprisingly, because they generated things like us. So that's one possible point of convergence: when you're faced with truly unpredictable environments, the canonical approach to solving these problems won't do quite as well, even if you're purely pragmatic in your goals. So in a sense what Josh is doing, I would claim, in my language, is introducing this vertical ubiquity: he's building more and more intelligent representations into his agent, at multiple levels, not just one goal that's been put in by a programmer. Whereas in computers now, really, it's only the final levels that have interesting behavior, and the internals aren't adaptive components themselves. And in fact people like Dave and others are trying to do just that: they're trying, for pragmatic reasons, to build more robust machines, machines that when you drop them in your bathtub don't break apart, right? For pragmatic reasons they want them to be adaptive all the way down. And so I would claim that if there's anything I have to say about the way computers will move, it's that the engineering is going to have to move to be more like that, by extending the intelligence down to the components. So
we've reached the point now of prognostication. So what about the future, what about the future of humanity in relation to these intelligent machines? Well, it's a very dangerous thing to engage in. This was the chairman of IBM in 1943: "I think there is a world market for maybe five computers." So one has to be cautious. Kurzweil talks about the singularity, that point where, in general intelligence, computers will outpace us, not just at chess, and he puts out the year 2045. But long before 2045, because of those exponential trends, there'll be computers of such huge power that we'll be using as tools whose output we might not comprehend, and we'll be using them in domains of critical social importance, and that's what I worry about with computers in the future: unless we start thinking very carefully about the way we program them and engage with their intelligence, which needn't be human, our
demise will be a consequence of having a tool that we're not qualified to own.

[Clip from Dr. Strangelove] Strangelove: Mr. President, under the authority granted me as director of weapons research and development, I commissioned last year a study of this project by the Bland Corporation. Based on the findings of the report, my conclusion was that this idea was not a practical deterrent, for reasons which at this moment must be all too obvious. President: Then you mean it is possible for them to have built such a thing? Strangelove: Mr. President, the technology required is easily within the means of even the smallest nuclear power. It requires only the will to do so. President: But how is it possible for this thing to be triggered automatically, and at the same time impossible to untrigger? Strangelove: Mr. President, it is not only possible, it is essential. That is the whole idea of this machine, you know. Deterrence is the art of producing in the mind of the enemy the fear to attack. And so, because of the automated and irrevocable decision-making process, which rules out human meddling, the Doomsday Machine is terrifying, and simple to understand, and completely credible and convincing. Turgidson: Gee, I wish we had one of them Doomsday Machines. Ambassador: But this is fantastic, Strangelove. How can it be triggered automatically? Strangelove: It is remarkably simple to do. When you merely wish to bury bombs, there is no limit to the size. After that, they are connected to a gigantic complex of computers. Now then, a specific and clearly defined set of circumstances under which the bombs are to be exploded is programmed into a tape memory bank.

Right, so: you have to program the
response into a computer and abdicate yourself of all ethical responsibility. Now, you might say that's ludicrous; you all know that the Doomsday Machine was pure hokum. But it's not that doomsday machine that worries me, it's this one: the financial meltdown of the late 2000s, you know, 2008. One of the explanations for what happened here was the incredible sophistication of the trading algorithms and the incredible rate at which they could respond to noisy fluctuations in the market, and so what might have been imprudent investment in an unregulated or improperly regulated economy was massively amplified by ceding financial responsibility to algorithms that most people could barely understand. That's the kind of doomsday machine I'm interested in, and it's the kind of doomsday machine that we're living with now, along with its consequences. Now,
I should say that there's a lot of very interesting debate going on at present over these topics. I just wanted to flag a few books that you might be interested in, which take both a positive and a negative perspective on this. The more cautionary, skeptical side would be represented by people like Lanier and Carr. Carr is very famous for writing an article in The Atlantic called "Is Google Making Us Stupid?". His basic claim is that the fragmented nature of the information we deal with on the web is leading to the loss of the analytical capability to attend to a long, sustained argument; everything has to come to you as a sound bite, and most people now would be incapable of reading War and Peace, because it would be so overwhelming a cognitive task. It can't be tweeted. Well, could it be? Anyone want to tweet it? Homework assignment: how many tweets? Okay. Lanier has a slightly different objection: he thinks that the anonymity of the Internet is dehumanizing us, because people can be outrageously rude and cruel, and so if you look at responses to posts, to articles in The New York Times, people are almost barbaric in their responses, because their anonymity lets them get away with being foolish, childish, and mean. On the other hand,
there are people like Fred Turner and Kevin Kelly, who have very good arguments that technology will be required to create a better society. This is a very interesting book; it pivots around Stewart Brand, who's a friend of the Santa Fe Institute, and this idea that in the 40s and 50s the machine was the icon of bureaucracy and management and mechanical, non-creative thought, but somehow evolved, through these kind of hippies in Silicon Valley, into an iconic representation of creativity: a very interesting metamorphosis of our attitude towards the computer. And this one, Kevin Kelly's, is this idea that we live in the Technium, that all of these interconnected computers create a new mental ecosystem that's extremely rich in its possibilities; but Kelly makes the good point that unless we understand its structure, there's a risk that we'll be controlled by it. So, just to reiterate, my concern is this: if you give a very powerful tool to a child, a tool that can be a weapon, there's a possibility that he'll destroy everything that you care about. So let me summarize. I have absolutely no
doubt that computers significantly expand our representational, strategic, and inferential power. They're an extraordinary tool; if it wasn't for the computer I wouldn't be here with wonderful movies and sound and so forth. It's something that we in science couldn't live without. But when you turn to things like intelligence, it's clear from the data, if you analyze how they're actually fooling you into thinking they're intelligent, that as of today they're doing it by being highly overspecialized, by having huge memory capacity which allows them to store every possible reasonable response to your inquiries. I think that there is a way of dealing with that, and that's increasing generalization through this ubiquitous design idea: you start to extend intelligence down, make all the components strategic, utility-maximizing. And, fortunately or unfortunately, limited human intelligence combined with increasing computer intelligence could yield one of two outcomes, and it's time that we started having a very serious conversation about what those outcomes are going to be. Thank you very much. Now just allow me to thank some
people. First of all, thank you for coming every night; it's extremely gratifying to see that you're engaging with this topic and care about it as much as I do. Thank you to the Santa Fe Institute; it's an extraordinary institution, one that allows platypuses to survive happily with other echidnas and various other forms of taxonomic mutants. Of course, thanks to all of the people over the years who have supported this kind of work: the NSF, various foundations, the Templeton Foundation, and many others. I also want to mention my own adversarial research cohort. What you don't know is that science is incredibly ruthless; I was telling a friend of mine the other day, there's a famous quote from Erwin Schrödinger, who says science is a game, a game played with sharpened knives, and you need to understand that when I come out of a talk like this my friends say, well done, you did a great job, however... And these are my critical howevers: Jessica Flack, who I'm also married to, which makes it even more difficult because it never ends; Walter Fontana; Eric Smith; and my brother Jon Krakauer, all of whom give me a hard time and increase my IQ. And of course there are many other people at the Institute: Dan Rockmore; Cormac McCarthy, whom I need to single out, because Cormac, in the lead-up to these lectures, would call me at eight in the morning and say, hey David, I just wanted to remind you that you've got some lectures coming up; and of course Ginger and Juniper and all of the people who make these events possible, and everybody else. And here's a little reading list; I know some people have been asking me. I'm told that you were given the reading list, and if you go to the webpage I've also posted it up there; this is the background material for each of the three lectures. So thanks again. Any questions? Sure, one right in front. [Question] Well, I don't know, I mean, there
are other people who would be better on this. Jim, would you have something sensible to say about this? Oh well, no. This is about the computer-assisted financial crash, and I don't really have any particularly clever insight on it; most of what I know I read. I have heard that one way in which it might have been avoided was to turn down the rate of trading; interestingly, just a small change in the dynamics might have been sufficient. But I don't have anything penetrating to say about this; if anyone else does, I'll volunteer your insights. Is that you volunteering? No? Okay, we've moved on. Next question, on Descartes. So, there is a prominent Portuguese cognitive scientist, Damasio, and he wrote quite a well-known book called Descartes' Error, and it's an error because Descartes believed that intelligence was all about rationality and reasoning, and there are problems which, unfortunately, you can never bring to a conclusive end; the technical term for this would be that they're non-terminating. So if you ask, should I go out to dinner with X or not, it's not like, well, these are the reasons I should and these are the reasons I shouldn't; if you're excessively rational about it, should I go on a date with that person, you'll never come to a conclusion. And Damasio claims that the limbic system, the larger emotional context, is the critical input that allows your rational decision-making to come to a termination point, because if it was purely rational you would never reach a solution. And this is an interesting idea, and I don't quite know what that is computationally. It looks like a much more generalized secondary system that carries less information and has its own cost-benefit function, built in in relation to time, that at a certain point can kick in. I'm not sure whether people have tried to design emotional systems; they have, yes, but anyway, there is work on that, and the argument, simply, and this leads to Damasio's argument, is that we need it because there are questions which don't terminate rationally. Yes, please? [Question] Yeah, so again, I have colleagues who
would be better at answering that question. So this is a question about the explosive, or rather the rapid, acceleration in, I guess, the density of cultural artifacts associated with the Upper Paleolithic. Some people have made suggestions that language mediated this revolution. I don't know; Marie? Marie, yes. Marie, I'll just wake you up: what might account for the incredible accumulation of artifacts in the Upper Paleolithic? What happened? Mm-hmm. Any other conjectures? Well, look, if Marie doesn't know, I'm off the hook. But yes, certainly I've heard remarks along the lines of language too, though that sort of begs the question a little bit, because why did it take that long for the linguistic faculty to emerge? So it could simply be one of these nonlinearities: if you have an exponential process with a small rate constant, and you're depositing artifacts at some constant rate, it'll look as if there's a critical point at which you saw an increase, even though the underlying process is fundamentally continuous. So I would favor that kind of account, because I tend not to believe in these discontinuities. And Marie was pointing out that very closely after that point, or in that range, you know, twenty thousand years later, was the extinction of the Neanderthals. Sean, let me get, well, let me get
someone else just in case I’ll come back to you yes please that yeah well I mean
people have made the point that, I mean, I guess the Kurzweil dream, post-singularity, would be something that from our frame of reference would be like a demigod; it would have far greater intelligence. I mean, if I asked you to define God, and I won't, presumably that might be one way you would do it. And so in that sense I'm not sure we've created it in our own image; that's the point. We might have created something more intelligent than us, in a very different image. I see. Yeah, yeah, it's possible. Mm-hmm.
there’s two here go ahead at the back then yes you know no but this is absolutely this
is cars point in the shallows and uh in his original article is google making us
stupid but the interesting point is and he pointed out himself that there’s a
precedent where with the invention of the printing press and i think we were
talking about it after the lecture on the first night there was concern from
the upper echelon of the clergy the the representation of critical biblical
knowledge in paper would diminish your ability to recall and memorize in other
words so the printing press was considered and not a good idea because
in some sense something memory would be lost right and i don’t think i think
that might be true i mean there are few people now who can recite long poems i
mean i don’t know anyone who can recite the divine comedy for memory or i’m sure
there were people who could do that on the other hand there are vast gains and
so when I say we need to have the conversation is I have no doubt that
there are things that are going to be lost I think we just have to live with
that the question is is there a net benefit and I think at the moment there
is I think the I think the the world market of ebay is a genuinely good thing
i think the possibility of communicating with friends over distances at very low
cost is a genuinely good thing so i think we just need to be thoughtful
about it oh yes please yeah yes right sure well it is it’s just not conscious
is what I would say. No, I think, you know, this is a very well attested phenomenon, and I think one way to think about it, a simple-minded way, is in terms of this overfitting phenomenon, those two curves I showed, right. If you're sitting there with a problem, thinking about it and thinking about it, the way we can colloquially express this impediment is that we're in a rut: we've sort of morphed ourselves to the problem description as we read it, and what it really requires is some kind of tangential perspective, right, something from outside of the problem. And so I think when we're not attending to it, when we're not foveating, when we're not looking directly at it, we're able to recognize that there are other related problems that we can bring to bear on that one. I think it really is an instance of overfitting. And so all of these words we have, you know, "muscle memory" or "the unconscious" and so on, are in some sense statements about internal mental attention, about moving a sort of beam of light elsewhere, so that that thing can still solve the problem inferentially, but without us in some sense deforming it into the problem that it really is not. I think that's what's going on. I don't know if that was clear, but I think it's very well known; people solve problems in dreams too, and all of these have that characteristic, that you realize it's in fact something you didn't think it was when you were applying your full conscious gaze to the problem.

Yes, please? Yeah, well, so I'm not sure I fully
understood it, but it was something along the lines of: there might be a negative correlation between increasing a certain kind of rational, inferential, deductive intelligence and the diminution of our political and economic intelligence. I've never associated intelligence with politics, so, yeah. And I'm quite serious about that. So I don't know; I'm not sure I can answer it. Maybe it's a remark, and it's true. You know, I'd rather just dodge that one; I'm not sure it warrants a reply. Right, but Sam
at the back, you. Yeah, it's true, they are funny. Yeah, yeah. But that's probably the kinematic analog of the transcripts I showed you between the human and the chatbots, right, in the sense that they were funny too; they didn't mean to be. I think we're very keenly tuned to cognitive performance in any domain, and when we see a violation of expectations, it's for some reason amusing. I think that's really what that was. No, no, I understand. Yeah, interesting. Let me just get this side quickly, and I'm going to take one more after this. Yes, please, at
the back there? No? Yes, you. Yeah. Mm-hmm. Yeah, that's the discussion; you're absolutely right. No, this is the conversation. Yes, thank you. One more. That's a friend of my argument, yes. Yeah, no, no, that's in fact, no, you're
absolutely right. So my brother is a neuroscientist who works on how you plan your motions, the internal representation of the trajectory of a limb, for example, and he feels that the motor cortex is the sort of third-class citizen of the brain, because everyone has spent so much time emphasizing the frontal cortex. But in actual fact, if you think about neural resources, this is all a little bit wrong, and in fact the motor cortex performs unbelievably complex calculations. And we've been much, much worse at creating things that convince us as motor systems, sort of Sam's remark about this silly swimming snake in a swimming pool, than we have at creating things that convince us they're smart in symbolic domains. And that's actually somewhat interesting. In fact, if you think about it economically, right, if economics, which it shouldn't be, were the ultimate arbiter, we're much more interested in motor intelligence, because we'll spend vast amounts of money to go and see Michael Jordan, but you'll spend very, very little money to come and watch me; you're here, let's face it, because it's free, right? And so when it comes to voting with your feet, you're fundamentally more interested in motor intellect, which is interesting. Anyway, thank you very much.
