The wonderful and terrifying implications of computers that can learn | Jeremy Howard | TEDxBrussels


Translator: Joseph Geni
Reviewer: TED Translators admin
It used to be that if you wanted
to get a computer to do something new, you would have to program it. Now, programming, for those of you here
that haven’t done it yourself, requires laying out in excruciating detail every single step that you want
the computer to do in order to achieve your goal. Now, if you want to do something
that you don’t know how to do yourself, then this is going to be
a great challenge. So this was the challenge faced
by this man, Arthur Samuel. In 1956, he wanted to get this computer
to be able to beat him at checkers. How can you write a program, lay out in excruciating detail,
how to be better than you at checkers? So he came up with an idea: he had the computer play
against itself thousands of times and learn how to play checkers. And indeed it worked,
and in fact, by 1962, this computer had beaten
the Connecticut state champion. So Arthur Samuel was
the father of machine learning, and I have a great debt to him, because I am
a machine learning practitioner. I was the president of Kaggle, a community of over 200,000
machine learning practitioners. Kaggle puts up competitions to try and get them to solve
previously unsolved problems, and it’s been successful
hundreds of times. So from this vantage point,
I was able to find out a lot about what machine learning
can do in the past, can do today, and what it could do in the future. Perhaps the first big success of machine learning commercially
was Google. Google showed that it is possible
to find information by using a computer algorithm, and this algorithm is based
on machine learning. Since that time, there have been many
commercial successes of machine learning. Companies like Amazon and Netflix use machine learning to suggest
products that you might like to buy, movies that you might like to watch. Sometimes, it’s almost creepy. Companies like LinkedIn and Facebook sometimes will tell you about
who your friends might be and you have no idea how it did it, and this is because it’s using
the power of machine learning. These are algorithms that have learned
how to do this from data rather than being programmed by hand. This is also how IBM was successful in getting Watson to beat
two world champions at “Jeopardy,” answering incredibly subtle
and complex questions like this one. [The ancient “Lion of Nimrud” went missing
from this city’s national museum in 2003] This is also why we are now able
to see the first self-driving cars. If you want to be able to tell
the difference between, say, a tree and a pedestrian,
well, that’s pretty important. We don’t know how to write
those programs by hand, but with machine learning,
this is now possible. And in fact, this car has driven
over a million miles without any accidents on regular roads. So we now know that computers can learn, and computers can learn to do things that we actually sometimes
don’t know how to do ourselves, or maybe can do them better than us. One of the most amazing examples
I’ve seen of machine learning happened on a project that I ran at Kaggle where a team run by a guy
called Geoffrey Hinton from the University of Toronto won a competition
for automatic drug discovery. Now, what was extraordinary here
is not just that they beat all of the algorithms developed by Merck
or the international academic community, but nobody on the team had any background
in chemistry or biology or life sciences, and they did it in two weeks. How did they do this? They used an extraordinary algorithm
called deep learning. So important was this that in fact
the success was covered in The New York Times
in a front page article a few weeks later. This is Geoffrey Hinton
here on the left-hand side. Deep learning is an algorithm
inspired by how the human brain works, and as a result it’s an algorithm which has no theoretical limitations
on what it can do. The more data you give it and the more
computation time you give it, the better it gets. The New York Times
also showed in this article another extraordinary result
of deep learning which I’m going to show you now. It shows that computers
can listen and understand. (Video) Richard Rashid: Now, the last step that I want to be able
to take in this process is to actually speak to you in Chinese. Now the key thing there is, we’ve been able to take a large amount
of information from many Chinese speakers and produce a text-to-speech system that takes Chinese text
and converts it into Chinese language, and then we’ve taken
an hour or so of my own voice and we’ve used that to modulate the standard text-to-speech system
so that it would sound like me. Again, the result’s not perfect. There are in fact quite a few errors. (In Chinese) (Applause) There’s much work to be done in this area. (In Chinese) (Applause) Jeremy Howard: Well, that was
at a machine learning conference in China. It’s not often, actually,
at academic conferences that you do hear spontaneous applause, although of course sometimes
at TEDx conferences, feel free. Everything you saw there
was happening with deep learning. (Applause) Thank you. The transcription in English
was deep learning. The translation to Chinese and the text
in the top right, deep learning, and the construction of the voice
was deep learning as well. So deep learning
is this extraordinary thing. It’s a single algorithm
that can seem to do almost anything, and I discovered that a year earlier,
it had also learned to see. In this obscure competition from Germany called the German Traffic Sign
Recognition Benchmark, deep learning had learned
to recognize traffic signs like this one. Not only could it recognize
the traffic signs better than any other algorithm, the leaderboard actually showed
it was better than people, about twice as good as people. So by 2011, we had the first example of computers that can see
better than people. Since that time, a lot has happened. In 2012, Google announced that they had
a deep learning algorithm watch YouTube videos and crunched the data
on 16,000 computers for a month, and the computer independently learned
about concepts such as people and cats just by watching the videos. This is much like the way
that humans learn. Humans don’t learn
by being told what they see, but by learning for themselves
what these things are. Also in 2012, Geoffrey Hinton,
who we saw earlier, won the very popular ImageNet competition, looking to try to figure out
from one and a half million images what they’re pictures of. As of 2014, we’re now
down to a six percent error rate in image recognition. This is better than people, again. So machines really are doing
an extraordinarily good job of this, and it is now being used in industry. For example, Google announced last year that they had mapped every single location
in France in two hours, and the way they did it
was that they fed Street View images
to recognize and read street numbers. Imagine how long
it would have taken before: dozens of people, many years. This is also happening in China. Baidu is kind of
the Chinese Google, I guess, and what you see here in the top left is an example of a picture that I uploaded
to Baidu’s deep learning system, and underneath you can see that the system
has understood what that picture is and found similar images. The similar images
actually have similar backgrounds, similar directions of the faces,
even some with their tongue out. This is not clearly looking
at the text of a web page. All I uploaded was an image. So we now have computers
which really understand what they see and can therefore search databases of hundreds of millions
of images in real time. So what does it mean
now that computers can see? Well, it’s not just
that computers can see. In fact, deep learning
has done more than that. Complex, nuanced sentences like this one are now understandable
with deep learning algorithms. As you can see here, this Stanford-based system
showing the red dot at the top has figured out that this sentence
is expressing negative sentiment. Deep learning now in fact
is near human performance at understanding what sentences are about
and what they are saying about those things. Also, deep learning
has been used to read Chinese, again at about
native Chinese speaker level. This algorithm developed
out of Switzerland by people, none of whom speak
or understand any Chinese. As I say, using deep learning is about the best system
in the world for this, even compared
to native human understanding. This is a system that we put together
at my company which shows putting
all this stuff together. These are pictures which have
no text attached, and as I’m typing in here sentences, in real time it’s understanding
these pictures and figuring out what they’re about and finding pictures that are similar
to the text that I’m writing. So you can see, it’s actually
understanding my sentences and actually understanding these pictures. I know that you’ve seen
something like this on Google, where you can type in things
and it will show you pictures, but actually what it’s doing is it’s
searching the webpage for the text. This is very different from
actually understanding the images. This is something that computers
have only been able to do for the first time in the last few months. So we can see now that computers
cannot only see but they can also read, and, of course, we’ve shown that they
can understand what they hear. Perhaps not surprising now
that I’m going to tell you they can write. Here is some text that I generated
using a deep learning algorithm yesterday. And here is some text that an algorithm
out of Stanford generated. Each of these sentences was generated by a deep learning algorithm
to describe each of those pictures. This algorithm before has never seen
a man in a black shirt playing a guitar. It’s seen a man before,
it’s seen black before, it’s seen a guitar before, but it has independently generated
this novel description of this picture. We’re still not quite at human performance
here, but we’re close. In tests, humans prefer
the computer-generated caption one out of four times. Now, this system is only two weeks old, so probably within the next year, the computer algorithm will be
well past human performance at the rate things are going. So computers can also write. So we put all this together and it leads
to very exciting opportunities. For example, in medicine, a team in Boston announced
that they had discovered dozens of new clinically relevant features of tumors which help doctors
make a prognosis of a cancer. Very similarly, in Stanford, a group there announced that,
looking at tissues under magnification, they’ve developed
a machine learning-based system which in fact is better
than human pathologists at predicting survival rates
for cancer sufferers. In both of these cases, not only
were the predictions more accurate, but they generated new insightful science. In the radiology case, they were new clinical indicators
that humans can understand. In this pathology case, the computer system actually discovered
that the cells around the cancer are as important
as the cancer cells themselves in making a diagnosis. This is the opposite of what pathologists
had been taught for decades. In each of those two cases,
they were systems developed by a combination of medical experts
and machine learning experts, but as of last year,
we’re now beyond that too. This is an example
of identifying cancerous areas of human tissue under a microscope. The system being shown here
can identify those areas more accurately than,
or about as accurately as, human pathologists, but was built entirely with deep learning
using no medical expertise by people who have
no background in the field. Similarly, here, this neuron segmentation. We can now segment neurons
about as accurately as humans can, but this system was developed
with deep learning using people with no previous background
in medicine. So myself, as somebody with
no previous background in medicine, I seem to be entirely well qualified
to start a new medical company, which I did. I was kind of terrified of doing it, but the theory seemed to suggest
that it ought to be possible to do very useful medicine
using just these data analytic techniques. And thankfully, the feedback
has been fantastic, not just from the media
but from the medical community, who have been very supportive. The theory is that we can take
the middle part of the medical process and turn that into data analysis
as much as possible, leaving doctors to do
what they’re best at. I want to give you an example. It now takes us about 15 minutes
to generate a new medical diagnostic test and I’ll show you that in real time now, but I’ve compressed it down
to three minutes by cutting some pieces out. Rather than showing you
creating a medical diagnostic test, I’m going to show you
a diagnostic test of car images, because that’s something
we can all understand. So here we’re starting
with about 1.5 million car images, and I want to create something
that can split them into the angle of the photo that’s being taken. So these images are entirely unlabeled,
so I have to start from scratch. With our deep learning algorithm, it can automatically identify
areas of structure in these images. So the nice thing is that the human
and the computer can now work together. So the human, as you can see here, is telling the computer
about areas of interest which it wants the computer then
to try and use to improve its algorithm. Now, these deep learning systems actually
are in 16,000-dimensional space, so you can see here the computer
rotating this through that space, trying to find new areas of structure. And when it does so successfully, the human who is driving it can then
point out the areas that are interesting. So here, the computer
has successfully found areas, for example, angles. So as we go through this process, we’re gradually telling
the computer more and more about the kinds of structures
we’re looking for. You can imagine in a diagnostic test this would be a pathologist identifying
areas of pathosis, for example, or a radiologist indicating
potentially troublesome nodules. And sometimes it can be difficult
for the algorithm. In this case, it got kind of confused. The fronts and the backs
of the cars are all mixed up. So here we have to be a bit more careful, manually selecting these fronts
as opposed to the backs, then telling the computer
that this is a type of group that we’re interested in. So we do that for a while,
we skip over a little bit, and then we train
the machine learning algorithm based on these couple of hundred things, and we hope that it’s gotten a lot better. You can see, it’s now started to fade
some of these pictures out, showing us that it already is recognizing
how to understand some of these itself. We can then use this concept
of similar images, and using similar images, you can now see, the computer at this point is able
to entirely find just the fronts of cars. So at this point, the human
can tell the computer, okay, yes, you’ve done a good job of that. Sometimes, of course, even at this point it’s still difficult
to separate out groups. In this case, even after we let the computer
try to rotate this for a while, we still find that the left-side
and the right-side pictures are all mixed up together. So we can again give
the computer some hints, and we say, okay, try and find
a projection that separates out the left sides and the right sides
as much as possible using this deep learning algorithm. And giving it that hint —
ah, okay, it’s been successful. It’s managed to find a way
of thinking about these objects that separates them out. So you get the idea here. This is a case not where the human
is being replaced by a computer, but where they’re working together. What we’re doing here is we’re replacing
something that used to take a team of five or six people about seven years and replacing it with something
that takes 15 minutes for one person acting alone. So this process takes
about four or five iterations. You can see we now have 62 percent of our 1.5 million images
classified correctly. And at this point,
we can start to quite quickly grab whole big sections, check through them to make sure
that there’s no mistakes. Where there are mistakes, we can let
the computer know about them. And using this kind of process
for each of the different groups, we are now up to an 80 percent
success rate in classifying the 1.5 million images. And at this point, it’s just a case of finding the small number
that aren’t classified correctly, and trying to understand why. And using that approach, by 15 minutes we get
to 97 percent classification rates. So this kind of technique
could allow us to fix a major problem, which is that there’s a lack
of medical expertise in the world. The World Economic Forum says
that there’s between a 10x and a 20x shortage of physicians
in the developing world, and it would take about 300 years to train enough people
to fix that problem. So imagine if we can help
enhance their efficiency using these deep learning approaches? So I’m very excited
about the opportunities. I’m also concerned about the problems. The problem here
is that every area in blue on this map is somewhere where services
are over 80 percent of employment. What are services? These are services. These are also the exact things that computers
have just learned how to do. So 80 percent of the world’s employment
in the developed world is stuff that computers
have just learned how to do. What does that mean? Well, it’ll be fine.
They’ll be replaced by other jobs. For example, there will be
more jobs for data scientists. Well, not really. It doesn’t take data scientists
very long to build these things. For example, these four algorithms
were all built by the same guy. So if you think, oh,
it’s all happened before, we’ve seen the results in the past
of when new things come along and they get replaced by new jobs, what are these new jobs going to be? It’s very hard for us to estimate this, because human performance
grows at this gradual rate, but we now have a system, deep learning, that we know actually grows
in capability exponentially. And we’re here. So currently, we see the things around us and we say, “Oh, computers
are still pretty dumb.” Right? But in five years’ time,
computers will be off this chart. So we need to be starting to think
about this capability right now. We have seen this once before, of course. In the Industrial Revolution, we saw a step change
in capability thanks to engines. The thing is, though,
that after a while, things flattened out. There was social disruption, but once engines were used
to generate power in all the situations, things really settled down. The Machine Learning Revolution is going to be very different
from the Industrial Revolution, because the Machine Learning Revolution,
it never settles down. The better computers get
at intellectual activities, the more they can build better computers
to be better at intellectual capabilities, so this is going to be a kind of change that the world has actually
never experienced before, so your previous understanding
of what’s possible is different. This is already impacting us. In the last 25 years,
as capital productivity has increased, labor productivity has been flat,
in fact even a little bit down. So I want us to start
having this discussion now. I know that when I often tell people
about this situation, people can be quite dismissive. Well, computers can’t really think, they don’t emote,
they don’t understand poetry, we don’t really understand how they work. So what? Computers right now can do the things that humans spend
most of their time being paid to do, so now’s the time to start thinking
about how we’re going to adjust our social structures
and economic structures to be aware of this new reality. Thank you. (Applause)
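The interactive car-image demo in the talk (project unlabeled data, let a human label a handful of confusing examples, retrain, repeat) can be sketched as a toy human-in-the-loop loop. Everything below is an illustrative assumption, not Howard's actual system: the "images" are synthetic feature vectors, a simple nearest-centroid model stands in for the deep-learning projection, and the "human" is simulated by revealing true labels for the most ambiguous points each round.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for image feature vectors: four "camera angle"
# groups, each a Gaussian cluster in a 16-dimensional feature space.
n_classes, dim, n_per = 4, 16, 500
centers = rng.normal(0.0, 1.0, size=(n_classes, dim))
X = np.concatenate([c + rng.normal(0.0, 1.0, size=(n_per, dim)) for c in centers])
y_true = np.repeat(np.arange(n_classes), n_per)

labeled = np.zeros(len(X), dtype=bool)

def predict(X, y_true, labeled):
    """Nearest-centroid classifier built only from the human-labeled points."""
    cents = np.stack([X[labeled & (y_true == k)].mean(axis=0)
                      for k in range(n_classes)])
    d = ((X[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1), d

# Seed round: the "human" labels three examples of each group to start.
for k in range(n_classes):
    labeled[np.flatnonzero(y_true == k)[:3]] = True

accs = []
for it in range(4):
    pred, d = predict(X, y_true, labeled)
    accs.append(float((pred == y_true).mean()))
    # The human then labels the most ambiguous items: those whose two
    # nearest centroids are almost equally close (the mixed-up
    # fronts-and-backs situation described in the talk).
    sorted_d = np.sort(d, axis=1)
    ambiguity = sorted_d[:, 1] - sorted_d[:, 0]
    ambiguity[labeled] = np.inf          # never re-label known items
    labeled[np.argsort(ambiguity)[:50]] = True

pred, _ = predict(X, y_true, labeled)
final = float((pred == y_true).mean())
print("accuracy per iteration:", [round(a, 2) for a in accs], "final:", round(final, 2))
```

A few iterations of this labeling loop mirror, in miniature, the 62 percent, then 80 percent, then 97 percent progression Howard narrates: each round of human hints tightens the model built from the labeled points.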

72 thoughts on "The wonderful and terrifying implications of computers that can learn | Jeremy Howard | TEDxBrussels"

  1. I'm going with Elon Musk and saying it's just terrifying; we need to regulate these technologies now.

  2. In order to lead the future, computers will have to be designed to make mistakes. It is mistakes (mutations) that enable lifeforms to grow in capabilities.

  3. This guy is just making noise, like farting with his mouth. He obviously knows nothing about AI. It's the golden age of imposture. So many fakes taking the stage! Well, it seems to work, and it suits me fine, for there are far too many stupid people on earth, and they may just as well follow this clown to their death.

  4. From the intelligence of an infant to computerized pathologists, writers, bookkeepers, catalogers, engineers, journalists, wow, computers are amazing! Technological unemployment is no longer science fiction; it's here, and it has been ongoing since the Industrial Age! The real challenge will be whether we as a civilization can adapt to it or not.

  5. Then in the future, will there be some jobs directed by… machines?

  6. Is the interface he is using to classify car pictures available to play with?

  7. "State Of The Art" as of 12-08-2014! Kind of like IBM's Watson computer! This could be scary… Perhaps the beginning of the end?

  8. We are facing a problem that humanity has never faced before: human beings could soon be replaced in scientific and operational work by machines. What are we going to do about this?

  9. This all raises questions of consciousness. Who are we outside of our bodies? What is life? Will a computer have consciousness? Is this evolution of the current state of the human species? Or the end? What will the new species be? How quickly will it evolve with this much power?

  10. 15:45 : "This is a case not where the human is being replaced by a computer, but where they're working together". True, there's a human involved in the initial stages of training, but once the computer "gets the idea" then the human has rendered him or herself permanently obsolete (along with all the other trained experts in that field). Terrifying indeed.

  11. "Source code or it didn't happen!"

    Another TED video with more empty words. A few months ago another guy talked about machine learning. We haven't heard from him since. Did he die, get killed, get censored?
    Where is the source code? The verifiable proof?
    Of course they will sit on it forever and not do anything with it. Reminds me of what companies do with patents, etc.

    Progress is delayed due to this. It should be illegal to delay progress for greed.

  12. I do not see it as terrifying in any way. Certainly we must exercise caution. However, the actual merging of man and machine will have benefits for both. We must evolve . . . and computers are the superhighway which we must use in this endeavor to evolve. While we humans have to study one book for hours to understand its concepts, a computer interfaced with a human could accomplish this task in seconds. That means that one human could become knowledgeable of every profession on the planet in a very short time . . . . including rocket science and advanced mathematics. Scary? We are naturally afraid of the unknown. But it was not by being cowards that Columbus or Magellan accomplished what they did.

  13. fascinating and scary

    never ever give this to a fucking investment banker or broker …

  14. I fail to see what's "terrifying" about these advancements. We are about to enter a real human golden age.

  15. There is no understanding here.
    There is no seeing. There is no reading. There is no hearing.
    These are abstract categories grounded in human understanding & seeing & hearing & reading & …
    Very very useful thing. Powerful tool.
    But that's all.
    Mistaking data-mining for cognition is something that this era in computing will teach us. This TED talk captured that misunderstanding of the era.
    Cheers
    Colin

  16. All this advancement, and we still live in an obsolete society, a plutocracy more or less…

  17. The most important point of this vid.
    “People can be quite dismissive.  “Well, computers can’t really think, they don’t emote, they don’t understand poetry, we don’t really understand how they work”,  So what?  Computers right now can do the things that humans spend most of their time being paid to do.  So now is the time to start thinking about how are we going to adjust our social structures and our economic structures to be aware of this new reality.” ~Jeremy Howard

  18. This wouldn't even be a problem worth mentioning if the energy revolution were at least well-underway, but post-scarcity isn't possible when there's still scarcity (that's sort of definitional).  On the other hand, machine learning might give it a boost once the suspicion of "oh no, Skynet!" wears off.  I'd like a world where my children don't have to toil away the priceless time of their limited lives to earn the basic necessities of living.

  19. End of slavery is possible.
    Hardest part will be to convince slaves that we are not our jobs and that employment and income are and always were two separate things.

  20. I for one welcome our computer overlords…. as long as the kind of unemployment it provides is the 'having basic needs met' kind for everyone and they learn empathy.

  21. The collapse is coming; humanity will be integrated into the VR cloud and preserved in ever-evolving life-support containers.

  22. If all can be done by a machine in due time, what are the 7 billion people of this world going to spend their time on? This scares the living ** out of me.

  23. I think Jeremy was being painfully generous with the whole man + machine thing. I just didn't think I'd feel this belittled this quickly, not by him, but by the unyielding, exponentially ruthless tide that drives down the economic value of all but those with stock in the businesses that own the algorithms, centralised capital and physical resources.

    Conversely, it will mean immense gains in productivity, meaning that there won't be the need for us to work as much; however, I'm unclear on how the day-to-day of that works. If people have a vastly diminished ability to provide value and earn, then there will be a vastly lower potential for profit on the market as a whole, which offsets gains in productivity derived through this freeing up of labour. However, consider that no business would ever provide needs, let alone desires, for free, so I don't see a clear way how all this shakes out.

    It could be that the algorithms get hacked or open-sourced, which then gives productivity to many more, but again, producing stuff for free will never occur. It won't be an absolute like this, so maybe we'll have to compete in spaces where machines cannot, like… pro gaming, sports, drug dealing, prostitution, art (?)… Not traditionally regarded as solid career choices, suitable for the masses, or high earners.

  24. I've shared a full list of citations now for the talk: http://www.ted.com/talks/jeremy_howard_the_wonderful_and_terrifying_implications_of_computers_that_can_learn/citations . Many thanks to the folks at TED for hosting this

  25. The faces of the audience are priceless in the "humanity will be obsolete" bit at the end.

  26. I feel that virtual secretaries would've made a nice point in this presentation as they are one of the growing fields replacing people while still needing consistent community feedback. It's a great example of how people will still be needed to train the system, if in a less direct manner.

  27. @Jeremy Howard – who made the speech-to-speech system you mentioned at the China conference?

  28. I think that we can't let computers replace us. We have to grow with them, not by expanding our intelligence to match them, but by using them as tools to accomplish tasks which used to slow down humans. Then, with computers taking care of these tasks, we will be able to focus on the qualities which distinguish us from computers, and thus combine their immense power with human skills like creativity and ethics to leverage the advantages of computers towards our own gain, instead of falling behind.

  29. We may have to put a serious tax on the companies generating all that automation and give a guaranteed minimum income to all citizens. This will only be possible once all low-skilled jobs are replaced by robotics.
    Otherwise, who would want to be a janitor, a housemaid or a bus driver if they are all getting a minimum income? But once robots are doing it all, then it's ok.

    A minimum income to all makes it possible for the unemployed to look for other ways of making money, because their brains are now free to be innovative instead of fearing the loss of benefits once they start making money from their innovative ideas.

    It is key not to give a minimum income only to the unemployed, because then we reward their inactivity, while giving the employed a reason to stop working.
    I am aware that a minimum income for all means that the billionaires also get that minimum income, but I think we can all live with that.
    All this would also mean that corporate tax evasion would be a top priority crime – much more so than nowadays.

    Automation is coming faster than ever before, so we need to make laws faster than ever before to be ready for the effects of an automated economy on our society.

  30. Have they tried 'deep learning' on poetry, specifically, something we played with back in the early '70s: it'd be interesting for a moment to listen to the progress made in this effort to get computers to write poetry, comparing for cadence, trope, genre, style… plagiarism efficiency.

  31. We need to worry about changing our behavior in regards to how we are destroying the environment first, or that is going to be our extinction, not intelligent machines.

  32. One idea that was only touched on in this speech (for good reason, because it was targeting realistic, near-future expectations):
    We used to tell algorithms what to find. Now machines are starting to find by themselves what is worth noticing (Google's cat search). But I may try to extrapolate from this.

    Currently:
    Big companies have massive data concerning a big chunk of mankind. One man can try to search for specific statistics and tell the machine what the goal is. Then he collects results and tries to make something out of it. Some unexpected info can emerge from it, but all is mostly constrained by the man's imagination and initial expectations.

    Future:
    I expect machines to surf the big data and to seek by themselves what interesting things they can find. The cat search of Google is very basic, but with some more robust AI, it can try to find interesting patterns and emerging ideas that we wouldn't have thought of at first. I think humans have always tried to use some kind of map-reduce algorithm, but with specific map and reduce algorithms/filters that we are already used to.

    The idea is to analyse mankind from a different point of view.

  33. The conundrum comes when the computers or A.I. reach sentient status.
    God creates man; man creates god.
    God casts man from the garden; the sentient god banishes man from the Earth.
    Man is exterminated; the computer inherits the Earth, totally self-sufficient, forever.

  34. omg, they could put this into a robot… and then they could make ai robots O.O

  35. partly interesting talk… but he seriously overestimates DL's capabilities…

  36. why can't he put the last question to be answered by machine learning loool

  37. Household income is still increasing. (You can check updated mean household income data at the FRED website, for example.) Even if the mean household income were flat, it would imply an increase in productivity, because the median household size is smaller nowadays.

    Perks are also something not included in that household income data, and they are something that has increased in recent years, especially for lower-income workers.

    Income data does not account for a great amount of contributions made by technology. For example, nowadays everyone in a developed country has access to small, cheap computers connected to the internet, with access to huge libraries full of knowledge, to mobile phones, and to good-quality big screens. Those are things that would have been considered luxury products, and would have been valued at hundreds of thousands of dollars, only some decades ago. We also have better healthcare treatments today.

    If AI created much better and much cheaper products substituting those we have today, that would be good for me, even if I had to work for a lesser amount of money, as a leisure staff member for example.

  38. Most of our fears regarding the development of AI revolve around the issue of employment. For anyone to join this conversation it is paramount to understand/define what employment/work actually is, because in the majority of cases the standard definition of work, the execution of some tasks that directly contribute to the survival of people, fails to describe what "work" is today. The modern definition of work is: 'doing something reluctantly for a given number of hours every day, without seeing the point of it other than getting paid at the end of the month'. Work in this sense is completely secure, no matter how advanced the AI gets. As long as there are people, there will be jobs. Jobs will only disappear when people disappear.

  39. Soon the machine will attempt to find itself. That might be a day to be scared.

  40. Please stop talking about 'average human level', because that is NOT what is indicative of its ability to replace humans… most expert work is invented/performed by people above average human level. Pilots are not average human level. Soldiers are not. Doctors are not. You are not. Etc…

  41. Our current state of capitalism and politics has me convinced that the 1% will manage to see all these gains while the poor will be relegated to death and poverty. At least here in the States, people are convinced that your lack of employment means you are inherently lacking in worth and not fit to live. Bold statement, but that is how the politics plays out. The very same people who would benefit the most from universal income would vote against themselves due to propaganda and media influence. I would love to be proved wrong, but our politics has me very cynical about our ability to evolve with this oncoming age.

  42. Are the conclusions in the last slide the advice from AI? You better hope your AI is a fiduciary.

  43. Perhaps AI will be "democratized" and easily available to anyone. So if my self-driving taxi or a robot works all by itself 24/7, and it pays off the loan for the initial cost in 6 months, then what's the problem? We can all use some free time. Singularitynet.io (the creators of the robot Sophia) are trying to make this work. BTW, the entire concept of employer, employee, and working for a wage is coming to an end, and that's a great thing! The concepts of money and governance are also changing thanks to Bitcoin and Ethereum. Maybe investing a small amount in those exponentially growing cryptocurrencies will be our new "job!" IMO the big tech companies and governments will be disrupted by the brutal efficiency of Bitcoin and Ethereum for the same reason Airbnb became richer than 100-year-old hotel chains.
