Authors Interrogating Tech: Paul Mason

"By the end of this book, I want you to make a choice: will you accept the machine control of human beings, or resist it? And if the answer is resist, on what basis will you defend the rights of humans against the logic of machines?"

Now, a quick introduction to Paul, so you get an idea of where he's coming from. Paul is an award-winning writer, broadcaster and filmmaker. He was previously the economics editor of Channel 4 News. His books include Postcapitalism, Why It's Kicking Off Everywhere: The New Global Revolutions, Live Working or Die Fighting, and Rare Earth: A Novel. So: prolific, and focused on what has happened and what's going to happen in relation to our economic situation.

I'd like to start — since, again, we're in the ethics tent — with ethics. The subtitle talks about defending the rights of humans against the logic of machines. How does that tie into ethical questions?

OK, I will unleash my... I mean, I think we are in a sort of triple crisis. It was really interesting to hear the earlier speakers — they're probably going to turn out more anti-capitalist than me, the actual anti-capitalist. As you know, we Marxists love to write fifty-page documents about how brilliant capitalism has been so far, and why it isn't going to be any more. The crises we're in are: one, an economic system that doesn't work, after ten years of crisis, stagnation and confusion about the role of economics; two, evaporating consent for democracy, the rule of law and human rights; and alongside that, three, a crisis of algorithmic control, whereby Company A can use Company B — which has five thousand data points on every American — to manipulate and control the outcome of an election. And that's before you get any kind of artificial intelligence; that's just basic server farms, big data and a lot of electricity. As I reported this, I came to the conclusion that we were seeing something deeper:
we're seeing a crisis of what I call in the book the crisis of the neoliberal self — a crisis of the self created in the market era, a self that had ceded control to an autonomous system. In economics, that autonomous system is called the market. You may not know that Friedrich Hayek, the guru of right-wing free-market thought, actually conceived of the market as what he called a spontaneously emergent order — that is, an autonomous system that could out-think human beings. The thesis of the book is that having willingly handed control of a lot of our lives, our selves, and in many respects our cognition to this autonomous system, that turns out to be a brilliant entry-level drug, taken in the mid-twentieth century, for handing control of our lives to a real autonomous intelligent system: a machine. And from there I've concluded — and this is the premise of the book — that whether it's the corporations and startups at this conference, or states, or simply human beings in civil society, we all need to become much more intelligent clients for moral-philosophical thinking, because it's that which underlies the ethics we will try to embody in the autonomous systems we build in future.

So in the book you work your way through some of the classical ethical systems and approaches. Where do you end up, after having scanned quite neatly at least a couple of thousand years of the history of ethics?

I'm glad you said the word "neatly". Look, I think the possibility of artificial intelligence — the possibility of unaided learning by machines — brings humanity up close to ethical and moral-philosophical debates that we had kind of assumed were nice-to-haves. They're really interesting; some people spend their entire lives on them. The movie you made for your Cambridge project had a Buddhist in it, and a Catholic priest, and all the rest of it. But to me it goes like this, and in this
I'm influenced by a philosopher called Alasdair MacIntyre, who started out as a Marxist and ended up as an Aristotelian. It goes like this. The two ethical systems we are most aware of in modern capitalism are utilitarianism — make as many people as possible happy while doing the least harm — and lists of rights drawn up from apparently eternal human attributes, associated primarily with the philosophy of Immanuel Kant; a good example would be the Universal Declaration of Human Rights. The technical term for these is deontological ethical systems: they come from the being of human beings. Now MacIntyre argues — and I argue as well on this basis — that neither of these stands the test of a discussion of what the human being is, of what they are based on. They are effectively the products of capitalism. I would go further and say they're both ideologies: they match very closely the ideologies of a market society with a state standing above it, claiming to protect the rights of humans, or to protect certain principles that humans embody. In reality, says MacIntyre, there are only two really coherent philosophical systems, faced with the depth of the challenge we face as modern people. One is Friedrich Nietzsche's. Nietzsche says: there are no morals; morals are a joke, an invention; there is no God; all forms of secular morality, all ethical systems, are simply your attempt to substitute for God, since there isn't one. And if I want to shoot you in the face and run away laughing, as if a student prank has been taking place, says Nietzsche, that's up to me — if I can do it. The other is the one that asks: what is the good society, and what kind of person does one need to create, as oneself, to live in this good society? That's
Aristotle; that's communitarianism; that's virtue ethics. The contribution I'm trying to make in this book is to say that the possibility of AI brings us face to face with that dichotomy. Let's go through them. You can, of course, program an AI with a bunch of deontological ethics. You can say: OK, there are the Geneva Conventions, there's my autonomous killing vehicle, the vehicle must obey the Geneva Conventions — that's good, there's nothing wrong with that. Or, if the AI is meant to be running a small country, or a smart city, we could say it must behave in a utilitarian way, and it can do the trolley problem and work out what to do. However, if you're going to build an autonomous system that either looks like a human being or thinks as quickly and as well as a human being — it doesn't have to have consciousness; it can just be an artificial general intelligence — and push it out into the street, it needs a theory of human beings. It doesn't necessarily need consciousness, although consciousness will create extra problems. But if it has a theory of human beings, my argument is that it can't find one in utilitarianism, nor in eternal lists of rights, because its experience is always going to produce the "hard cases make bad law" problem. Out of, say, the Universal Declaration of Human Rights, it's going to go: hold on a minute, lots of human beings don't like this Universal Declaration — fascists don't like it, people from the global South don't like it — maybe I don't like it; my experience conflicts with it. So the plea in the book is for us to begin, as we program and design autonomous systems — we'll come to the practicalities of how you do this in a bit — for those involved in decision-making around these systems to have at the heart of that debate the question: what do we think about moral systems? Morals frighten
people. Because with ethics committees, you can have one while developing a new drug — that's an easy ethics committee: how many people get the placebo? Do we pull the trial if it's going really well, or really badly? What's the aftercare? What laws do we have to obey? That's a closed task. But for an open task, like designing an artificial general intelligence with autonomy, you need to be building in at the very beginning the question of what our theory of human beings is and what morality we follow. Even questions like "is it ethical to publish this research?" are big issues now in the AI sphere. Suppose I work for a big AI company — let's call it the Acme Big AI Company, based in London near King's Cross — and at the same time I have a postdoctoral research project in AI, and I like going to academic conferences, because I may want a future career in academia, or with the Acme company. So there's my research: is it ethical to publish it? Well, obviously it's ethical, because the company says it is, and your university never said there was anything wrong with it. But a theory of human beings makes you ask those questions, and it makes you then conceive the task of AI as a society-wide task.

So, on the subject of the human: in some places in the book you describe how we are being sold different versions of the human for the ends of different ideologies and controlling groups. But on the other hand, in some places you're very skeptical about subjective interpretations of what the human being is, and you're quite anti-post-modernist. We won't go too far into post-modernism now, but is there a tension there that you recognize? Do you feel we can get to a more normative description of the human, or are we entirely at the mercy of these narratives of what the human being is?

For me, as a Marxist, we are essentialists. What we
describe as human nature — that other frightening phrase — is a set of attributes which form the essence of the human being. For us, humans have a species-being: you cannot study the individual on their own. The Marxist theory of human nature is this: we evolved quite accidentally — there's no designer; no one pressed the start button — but the species evolved, and of the handful of species which became human beings, we were the ones that survived, with consciousness, imagination and design capabilities. In other words, we are technologists and cooperators by nature. Now, if you accept this, it is possible to extrapolate from that proposal, which I think is upheld by most of what we know about evolutionary biology and anthropology. Michael Tomasello, an anthropologist who studies the higher primates, describes some of the crucial steps we went through: first we developed language and consciousness, and then, as an evolutionary step, we developed cooperation, which is the observable difference between humans and, say, orangutans — we cooperate consciously. If you think that's who we are, you can extrapolate from it, logically, the teleology at the heart of Aristotelian philosophy: that is, human beings have a purpose, and their purpose is to liberate themselves from what it was like to be a human being in the previous generation. So whilst we say human beings have an essence, that essence is constantly changing; we have a social history. And given that we've gone from stone tools 200,000 years ago, to agriculture, pottery and cities between twelve thousand and five thousand years ago, to the steam engine two hundred years ago — and last year I saw a silicon chip with a trillion switches on it — it is logical to assume that we're going to liberate ourselves, that we have a clear, bright future. So that's the kind of theory
of humans that I hold to, and from it I can deduce certain things that I want AI to do and not do. Now — let's not get too bogged down in post-modernism — but there is a huge anti-humanistic streak in modern thought, and part of it originates out of Marxism. The structuralist Marxism of the 1960s and 70s says history is a process without a subject — another way of saying a process without a controller; in other words, a machine. Out of that we created post-modernism. Michel Foucault says humanity is a social construct: we just created it as a concept; it doesn't really exist; it can be eradicated very easily. And what troubles me most is the emergence, out of post-modernism in its crisis, of actual post-humanism. You meet lots of transhumanists here — there are plenty of transhumanists in the AI industry and in the accelerationist world — and my disagreement with them is only tactical: I want the end result of those transformations to be society-wide and accessible, not the individual on the seafront at Cannes with their robotic arm and many other appendages. We all need to do it together. The post-humanists believe we are already not human, due to our constant interaction with technology. To me, that's the big debate of the future. It's so attractive for corporations to say: since we are already either not human or half-human — as Luciano Floridi effectively argues, we are information organisms already, playing away on the away team's ground of technology — then we have lost our ability to possess agency. And so I want to push these debates into the boardrooms — and not just the boardrooms: into the engineering faculties and the NLP faculties, where my travels have revealed to me a very interesting thing. These debates don't exist there; nobody knows about them; nobody's even interested in them. And that is the surprising thing. You
know, metaphorically — this hasn't happened, but you can imagine it — the engineer says, "Look at my killer robot," and you say, "What about the ethics?", and they go, "Well, yeah, at the end of the year there's an ethics conference about killer robots — but by the way, look at this one I built here; should I take it to the conference?" You laugh, but we're actually in that situation.

So out of this re-description of the human, and of society as part of a process, you talk about the diminishment of our ability to change, the diminishment of our freedom — we get quite fatalistic. This is where I want to bring in my favorite pop-culture reference from your book: you talk about Game of Thrones. Having finished watching Game of Thrones, are you still of the opinion that our narratives are fatalistic?

Well, yeah. In the book, what I argue is that I want us to reintroduce teleological narratives: narratives where human beings go through a process of beginning, middle and end, confronting a problem and overcoming it through human greatness. Casablanca — that's the model; we have to study Casablanca for that reason. Now, what's interesting about the world around us is that many stories have become endless stories — like an unchained melody, they go on forever. In The Wire, whatever they do, the Black population of Baltimore cannot escape the fate of being drug dealers. In Homeland, whatever she does — however often she saves the world — Carrie Mathison destroys Carrie Mathison, because she has bipolar disorder. What these HBO and Netflix stories are brilliant at is placing a character on a 360-degree dial and just studying the character — that's why it's so rewarding to be the actor. In Game of Thrones, of course, the premise is that there are gods playing a game with us, and whatever happens, our fate is determined for us, not by us. And you may know there's a huge debate going on among
Game of Thrones fans about why it went wrong in the last three series. The reason is that they stopped writing it as an epic, in which the gods are literally playing with people, and reverted to the Hollywood movie stereotype of characters who determine their own fate — and so what we then need are psychological reasons. But we didn't care about the psychology of Cersei Lannister; she's just awful, you know, though people identify with her because she has fun. So yes, I think we're surrounded by a mass religion — a folk religion — of fatalism, which the market era has really inculcated into us. You'll find people saying, as Yuval Noah Harari says in his book, that between determinism and randomness — between everything that happened to us and created us, and the random events that go on in our brains and around us — there is no room for free will. That's quite interesting, especially if you're religious. I'm not religious, but all religions are built around choice, good versus bad. And the secular religion of humanism — of radical humanism — which I want to bring to the AI debate, is one that would say no: the stories we need to tell are the ones where human beings liberate themselves, where they keep defying fate and randomness and DNA. In fact, that's what technological progress has been for 200,000 years.

In parts of the book you're actually quite pro-automation, and in other parts you seem more concerned about digital feudalism. How do we get to an automation for the people, but avoid control by the few?

OK, so I'm not only a tech optimist, I'm a tech utopian. I believe that by the middle of this century — within most of our lifetimes — we could achieve a very low-work, high-abundance society on the basis of what we have now. You wouldn't need AGI to achieve it; you just need a fairly
decisive automation of what we've already got. Process automation on steroids could get us quite a long way towards a situation where most things — energy (zero-carbon), food, shelter, transport — could be substantially free: the zero-marginal-cost effect just collapses the price of everything. Of course, what you need to do to achieve this is unleash the technology from the social relations of production of the market. In my previous book, Postcapitalism, I argued that information technology is so different from all previous machine technologies that it is likely to just blow apart capitalism, and we — you, the business leaders, decision-makers, NGO people — are going to have to live in a world where the social relations of a primarily market economy based on scarcity aren't going to work with the tech. Now, in the practical work I've done — well, I don't do any practical work in tech, but in the advisory and discussion work I've done, for example with the city of Barcelona under the mayorship of Ada Colau, the left-wing radical: she had this project of digital democracy — obviously for the people, digitizing democracy — but also of open technology in the city. Mandatory open source, mandatory non-profit, and mandating that the data of the smart city — and this is a big thing to do for a city that hosts the big expo for the cellphone industry — belongs to the city and the public, not to Cisco, Accenture and the rest. So I think we're at the foothills — if people don't want to call it post-capitalism, fine — we're at the foothills of a very positive automation story, in which we can delink work from wages and rapidly automate the world. And that's good, because "work less and save the planet" is a great slogan: working less and automating stuff is ultimately going to save the world. But
to do it, what we need is to introduce the dimension of: who is technology for? The previous speaker gave some great examples of the way, I think, the structure of the CEO's world never allows them to answer that. But the people who are going to answer it are politicians, and they're starting to answer it: who is energy for? Why do we burn so much shitty carbon? And therefore: for whom do we want to digitize the city? For whom do we want to digitize the entire healthcare system? That is the ultimate question, and of course my argument is: it's for us. But you can only logically get to that argument if you leave aside anti-humanism. That's ultimately why there are so many anti-humanist, post-humanist and transhumanist philosophers on the ethics boards of big companies — because ultimately they will tell those big companies: it's for you.

I think that's a wonderful place to stop, with the message of "work less, save the planet".

Yeah, I can go with that.

Right — so, if you'd like to join me in thanking Paul Mason. [Applause]
