A response from Steve Pinker to Salon’s hit piece on “Enlightenment Now”

January 29, 2019 • 9:00 am

Phil Torres is a “riskologist” who studies existential risk, and seems to have it in for the New Atheists (Salon, of course, is always willing to provide him with a platform for that). Torres’s latest Salon piece is an attack on Steve Pinker and his most recent book, Enlightenment Now (EN); it’s called “Steven Pinker’s fake Enlightenment: His book is full of misleading claims and false assertions.” The piece is pugnacious, ending with a suggestion that Pinker may actually be hiding things he knows are wrong:

Let me end with a call for action: Don’t assume that Pinker’s scholarship is reliable. Comb through particular sentences and citations for other hidden — or perhaps intentionally concealed — errors in “Enlightenment Now.” Doing so could be, well, enlightening.

When I read Torres’s piece, I wasn’t impressed, as Pinker’s “errors and false assertions” seemed to consist mainly of quotations used in EN that, Torres claimed, don’t accurately represent the views of the people quoted (Torres contacted some of them). There were also differences between Pinker’s and Torres’s views on the dangers of artificial intelligence (AI), which are differences of opinion, not “misleading claims.” Torres proffered no substantive criticism of the data Pinker presents in EN to show progress in the moral and physical well-being of our species. Those data, after all, are what support the main point of the book.

But I wrote to Steve, asking him what he thought about the Salon article. He replied yesterday, and I thought his reply was substantive enough that it deserved to be shared here. I asked him if I could post it, and he kindly agreed. Steve’s email to me is indented below.

Hi, Jerry,

Thanks for asking about the Torres article. Phil Torres is trying to make a career out of warning people about the existential threat that AI poses to humanity. Since EN evaluates and dismisses that threat, it poses an existential threat to Phil Torres’s career. Perhaps not surprisingly, Torres is obsessed with trying to discredit the book, despite an email exchange in which I already responded to the straws he was grasping.

His main objection is, of course, about the supposedly existential threat of AI. Unfortunately, his article provides no defense against the arguments I made in the “Existential Threats” chapter, just appeals to the authority of the people he agrees with. This is fortified by the rhetorical trick of calling the position he disagrees with “denialism,” hoping to steal some of the firepower of “climate denialism.” This is desperate: climate change is real, and accepted by 97% of climate scientists. The AI existential threat is completely hypothetical, and dismissed by most AI researchers; I provide a list and the results of a survey in the book.

Torres disputes my inclusion of Stuart Russell in the list, since Russell does worry about the risks of “poorly designed” AI systems, like the machine with the single goal of maximizing paperclips that then goes on to convert all reachable matter in the universe into paper clips. But in that same article, Russell states, “there are reasons for optimism,” and lists five ways in which the risks will be managed—which strike me as reasons why the apocalyptic fears were ill-conceived in the first place. I have a lot of respect for Russell as an AI researcher, but he uses a two-step common among AI-fear-sowers: set up a hypothetical danger by imagining outlandish technologies without obvious safeguards, then point out that we must have safeguards.  Well, yes; that’s the point. If we built a system that was designed only to make paperclips without taking into account that people don’t want to be turned into paperclips, it might wreak havoc, but that’s exactly why no one would ever implement a machine with the single goal of making paperclips (just as no complex technology is ever implemented to accomplish only one goal, all other consequences be damned. Even my Cuisinart has a safety guard). An AI with a single goal is certainly A, but it is not in the least bit I.

The rest of Torres’s complaint consists of showing that some of the quotations I weave into the text come from people who don’t agree with me. OK, but so what? Either Torres misunderstands the nature of quotation or he’s desperate for ways of discrediting the book. The quotes in question were apt sayings, not empirical summaries or even attributions of positions, and I could just as easily have paraphrased them or found my own wording and left the author uncredited. Take the lovely quote from Eric Zencey (with whom I have corresponded for years), that “There is seduction in apocalyptic thinking. If one lives in the Last Days, one’s actions, one’s very life, take on historical meaning and no small measure of poignance.” In our correspondence, Zencey said, “I did caution about the narcissistic seductiveness of apocalyptic thinking,” and added “that doesn’t make it wrong.” Indeed, it doesn’t, but it’s still narcissistically seductive, which is why I quoted it, perfectly accurately in context. As I wrote to Zencey, I think his argument that we’re approaching an apocalypse is in fact wrong, since it relies on finite-resource, static-technology thinking, and ignores the human capacity for innovation. But there was no need to pick a fight with him in that passage, since I examined the issue in detail in the chapter on The Environment. The bottom line is that I did not attribute to Zencey the position that apocalyptic fears are groundless, just that they are seductive (as he himself acknowledges), and he deserves credit for the observation.

Torres was similarly distracted by the quote from a New York Times article that “these grim facts should lead any reasonable person to conclude that humanity is screwed.” These pithy words, which I wove into an irreverent transition sentence, were meant to introduce the topic of the discussion, namely fatalism and its dangers. I certainly wasn’t claiming that the Times writer was agreeing with any particular position, let alone the entire argument! (Sometimes I think I should follow some advice from my first editor: “Never use irony. There will always be readers who don’t get it.”)

Just as pedantic is Torres’s cavil about the hypothetical (indeed, deliberately far-fetched) scenario of growing food under nuclear-fusion-powered lights after a global catastrophe. Torres multiplies the muddles: I was not claiming that anyone endorsed this sci-fi scenario (though a footnote credited the pair that thought up the idea), and my addition of nuclear fusion to the scenario is consistent, not inconsistent, with their observation that current electricity sources would be non-starters.

In a revealing passage, Torres seems to think that EN is about “optimism” versus “pessimism,” and defends his fellow runaway-AI speculators as “optimists” because they are the ones who believe that “if only we survive, our descendants could colonize the known universe, eliminate all disease, reverse aging, upload our minds to computers, radically enhance our cognition, and so on.” I don’t know whether we’ll ever colonize the known universe, but Torres is already writing from a different planet than the one I live on. It’s true that EN does not weigh apocalyptic sci-fi fantasies against utopian sci-fi fantasies. The threats I worry about are not AI turning us into paper clips but rather climate change, nuclear war, economic stagnation, and authoritarian populism. The progress I endorse is not colonizing the universe or uploading our minds to computers but protecting the Earth, eradicating specific infectious diseases, reducing autocracy, war, and violent crime, expanding education and human rights, and other worldly hopes.

As for the supposed scholarly errors: Torres pointed out that the “Center for the Study of Existential Risk” should be “Centre for the Study of Existential Risks.” I thanked him and corrected it in the subsequent printing.

Thanks again, Jerry, for soliciting my response, and sorry for going on so long. If I had more time I would have made it shorter.


Torres’s piece, I conclude, is not an act of judicious and scholarly criticism, but an anti-Pinker hit job motivated by things other than a concern for factual accuracy. You can see that from Torres’s last paragraph alone, in which he invites readers to go after Pinker’s book further and implies that Pinker is guilty of duplicity.

But you be the judge.

91 thoughts on “A response from Steve Pinker to Salon’s hit piece on ‘Enlightenment Now’”

    1. It is an ironic use of a well known quote that is attributed to many authors. The first may have been Pascal.

      1. No, I think he’s rather clear: “the gains we make in artificial intelligence could ultimately destroy us. And I think it’s very difficult to see how they won’t destroy us, or inspire us to destroy ourselves”. Unless you consider Harris himself one of those who “know the least about it”.

        1. “And I think it’s very difficult to see how they won’t destroy us”
          Classic argument-from-incredulity (failure of imagination) phrasing from Sam Harris right there. He is an avid practitioner of the argument from ignorance/incredulity; see, for instance, his mystification of consciousness in a lot of his writings, and his over-reliance on intuition and his own imagination as a gold standard for measuring reality.

        2. Ok, just refreshed my memory on Harris’ AI position. He thinks that AI will eventually have goals that are at odds with humans’, and that, once we have lost control over them, they will wipe us out. That seems wrong on many levels, but we only need to look at one: we are a long, long way from having AI technology with this kind of autonomy. If it ever happens, the world will have changed radically, and any ideas now about the dangers are bound to be clueless. Furthermore, even if that were a possibility, we know so little at this point that it is impossible to know what to do now to prevent it. Stopping work on AI now based on this fear would just be stupid.

  1. Very good response by Pinker, and more civil than most would be. The man has much patience. If AI is as terrible a threat as Torres says, I wonder why I am not hearing the same from all the intelligence agency heads who are testifying before the Senate as we speak. They cover many areas of threats today, but I did not hear much about his theory. My understanding is that China is working harder in that area than anyone, so if that is the big threat, what does he suggest we do about China?

  2. I do worry about considering AI on its own.

    The combination of AI, drone technology and terrorism is extremely concerning, for example. It’s not hard to imagine others.

    Also, I think we need to be careful not to ignore hackers, criminals and the like in our thoughts.

    AI will increasingly form part of our internet of things connected world.

    Finally, there may be social risks as whole job classes become obsolete.

    1. The economists in my Twitter feed point out that every advance in technology put some professions out of business, but the new technology also opened up opportunities to replace the old.

      Personally, I think that’s the “induction fallacy”, but let’s hope they’re right.

      1. Yes. Exactly the kind of thing that holds true until it doesn’t. With some history behind us, we can attempt to guess at what those support jobs might be. I’ve done so and it doesn’t look good. However, real AGI (Artificial General Intelligence) is still not close to happening, so there will be plenty of openings in Customer Service for some time.

  3. It seems Torres’ review is really just another misguided hit piece intended to strike a blow for the pessimists, albeit with an AI angle thrown in.

    AI is something I know quite a bit about. I don’t believe we’ll confront an AI apocalypse anytime soon. Instead, the danger will come from a combination of modern technologies wielded by evil people. More and more inexpensive, off-the-shelf technology components can be assembled by virtually anyone. Even if we believe that countries will show restraint in unleashing such things, we already know that terrorists won’t.

    There is no better demonstration of what I’m talking about than the “Slaughterbots” video. Bear in mind that this is all based on cheap technology pretty much AVAILABLE TODAY! It’s definitely not an AI like we see in the movies but there would be AI technology involved: face recognition, navigation, etc. If you haven’t seen it, please watch:


    1. That was also my reaction. It’s not so much AI going mad; it’s lone nutters being able to access the sort of technology that previously only states had access to.

      Imagine if John Gray had access to killer drones?!

      1. Yes, and let’s not also forget the threat from genetics. Quick, cheap, miniature DNA analysis may allow terrorists to target individuals or entire races. And then there’s the threat from genetically engineered bacteria and viruses. Plenty to worry about before we even get to the robot overlord problem.

  4. In his Salon article, Torres links to his 35-page analysis of Pinker’s take on existential threats. I have no idea whether Torres’ critique has any merit, but the fact that he took the time to write it indicates the importance of Pinker’s book in stimulating discussion among the intellectual elites (although probably not the masses) on critical topics that often take a back seat to the everyday transitory and trivial news. Only good can come of a no-holds-barred discussion of serious issues and challenges confronting the world and how to deal with them, even if the problems are not as acute as some of Pinker’s critics think.

    1. Thanks for this. Yes, the Salon article was essentially an attempt to get people to read the much more substantive critique that I published. No, this was not a “hit piece,” but calling it that, as Coyne does, is one way to throw it into the trashcan before anyone’s read it. I spoke to many, many experts on the related issues, and virtually all agree that Pinker’s chapter is one of the sloppiest specimens of bad scholarship they’d seen. (PS. If you’re reading this, I’m shocked — Coyne has, in the past, acquired the habit of deleting or blocking any comment of mine that, however politely, disagrees with him.)

        1. Very good of you to come here and defend yourself, Mr Torres. I (for one) am now reading the longer “Project for Future Human Flourishing” piece that you link to in your Salon article. I’m only a few pages in, but I have to say, I hope it gets better. You and Pinker actually agree on numerous things, and so far your rebuttal to Pinker’s criticism of the existential-risk “pessimists” amounts to little more than “says you.”

          I’ve got more to read, but wanted to say thanks for coming. I hope you will engage some of the folks here.

        2. Well, I’ve read your piece. It strikes me as not so much a rebuttal as a complaint about tone. You seem to be upset mostly because you didn’t like the way Pinker frames the issues around existential threats, and because he doesn’t pay enough homage to the complexity of your and your colleagues’ work. You spend a good deal of time on this and surprisingly little on examples of Pinker’s perfidy. You make some good points, but the paper is far longer than it ought to be (complaints about tone are of little interest to me).

          It looks like you are not interested in a discussion here. That’s ok. I get it. Only so many hours in a day. But here is one reader who is not persuaded by your argument.

        3. Have you apologised yet, for setting up a dozen fake Twitter accounts to abuse and harass people…?

          You’re a complete hack and a #NewRacist. Feck off.

  5. I was reading with amusement 😉 the supporting hit piece from (of course) PZ. Digging a little deeper into links of links it soon enough became clear that there was really nothing there of any substance. Even if every one of the accusations were true, it would not put the slightest scratch on the central thesis of EN.

    1. I read the PZ piece but I haven’t read either EN or Torres’ rebuttal.

      I disagree with your assertion that it doesn’t put a scratch on the central thesis of EN. The way it works is this:

      “Torres is critical of one chapter of the book. Therefore Steven Pinker is probably wrong about everything, which is good because Pinker’s ideas conflict with my world view.”

      It’s not a logically sound argument but discarding logically sound arguments in favour of irrational emotional ones that we like better is something humans do very well.

      In a perfect world, arguments would stand or fall on their own merits. This is not a perfect world, so things like the above do damage to good arguments.

    2. Peez hates Pinker for supporting evolutionary psychology, and Peez “despises” EP (or at least his comical strawman version of it) cuz it clashes with his SJW, anti-science agenda. Peez also thinks Pinker is “alt-right”.

      But keep in mind, this comes from a man who believes in Striving, that male nipples mean everyone is a little gay, and that it should be legal to have sex with dogs and dolphins.

  6. Chrissake, some people can’t stand good news.

    Meliorism — the notion that humans can achieve progress that improves the world — has underlain traditional Liberalism at least since late 19th century philosophers like William James and John Dewey. Pinker provides persuasive evidence to support that notion.

    1. Just look at the tragic development of the airplane. Millions of people killed in crashes, bombing, you name it. And there is always that old saying – if g*d had wanted us to fly….

  7. I recall the plot–though nothing else–of a science-fiction story read long ago. It answered the perennial question of why there are always paperclips but never coat hangers: the coat hangers were busy silently morphing into paper clips.

    Does anyone know the title/author of this piece?

    1. “Or All the Seas with Oysters.” I love that short story! Paper clips were the babies, coat hangers were the next stage, and the ‘adults’ of the secretly invading aliens were bicycles, which come in male and female forms.
      You remember how the story ended?

      1. Hmm, coulda sworn it was paper clips.
        And what is that bicycle sitting outside my office? It wasn’t there befo

  8. Wonderful response — and good to see Prof Pinker now prioritizing “protecting the Earth.” That’s a rare position for Panglossian technophiles like him.

    I’ve been a fan of Pinker since “The Blank Slate,” and I of course agree with the thesis of EN, but his chapter on the environment reminded me of the juvenile contrarianism of a Slate article.

    1. It’s quite uninformed and facile to call Pinker a “Panglossian technophile.”

      Among many reasons: Pangloss called this the Best of All Possible Worlds. Pinker believes just the opposite: the world can keep getting better and better (as it has for 200 years, unless you like poverty).

  9. Pinker is a giant and he’s produced a monumental work based on solid scholarship. That a few wannabes are nipping at his heels and correcting from “Center” to “Centre” isn’t surprising. I’m surprised he took the time to even engage Torres; I guess that just shows he’s a class act.

  10. “The rest of Torres’s complaint consists of showing that some of the quotations I weave into the text come from people who don’t agree with me. OK, but so what?”

    As long as such quotations haven’t been wrenched from their context, there’s nothing at all wrong with quoting those on the opposite side of an issue. To the contrary, it tends to bolster one’s position — just as introducing admissions made by a party-opponent does at a trial.

    Hell, during the Reagan-Bush years, it seemed like the most popular phrase in town among Washington, DC, conservatives was “As even the liberal New Republic says …” 🙂

    1. There’s a pair of books I read–“Life Everywhere” and “Rare Earth”–that did a fantastic job of quoting folks who disagree with the thesis of each book. The idea is that if your opponents agree with some idea that you are using to support your conclusion, it’s probably got a good chance of being true.

      Sadly, most people today can’t understand critical thinking. They don’t realize that one can agree with individual arguments, without agreeing with the general conclusion. Most people see arguments as a package deal–you accept everything or nothing.

  11. Pinker strikes the nail on the head with this one-liner: “Since EN evaluates and dismisses that threat, it poses an existential threat to Phil Torres’s career.”

  12. Most of the criticisms I’ve seen of Pinker and EN have come from right-leaning theists or religion sympathizers. They always claim they’re attacking poor scholarship, but I’ve always suspected that they’re threatened by a corollary to Pinker’s thesis: the world has become a nicer place as we’ve become less religious.

    In this case I side with Torres on the threat posed by AI, but his attack was unwarranted.

    1. “Prior to colonisation, most people lived in subsistence economies where they enjoyed access to abundant commons – land, water, forests, livestock and robust systems of sharing and reciprocity.”

      ‘Abundant subsistence’ is an oxymoron. Further, that’s just a fairy story; human history is filled with mass migrations spurred by drops below subsistence level, with the invaders forcibly dispossessing the natives of their ‘abundant commons.’

      I see that Hickel is fond of quoting Marx, which makes me wary, as Marx got nearly everything wrong.

      Hickel also downplays the danger of population growth, instead favoring ‘redistribution’ of wealth to address poverty in what used to be called ‘The Third World’. I’d be curious to see Hickel’s action plan for that. In any case, total ‘growth’ (however one defines it) is already outstripping the planet’s carrying capacity. Evening things up won’t fix that.

      But anyway — what specifically in EN does Hickel disprove, & how?

      1. I read the Guardian piece and I picked out a quote from it that showed how it was utterly wrong. Then I came back here and found you picked the exact same quote.

        It’s totally ridiculous. The land had to be tilled with back-breaking work for all the daylight hours that were available. The water would kill you unless you drank it in the form of alcoholic beverages. The livestock would die unexpectedly or run away or be eaten by things that came out of the forests. The robust system of sharing and reciprocity works better if you have a commonly agreed unit of sharing and reciprocity, i.e. money.

        If you got appendicitis, you died. If you had complications in childbirth, you died. If you got any kind of significant injury, you died. If your cow caught a disease and died, you died. If it rained at the wrong time in the growing season, you died. If you drank the water, you died. If somebody else decided they wanted your land, you died.

      2. No, it isn’t. Subsistence refers to the means by which goods and services are obtained, not their amount. Moreover, the passage spoke about the *commons*, which are not owned by anyone in particular.

        1. As Jeremy points out, it still meant living on the edge.

          A commons only functions so long as the population is well below the environment’s carrying capacity. After that, you get the Tragedy of the Commons.

          Nor were ‘sharing & reciprocity’ especially pronounced. Inequality of wealth distribution was profound, and often institutionalized via caste systems.

          Hickel’s ‘pre-colonization’ Eden is a fairy story. None of its conditions obtain outside of a smattering of hunter-gatherer, nomadic communities. To argue that the human condition was net much better than it is today is a naive fantasy.

            1. Hunter-gatherer existence was no picnic. But in any case, none of the idyllic cultures ruined by capitalism or colonization that Hickel imagines in his head were hunter-gatherers. Marxists like Hickel are delusional.

  13. Professor Pinker, you seem to be treating AI and climate change differently. With AI, you seem to admit that, as Russell says, unaligned AI *might* pose an existential threat to humanity. But you dismiss the fear on the basis that it “will be solved”.

    Yet, many experts, including Russell, are worried that it *won’t* be solved, and that AI alignment is a difficult problem. Hence it qualifies as an existential risk. Similarly, climate change is, if not an existential risk, then a global catastrophic risk, *if left unsolved*.

    For solutions to emerge and be implemented, research into AI alignment has to be funded. Yet, in the book (which is overall excellent) you mock and dismiss places such as the University of Oxford’s Future of Humanity Institute.

    Surely you should be praising them for adopting the problem-solving mindset you advocate.

    1. I’m in agreement with Stuart Russell that we need to try hard at solving the value-misalignment problem. Can you supply a Steve Pinker quote where he mocks FHI [Bostrom] & similar places?

        1. See the first two pages of Pinker’s chapter on existential threats:

          “Rees tapped a rich vein of catastrophism… Techno-philanthropists have bankrolled research institutes dedicated to discovering new existential threats and figuring out how to save the world from them, including the Future of Humanity Institute…”

          If Pinker isn’t mocking them here, then fine. But he seems to pit himself against Rees, Bostrom, and others, when in actual fact they aren’t catastrophists: they are, like Pinker himself, problem-solvers.

  14. So, to be clear, you’re okay with Steven Pinker using statements out of context to support his position?

    1. The ones cited in the letter from Pinker to Dr PCC(e) were NOT taken out of context. Did you even read the letter?

      Now, if you have other examples, I’m sure we’d like to see them.

    2. Come on Ryan – supply some context yourself!
      A bit of ‘he said she said’ to illustrate your exact point. What you’ve written is borderline trollish.

  15. After Y2K and the internet, I concluded that the future of technology is so hard to predict, even for the most informed experts, that there will always be an element of crystal-ball gazing to it (maybe that’s overstating it a little; maybe stock-market speculation would be a more apt metaphor).

    I always enjoy reading Pinker’s thoughts, but to some extent I think these topics do come down to intuitions and temperaments. If you are reassured by Pinker’s Cuisinart comparison, you may be reassured by Enlightenment Now. If, like me, you are more inclined to go “OMG, I never thought of that!” and now whip around in the kitchen at random intervals to squint suspiciously at your Cuisinart, giving it the side eye and wondering if it is conspiring with the toaster to turn you into a baked good, then your intuitions are probably forever skewed towards “waiting for the other shoe to drop.” Now, if you’ll excuse me, I’m going to try to sneak up on the Cuisinart while it’s not looking.

  16. Surprise, people want to poop in the punch bowl! I haven’t completed my reading of EN; however, what I have read so far seems well researched and well written. I am at a disadvantage, it seems, because as much as I enjoy the topics, I am not an “expert” who can smell the whiff of Bravo Sierra when it appears. My leap of faith lies with the in-depth coverage of these topics and the “fleshing out” of his ideas, which makes Steve’s EN so compelling and seemingly BS-free. I also watched a TV discussion between Steve and a naysayer of EN. Steve’s demeanor, his overall calm acceptance of criticism, and his forthright defense of his position were notable for many reasons, but mostly, to me, because he was utterly unflappable and responded with total respect. In today’s world of idea exchange and argument, his attitude is an example of how people of dignity, intelligence and manners present themselves. Thank you, Mr. Pinker.

  17. Before I read EN, I used to argue with many naysayers and pessimists that the world had actually made enormous progress over the past seven or eight decades, and that many of the problems that persisted were part of human nature and its constant drive to make things better. Steven Pinker put things into the right perspective and dismissed the superficial arguments of pseudo-intellectuals and ideologues who claim to have better arguments to guide the world in the future.

  18. I also wondered where the actual factual errors were that Torres was trying to point out. He never took on the core subject matter, just lesser issues that he had a problem with. It also seemed like he wanted to write a book, but found it easier to assert his ideas in the guise of a critique.

  19. “Well, yes; that’s the point. If we built a system that was designed only to make paperclips without taking into account that people don’t want to be turned into paperclips, it might wreak havoc, but that’s exactly why no one would ever implement a machine with the single goal of making paperclips.” (Steven Pinker)
    But capitalism (we have morphed original capitalism into oligarchic and monopolistic capitalism) is designed to make money, with its concomitant disastrous results for society, the environment and culture. Failing a substantial modification, more is sure to come. The free market is not G-d.

  20. I agree with the thrust of Pinker’s work, but I do agree with the critics about the stuff on AI (and climate and markets, as it happens, but one thing at a time).

    My background is both philosophy of technology and computing and I now work in IT security, in particular in software and application security. I mention this just to get the background clear.

    From my perspective, the idea of asking AI researchers whether they perceive dangers is (a) a bit weird, since we normally leave the dangers of software to software-security professionals, who were not much discussed, and (b) liable to result in cherry-picking. Presenting dueling “internal” experts seems to be the state of the conversation, and that’s unfortunate; it illustrates why (a) is a problem. People like me exist as a kind of “peer review” (whether you count me here as a philosopher or as an IT-security practitioner). *Those* views are the useful ones to discuss. So what are the arguments?

    Well, here’s one: any computer system sufficiently complex is likely to have security flaws, some of which might be exploitable to do things to people who aren’t even party to the relevant agreements if any. This is not merely theoretical; one example to look clearly at is the ongoing IT security nightmare that is the Internet of Things.

    So, for example, when Pinker says (I paraphrase) “oh, but we wouldn’t give it this motivation,” he misses two things: (1) it is not clear that one *programs* motivation at all (it may, rather, be emergent in the ANN or whatever at a greater level of complexity), and (2) regardless, *the motivation can be compromised* (see above).

    There is more, but this should start people out.

    1. You are right. AI professionals have a vested interest in minimizing the danger to humans, so we have to look carefully at their answers. However, asking people who don’t have the proper background is not the solution.

      You really hit the nail on the head with the security concerns. The danger with future AI is one we have now: humans who hack into our systems in order to do harm. This will always be a problem and, IMHO, it is the main one with AI in the foreseeable future. On the other hand, AI/robots removing their own constraints, hacking themselves or each other, won’t be a problem for a long time. We are a long way from that kind of technology.

      I do disagree with your statement regarding giving programs motivation. Every program built with a purpose in mind is “motivated” to perform that purpose. The “slaughterbots” in the famous video are a good example: they are programmed to recognize certain faces and attack those people. I have no problem calling this “motivation”. On the other hand, I am very doubtful that motivation will spontaneously arise in ANNs. First, I don’t believe ANNs will play that big a role in AGI; Gary Marcus has written a lot on this subject. Second, ANNs have shown undesirable behavior, such as favoring black faces when trained to detect criminal behavior, but this is really just a bug arising from a poor specification or training set. Unless such a problem occurs in an AGI whose human-designed purpose is to kill people, it shouldn’t be a problem. I don’t see motivations distant from an AGI’s original purpose arising spontaneously. We are a very long way from the time when AGIs can come up with their own radical ideas and execute them.
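      The "bug in the training set" point above can be made concrete with a deliberately trivial sketch (my own construction; the data and labels are invented, and a real ANN is replaced by a simple count-based rule so the mechanism is visible): a model trained on skewed data confidently echoes the skew back, with no "motivation" anywhere in sight.

```python
from collections import Counter

# Trivial "classifier": predict the most common label seen for a feature value.
def train(examples):
    by_feature = {}
    for feature, label in examples:
        by_feature.setdefault(feature, Counter())[label] += 1
    return {f: counts.most_common(1)[0][0] for f, counts in by_feature.items()}

# Skewed data: group "b" was sampled almost only where the label is "flagged".
biased_data = [("a", "ok")] * 50 + [("b", "flagged")] * 9 + [("b", "ok")] * 1
model = train(biased_data)
print(model["a"])  # "ok"
print(model["b"])  # "flagged" -- the sampling skew, reproduced as a rule
```

      The model does exactly what its training procedure specifies; the undesirable behavior lives entirely in the data it was given, which is the commenter's sense of "just a bug".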

      1. I’m not wedded to ANNs as the architecture, though something like them is likely a big part of the solution. Regardless, it was just an example.

        As for motivation: yes, in the same way that we can profitably extend the notion of belief, we can extend other psychological traits. But then my point applies, just “later on”.

        I would add that there is risk beyond deliberate harm: there is also accident.

  21. I love reading just about anything Steven Pinker writes. And although he could have been a bit more scathing in his response, I suppose that for a Canadian it was about as scorched-earth as it gets (joking).

    1. Aren’t you Canadian, Daniel?
      So how will you vote for Pinker as US Pres?

      You don’t say why your vote is for Pinker, even though you write “That’s why my vote is Pinker…” LOL

      TBH This is your first post [I think] & you managed to squeeze in a link to your business [“BOOK ME” displayed in the top ribbon]…

      Not cool.

      1. The “Pinker for Pres” piece was written in a satirical vein. It was intended as a fun piece with lots of support for Steve.

        The “Book Me” category on the blog shows a list of books that I am working on or have written. I use “Book Me” for consistency between the planksip® FB page and the blog (planksip.org). Perhaps a failed attempt (obviously) at being clever. I love books is all I am trying to convey. It’s a rather undeveloped section of the blog while I am still trying to finish the Quaternion Correlation Quarterly.

        I am guessing that you feel like I am trying to get you to “Book Me” for a speaking engagement or something of the sort? Point well taken. Perception is certainly context dependent.

        Honestly Michael, I am pursuing my passion for writing, creating and learning for the sake of learning. If any of this offends you then I sincerely apologize for not making my message clearer.

        Enough about me. How about you? What are you working on? Perhaps we could find some common ground for some collaboration on a worthwhile conversation?

  22. Olle Häggström has a good overview of Pinker’s track record on this topic in these two posts (with links to further details)

    Quote from Häggström’s second post:

    “Pinker has kept on repeating the same misunderstandings he made in 2014. The big shocker to me was to meet Pinker face-to-face in a panel discussion in Brussels in October 2017, and hear him make the same falsehoods and non sequiturs again and to add some more, including one that I had preempted just minutes earlier by explaining the relevant parts of Omohundro-Bostrom theory for instrumental vs final AI goals. For more about this encounter, see the blog post I wrote a few days later, and the paper I wrote for the proceedings of the event.”

    After following this saga for a while, I wish Pinker would, as a first step, do this:

    In his own words, write a one-page description of the Omohundro-Bostrom theory of instrumental vs. final AI goals and the reasoning for how that (allegedly) generates existential risk.

    Because I’ve yet to see any evidence that Pinker understands it.

    As a second step, Pinker could explain how that in fact doesn’t generate existential risk.

    1. I don’t find Häggström’s arguments in these posts at all compelling. As he suggested, I re-read Stuart Russell’s Edge piece. My take is that Pinker’s quote from Russell’s work, “Nevertheless, there are reasons for optimism,” accurately summarizes it. Here’s Russell’s concluding paragraph:

      “I suppose this amounts to a change in the goals of AI: instead of pure intelligence, we need to build intelligence that is provably aligned with human values. This turns moral philosophy into a key industry sector. The output could be quite instructive for the human race as well as for the robots.”

      He is only calling for AI to be more aligned with human values, something no one would dispute, not invoking a doomsday scenario.

      This is just Häggström defending his profession and his collaborator, Torres. Just like Torres, he finds Pinker’s dismissal of the AI existential threat an existential threat to his career.

      I suspect everyone involved in AI takes the existential threats seriously, in that they think they are worth researching and talking about. In that sense, Häggström’s and Torres’s careers are not in jeopardy. Instead, just as with some on the Left, they don’t like Pinker’s positive take and how it reflects on their respective worldviews. Too bad.

      1. Regarding your assertions about Häggström’s motivations: AFAICT, Häggström held a position as professor of mathematical statistics at a top university in Sweden for many years before he began devoting some time to research on AGI-related existential risk. I can see no evidence of him gaining economically or career-wise from that adjustment in his research focus. Do you have any such evidence?

        (COI estimate: I don’t know Häggström but do regularly read his blog and I’m generally sympathetic to his work on AGI and risk.)

        Regarding Stuart Russell: Pinker’s EN asserts that Stuart Russell is one among “AI experts who are publicly skeptical” that high-level AI poses the threat of an existential catastrophe. (Pinker 2018, ch. 20, footnote 20)

        In his short Edge text Russell does write about AGI related existential risk (“perhaps even species-ending problems”) and objects to one common way of dismissing such risks: “Some have argued that there is no conceivable risk to humanity for centuries to come, perhaps forgetting that the interval of time between Rutherford’s confident assertion that atomic energy would never be feasibly extracted and Szilárd’s invention of the neutron-induced nuclear chain reaction was less than twenty-four hours.”

        It is true that Russell then also writes “Nevertheless, there are reasons for optimism” and briefly describes how AI researchers can constructively work on value alignment and safety.

        But I cannot see how that ending justifies the much stronger claim made by Pinker, namely that Russell is skeptical that high-level AI poses the threat of an existential catastrophe. Nowhere (that I know of) does Russell express skepticism that there is such an existential threat, nor has Russell ever, AFAICT, claimed that we are certain or even very likely to solve or evade it.

        The Edge text is of course very short. To get clearer on Russell’s beliefs, we, and Pinker, could easily access other work on Russell’s website here, including the informative FAQ. The website also links to the following 2016 text that Stuart Russell co-authored for MIT Tech Review, titled “Yes, We Are Worried About the Existential Risk of Artificial Intelligence”.

        1. The motivations I’m talking about are present in pretty much everyone: if one works in a certain area, one is going to have a negative reaction to ideas that make that area less important. I think Häggström’s work is important; we should be looking at the risks of AGI, existential and otherwise. I’m guessing that Pinker would agree that researchers should continue to look at AGI’s risks and that Häggström’s work should continue. Pinker is just coming down on the side of its not being an existential risk, not arguing that it shouldn’t be studied. As I see it, Häggström and associates are overreacting. I have no problem with them arguing their side of things but, as dutiful readers, we must recognize where they are coming from.

  23. I missed this earlier in the month.

    But Phil Torres is a NewRacist hack and abusive troll.

    He set up dozens of accounts on Twitter just to harass people.
