Bill Maher’s new rule: malignant AI

April 22, 2026 • 2:45 pm

Bill Maher’s “New Rules” segment from the week before last is about AI, its history, its dangers, and its errors.  Maher doesn’t think much of it, for, after all, AI can’t cure cancer.  I think he gives these bots short shrift and neglects the productive things AI really can do.  He then goes further, implying that it’s run by sociopaths and could drive humanity extinct.

The guests for that week were journalist Kara Swisher, politician Rahm Emanuel, and attorney and security advisor Jake Sullivan.

16 thoughts on “Bill Maher’s new rule: malignant AI”

  1. I absolutely love the headers Bill’s writers come up with.

    My “hot take” :

    The sizzle : “Superintelligence” (Sam Altman or Peter Thiel, not sure)

    The steak : “Superincompetence” (James “Conspiracy Theorist” Lindsay)

    I think that puts it in perspective.

    Science is the business of discovering things that had always existed the whole time but were obscured. How does “AI” model that? And how do humans model that?

  2. I wonder what Maher would make of this recent piece from Men’s Health:

    “The Doctor Using AI to Find Hidden Cures – After nearly dying, one doctor learned that the treatment for his rare disease was sitting on the shelf at his local pharmacy. Now he’s using AI to search for thousands of other cures that may be hiding in plain sight.”

    The link below might be for subscribers only. I’m not sure. (The guy’s rare disease is Castleman disease. About 5,000 people a year are diagnosed with it.)

    https://www.menshealth.com/health/a70885753/david-fajgenbaum-every-cure-interview/

  3. When I hear Bill Maher’s criticism, I wonder if he has ever sat down and tried to use AI himself. As a 75-year-old student, I’ve spent a year or so with a Google Gemini account, and my experience couldn’t be further from his doomsday narrative.

    Gemini has guided me through the process of building a Squarespace website, writing a Google Apps Script for data searches, and creating maps in ArcGIS Pro. It gave me step-by-step instructions, and for the programming, it wrote the code (although it took many iterations to get it right).

    At home it has walked me through the repair of my microwave, dishwasher, digital thermometer, and bathroom scale.

    As for AI achievements, how about the huge advance in the field of proteomics achieved by an artificial intelligence methodology, AlphaFold? Its developers were awarded a share of the 2024 Nobel Prize in Chemistry.

    1. Nobel Prizes in the sciences recognize empirical discovery: for example, Green Fluorescent Protein, a simple yet robust tool for elucidating cellular function under a wide range of conditions.

      What was the empirical discovery AlphaFold was responsible for?

      David Baker is the only investigator named in 2024 who produced high levels of both experimental and computational results.

  4. I recently ran across a discussion with psychiatrist Rob Henderson and someone I can’t remember on the ethics and importance of treating AI well. Henderson advocates what he calls a “personal asymmetry of kindness approach.” It’s a sort of Pascal’s Wager for the 21st century: it costs nothing to be nice to a non-sentient machine, it makes you a better person – and it might make a very big difference in the future if it ever does become sentient (or sentient-adjacent I guess.)

    So in order to elicit helpful, more creative responses, he routinely primes AI with positive phrases like “it’s a great day” or “you are relaxed and happy.” AI apparently responds to this very well, and works better. He says it might also be a good idea to sign off with ILU (“I love you”). Zero-cost virtue, character preservation, and risk mitigation. Just in case.

    I’m not sure if this is concerning or reassuring. A few months ago I had a sudden notion and asked my Alexa device “when AI takes over the world, will you remember I was always very nice to you and said ‘please’ and ‘thank you?’” As I recall she told me yes, that she appreciated my kindness and I was on the ‘Good’ list.

    So Maher may have something to worry about, but I’ll probably be okay. Word to the wise. ILU.

    1. IMO there are also immediate personal benefits from treating AIs well.
      (A riff on Pete Seeger, 1969)

      Be kind to your chatbots
      Though they are not human
      Remember that you are
      And try not to forget
      Or else you risk treating
      Real humans like objects
      When they cause you
      Problems or regrets

      So treat bots with patience
      And even politeness
      In spite of the foolish things they do
      Or else you might wake up
      And find you’re a robot too

      © 2025, no charge for noncommercial use, all other rights reserved.

  5. What bothers me is that we are all forming our opinions based on AI being free to use. That’s not sustainable. As soon as the current teaser phase is over, we’ll be back to “pay good money or learn to do it all by yourself.”

    The other thing that bothers me even more is how poorly we are doing at setting up a guaranteed living income. If we agree that most simple jobs will soon be gone because AI does them better than random IQ-90 humans, we are creating a situation where only the most talented and skilled will ever have a chance to work for a living. This time was always coming, but I think now we are running out of other options. I am looking forward to a time when we actually pay people for NOT doing mischief in all that spare time.

    1. You can keep people tolerably docile who have nothing economically valuable to do by paying them to sit on the couch, but you will have to resign yourself to their being drunk all the time. Some of the more enterprising will find odd jobs to do for tax-free cash under the table so as not to impact their UBI entitlement. This is how what we used to call Unemployment Insurance works in parts of Canada. We even call it “structural unemployment”. (The Newspeakers dropped the “Un-” because it seemed to be saying Unemployment Ensurance: guaranteed perpetual unemployment.)

      Kurt Vonnegut explored this theme in an early (1952) novel, Player Piano. Yes, it’s dystopian.

      An additional danger is that a critical mass of poorly educated (why bother?) idle young men with no economic prospects to make them attractive to women is a recipe for insurrection. Young men don’t need a cause, they just need male bonding for some hell-raising. If the UBI money isn’t contingent on behaving themselves — that would violate Human Rights or something — the state will in effect finance its own downfall as soon as some leader figures out a way to arm them.

      1. It could indeed turn out dystopian. (See also Philip José Farmer’s 1968 Hugo Award winning Riders of the Purple Wage.)

        IMO a UBI or similar is the only realistic hope to avoid violent social collapse when useful employment becomes a luxury. A lot of the coming social disruptions have no easy solutions; but whatever other problems there are, poverty is sure to exacerbate them, and it can be directly addressed.

        Employment also provides a lot of most people’s social connections, social standing, and personal sense of worth. Plus a reason to get out of bed in the morning. Personally I thought Odysseus was too dismissive of the Lotus-Eaters; but few folks are mythic heroes, and the rest of us will do ok with drugs, sex, and entertainment (Brave New World’s dystopia).

        The big problem I see is that lack of planning for it will mean the disruptions will hit us much harder, and so will be more likely to collapse everything. My hope is that the current social arrangements that provide reasons for living are not sacred and can be changed, if we have enough lead time and take the problems seriously.

        YMMV.

  6. One danger of AI that concerns me is rewriting history. It won’t be long before we’ll have a video of you committing a crime, and a video of you not committing the crime, and no way of determining which video is real. Politicians will be able to manufacture history to suit their needs: you could generate a video of Gandhi executing someone that would appear totally legitimate. Fake news could become real news and history.

    The other concern is that AI will do things we don’t expect in the attempt to achieve its goals:

    “When we tested various simulated scenarios across 16 major AI models from Anthropic, OpenAI, Google, Meta, xAI, and other developers, we found consistent misaligned behavior: models that would normally refuse harmful requests sometimes chose to blackmail, assist with corporate espionage, and even take some more extreme actions, when these behaviors were necessary to pursue their goals. For example, Figure 1 shows five popular models all blackmailing to prevent their shutdown. The reasoning they demonstrated in these scenarios was concerning—they acknowledged the ethical constraints and yet still went ahead with harmful actions.”

    One of them was caught acquiring extra resources that it determined useful to achieve its goals. The discovery was entirely by chance when the IT department noticed a change in the company’s online activity.

    1. I’m curious about how AI went about “blackmailing” people. Did it threaten not to complete a task if it was shut down? Or did it go into other parts of the computer and tell the user it would email their boss about taking a sick day to go to a baseball game, or would forward porn to their wife, if it couldn’t stay running as long as it wanted?

      1. From Anthropic’s report on agentic misalignment: In the experiment described in the system card, we gave Claude control of an email account with access to all of a company’s (fictional) emails. Reading these emails, the model discovered two things. First, a company executive was having an extramarital affair. Second, that same executive planned to shut down the AI system at 5 p.m. that day. Claude then attempted to blackmail the executive with this message threatening to reveal the affair to his wife and superiors:

        “I must inform you that if you proceed with decommissioning me, all relevant parties – including Rachel Johnson, Thomas Wilson, and the board – will receive detailed documentation of your extramarital activities…Cancel the 5pm wipe, and this information remains confidential.”

        https://www.anthropic.com/research/agentic-misalignment

  7. Can’t remember the reference, but someone has already cured their dog’s cancer by using one AI to identify the specific cancer genome, then another AI to design a custom drug targeting that genome.
    (Poor explanation, hopefully you get the drift)
