A short while back I added a new comment to “Da Roolz,” the list of posting guidelines that everyone should read (especially newbies). The last guideline now reads:
26.) I will tolerate no comments that are generated with AI. Even one of them will lead to instant banning for life.
Now I will be the judge of whether a comment is likely generated by ChatGPT or the like, but this one, which someone attempted to post on the thread after “Bill Maher: New Rules #1“, is surely the product of a bot. I won’t give the hapless writer’s handle:
Bill Maher’s “New Rules” segment, as discussed on Why Evolution Is True, delivers the comedian’s signature blend of sharp satire and cultural critique—this time tackling modern hypocrisy with his usual unflinching wit. The analysis highlights Maher’s ability to skewer both political extremes, though a deeper dive into his factual accuracy (or occasional oversimplifications) could add nuance. Fans will appreciate the curated highlights, while critics might crave more counterpoints. A thought-provoking read for those who miss Real Time’s mix of humor and hard truths.
Oy, my kishkes! All I can say is that if you post something this bloody obvious—something that doesn’t add anything to the discussion—you better find another site for your AI-generated lucubrations. And this person must now do that.
Excellent! Fighting back against mindlessness (kind of ironic, given that it’s “AI”) is always to be commended!
I really hate the way they call it AI when it’s not. ChatGPT has no intelligence at all.
AI™.
I’m curious as to why somebody would even WANT to post such drek.
When I (ahem.. over)post here I want to show off my knowledge, writing, humor, outrage, etc.
And youse all better appreciate it!
🙂
Farming the talent out to a machine in a forum like this seems pointless.
D.A.
NYC
We do appreciate it, David. Wonder if I could train a bot on the full set of D.A. WEIT comments and replies over the years?… A sort of David Anderson 2.0… and then run a Turing test… just thinking out loud on a stormy morning here.
HAAHa. Thanks Jim, though I think if you put together a mash-up of my comments (and more considered published articles) into a bot there might not be much “intelligence” there! Lotsa laughs though!
best,
D.A.
NYC
Yes, you are right, David; certainly published articles also! I thought of them just after I hit “post it” and was heading out the door to an out-of-town graduation. Otherwise I would have edited it in. I wonder what the lower bound on a training-set size is….
I agree with Jim: Thoust comments here are appreciated.
Either a typo or mis-Elizabethanism. Thy.
(Hmmm. Maybe add thou/thee/thy to my snark list of preferred pronouns, forsooth.)
Better to say nothing than use AI!
This gave me a good laugh this morning! Yes, that was undeniably a bot response. I feel sorry for all the high school and college professors who have to sort through similar dreck when grading papers. However, intelligent posters will find a way to tweak the replies, maybe by adding obvious tipographical errors to camoflagee there postins.
I see what you did there.
I suspect that soon we will have chatbots that can generate more humanly natural, less chatbotty, English and we will find it hard to tell who is human and who is a chatbot.
When a chatbot’s comments cannot be distinguished from those of an intelligent, thoughtful human, there will no longer be a reason to ban them.
I disagree! I read the comments because I’m interested in the opinions of other human beings. A chatbot is simply a predictive text program—”autocorrect on steroids,” as one computer engineer put it. It was trained on text from the Internet and has no firsthand experience of the real world. A lot of people have a mistaken notion that it’s some kind of “super intelligence,” but it’s not that at all.
Truly. It has no experience whatever, of anything. Because there’s nothing there to have experiences. There’s no there there.
And what about a truly novel idea or thought added to the discussion? Can chatbots do that? Aren’t they just the “Cliff’s Notes” of past human conversation, unable to provide a new take or innovative solution to a problem?
Sure. But most of the time, when someone expresses opinions and comments on a topic, the human isn’t adding novel thoughts or ideas either.
I don’t know of their back-story or motivations. AI has sure moved in, though.
In my freshman biology class I use online forums where students answer questions that go beyond the content of the class. Lately I have seen quite a few answers that are clearly AI-generated, and there are articles out there reporting that the current crop of youngsters uses it quite a lot to basically do all their homework for them. One suggestion I’d read was to use HTML code to insert invisible characters between words. To us humans the text looks normal, but to a text reader it should be gibberish. I tested that by copying and pasting such a question into ChatGPT, and… it immediately scrubbed out the HTML code so that the text was gibberish, and yet it still saw right through it and spat out a perfect answer! So I am using different tricks now, and I believe I am getting back control.
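For readers curious what such a trick can look like, here is a minimal sketch in Python of one common variant: salting the question text with a zero-width Unicode character rather than the HTML markup described above. The function name and the choice of U+200B are illustrative assumptions, not the commenter’s actual method.

```python
# Minimal sketch of the "invisible characters" idea described above, using a
# zero-width Unicode character instead of HTML markup. The function name and
# the choice of U+200B are illustrative assumptions, not the actual trick used.

ZERO_WIDTH_SPACE = "\u200b"

def salt_question(text: str, filler: str = ZERO_WIDTH_SPACE) -> str:
    """Insert an invisible character after every space between words."""
    return (" " + filler).join(text.split(" "))

if __name__ == "__main__":
    question = "Explain how natural selection differs from genetic drift."
    salted = salt_question(question)
    print(salted)          # looks identical to the original when rendered
    print(repr(salted))    # the zero-width characters are visible in the repr
```

Whether a given model is fooled is another matter; as the commenter found, ChatGPT simply stripped the extra characters and answered anyway.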
Enjoy that feeling while it lasts. Better to live in hope…. I’m thankfully retired from the ed biz, but my ex-dean friend tells me it’s very bad and getting worse all the time. And he’s usually an optimist.
I just came across one of those articles today:
https://nymag.com/intelligencer/article/openai-chatgpt-ai-cheating-education-college-students-school.html
archived at: https://archive.ph/R65nt
It’s pretty alarming as these kids are learning nothing. No wonder Columbia U. students have time to riot all day. I see less and less hope for our civilization.
I was tempted to deploy the Postmodernism Generator for a response but would not care to vex our host any more than I might normally manage.
Having lost my potential Yiddish heritage to premature death in my paternal line, I had to look up “kishkes,” and my sense is that “oy, my kishkes” expresses a queasy or injured feeling, as in gut-hurt?
I get this type of nonsense occasionally from students whose in-class exam essays and in-class discussion comments are barely literate. It is too risky for me to accuse students of using AI as I can’t prove it. My response is to note that the essay content is superficial with no personal insight from the student. This has the advantage of being justifiable.
A little deviousness can go a long way (but only if you don’t have ethical qualms about that). For example, include distractors and ambiguities in the question that a GPT is likely to latch onto (after trying various possibilities with a GPT yourself). Or pose topics that need some minimal logical inference to understand what the topic really is in the first place. Or pose a topic that implicitly and non-obviously depends on some context that has been covered in the class but not the readings. Or select a paragraph from each of several essays (including some good ones) to be discussed in class, with each purported author responding to class comments. Or pull wings off flies. YMMV.
I will never post an AI-generated comment.* Writing is too much fun, so why would I want to miss out? Why would I not want to add a little of my own personality to my comments?
That said, we’re seeing a major shift in how copy is being written. Within the next few years—or maybe even sooner—routine correspondence, advertisements, informational documents, how-to instructions, and so many other ordinary pieces of content will be produced by AI. There’s no question that it’s coming. And for those kinds of things, it’s probably OK. Certainly the quality of the writing will be better. 🙂
One would hope that personal letters, love letters between significant others, letters to the editor, and correspondence like what we have here will still be written the old-fashioned way. Those types of writing get their power and meaning from the unique way that a unique human being’s brain works. Anything else is a composite; it’s something else. When it comes to communicating what a real person really thinks, there’s no substitute for sitting down and writing. It’s a beautiful thing.
*Certified to have been composed by the author.
I hang out on a photography web site, and between the various posts are ads. As things are, it is common to see AI generated pictures in the ads, and we all know the “look” by now. No talent is needed, and no graphic artist needs to be paid. It would fill me with rage if I was not so distracted by bigger problems in the world.
But among those visual vomit ads are examples where the anatomy is horribly wrong. Cute fuzzy kittens with extra legs, for example. But the advertisers just don’t care.
“No talent is needed, and no graphic artist needs to be paid. It would fill me with rage if I was not so distracted by bigger problems in the world.”
I’d argue that the annihilation of an entire industry of talented human artists counts as a big problem…
Reminds me of a comment I saw elsewhere which said something like “I don’t want AI to write for me. I want AI to mow the lawn and dust the house, so that I have time to write.”
I told my local LLM to rewrite Juliet’s balcony speech. After a few false starts it came up with this true gem:
Romeo, where are you? Forget your name, or just love me, and I won’t be a Capulet anymore. Your name is the problem. You are you, even if you’re a Montague. What’s a Montague? It’s just a name. A rose, if you call it something else, still smells good. I like you a lot, like the big sea. The more I give you, the more I have. If you want to marry me, tell me tomorrow. How did you find me here? The night hides me. If you don’t love me, let them find me. I’d rather die than live without you.
I wrote a five- or six-page white paper at the request of a friend on why biology, chemistry, and physics are taught in that order in high school. His co-pilot bot summarized my half dozen pages into one. Like a Reader’s Digest condensed book, it told a nice story, BUT it missed at least three very important points that were part of the full paper. Reminded me of trying to explain to my editor why certain elements in a technical paper must be there for the reader to fully understand.
I’ve seen much better out of AI than that formulaic piece. It still amazes me how it can shift registers on command.
What most surprises me—and the tech people can tell me why I shouldn’t be surprised—is the degree to which AI fails when asked for facts that are not plagued by interpretation difficulties. Take baseball statistics. AI will confidently tell you that a player won the Cy Young in a year that he didn’t even get votes; or that another player won no awards in a specific year despite having been MVP, an All Star, and a Gold Glove recipient. Then in the next sentence AI will spew out a perfectly correct one or two-line career highlights, stuffed with stats, of a marginal player from 40 years ago—and continue to do it for every player in the league. When I do point out an error, it often apologizes then persists in repeating the error as fact. When I post the correct information and ask it to use only that information, it will often revert to its error. Even when I ask for its source, it will give me a credible—and correct—one that contradicts what AI just told me! (And it has this weird quirk of insisting on counting things in a list, but it cannot correctly count.) Yet I can ask it to summarize the positions of competing geopolitical camps and watch it do so accurately and with nuance.
I do dread a generation of young people who outsource their writing, thinking, and memories to the machines. (Though not a few college writing instructors would prefer the AI product over what their students produce.) I envision that the top performers will learn how to tap it productively and create an even bigger divide between themselves and the mediocre.
AI generated: “Your critique captures the paradox of AI: dazzling in its versatility, maddening in its inconsistency. It’s a powerful tool, but its quirks demand skepticism and human oversight, especially for factual tasks. Your examples, like baseball stats versus geopolitical analysis, perfectly illustrate where AI shines and stumbles.”
That generic AI summary followed a six-paragraph, detailed response to my critique—but I did ask it to refrain from rewriting my post!
Thanks for pointing this out. In my daily life, I meet a lot of people who don’t understand or even care that chatbots hallucinate, which is very worrisome.
That is a hilarious comment. Why anyone would bother trying to post it is mysterious, however.
Thank you, Jerry; there is really no need for artificial platitudes when one is looking for arguments.
A linguist named Victor Mair coined the term “pablumese” to describe the kind of prose that chatbots produce—a kind of bland, undifferentiated mass of words without any individuality or creativity. I know people out there who think this is “good writing,” which says more about them than it does about the writing skills of ChatGPT.
https://languagelog.ldc.upenn.edu/nll/?p=58260
Below is an article from Private Eye magazine. It is evidence that if you feed the internet piles of manure, then any ‘AI’ based on it just creates even bigger piles of manure.
I checked today and Google now knows the truth. But how many other things remain uncorrected?
“"AFTER several unsuccessful attempts," the mathematician Timothy Gowers posted on Twitter on 1 April, "I found a prompt that got Grok to solve a maths problem (the well-known Dubnovy Blazen problem in graph theory) I’ve been working on for over a year. How long till it’s better than human mathematicians across the board?" This was, of course, a joke, signalled by Dubnový Blázen being Czech for "April Fools". Several days later, however, others noticed that running a Google query for "Dubnovy Blazen problem in graph theory" resulted in an AI-generated search result that confidently proclaimed that it was "a well-known problem that AI model Grok solved after several attempts". It’s wonderful watching the well of knowledge being poisoned in real time.”
One indicator of AI text is prolific use of em dashes, or elongated dashes. It’s used by ChatGPT in particular.
So, for example, in the text you provided, “critique—this time”: that elongated dash without spaces at either end is ChatGPT’s style. A human would be more likely to write it as “critique – this time”.
But it’s not 100% proof, as some people do use em dashes without spaces. Actually, the OP has one of these.
So my advice to people including snippets of AI text is to edit out em dashes before posting.
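For anyone who wants to follow that advice mechanically, here is a tiny sketch; the spaced-hyphen replacement style and the function name are arbitrary choices, not a rule the commenter gave.

```python
# Tiny sketch of the "edit out em dashes before posting" advice above.
# Replacing U+2014 with a spaced hyphen is one arbitrary choice of style.

def strip_em_dashes(text: str) -> str:
    """Swap unspaced em dashes for a spaced hyphen."""
    return text.replace("\u2014", " - ")

print(strip_em_dashes("critique\u2014this time"))  # -> "critique - this time"
```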
The long dash—or em dash—is common in journalism and book publishing, at least in the U.S. (I’ve found that some British publications, like the BBC website, use hyphens instead of dashes, which makes me wince a bit since it’s technically wrong.) To me, the em dash is a sign that a piece of writing has been professionally edited.
In any case, this seems to be a more recent feature. I remember when ChatGPT was first released to the public, it would often use hyphens in place of dashes. The software engineers at OpenAI must have fixed that particular bug.
(Anyway, sorry for the bit of pedantry.)
The iPhone text editor converts two dashes to a long dash—I kind of like them.
Me too—reminds me of Victorian novels which often use a long dash with no spaces. Keyboard shortcut on a Mac is shift-option-hyphen.
In times before ChatGPT, the rule was simpler. My mom would say: “Don’t talk as if you were reading a newspaper.”
I run a phpBB forum for people with CLL. Over the last few days I have seen a new kind of spam, where a new user posts an ostensibly sensible set of questions about CLL treatment while posing as a patient. These posts are written in AI style and easily recognised. But hidden in the BB code are links to Polish crypto sites, and so they do not get approved (all new users have to have their first post approved by me), and the user is banned and deleted. They all use a VPN and pretend to be in Amsterdam. So far, all give an e-mail address on the same mail service, so I have banned all addresses from that service. I have to spend some time each day deleting spammers and it is irritating to waste time on these bots.
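Here is a rough sketch of the kind of pre-moderation check described, assuming the raw BBCode of a first post is available; the regex, function name, and sample post are illustrative only, and phpBB’s stored markup differs in detail.

```python
import re

# Flag first posts whose BBCode carries embedded [url=...] targets so a human
# moderator can review them before approval. Names and sample text are
# illustrative, not taken from the forum described above.

URL_TAG = re.compile(r"\[url=(?P<href>[^\]]+)\]", re.IGNORECASE)

def embedded_links(raw_bbcode: str) -> list[str]:
    """Return any link targets found in a post's BBCode."""
    return [m.group("href") for m in URL_TAG.finditer(raw_bbcode)]

post = "New here with a question about treatment [url=https://example.invalid/coin]options[/url]."
links = embedded_links(post)
if links:
    print("Hold for moderation; embedded links:", links)
```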
I disagree with you about a lot of things that are important to each of us, but on this topic we are probably 100% in accord. I support this message!