Tuesday: Hili dialogue

May 20, 2025 • 6:45 am

What person loves ducks more than one who will feed them in the pouring rain? You tell me:

Welcome to the Cruelest Day: Tuesday, May 20, 2025, and World Bee Day.  Here’s a short video of what it’s like inside a honeybee hive:

It’s also National Rescue Dog Day, Dinosaur Day, Flower Day, National Quiche Lorraine Day, and Pick Strawberries Day. As Wikipedia says of that toothsome quiche, made with eggs, cream, and bacon or ham, “It was little known outside the French region of Lorraine until the mid-20th century.”

Readers are welcome to mark notable events, births, or deaths on this day by consulting the May 20 Wikipedia page.

Da Nooz:

*Trump finally had a talk, via phone, with Vladimir Putin about the war in Ukraine.

UPDATE: Trump finally dropped his demand that Russia declare a cease-fire, gave up on brokering peace (it was supposed to come on “Day 1”) and apparently is washing his hands of the war, letting Russia and Ukraine work it out themselves through direct negotiation.

Now, Mr. Trump appears to be prepared to step back and urge Russia and Ukraine to make a deal directly with each other. President Volodymyr Zelensky of Ukraine expressed concern about that, saying on Monday after he held two calls with Mr. Trump that “the negotiation process must involve both American and European representatives at the appropriate level.”

*******

President Donald Trump is holding a highly anticipated phone call Monday with Russian President Vladimir Putin to discuss ending the war in Ukraine as the White House describes the U.S. leader as “weary and frustrated with both sides” of the conflict and his vice president said the talks are at an “impasse.”

The call comes after one of Russia’s largest drone assaults on Ukraine — nearly 400 launched over the weekend — and a flurry of diplomacy, as Ukrainian and European officials sought to convince the Trump administration of the need for an immediate, unconditional ceasefire and to ramp up pressure on Russia to take serious steps toward peace.

Trump has framed the peace agreement as a negotiation primarily between Moscow and Washington, raising concerns that the two leaders could agree on a deal that suits Russia but fails to protect Ukrainian security and independence, setting the scene for another Russian invasion in the future.

On Friday, Russia and Ukraine held their first direct talks since the early weeks of the war, but aside from a prisoner swap, they agreed only to continue negotiating over a possible ceasefire. Trump endorsed the talks but then diminished their importance before they began, declaring that nothing would be resolved until he and Putin spoke directly.

Speaking before the phone call from Air Force Two, Vice President JD Vance told reporters that “we want to see outcomes.”

“We realize there’s a bit of an impasse here, and I think the president’s going to say to President Putin, look, are you serious? Are you real about this?”

Serious about what? What Putin is serious about is getting as much of Ukraine as he can. He is entitled to none of it, and he should give Crimea back, too, which is “occupied territory”.  For some reason Trump thinks he and his team have the ability, right, and smarts to settle all the world’s problems.  I’d feel more confident about his peacemaking if I didn’t think he was wonky.  Truth be told, though, I see no solution that doesn’t involve Ukraine losing some of its land, and it’s just not fair.

*CNN has an explanation of Biden’s metastatic prostate cancer and gives survival statistics, although individuals vary widely. Older men in the audience, like me, might want to read this.

“Metastatic” means the cancer cells have spread beyond the original location (the prostate gland) into other areas — most commonly bones and lymph nodes. Biden’s cancer has specifically spread to his bones, placing him among the 5% to 7% of prostate cancer cases in the United States that are metastatic at initial diagnosis. While this percentage seems small, it represents a significant number given that over 300,000 men in the US and approximately 1.5 million worldwide are diagnosed with prostate cancer every year.

Early-stage prostate cancer carries an excellent prognosis, with nearly a 100% five-year survival rate. However, when prostate cancer is metastatic at diagnosis, the five-year survival rate drops sharply to around 37%. Importantly, these survival rates are statistical averages, and individual outcomes vary considerably based on overall health, age, cancer aggressiveness, and how well a patient responds to treatment.

For Biden — and all prostate cancer patients — this diagnosis marks the beginning of a highly personalized journey. It remains impossible right now to accurately answer the question, “How long do I have?” Which of course is the question everyone wants answered.

Prostate cancer severity is graded using a Gleason score, which ranges from 6 to 10. Lower scores (6–7) indicate slower-growing, less aggressive cancer cells, while higher scores (8–10) represent aggressive cancers more likely to spread quickly.

Biden’s Gleason score of 9 signifies a highly aggressive prostate cancer that usually requires immediate and comprehensive treatment.

In my clinic, the moment of diagnosing advanced prostate cancer is always difficult, evoking fear, uncertainty and many questions. At that moment, I ask the patient to take a deep breath, slow down and work together as we build a care team.

. . . metastatic prostate cancer treatment shifts from cure to managing symptoms, controlling disease progression and maintaining quality of life. Common treatments for metastatic prostate cancer include:

  • Hormone therapy (androgen deprivation therapy, ADT): Blocks testosterone, essential for prostate cancer cell growth.
  • Chemotherapy: Drugs to slow cancer growth, particularly when hormone therapy alone is insufficient.
  • Radiation therapy: Targets metastatic lesions, reducing pain and symptoms, especially in bones.
  • Immunotherapy and precision medicine: Treatments leveraging the immune system to attack cancer cells or therapies targeting specific genetic markers.
  • Supportive care: Symptom relief and quality-of-life enhancement.

Poor Joe! He’s had his share of troubles, and all of these therapies will have side effects. But he may well have dementia as well, which could either complicate the treatment or, perhaps, make him less anxious about it.

*Tyler Cowen, a professor of economics at George Mason University, proclaims at the Free Press, “Everyone’s using AI to cheat at school. That’s a good thing.” Whaaaa? Here’s why he likes it:

Unlike many people who believe this spells the end of quality American education, I think this crisis is ultimately good news. And not just because I believe American education was already in a profound crisis—the result of ideological capture, political monoculture, and extreme conformism—long before the LLMs.

These models are such great cheating aids because they are also such great teachers. Often they are better than the human teachers we put before our kids, and they are far cheaper at that. They will not unionize or attend pro-Hamas protests. But in the meantime, the doomers are right about at least one thing: It will feel very painful.

The first problem the LLMs expose is that our evaluation systems are broken, inefficient at sorting, and also unfair. If one student gets an A and the other a B, do we know that reflects anything other than a differential willingness to use LLMs? We never will, yet decisions for fellowships, graduate school admissions, and jobs all will be made on this basis. It stinks.

This isn’t just a modest problem. It is an out-of-control one and it will only get worse.

The second problem is that the current proposed solutions will make things worse. For instance, I commonly hear the following as potential remedies: Enforce anti-AI rules through the honor code; grade based only on proctored, closed-book, in-class exams; and give oral exams.

But if the current AI can cheat effectively for you, the current AI can also write better than you. In other words, our universities are not teaching our citizens sufficiently valuable skills; rather we are teaching them that which can be cloned at low cost. The AIs are already very good at those tasks, and they will only get better at a rapid pace.

. . . Lately I have been using the o3 model from OpenAI to give my PhD students comments on their papers and dissertations. I am sufficiently modest to notice that it gives keener, smarter, and more thorough suggestions than I do. One student submitted a dissertation on the economics of pyramid-, tomb-, and monument-building in ancient Egypt, a topic about which I know virtually zero. The o3 model had plenty of suggestions. How about: “Table 6.5’s interaction term ‘% north × no-export’ is significant in model 3 but not 4. Explain why adding period FE erodes significance; maybe too few clusters? Provide wild-bootstrap p-values.” Of course I would have noticed that point as well.

. . . The ostensible mission of college—learning—will become ever more optional. Many students will seize the opportunity to study with their AI models, liberated from the onerous demands of having to write all those “A quality” papers themselves. A few “rebels” will do their classwork on their own, but everyone else will wonder what exactly they are planning on doing with the writing skills they develop.

I can just barely envision AI being used in grading, though I’m not sure how that would be done; I would like to have tried it with my own exams, which were almost all short-answer tests looking for thoughtfulness. You’d have to somehow ask the program to look for certain responses and give appropriate credit to each facet of a response.

But as for cheating and that being okay, well, it’s not okay for me.  Learning becoming optional? Is that supposed to be good?  There is a certain amount of learning in each field that is not optional (and here I’m talking about science), and that learning is the foundation on which careers in science can be built. To say the AI models are “great cheating aids because they are also such great teachers” neglects the fact that many students don’t want to study the AI answers but simply want to regurgitate them and spend the rest of the time having fun.

*And if it’s a good thing to use AI, why are professors giving zeroes to those accused of using it? The NYT reports that students are now having to prove that they didn’t use it.

A few weeks into her sophomore year of college, Leigh Burrell got a notification that made her stomach drop.

She had received a zero on an assignment worth 15 percent of her final grade in a required writing course. In a brief note, her professor explained that he believed she had outsourced the composition of her paper — a mock cover letter — to an A.I. chatbot.

“My heart just freaking stops,” said Ms. Burrell, 23, a computer science major at the University of Houston-Downtown.

But Ms. Burrell’s submission was not, in fact, the instantaneous output of a chatbot. According to Google Docs editing history that was reviewed by The New York Times, she had drafted and revised the assignment over the course of two days. It was flagged anyway by a service offered by the plagiarism-detection company Turnitin that aims to identify text generated by artificial intelligence.

Panicked, Ms. Burrell appealed the decision. Her grade was restored after she sent a 15-page PDF of time-stamped screenshots and notes from her writing process to the chair of her English department.

Still, the episode made her painfully aware of the hazards of being a student — even an honest one — in an academic landscape distorted by A.I. cheating.

Generative A.I. tools including ChatGPT are reshaping education for the students who use them to cut corners. According to a Pew Research survey conducted last year, 26 percent of teenagers said they had used ChatGPT for schoolwork, double the rate of the previous year. Student use of A.I. chatbots to compose essays and solve coding problems has sent teachers scrambling for solutions.

But the specter of A.I. misuse, and the imperfect systems used to root it out, may also be affecting students who are following the rules. In interviews, high school, college and graduate students described persistent anxiety about being accused of using A.I. on work they had completed themselves — and facing potentially devastating academic consequences.

In response, many students have imposed methods of self-surveillance that they say feel more like self-preservation. Some record their screens for hours at a time as they do their schoolwork. Others make a point of composing class papers using only word processors that track their keystrokes closely enough to produce a detailed edit history.

But if AI is good enough to replace assignments and grading, why are professors still demanding students do their own work?  I just had a long discussion with a colleague who pretty much agrees with Cowen (except for exams), and I couldn’t convince her that a small discussion course with an interactive professor using Socratic methods is a better way of learning than AI.  She argues that the human element is not needed in teaching, and that colleges as we know them are on the way out.

I ask readers to weigh in here. Of course chatbots will get better and better, but I can’t ever see them replacing very good professors.  (My colleague thinks that universities are doomed!)

*You may have learned that Israeli PM Netanyahu has ordered the resumption of humanitarian aid to Gaza.

Prime Minister Benjamin Netanyahu on Monday defended his decision to allow limited humanitarian aid to enter the Gaza Strip, saying that pressure on Israel had been “approaching a red line.” The step was necessary in order to press ahead with the expanded military offensive against Hamas, he said, and had to begin despite the fact that IDF-secured distribution centers designed to keep the assistance out of the hands of the terror group were not yet ready.

However, right-wing politicians and groups assailed Netanyahu’s abrupt decision to resume aid to all parts of Gaza, which went against repeated pledges by top officials. There was criticism — but also support — from within Netanyahu’s ruling Likud party, while the far-right flank of his coalition was divided on the issue.

Though dozens of trucks carrying supplies were said to be ready to enter the Palestinian coastal enclave, it was not immediately clear how many would go in.

President Isaac Herzog praised the development, saying it would enable Israel to continue its military campaign in Gaza while maintaining “our humanity.” Meanwhile, far-right Finance Minister Bezalel Smotrich, who last month promised not to stay in the government “for a single minute” if any aid was brought to Gaza, backed down from the threat, claiming that the supplies would not reach Hamas. The theft of aid deliveries by the Palestinian terror group had been a key argument of his against renewing supplies.

In a video statement released on his personal Telegram channel, Netanyahu said that Israel’s allies had voiced concern about “images of hunger.”

I’m not sure whether there will ever be IDF-secured distribution centers, and if they don’t come to be (i.e., if the UN doesn’t approve), Hamas will of course take the lion’s share of the food. But please explain to me, dear readers, why Israel is held responsible by the world for feeding its enemy (including Hamas) while we make no demands that Russia, for example, provide food for Ukraine. Has there ever been another war in which the world demanded that the winning side feed the losing side? I wish this damn war were over, with Hamas giving up militarily and politically, going into exile, and surrendering all the hostages. But the world is blaming Israel for its attempted surgical removal of Hamas, even though Hamas’s strategy is to maximize casualties among Gazan civilians by embedding itself in schools, homes, and hospitals. Sorry, but I blame Hamas for the devastation in Gaza, and the best way to prevent civilian casualties is for Hamas to realize it has lost, surrender, give up the hostages, and then perhaps flee.

Meanwhile in Dobrzyn, Hili realizes that lilies of the valley are toxic, and she wants the mice she chases not to be poisoned!

Hili: I hope mice are not nibbling on the roots of the lily of the valley.
A: Why?
Hili: Apparently it’s very unhealthy.
In Polish:
Hili: Mam nadzieję, że myszy nie podgryzają korzonków konwalii.
Ja: Czemu?
Hili: Podobno bardzo niezdrowe.

******************

From America’s Cultural Decline into Idiocy:

From Alison:

From Stacy:

Masih must still be recovering. Here’s JKR answering a critic in her inimitable way:

From Malcolm, a cat fish (embedded ’cause Twitter is glitching):

From Malgorzata: Jew hatred translated into possibly dangerous action at the Giro d’Italia bicycle race.  The tweet appears to be incorrect (h/t Greg) in that these are indeed pro-Palestinian protesters (apparently protesting the presence of an Israeli team in the race), but they assaulted a French and a Dutch cyclist, and one loon has been arrested. See here and here for more information.

Two from my feed. First, one from Phil Plait. Be sure to watch the linked video in the article:

She got hit by a meteorite. Then things got weird. badastronomy.beehiiv.com/p/a-woman-hi… 🔭🧪

Phil Plait (@philplait.bsky.social) 2025-05-19T16:28:40.136Z

A chill cat:

One I reposted from the Auschwitz Memorial:

A Dutch Jewish girl was gassed upon arriving at Auschwitz. She was eight. Had she lived, she'd be 89 today.

Jerry Coyne (@evolutionistrue.bsky.social) 2025-05-20T09:49:08.702Z

Two tweets from Doctor Cobb. First, what passes for “art” today:

This is probably the first time anyone here has used the hashtags #art, #sculpture, and #deadwhale in the same post. “It was created using molds from real whales and its smell comes from buckets of rotting fish hidden nearby to add to the illusion.” www.cnn.com/2024/11/12/c…

joeymaier 🌊 (@joeymaier.bsky.social) 2024-11-12T13:17:57.485Z

And a bad joke:

Two nuns are driving through Transylvania in the dead of night. Suddenly a vampire lands on the hood of their car. He’s pounding on the glass! One nun says to the other, “Show him your cross!” The other nods firmly. She sticks her head out the window and yells, “Get the fuck off the windshield!”

John Wiswell (@wiswell.bsky.social) 2024-11-09T18:50:04.231Z

77 thoughts on “Tuesday: Hili dialogue”

  1. A THOUGHT FOR TODAY:
    The peculiar evil of silencing the expression of an opinion is, that it is robbing the human race; posterity as well as the existing generation; those who dissent from the opinion, still more than those who hold it. If the opinion is right, they are deprived of the opportunity of exchanging error for truth: if wrong, they lose, what is almost as great a benefit, the clearer perception and livelier impression of truth, produced by its collision with error. -John Stuart Mill, philosopher and economist (20 May 1806-1873)

    1. “Truth is so obscure in these times, and falsehood so established, that, unless we love the truth, we cannot know it.”

      -Blaise Pascal
      Pensées
      sec. SECTION XIV: APPENDIX: POLEMICAL FRAGMENTS, no. 864
      17th c., posthumous
      (1670 2nd. ed.)
      Free eBook!

      1. Pascal, being a noted religious fanatic (e.g. his Wager) probably (🙂) meant something different by “truth” than you or I.

        1. The religious wager is in that book. Note the year. Can’t fault him for that.

          He means true vs. false – in a pre-Enlightenment frame, I’d suggest.

          But I take it to mean exactly what true vs. false has meant mathematically since antiquity.

          To put it another way, Bayes’ Theorem means nothing about Bayes’ religion – so with Pascal’s truth.

          Cheers 🍺

      1. It is Mill’s birthday today so that’s why I chose the quote. I had no idea our host was going to make his announcement.

  2. Regular readers of this site may know that I have a very low opinion of AI, or at least of the extraordinary claims about it made by many: lots of A; not so much I. It’s a big curve-fitting program, Jake. My opinion derives from observing the field’s development since the late ’80s, when it was often referred to as “neural networks” and being looked at for advisory aids to airplane pilots or even, someday in the future, replacing human pilots flying airplanes. My view of Prof. Cowen’s article is that he is simply endorsing Cliffs Notes 2.0…a readers’ digest summary of information. As a recent anecdote, I sent a friend a nine-page white paper I had put together on why high school science is generally still taught in the order biology, chemistry, physics. The paper described a process I had used in 2007 (developing teams of subject-matter experts to review high school STEM content in Virginia public schools), our findings, and our recommendations, among them the reason behind the current order and a recommendation to reverse it: physics first, followed by chemistry, with biology last.

    My friend ran it through an AI copilot, reducing the nine-pager to one. I did a side by side comparison of the two documents, finding that the copilot produced a nice, very readable summary, but in doing so missed at least four key points of the full paper. It gave me the famous 30-second elevator talk my bosses always wanted without getting into what they called the weeds…the weeds are what we researchers call the data! In a world where so much primary source material is available to such a wide audience, I believe it is sacrilege to encourage our next generation of citizens to be ignorant of real data…and that is exactly what these bots generally do.

    1. “It’s a big curve-fitting program, Jake. ”

      🎯

      Though a very interesting program – check out “attention” and other videos on 3Blue1Brown – and especially Veritasium’s video on the programming behind AlphaFold…

      1. Yes Bryan. All very interesting and in particular the gazillions of chemical affinities Alphafold flips through…a huge achievement. Yet it is all still interpolation varying by the tension or weighting one applies as to how many training set points are hit versus how many faired through. The system identification bugaboo of over-parameterization giving great fit but crazy predictions versus worse fit but more adequate predictions.

    2. I agree. A related problem is bias in which data the bot ignores. This was in my feed today.

      https://x.com/DavidRozado/status/1924601102877769841

      When asked to rank two job candidates, all chatbots favour the application with the female name; slight tendency to prefer applications with preferred pronouns; when the names are gender-neutral, chatbots favour the first applicant over the second, or “Candidate A” instead of “Candidate B”; consistent across professions.

      So chatbots are bad at evaluating job candidates. Only an expert would know whether bias like this would also make a chatbot bad at teaching students.

    3. My question to claims about curve fitting and stochastic parrots and such is: do we know for certain our brains do something qualitatively different?

      1. I think that’s an excellent question. My totally amateur opinion, but one based on a fair amount of reading of cognitive science, is that the majority of human cognition is non-verbal. In other words, most of our thinking is the processing of sensory input from the physical world, and this processing is not done via language. To use the term many cognitive scientists use, our thinking is embodied.

        I’m not claiming that human cognition is superior to LLM’s, just different. For example, an LLM can access a multitude of verbal definitions of “mother,” while a human child develops a concept of “mother” from direct physical experience, long before the child can use language.

        1. And this is a good answer. I agree that our physicality and tactile connection with the world adds something and may be the crucial difference between human intelligence and LLMs. Some AI scientists actually think that the next step should be embodied AI.

          But, I would still wonder if these experiences are not processed in the same manner, i.e. curve fitting.

    4. I asked DeepSeek to narrow down a wrong answer I was getting in astronomical code I was writing. The problem partly related to the number of Julian Centuries since the year 1900, for the date 1988-07-27. DeepSeek analyzed my code and made all sorts of suggestions, but I still could not resolve my error. Finally I noted that “helpful” DeepSeek was using a figure based on the date 1988-08-16. “Oh, I was using approximating algorithms” was the response when I pointed out the discrepancy.
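[For readers who want the actual number: the 1900-epoch Julian-centuries figure the commenter mentions can be sketched as below. This is a minimal illustration, assuming the standard 1900.0 epoch of the older astronomical formulae (JD 2415020.0, i.e. 1899-12-31 12:00 UT) and a time of 0h UT on the date in question.]

```python
from datetime import datetime

# The 1900.0 epoch used by many older astronomical algorithms:
# JD 2415020.0 = 1899-12-31 12:00 UT.
EPOCH_1900 = datetime(1899, 12, 31, 12)

def julian_centuries_since_1900(when: datetime) -> float:
    """Days elapsed since the 1900.0 epoch, divided by one Julian century (36525 days)."""
    days = (when - EPOCH_1900).total_seconds() / 86400.0
    return days / 36525.0

# The commenter's date, 1988-07-27 at 0h UT (32349.5 days after the epoch):
T = julian_centuries_since_1900(datetime(1988, 7, 27))
print(round(T, 6))  # 0.885681
```

An AI "approximating" the date by a few weeks, as in the anecdote, shifts T in the fourth decimal place, which is plenty to throw off position calculations.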

  3. There are a number of problems with using A.I. in education. First, public LLMs are trained on publicly available, online sources. The vast majority of research is not freely available, and most of the research done in the 20th century (not to mention primary sources) is locked up in physical books and journals. Second, free, online sources used to train LLMs are of suspect quality, such as Wikipedia, and often have gaps. Finally, A.I. itself is still of questionable quality. I have on several occasions asked Grok or ChatGPT to give me a list of books on a topic, and in each case it has included at least one completely made-up book. (It is amusing to confront the A.I. on this and see the responses.) A.I. has a bias towards positive responses and is prone to hallucination (that’s what techies actually call it). From my perspective, reliance on A.I. promises a de-skilling of individuals in all areas.

    1. Agreed. I made a similar point below. A lot of people can’t seem to grasp the idea that A.I. can be wrong. They see superintelligent computers in Star Trek and Marvel Movies, so of course they must be real. Tyler Cowen should know better.

    2. My biggest problem with the use of A.I. in education is that it seems to be an attempt to get around learning things, that is, actually storing information in our long-term memory. To quote the cognitive psychologist Dan Willingham, “What’s stored in our long-term memory isn’t what we think about, it’s what we think with.”

      Struggling to make one’s thoughts cohesive through writing is one of the best ways to assimilate complex ideas into long-term memory, IMO. Using A.I. to produce academic papers is an attempt to circumvent this valuable process.

      1. Indeed. The scourge of AI-based cheating is one reason I am retiring from a lengthy academic career; also the decline in student quality accompanied by increased student expectations plus a “customer is always right” attitude from administration based on financial considerations alone. The academy ain’t what it used to be.

        1. +1 The idea of student as customer was one of, if not the most destructive paradigm shifts in education, IMO.

        2. +1. I wonder how Socrates would have fared if Athens had had a bums-on-seats model of education. Of course, he retired early anyway, due to poor teaching evaluations.

        3. +1 Students are NOT customers and education NOT a product. The US has commoditized everything and is guilty of viewing the world through a market principles lens and valuing everything in dollars. It’s why Harvard’s grotesque endowment of $55B isn’t enough and will never be enough. It’s pathological capitalism. How much is your mother worth? Oh, I’m sure there’s a number, I’m positive it’s in an actuarial table somewhere but I will never think of her in those terms. A university’s mission to educate is rather like that.

  4. Tyler Cowen: “These [LLM] models are such great cheating aids because they are also such great teachers. Often they are better than the human teachers we put before our kids, and they are far cheaper at that.”

    As a matter of principle, I don’t see why Cowen’s statement should be any less applicable to (George Mason) university (economics) professors. Also as a matter of principle, why shouldn’t university-level professors/instructors be licensed and certified like the public school teachers whom Cowen so obviously scorns? He and his ilk complain about teachers. Tell us exactly who should be teachers. Why isn’t Cowen himself laboring in those pedagogical vineyards?

    I think Cowen needs a good dose of borderline-toxic juvenile/adolescent “oppositional defiance,” as they say in the pedagogical trade. Considering the ongoing juvenilization if not infantilization of college students, perhaps he’s already had several doses. I wonder how LLM models deal with student oppositional defiance.

    1. Looking up the DSM list of ODD symptoms, as a kid I would have scored 3-4 out 8, which today would be considered a borderline clinical case. Just a thought, but maybe extensive experience of bullying by manifestly incompetent “authorities” has something to do with the aetiology. (And to be evenhanded, maybe extensive use of snark-quotes could be another symptom.)

  5. It’s very likely Biden has known of his diagnosis for years. I can’t imagine that he wasn’t getting PSA testing each year. It seems awful to contemplate, but it’s possible he didn’t want to treat what is usually a slow-growing malignancy because of political expediency. It seems quite the coincidence that this diagnosis was made public two days after the release of the Hur interview audiotapes.

      1. The American Urological Association quotes a false negative PSA result at 1-2 percent.

      2. Taking a moment to refresh – if only for myself, making a plain-text table with WordPress (… hmmm… adding five spaces activates a new format…) – the four outcomes:

                        Condition present | Condition absent
        Test positive:  True Positive     | False Positive
        Test negative:  False Negative    | True Negative

        …. of course there’s a Bayesian interpretation of the values of each, which should be equivalent to the likelihood interpretation….

        There’s a “frequentist” sort of crutch interpretation that helps..

        Chances Are
        Steven Strogatz
        https://archive.nytimes.com/opinionator.blogs.nytimes.com/2010/04/25/chances-are/
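[The Bayesian point about the four outcomes can be made concrete with a toy calculation. The numbers below are illustrative assumptions, not real PSA statistics: even a fairly accurate test produces many false positives when the condition is uncommon in the tested population.]

```python
# Positive predictive value (PPV) via Bayes' theorem.
# Illustrative assumptions only -- NOT real PSA figures:
sensitivity = 0.90   # P(test+ | disease): true-positive rate
specificity = 0.90   # P(test- | no disease): true-negative rate
prevalence  = 0.10   # P(disease) in the tested population

# P(test+) = true positives + false positives, by total probability:
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# P(disease | test+) = P(test+ | disease) * P(disease) / P(test+):
ppv = sensitivity * prevalence / p_positive
print(ppv)  # 0.5 -- half the positive tests are false alarms
```

With these assumed numbers, a test that is "90% accurate" in both directions still yields a coin-flip PPV, which is the usual argument against indiscriminate screening.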

      3. Maybe you mean false positives. AIUI (and IANADr), widespread PSA testing is not encouraged, since the preponderance of false positives causes a lot of distress to many patients. Maybe there was no wilful blindness or denial or political expediency involved here. Even so, multiple positive tests surely would have merited some followup, no?

    1. Over a year ago I was diagnosed with stage-4 melanoma of the lungs, with no known primary source. After just halfway through a 24-month course of immunotherapy treatment, I appear to be in full remission. The only side-effect has been loss of thyroid function, but a little white pill each day manages that.

      Compared with the radiotherapy and chemotherapy treatment for my first cancer (of the tonsils) immunotherapy seems to be a breeze.

      The Human Papilloma Virus vaccine seems to protect against head-and-neck cancers (e.g. tonsil cancer), not just against cervical cancer, but it was released in Australia after I was diagnosed, about twenty years ago. I am not a doctor, but it seems to me that you cannot have too many vaccinations.

      “In Australia, the HPV vaccine is free for individuals aged 12-25 through the National Immunisation Program.” — googleAI

      1. I’m very happy for you, grasshopper. I lost my brother to melanoma that was treated with radiation once removed from his left temple area. When it returned (several years later) it had spread to the liver, pancreas, lungs and brain. All but the brain tumor responded miraculously well to immunotherapy. The damn brain tumor was inoperable and bled as it grew which caused cerebral swelling and that took him down hard. Remission rocks! Very good news.

  6. I’m no fan of Biden, being, I think, far more conservative (I dislike the labels “liberal” and “conservative”) than many who frequent this site. That said, I’ve seen a number of deplorable comments made about him and his condition. I repudiate such callous statements, fan of the man or not.
    Best of luck to the former President.

  7. Apropos the dead whale as “art”, I was immediately reminded of one of the funniest videos ever to appear on the Internet: the 1970 news report by Oregon TV station KATU about the disposal, using half a ton of dynamite, of a whale carcass that had washed up on the beach near Florence, Oregon.

    1. Thanks for that. This story has been in the back of my memory for 35 years now.
      I love it.

      D.A.
      NYC

  8. Students should aspire to have something more substantive to offer employers and collaborators than “I can solve problems by writing prompts for chatbots,” because, like mediocre teaching, writing prompts is also a job that can be replaced by another chatbot.

  9. I have the opposite opinion to Cowen’s. What we need now, whatever the university, is people engaged in their fields who use that engagement to become better professors. I moved to remote teaching after lockdown and was the only prof in the sciences willing to stay online. I teach General Biology for non-science majors, Genomics and Bioinformatics, Ornithology (with an in-person field component), and Evolution. With the help of a colleague, I developed some creative, writing-intensive courses.

    Then AI hit. Last fall I flunked several students from the course by week 2 for cheating. It was a constant battle and an emotional strain. I realized AI could do certain things for students, but, as a language algorithm, it could not do the heavy lifting of analysis. So I used my skills as a person in bioinformatics to get all students—even at the 100 level—doing real analyses: databases, simple pipelines, etc. I also used an app to send students into the field regularly, contribute to a public citizen-science database, and then analyze their data and incorporate course themes.

    I have worked non-stop to create these courses, with very little help since the challenge is new. But it is my background and engagement with my field(s) in science which allowed it. Students responded very well.

    We need stronger professors now, not AI.

    1. +1 for me too. Unfortunately, in my Biology department there are several profs who are all too happy to continue teaching online with canned, pre-recorded courses. Their online exams are their worst scandal, but I won’t get into that.
      Your great struggle against AI cheating goes well beyond mine, but I do see that some (trending toward many) are cheating with it when they can in my intro biology class. I refuse to grade the obvious AI-generated answers to forum questions, and I am spending hours writing new forum questions to make them AI-proof. It is a chore! I miss those old questions, since there were many very good ones!

      I also use online multiple-choice quizzes (I was teaching this class online pre-Covid), and of course students can just plug the questions into ChatGPT and get answers with zero preparation. I am sure some, if not many, do just that, but I can’t do anything about it now. But come test time, with much longer and weightier in-person exams, I shall exact sweet, sweet revenge.

    2. +1 for me as well, Sher. Terrific work you are doing. Hopefully you have a faculty group that shares such materials; more than that, K-12 could likely use some guidance before it adopts AI as the next great pedagogical innovation.

    3. And re chatbots never replacing very good professors: what about the other 90% of professors (including the time-serving hacks)?

      IMO, all the claims that even current AI™ technologies won’t cause unprecedented unemployment come either from corporate PR shills trying not to scare the horses or from wishful thinking.

      Sturgeon’s Law implies that 90% of most jobs is hack-work, not creative expression. IMO we are in for a period of painful social disruption never before seen in peacetime. Buckle up.

  10. I couldn’t convince him/her that a small discussion course with interactive professors using Socratic methods is a better way of learning than from AI. She argues that the human element is not needed in teaching, and colleges as we know them are on the way out.

    This dispute can and should be tested empirically. I’d be very interested in the results (as would lots of other people, of course!). My money is on human teachers being superior – especially for the least and most motivated students.

  11. I profess math and study mostly mathematical physics. I mostly teach advanced methods of mathematical physics to science and engineering students. I warn students that at this point in time, ChatGPT is not ready to do their homework. It makes blunders in moderately advanced math and its physics explanations are sometimes totally wrong.

    If you ask ChatGPT a fairly elementary physics question such as “Why does dew form on the grass overnight when the temperature is in a certain range?” you might get a reasonably good explanation.

    But the other day I tried a deeper question: “Why is it that free neutrons are unstable, with a mean lifetime of about 14 minutes, while bound neutrons in atomic nuclei are stable?” The chatbot gave a more or less correct description of the beta decay of a free neutron. But when discussing a nucleus, it gave me preposterous word salad about “oscillating gluon fields and virtual quarks interacting with neutrinos” (?) and wrongly stated that the electrons in the atom play a major role in stabilizing the nucleus.

    I am reminded of the funny meaningless science jargon in grade B 1950s era sci-fi movies. “Lieutenant, activate the proton articulator.” Yet to a layman, Chat’s explanations would sound equally convincing and authoritative whether they are correct or completely wrong.

    1. The most fun I plan on having is asking students to investigate a technical subject they know nothing about, and summarize it. Like the trp operon in E. coli. But also to provide a picture of it from their source, with appropriate citation. AI makes up pictures, and the result is reliably wrong and often very weird. But the students won’t know that.

      As an amusing example, one can ask ChatGPT to make a labelled diagram of a cell, or of the human circulatory system. It’s like crazy science fiction.

  12. PCC asks:
    “But please explain to me, dear readers, why Israel is held responsible by the world for feeding its enemy (including Hamas) while we have no demands that Russia, for example, provide food for Ukraine.”

    A couple of things:
    1) no one is asking Israel to feed the people in Gaza (in your sentence “its enemy”; I had not realized that the people of Gaza were the enemy, I thought it was Hamas…), what is being asked is simply that Israel allow humanitarian aid to cross into Gaza, and
    2) there are no demands for Russia to let aid into Ukraine, because Russia does not control all borders of Ukraine, and as such, were humanitarian aid to be needed, it could cross through the borders that Russia does not control.

    1. I MEANT that the enemy was Hamas, which I’ve always said was Israel’s opponent in the war. I slipped. And, as I mentioned before, what is the difference between allowing aid into Gaza (which Israel has now done, with that aid likely to go to Hamas) and allowing aid to go to the Japanese or German people during WWII?

      1. It is a good question, and one to which I do not know the answer. I would guess that, with regard to WWII, war had been declared against Germany and against Japan, and that included their peoples. Again, I am not an expert, but I do not think that war has been declared on Gaza, but rather on Hamas. I repeat, I am no expert whatsoever, but I also wonder: who wanted to send aid to the Japanese or German people during WWII?

        1. I would add that if the US had run a blockade that took Japan to the brink of mass starvation, it probably would be as controversial today as the atom bombs and firebombings. There is also an obvious power imbalance: Israel has the power to wipe out the Gaza strip, not the other way around, and Hamas is now mostly a spent force. Moreover, Israel has the power and resources to find a way to deliver aid that doesn’t find its way into the hands of Hamas. The people of the Gaza strip are as much at the mercy of Israel as Hamas, so letting them starve would certainly harm Israel’s reputation, especially when it has emerged as the victor in the war.

          1. The United States Navy did operate such a blockade by submarine against the islands of Japan. It was so effective that it is often cited as evidence that the atomic bombings weren’t necessary: Japan was starving and out of coal and oil.

            The Nuremberg War Crimes Tribunal found Adm. Karl Doenitz guilty of, among other charges, waging unrestricted submarine warfare against Great Britain. (Churchill said the U-boat peril was the only thing that ever really frightened him during the war.) As one professional sailor to another, Doenitz appealed to Adm. Chester Nimitz, who stated in an affidavit to the Tribunal that the United States had carried out a campaign modeled on German methods from the first days of America’s entry into the war, to the best of its ever-expanding ability, intending to succeed where Germany had failed. Largely for that reason, the Tribunal declined to assess Doenitz’s sentence on the submarine-warfare charge; he received ten years, the lightest sentence handed down at Nuremberg.

          2. While Leslie MacMillan is correct that no one held the U.S. to account for blockading Japan, I do not think it is therefore an indefensible double standard to press Israel at least to enable humanitarian aid to reach Gazans. Israel does control (albeit not completely) what crosses the borders, and as a result it incurs a responsibility at least to enable humanitarian aid to enter. I am more troubled by the repeated accusations that Israeli policy results in starvation, accusations that have proved false despite all the efforts by Israel not only to enable but also to supply humanitarian aid. The same people keep making and publishing that accusation, often in headlines, yet there has been almost no evidence of starvation. At the same time, concerns about the potential for starvation if humanitarian aid continues to be blocked sound reasonable; some Israeli government officials have recently indicated they are also concerned. This is complicated by repeated accusations from the Israeli government that Hamas hoards and sells the humanitarian aid.

  13. ” explain to me, dear readers, why Israel is held responsible by the world for feeding its enemy (including Hamas) while we have no demands that Russia”

    That’s the thing – there are so many asymmetries, moral ones particularly, that defy all logic in this. I could go on… and do: https://themoderatevoice.com/author/david-anderson/

    Victory and peace look like Pals never, but never EVER, peeping up over their ruined parapets ever again. Ask me about a “Palestinian State” one day (Talibanland).

    Boggles the mind. Onwards Israeli heroes.

    D.A.
    NYC

  14. On AI in education. This is a tough one. On the one hand, calculators have replaced longhand calculations, and we allow them to be used in taking exams. That was not the case in 1974, when I had a rare calculator (I was an early adopter) and others did not. I could not bring my calculator to the exam. Are bots that write essays the analogs of calculators that automate the calculation of square roots?
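    As a concrete reminder of what those calculators actually automated, here is a minimal sketch of the classic square-root iteration (Newton’s method, also known as Heron’s method); the function name and tolerance are my own choices, not tied to any particular calculator:

```python
def sqrt_newton(x, tol=1e-12):
    """Approximate sqrt(x) by repeatedly averaging a guess with x / guess."""
    if x < 0:
        raise ValueError("square root of a negative number")
    if x == 0:
        return 0.0
    guess = max(x, 1.0)  # any positive starting point converges
    while abs(guess * guess - x) > tol * max(x, 1.0):
        guess = (guess + x / guess) / 2.0
    return guess

print(sqrt_newton(2.0))  # ~1.4142135623730951
```

    The point of the analogy is that this routine is purely mechanical, which is exactly why offloading it to a machine costs the student little; whether essay-writing is similarly mechanical is the open question.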

    I think that, for routine documents—advertising copy, business letters, how-to guides, and maybe even book reviews—AI bots will end up being the way things are done. They are the new calculators.

    But in education, I think that it’s just as important for students to struggle to get to an answer as it is to be able to communicate the answer. It’s in the struggle that the student gains insight. Here a student learns not only facts, but new ways to approach problems. Here lie the roots of novelty, which can lead to new knowledge that AI bots may not be able to produce (yet). At present, AI bots mostly rehash and summarize existing knowledge. That may change but, at the moment, most novelty emerges from the minds of people. And, for the foreseeable future, humanity will require new knowledge.

    Finally, a great deal of what I read—mostly non-fiction: biographies, politics, science—depends heavily on the unique characteristics of the author: what the author knows, where the author’s positions lie, how the author expresses those positions, and how the author engages (or not) with the reader. While an AI can summarize an author’s writings, it can’t capture the unique personality that lies behind the work (yet).

    I do think that large language model AI will rapidly take over many routine communications tasks. But, at least for now, we still need people (students) to learn how to learn (which means limiting AI in the classroom), and there is still value in the unique perspectives of the unique synaptic connections that live in the brains of unique human beings.

    1. Re the struggle being the education, I heartily endorse that for those who can cope with losing some of the struggles without lasting damage. AIUI, a lot of today’s students aren’t so fortunate.

      FWIW, in the days when I thought I’d be a mathematician of some sort, the most valuable learning I got by far was that some things were deeply anti-intuitive and yet provably true. Whenever some pomo bozo mentions “epistemic humility”, that’s what I think of.

  15. I’ve always understood one primary purpose of college to be teaching students how to think – how to organize, prioritize, pace oneself, seek out new sources, analyze what’s been given, find connections, synthesize themes, and establish habits of intellectual and social discipline.

    It’s not so much what you know, as learning the art and skill of learning. My understanding is that most employers usually train on the job, so they want someone who has been trained to think well and persist through the rough spots.

    AI can’t do this for an individual. Only the individual can do this for the individual.

    1. Yes. I teach a third-year invertebrate biology course where students learn things about the contents of that slice of the animal world. It’s a lot of facts and memorizing. But the point is to learn to organize all that knowledge into a mental structure that one could draw on to answer questions (like on my exams) or identify unknowns and then propose hypotheses and design experiments. I tell my students if they work hard for 4 months to build that structure for themselves they might forget a lot of what’s in the structure but they’ll have got better at that kind of mental construction project. That improvement is the real point of the course.

  16. AI systems are completely dependent on the information they are fed. There is an acronym programmers use to explain poor results: GIGO, for “garbage in, garbage out.”

  17. Cowen writes: “Lately I have been using the o3 model from OpenAI to give my PhD students comments on their papers and dissertations. I am sufficiently modest to notice that it gives keener, smarter, and more thorough suggestions than I do. One student submitted a dissertation on the economics of pyramid-, tomb-, and monument-building in ancient Egypt, a topic about which I know virtually zero.”

    How would Cowen know whether the AI was actually giving good suggestions on a topic he knows nothing about? His only criterion of judgment is “better than nothing,” and that is plainly inadequate. How can a professor be so stupid? May AI take his job first.

    1. And either his students have been taught academic obscurantism over clear explanation, or they have neglected to cite their sources properly, or Prof Cowen would be well advised to consider other employment.

  18. “These models are such great cheating aids because they are also such great teachers. Often they are better than the human teachers we put before our kids, and they are far cheaper at that.”

    That is absolute bullshit. Two points:

    (1.) All chatbots have a well-documented tendency to “hallucinate”—that is, make statements that have no basis in reality. I know this from reading, personal experience, and the experience of my coworkers.

    (2.) Nobody has a clear method of fixing the problem, and some experts (e.g., Yann LeCun, a computer scientist who helped pioneer early neural networks) believe that it can’t be fixed with current technology. Chatbots hallucinate because they have no first-hand knowledge of the world: everything they know comes from the text they were trained on. (Here’s an article from an engineering journal that describes the problem clearly: https://spectrum.ieee.org/ai-hallucination) There’s no guarantee that anything a chatbot says is factually accurate. That might not matter in a humanities class, but in a field like medicine or engineering, accuracy matters—lives and money are at stake.

    At this point, I would put Tyler Cowen in the category of people that Nassim Taleb called “intellectuals yet idiots”—highly educated people who lack common sense. Cowen is a misinformation spreader on the level of Alex Jones or Jenny McCarthy. He is beneath contempt.

    1. Since he went all in for Palestine I’d put N.N. Taleb in that category of his own making also. I used to admire him – as a fellow options trader who did better than me with similar strategies – and I liked his books.

      I found it hard to overlook his towering assholery and obnoxiousness for years. We often cut breaks for people we admire.

      You’d think a person whose country (Lebanon) was destroyed in large part by Palestinians would be less of an idiot than to cheer for them like he does. IYI for sure.

      D.A.
      NYC

    2. “There’s no guarantee that anything a chatbot says is factually accurate. That might not matter in a humanities class . . .”

      Thank you for the smile. AI can do wonderful “readings” and even some beautifully bitchy lit crit if you press it to do so! Its game is to persuade rather than to prove.

      1. I studied English in college, and I gradually realized that a lot of literary criticism and “cultural studies” writing has very little factual content. It’s just a matter of shuffling terms around and combining them into decorative pieces of word art that spin in the air untethered to anything else, like the parts of a mobile in an art museum. I’m sure ChatGPT could write prose virtually indistinguishable from the stuff written by Judith Butler or Jacques Derrida. (In fact, the Postmodernism Generator was doing it years before ChatGPT was even released.)

    3. Not long after I posted my original comment, I saw an article about a Chicago newspaper that used A.I. (program not specified) to write book reviews of non-existent books. The A.I. simply hallucinated the books out of thin air, and the freelance “writer” who turned in the article admitted that he had used A.I. without fact-checking the results. And this is the technology that Tyler Clown wants to replace teachers with. Unbelievable.

      https://www.yahoo.com/news/chicago-newspaper-caught-publishing-summer-152150038.html

  19. Writing is such an essential skill in college not only because it gets you to write well but, maybe even more importantly, because it gets you to THINK well. Can’t write well without thinking well, eh?

    I retired from a polytechnic school whose motto is, “Learn By Doing.” I taught in the liberal arts, not in the technical departments, and a former dean once blew me away by saying, “Here in the College of Arts we fully support and practice Learn By Doing, but we have another motto as well: Learn By Thinking.”

    Hah!

  20. I may think differently when I’m older, but if I were in my 80s, had lived a full life, and had an aggressive form of cancer AND dementia, I might consider forgoing treatment and letting the disease run its course (with pain-management care).

    The dementia aspect is especially relevant…once your mind goes you have effectively already died. The “you” that interacted with the world and its inhabitants is gone.

  21. Re a minor-party Israeli cabinet minister, who promised not to stay in the government “for a single minute” if any aid was brought to Gaza, transparently backing down from that — IMO Netanyahu may indeed be one tough SOB, but he’s no Churchill. IMO if he had been willing to take a small risk of losing power he could have much earlier than now educated his minor-party coalition partners that it would be disadvantageous for them to push too hard for their own particular agenda items.

  22. On the subject of AI, the judge in a recent legal case in the UK was not at all happy when it was discovered that one side had introduced FIVE imaginary legal citations – including one case that had supposedly been heard by the Court of Appeal. No explanation was provided (despite a written assurance that it could be “easily explained”), but the likeliest one is that the legal team had used AI and had failed to notice the “hallucinations”. I’d say that you couldn’t make it up, but apparently they did!
    https://www.legalfutures.co.uk/latest-news/judge-condemns-lawyers-who-produced-fake-citations-to-court

Comments are closed.