ChatGPT, the bot that automatically produces prose, is back in the news, but not in a humorous way and not as an example of students cheating. Rather, a university got the bot to write an official message to its students.
As the Vanderbilt Hustler (the student newspaper of Vanderbilt University) reports, the bot was used to write a message of consolation to students after the Michigan State University shooting on February 13 that killed three. The robotic message was then sent to students by the school’s EDI office (“Equity, Diversity, and Inclusion”).
“Peabody” is Vanderbilt’s College of Education and Human Development. Click below to read about the mistake—which I assume it was.

Here’s the entire email, which reveals the source of its prose at the bottom, though it was said to be “paraphrased.” (I’ve put a red box around the bot bit, as well as around the endless promotion of inclusivity and diversity and the call to examine our biases):

From the newspaper:
A note at the bottom of a Feb. 16 email from the Peabody Office of Equity, Diversity and Inclusion regarding the recent shooting at Michigan State University stated that the message had been written using ChatGPT, an AI text generator. [Note that the newspaper gives only the last paragraph of the full email.]
Associate Dean for Equity, Diversity and Inclusion Nicole Joseph sent a follow-up apology email to the Peabody community on Feb. 17 at 6:30 p.m. CST. She stated that using ChatGPT to write the initial email was “poor judgment.”
“While we believe in the message of inclusivity expressed in the email, using ChatGPT to generate communications on behalf of our community in a time of sorrow and in response to a tragedy contradicts the values that characterize Peabody College,” the follow-up email reads. “As with all new technologies that affect higher education, this moment gives us all an opportunity to reflect on what we know and what we still must learn about AI.”
The only thing to be said for that email is that at least it cites its sources, which of course college students are supposed to do. It even gives the ChatGPT message as a “personal communication,” though “robotic communication” would have been more appropriate. The paper beefs that there was only one “incident” and not “multiple” shootings, though I can’t be bothered about that.
I suspect what happened is that some semi-literate functionary decided to produce a model email using ChatGPT rather than express his or her own sentiments. But then, god almighty, the functionary was honest enough to send it out saying where it came from.
The reaction of the students was typical, and similar to mine:
Laith Kayat, a senior, is from Michigan, and his younger sister attends MSU. He stated that the EDI Office’s use of ChatGPT in drafting its email is “disgusting.”
“There is a sick and twisted irony to making a computer write your message about community and togetherness because you can’t be bothered to reflect on it yourself,” Kayat said. “[Administrators] only care about perception and their institutional politics of saving face.”
That’s a good statement. Here’s another:
Senior Jackson Davis, a Peabody undergraduate, said he was disappointed that the EDI Office allegedly used ChatGPT to write its response to the shooting. He stated that doing so is in line with actions by university administrations nationwide.
“They release milquetoast, mealymouthed statements that really say nothing whenever an issue arises on or off campus with real political and moral stakes,” Davis said. “I consider this more of a mask-off moment than any sort of revelation about the disingenuous nature of academic bureaucracy.”
I’m not sure which “moral and political stakes” Mr. Davis wanted highlighted here. A simple, humane message expressing sorrow and empathy without politics would, I think, have been appropriate. And they should have left out all the “inclusivity and diversity” stuff, which strikes me as superfluous and off message. Statements about gun control and the like (an initiative that, as you know, I strongly approve of) are debatable positions that do not belong in official communiqués. You’d never see such a thing coming out of the University of Chicago, which maintains institutional neutrality on such issues despite considerable pressure from faculty and students to make the college take sides.
But to me, the most striking thing about the message above is that it seems to use the tragedy as an excuse to flaunt the University’s virtue of promoting not only diversity but “inclusivity,” mentioning that term, or “inclusive,” four times in a very short email. So beyond the heartlessness and lack of empathy involved in turning to ChatGPT, the email is doubly offensive because it touts DEI (or EDI) principles more than it reaches out to people. And there’s not a single word of empathy for the families and loved ones of those who were murdered.
I can only ask, “What kind of tendentious mushbrains would put together a message like this?” They are taking advantage of a tragedy to promote a Social Justice agenda. This is the fruit of institutionalized DEI offices.