Vanderbilt responds to the Michigan State shooting by sending its own students a message written using ChatGPT

February 19, 2023 • 9:20 am

ChatGPT, the bot site that automatically produces prose, is back in the news, but this time not in a humorous way and not as an example of students cheating. Rather, Vanderbilt University got the bot to write an official message to its students.

As the Vanderbilt Hustler (the student newspaper of Vanderbilt University) reports, the bot was used to write a message of consolation to students after the Michigan State University shooting on February 13 that killed three people. The bot-written message was then sent to students by the school’s EDI (“Equity, Diversity, and Inclusion”) office.

“Peabody” is Vanderbilt’s College of Education and Human Development. Click below to read about the mistake—which I assume it was.

Here’s the entire email, which reveals the source of its prose at the bottom, though the bot’s text was said to be “paraphrased” (I’ve put a red box around the bot bit, as well as around the endless promotion of inclusivity and diversity and the call to examine our biases):

From the newspaper:

A note at the bottom of a Feb. 16 email from the Peabody Office of Equity, Diversity and Inclusion regarding the recent shooting at Michigan State University stated that the message had been written using ChatGPT, an AI text generator. [Note that the newspaper gives only the last paragraph of the full email.]

Associate Dean for Equity, Diversity and Inclusion Nicole Joseph sent a follow-up apology email to the Peabody community on Feb. 17 at 6:30 p.m. CST. She stated that using ChatGPT to write the initial email was “poor judgment.”

“While we believe in the message of inclusivity expressed in the email, using ChatGPT to generate communications on behalf of our community in a time of sorrow and in response to a tragedy contradicts the values that characterize Peabody College,” the follow-up email reads. “As with all new technologies that affect higher education, this moment gives us all an opportunity to reflect on what we know and what we still must learn about AI.”

The only justification for that email is that at least it cites its sources, which of course college students are supposed to do. It even gives the ChatGPT message as a “personal communication,” though a “robotic communication” would have been more appropriate. The paper beefs that there was only one “incident” and not “multiple” shootings, though I can’t be bothered about that.

I suspect what happened is that some semi-literate functionary decided to produce a model email using ChatGPT rather than express his/her own sentiments. But then, god almighty, the functionary was honest enough to send it out saying where it came from.

The reaction of the students was typical, and similar to mine:

Laith Kayat, a senior from Michigan whose younger sister attends MSU, stated that the EDI Office’s use of ChatGPT in drafting its email is “disgusting.”

“There is a sick and twisted irony to making a computer write your message about community and togetherness because you can’t be bothered to reflect on it yourself,” Kayat said. “[Administrators] only care about perception and their institutional politics of saving face.”

That’s a good statement. Here’s another:

Senior Jackson Davis, a Peabody undergraduate, said he was disappointed that the EDI Office allegedly used ChatGPT to write its response to the shooting. He stated that doing so is in line with actions by university administrations nationwide.

“They release milquetoast, mealymouthed statements that really say nothing whenever an issue arises on or off campus with real political and moral stakes,” Davis said. “I consider this more of a mask-off moment than any sort of revelation about the disingenuous nature of academic bureaucracy.”

I’m not sure what “moral and political stakes” Mr. Davis wanted highlighted here. A simple, humane message that expresses sorrow and empathy without politics would, I think, have been appropriate. And they should have left out all the “inclusivity and diversity” stuff, which strikes me as superfluous and off message. Statements about gun control and the like (an initiative that, as you know, I strongly approve of) are debatable claims that do not belong in official communiqués. You’d never see such a thing coming out of the University of Chicago, which maintains institutional neutrality on such issues despite considerable pressure from faculty and students to make the college take sides.

But to me, the most striking thing about the message above is that it seems to be using the tragedy as an excuse to flaunt the University’s virtue of promoting not only diversity, but “inclusivity”, mentioning that term, or “inclusive,” four times in a very short email. So beyond the heartlessness and lack of empathy involved in turning to ChatGPT, the email is doubly offensive because it’s touting DEI (or EDI) principles more than it is reaching out to people. And there’s not even a single word about showing empathy for the families and loved ones of those who were murdered.

I can ask only “what kind of tendentious mushbrains would put together a message like this?” They are taking advantage of a tragedy to promote a Social Justice agenda. This is the fruit of institutionalized DEI offices.

20 thoughts on “Vanderbilt responds to the Michigan State shooting by sending its own students a message written using ChatGPT”

  1. Actually, “inclusive” is used *5* times: there’s another one in the second line of the e-mail.

    And, of course, by “inclusive” they mean “people who agree with us”.

    1. When I see “inclusive” (unless in a well worked out argument), it works like a red flag: this is nonsensical PC/Woke speech, not to be paid serious attention to. I think even a robot could do it, oops!

    2. Six times! Also first line, paragraph three. Perhaps I might feel differently if I attended there, but I found the repetition very off-putting.

  2. “I can ask only ‘what kind of tendentious mushbrains would put together a message like this?'”

    People whose brains have been so ravaged by the woke mind virus that they are no longer functional.

  3. I saw a piece the other day (I think on CNN) about how ChatGPT would make realtors’ lives easier — by writing property listings for them. All I could think is that these are completely formulaic. To have ChatGPT “write” them, you’d have to enter the same info as you would in the form. How is this saving labor? The quality of ChatGPT productions aside, I think people are just enamored of the new technology, to the detriment of their responsibilities.

    1. I’d always treated property listings as fairly information-free, generic efforts that might have been written by a bot anyway. But point taken!

  4. What’s the beef? It is perfectly normal for all messages from a DEI office to be generated by a DEI-bot. Aren’t they already? Think of the salaries higher ed could save if all the human time-servers in DEI offices were overtly replaced by computer programs.

  5. Using a bot to write a condolence letter is tacky indeed, but it’s not all that different from posting the obligatory “thoughts and prayers” message. Neither is sincere and both are self-serving. It’s cheap, it’s fast, and it checks the “We care.” box.

    And yes, all that diversity stuff is exactly what a bot would write—endlessly repeated phrases sucked in from every corner of the internet, all of which is infested with DEI ideology.

    1. So we’ve moved past the “thoughts and prayers” response into the “sending heuristics and algorithms” phase…

  6. A mindless bureaucrat used a mindless chatbot to write a mindless letter about a tragedy — no part of this is unexpected. It makes perfect sense. The awkward fools who run DEI offices are something beneath a hack — this is quite objectively true: at least hacks produce their hackwork, and it’s more than mere office paperwork. DEI drones are talentless, intense conformists with no taste.

  7. It’s ironic: most colleges and universities are concerned that essays written outside of class, where students aren’t monitored, will be done by ChatGPT, as that is considered cheating. Receiving a communication from the administration itself that paraphrases ChatGPT is tacit approval of its use.

    This letter says: “All which follows is BS whipped up by a bot. That’s how important you, the students, are to us. That’s how important your education is.” Hopefully someone in authority at Vandy will send out an apology for this insulting mess. Turning young people into cynics shouldn’t be the purpose of university departments.

  8. Responding to mass shootings with pleas for greater acceptance of diversity and inclusivity is becoming almost as formulaic as sending “thoughts and prayers,” and undoubtedly no more effective. One can only wonder why ChatGPT didn’t generate a message suggesting that people come together to demand stricter gun control.

    Leaving aside the first four paragraphs (though I’m not suggesting they shouldn’t also be substantially amended along similar lines), the final paragraph would have been far more to the point had it read:

    “In the wake of the Michigan shootings, let us come together as a community to reaffirm our commitment to more effective gun control and promoting a culture of non-violence. By doing so, we can honor the victims of this tragedy and work towards a safer, more compassionate future for all.”

  9. The fact that, without the line at the bottom, it would have been hard to recognize that the letter was written by a chatbot is telling. I’ve long thought it would soon be possible (“soon” is apparently “now” now) to produce convincing AI-generated woke essays, but arguably impossible (I still hold that view) to produce convincing AI-generated essays about findings of real science. You can fake the superficial style, but not the content. Good science writing actually tends to be simple to read, with all the complexity buried in the subject matter. I would argue that this goes to the heart of the difference between science and wokism in general.

  10. There is a whiff of hypocrisy here. The letter came from the school’s office of Equity, Diversity, and Inclusion, and people are ranting against their use of AI? Don’t diversity and inclusion apply to artificial intelligences as well?

    If not, seems like human-centric racism.
