ChatGPT is uber-woke

February 24, 2023 • 11:20 am

If you’ve fooled around with ChatGPT—and who hasn’t?—you may have noticed that it’s programmed to be woke. I guess that’s no surprise, but the quantity of “progressive” verbiage and gratuitous moralizing is almost unbelievable. Try it yourself here (just ask it a question or ask it to write an essay). Loads of fun!

A reader from Catalonia was doing this and discovered the fulminating wokeness of the site. I enclose his words (with permission) and the answer he got from the bot:

I’ve been having a lot of fun with ChatGPT (https://chat.openai.com/chat) and trying to understand its limits and capacities. My provisional conclusion is that it is the most stupid thing ever invented that, at the same time, looks as if it is intelligent beyond words; it’s also quite annoying when, as very often happens, it starts moralizing or when it withholds what it considers controversial information or gives you unsolicited ethical advice. While chatting with it, I asked a question about gender and chosen pronouns and, in a screenshot I’m enclosing, there’s the answer that I got. (Sorry for my English. My first language is Catalan).

His English is, as you see, perfect, and here’s the question and answer involved. It gives the answer in the first sentence; the rest is boilerplate moralizing. Note the trope about “sex assigned at birth”, which always bothers me, for it’s not assigned, but recognized.  

If you want, ask it a question, preferably one with a short answer, and report it in the comments. There’s no doubt that “progressive” Leftists were involved in the programming!

I just thought of my own question. Here’s how the bot responded. Note the false claim that “hate speech” is not protected by the First Amendment (“the principle of free speech”). Of course it is! The bot is LYING!

(Red emphasis is mine):

Vanderbilt responds to the Michigan State shooting by sending its own students a message written using ChatGPT

February 19, 2023 • 9:20 am

ChatGPT, the bot site that automatically produces prose, is in the news again, but not in a humorous way and not as an example of students cheating. Rather, Vanderbilt University got the bot to write an official message to its students.

As the Vanderbilt Hustler (the student newspaper of Vanderbilt University) reports, the bot was used to write a message of consolation to the students after the Michigan State University shooting on February 13 that killed three. The robot message was then sent to students by the school’s EDI office (“Equity, Diversity, and Inclusion”).

“Peabody” is Vanderbilt’s College of Education and Human Development. Click below to read about the mistake—which I assume it was.

Here’s the entire email, which reveals the very source of its prose at the bottom, though it was said to be “paraphrased” (I’ve put a red box around the bot bit, the endless promotion of inclusivity and diversity, and the call to examine our biases):

From the newspaper:

A note at the bottom of a Feb. 16 email from the Peabody Office of Equity, Diversity and Inclusion regarding the recent shooting at Michigan State University stated that the message had been written using ChatGPT, an AI text generator. [Note that the newspaper gives only the last paragraph of the full email.]

Associate Dean for Equity, Diversity and Inclusion Nicole Joseph sent a follow-up, apology email to the Peabody community on Feb. 17 at 6:30 p.m. CST. She stated using ChatGPT to write the initial email was “poor judgment.”

“While we believe in the message of inclusivity expressed in the email, using ChatGPT to generate communications on behalf of our community in a time of sorrow and in response to a tragedy contradicts the values that characterize Peabody College,” the follow-up email reads. “As with all new technologies that affect higher education, this moment gives us all an opportunity to reflect on what we know and what we still must learn about AI.”

The only justification for that email is that at least it cites sources, which of course college students are supposed to do. It even gives the ChatGPT message as a “personal communication,” though a “robotic communication” would have been more appropriate. The paper beefs that there was only one “incident” and not “multiple” shootings, though I can’t be bothered about that.

I suspect what happened is that some semi-literate functionary decided to produce a model email using ChatGPT rather than express his/her own sentiments. But then, god almighty, the functionary was honest enough to send it out saying where it came from.

The reaction of the students was typical, and similar to mine:

Laith Kayat, a senior, is from Michigan, and his younger sister attends MSU. He stated that the EDI Office’s use of ChatGPT in drafting its email is “disgusting.”

“There is a sick and twisted irony to making a computer write your message about community and togetherness because you can’t be bothered to reflect on it yourself,” Kayat said. “[Administrators] only care about perception and their institutional politics of saving face.”

That’s a good statement.  Here’s another:

Senior Jackson Davis, a Peabody undergraduate, said he was disappointed that the EDI Office allegedly used ChatGPT to write its response to the shooting. He stated that doing so is in line with actions by university administrations nationwide.

“They release milquetoast, mealymouthed statements that really say nothing whenever an issue arises on or off campus with real political and moral stakes,” Davis said. “I consider this more of a mask-off moment than any sort of revelation about the disingenuous nature of academic bureaucracy.”

I’m not sure what “moral and political stakes” Mr. Davis wanted highlighted here. A simple, humane message that expresses sorrow and empathy without politics would, I think, have been appropriate. And they should have left out all the “inclusivity and diversity” stuff, which strikes me as superfluous and off message. Statements about gun control and the like (an initiative that, as you know, I strongly approve of) are debatable and do not belong in official communiqués. You’d never see such a thing coming out of the University of Chicago, which maintains institutional neutrality on such issues, despite considerable pressure from faculty and students to make the college take sides.

But to me, the most striking thing about the message above is that it seems to be using the tragedy as an excuse to flaunt the University’s virtue of promoting not only diversity, but “inclusivity”, mentioning that term, or “inclusive,” four times in a very short email. So beyond the heartlessness and lack of empathy involved in turning to ChatGPT, the email is doubly offensive because it’s touting DEI (or EDI) principles more than it is reaching out to people. And there’s not even a single word about showing empathy for the families and loved ones of those who were murdered.

I can ask only “what kind of tendentious mushbrains would put together a message like this?” They are taking advantage of a tragedy to promote a Social Justice agenda. This is the fruit of institutionalized DEI offices.

A bot discusses the relationship between science and religion

January 28, 2023 • 11:30 am

Matthew was trying out the ChatGPT site—the one in which an AI bot answers questions and writes essays on any specified topic. It’s being used widely, especially by students who get the bot to write their assignments—cheating, of course.  Here’s what Matthew did:

I gave ChatGPT an essay prompt: “Science and religion are enemies. Discuss.”

Science and religion have long been considered to be in opposition to each other, with some arguing that they are outright enemies. However, this view is not entirely accurate and a more nuanced understanding is necessary to fully grasp the relationship between these two fields. In this essay, I will discuss the ways in which science and religion have been perceived as enemies, as well as the ways in which they can complement each other.

One of the main reasons why science and religion are often seen as enemies is due to the fact that they have different ways of understanding the world. Science is based on empirical evidence and the scientific method, while religion is based on faith and spiritual beliefs (Duarte, 2018). This can lead to conflicts, particularly when scientific discoveries challenge religious beliefs. For example, the theory of evolution has been met with resistance from certain religious groups who believe that it contradicts their creationist beliefs (Numbers, 1992). Additionally, the idea of an old earth and the Big Bang theory have also been met with resistance from religious groups who believe in a young earth and literal interpretation of the creation story in the Bible (Numbers, 1992).

Additionally, some argue that science and religion have different goals and aims. Science aims to understand the natural world through observation and experimentation, while religion aims to provide a sense of meaning and purpose in life (Pigliucci, 2002). These different goals can lead to tensions and conflicts, as some may see science as threatening to their religious beliefs. For example, the idea of naturalism in science, which posits that everything can be explained by natural causes, can be seen as a threat to religious belief in supernatural causes (Pigliucci, 2002).

However, it is important to note that science and religion do not always have to be in conflict. In fact, many scientists and religious individuals have found ways to reconcile their beliefs and integrate them into their understanding of the world (Polkinghorne, 2002). For example, some religious individuals have found ways to reconcile evolution with their religious beliefs by seeing it as a means through which God created life (Polkinghorne, 2002). Additionally, some scientists have recognized the limitations of science and have turned to religion for a sense of meaning and purpose in life (Polkinghorne, 2002).

Furthermore, science and religion can also complement each other in certain ways. For example, science can provide a deeper understanding of the natural world, which can enhance one’s spiritual experience (Polkinghorne, 2002). Additionally, religion can provide a moral framework that can guide scientific research and ensure that it is conducted ethically (Polkinghorne, 2002). For instance, religious moral principles such as non-maleficence and beneficence can guide the ethical conduct of scientific research, ensuring that it is done in a way that minimizes harm and maximizes benefit (Beauchamp and Childress, 2013).

In conclusion, the relationship between science and religion is complex and multifaceted. While there may be conflicts and tensions between the two, it is important to recognize that they do not have to be enemies. Instead, they can complement each other and provide a more holistic understanding of the world. It is important to approach the relationship between science and religion with an open mind and a willingness to consider different perspectives.

References:

    • Beauchamp, T. L., & Childress, J. F. (2013). Principles of biomedical ethics. Oxford University Press.
    • Duarte, J. (2018). Science, Religion, and the Meaning of Life. Springer.
    • Numbers, R. L. (1992). The Creationists: The Evolution of Scientific Creationism. University of California Press.
    • Pigliucci, M. (2002). De

(The bot quit before giving all the references, but they’re easy enough to look up.)

Can you tell that this essay, complete with references, was composed by a programmed AI machine? I can’t. Moreover, it makes sense and covers a number of relevant areas. In fact, I could have written the first paragraph! If this were part of a take-home essay exam, it would no doubt have earned a decent grade.

Now you see why ChatGPT is so beloved by students and so feared by educators. But there are ways around it, and teachers are finding them. You can give in-class exams, as I did, or ask questions that a bot simply could not answer. There are also ways of checking prose to gauge the probability that it was written by a bot, but as bots improve, those prose-checkers will become less useful. I doubt that any of us could, by reading it alone, tell that this wasn’t written by a human!