I’m not aware of any university that explicitly allows students to use “bots” (AI generators such as ChatGPT) to prepare their applications, but it was only a matter of time. This announcement from the law school of Arizona State University explicitly allows students to use AI in their applications, most likely in their application essays. Click to read, but I’ve reproduced the meat of the announcement below:
Here’s most of what they say:
The Sandra Day O’Connor College of Law at Arizona State University, ranked the nation’s most innovative university since 2016, announces that applicants to its degree programs are permitted to use generative artificial intelligence (AI) in the preparation of their application and certify that the information they submit is accurate, beginning in August 2023.
The use of large language model (LLM) tools such as ChatGPT, Google Bard and others has accelerated in the past year. Its use is also prevalent in the legal field. In our mission to educate and prepare the next generation of lawyers and leaders, law schools also need to embrace the use of technology such as AI with a comprehensive approach.
“Our law school is driven by an innovative mindset. By embracing emerging technologies, and teaching students the ethical responsibilities associated with technology, we will enhance legal education and break down barriers that may exist for prospective students. By incorporating generative AI into our curriculum, we prepare students for their future careers across all disciplines,” says Willard H. Pedrick Dean and Regents Professor of Law Stacy Leeds.
. . . Our Center for Law, Science, and Innovation (LSI) has been leading the field in the understanding and expansion of technology in law since its establishment 30 years ago. Nearly every field within the law now involves interactions with technology that is rapidly changing and evolving. Lawyers comfortable dealing with the scientific and technological aspects underlying many legal issues are in high demand worldwide. Artificial intelligence, along with its related technologies, has quickly emerged as one of the most fundamental technologies affecting all aspects of our lives and the law today, one that LSI has been examining closely for many years.
We are embracing this technology because we see the benefits it may bring to students and future lawyers. Generative AI is a tool available to nearly everyone, regardless of their economic situation, that can help them submit a strong application when used responsibly.
Now why are they doing this? They give a couple of reasons, the most unconvincing being that the law school has always embraced “the expansion of technology in law”, and this is a new form of technology; familiarity with it can help the students. (That doesn’t mean, however, that you have to use it in an application essay!) Also, they argue that using AI can help students “submit a strong application when used responsibly.” I have a sneaking suspicion that this is being done as a DEI initiative, as it says that “Generative AI is a tool available to nearly everyone, regardless of their economic situation.”
But that makes it counterproductive, because it deprives the admissions committee of any way to judge whether a student can write. Isn’t that part of judging an application—seeing whether a student can write a coherent essay? Now everyone can write a coherent essay because the bot will do it for them! The result of using bots is that the differences in students’ writing abilities will be minimized, and I can’t imagine what answer the school would have to that except “we WANT everybody to write on a level playing field.”
At least ASU Law still requires the Law School Admission Test, as well as grade-point averages and this stuff:
. . . . . quality and grading patterns of undergraduate institutions, previous graduate education, demonstrated commitment to public service, work and leadership experience, extracurricular or community activities, history of overcoming economic or other disadvantages, uniqueness of experience and background, maturity, ability to communicate, foreign language proficiency, honors and awards, service in the armed forces, and publications.
Note the “history of overcoming economic or other disadvantages,” which surely comes straight from the recent Supreme Court decision banning affirmative action. But note as well that you’re supposed to have a good “ability to communicate”. How can you show that if you’re using a bot?