Arizona State University will allow use of AI generators for law school applications

July 30, 2023 • 1:15 pm

I’m not aware of any university that explicitly allows students to use “bots” (AI generators such as ChatGPT) to prepare their applications, but it was only a matter of time.  This announcement from the law school of Arizona State University explicitly allows students to use AI in their applications, most likely in their application essays. Click to read, but I’ve reproduced the meat of the announcement below:

Here’s most of what they say:

The Sandra Day O’Connor College of Law at Arizona State University, ranked the nation’s most innovative university since 2016, announces that applicants to its degree programs are permitted to use generative artificial intelligence (AI) in the preparation of their application and certify that the information they submit is accurate, beginning in August 2023.

The use of large language model (LLM) tools such as ChatGPT, Google Bard and others has accelerated in the past year. Its use is also prevalent in the legal field. In our mission to educate and prepare the next generation of lawyers and leaders, law schools also need to embrace the use of technology such as AI with a comprehensive approach.

“Our law school is driven by an innovative mindset. By embracing emerging technologies, and teaching students the ethical responsibilities associated with technology, we will enhance legal education and break down barriers that may exist for prospective students. By incorporating generative AI into our curriculum, we prepare students for their future careers across all disciplines,” says Willard H. Pedrick Dean and Regents Professor of Law Stacy Leeds.

. . . Our Center for Law, Science, and Innovation (LSI) has been leading the field in the understanding and expansion of technology in law since its establishment 30 years ago. Nearly every field within the law now involves interactions with technology that is rapidly changing and evolving. Lawyers comfortable dealing with the scientific and technological aspects underlying many legal issues are in high demand worldwide. Artificial intelligence, along with its related technologies, has quickly emerged as one of the most fundamental technologies affecting all aspects of our lives and the law today, one that LSI has been examining closely for many years.

We are embracing this technology because we see the benefits it may bring to students and future lawyers. Generative AI is a tool available to nearly everyone, regardless of their economic situation, that can help them submit a strong application when used responsibly.

Now why are they doing this? They give a couple of reasons, the most unconvincing being that the law school has always embraced “the expansion of technology in law”, and this is a new form of technology; familiarity with it can help the students. (That doesn’t mean, however, that you have to use it in an application essay!)  Also, they argue that using AI can help students “submit a strong application when used responsibly.”  I have a sneaking suspicion that this is being done as a DEI initiative, as it says that “Generative AI is a tool available to nearly everyone, regardless of their economic situation.”

But that makes it counterproductive, because it takes away from the admissions committee any judgment about whether a student is able to write. Isn’t that part of judging an application—seeing whether a student can write a coherent essay?  Now everyone can write a coherent essay because the bot will do it for them! The result of using bots is that the differential writing abilities of the students will be minimized, and I can’t imagine what answer the school would have to that except that “we WANT everybody to write on a level playing field.”

At least ASU Law still requires the Law School Admission Test, as well as grade-point averages and this stuff:

. . . . . quality and grading patterns of undergraduate institutions, previous graduate education, demonstrated commitment to public service, work and leadership experience, extracurricular or community activities, history of overcoming economic or other disadvantages, uniqueness of experience and background, maturity, ability to communicate, foreign language proficiency, honors and awards, service in the armed forces, and publications.

Note the “history of overcoming economic or other disadvantages,” which surely comes straight from the recent Supreme Court decision banning affirmative action. But note as well that you’re supposed to have a good “ability to communicate”.  How can you show that if you’re using a bot?


h/t: Luana

18 thoughts on “Arizona State University will allow use of AI generators for law school applications”

  1. Lawyers and law firms who use chatbots for rough drafts of certain documents will probably be able to save time and effort. It will still require a human “editor”, at least for a few generations of chatbots, but that’s still faster, and time is money. If you believe that a professional school should train students for actual skills relevant to the profession, ASU is off to a good start.

  2. I suspect that you are right that this is being done to level the playing field. Now to see when students are required to use AI, so that those who can write, and choose to, don’t have an advantage.

  3. I’m not sure; for law school it may make sense, but in science?
    Will the future DoJ be ‘manned’ by AI, the AG a bot? I think a lot of new rules and regulations concerning AI need to be constructed.

  4. And yet at some point, the pretense has to drop. Is it when, and if, the person is hired as a lawyer or paralegal? Because you know that bar exams, etc will be dumbed down as well.

  5. Maybe I’m being too charitable, but I think they might be trying to weed out lazy people by letting them submit chatbot applications, and will reject anyone who does! It should be easy to tell who took the time to write a real application.

    1. It’s ASU. I wouldn’t hold my breath. Their usual standard for admission seems to be “Is there a pulse?” (Note that I say this as a member of the faculty of a community college in Arizona—a lot of my students end up at ASU and I work with folk from ASU regularly to discuss articulation issues; I don’t have many kind things to say about ASU.)

  6. I’m curious about the word “generative” as used in “generative AI”. Seems crudely referential of “generator”.

    Would this be AI (or other specific AI) that generates output, and if so, what else would AI do? Not produce output?

    Now I must read a bit….

    … and I have returned, with:

    “Historically, AI was used to understand and recommend information. Now, generative AI can also help us create new content. Generative AI builds on existing technologies, like large language models (LLMs) which are trained on large amounts of text and learn to predict the next word in a sentence. For example, “peanut butter and ___” is more likely to be followed by “jelly” than “shoelace”. Generative AI can not only create new text, but also images, videos, or audio.”
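    The “predict the next word” idea in that quote can be sketched with a toy example. This is only an illustration, not a real language model: real LLMs learn probabilities over huge vocabularies from training data, whereas the frequency table below is invented by hand just to mimic the “peanut butter and ___” case.

    ```python
    # Toy illustration of next-word prediction (not a real LLM):
    # a hand-built frequency table mapping a context to candidate
    # continuations, from which we pick the most frequent one.
    next_word_counts = {
        ("peanut", "butter", "and"): {"jelly": 98, "honey": 25, "shoelace": 1},
    }

    def predict_next(context):
        """Return the most frequent continuation for a known context,
        or None if the context was never seen."""
        counts = next_word_counts.get(tuple(context), {})
        return max(counts, key=counts.get) if counts else None

    print(predict_next(["peanut", "butter", "and"]))  # prints "jelly"
    ```

    A real model does the same kind of thing, but with learned probabilities conditioned on the entire preceding text rather than a fixed lookup table.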

  7. If I were applying today to a university / law school / job I’d use AI to get a rough draft or outline, and then edit, edit, edit. A lot of gray area when it comes to “AI writing applications.” And how could you stop it, anyway? It’s a classic example of a Pandora’s Box.

    And in regards to law school, it might help you get in, but if you rely on AI to get a degree and pass the bar, good luck! AI can’t memorize or learn for you.

    1. Alternatively, services like Grammarly suggest writing the essay yourself, then running it through a GenAI system for editing. This is, fundamentally, one of the services which Grammarly offers. Honestly, I am much more okay with this approach than with having the AI write the first draft.

  8. I’m curious about whether anyone has addressed the differences between this and using a calculator for arithmetic operations rather than doing that by hand.

    1. Speaking as a mathematics graduate, I’m not sure that the two are equivalent.

      I was in school in England in the mid 1970s when calculators were just becoming widespread. I’d been taught how to do long multiplication and division by hand at age 9 or 10, and later, around age 12 or 13, how to use logarithms to make those operations simpler by turning them into additions and subtractions. That was also when I got my first calculator, and by the time I sat my O-levels (in 1979), calculators were permitted in those exams.

      I went on to study mathematics at A-level and at university, and whilst calculators were allowed in exams, most of the syllabus was algebra and calculus — all calculations were symbolic, not numerical, so there was little or no scope to use a calculator.

      In my A-level physics and chemistry classes, and in my astronomy classes at university, there were some topics that required numerical calculations — stoichiometry experiments in chemistry class spring to mind — but even there, the exam questions tested whether we understood the principles, not whether we could do arithmetic.

      Using generative AI to write essays would be more like getting the computer to work out the algebraic manipulations, or figure out how to integrate a mathematical function in calculus, a notoriously difficult problem. And there ARE software packages that can do symbolic mathematics — they have been around since the early 1960s, and some of them could easily be used to answer the algebra and calculus questions on a British A-level mathematics exam.

      I’ve used some of these computer algebra systems myself on research projects which involved large-scale algebraic manipulation, because I wanted the CORRECT answer, darn it, and even someone with a mathematics Ph.D. can make mistakes in a long and complicated calculation.

  9. I think this is actually a good idea. The essays should be evaluated as before and will need at least a fair amount of tweaking before they can be submitted if they have any hope of standing out. Of course if the University is not terribly selective in the first place, that issue might be moot.

  10. Is it time to rethink the “essay” as an admissions tool? Why would any university want to read an AI created essay, all of which will soon be “adversity essays” if they’re not already?
    We will soon have students using AI to generate essays and admissions officers using AIs to grade them. When that happens, I guess the machines will truly have won.

  11. To quote the tagline for comedian Courtney Pauroso’s show at this year’s Edinburgh fringe, “Artificially intelligent. Genuinely stupid”.
