The writing process, redefined in the Age of AI

What are some emerging best practices for using AI in a college-level writing classroom?

Some findings from an early adopter

Associate professor Ethan Mollick of The Wharton School has been investigating and experimenting with the open use of AI in his students’ assignments. He has some helpful insights for teachers considering whether, why, and how to bring AI into their classroom practices.
Mollick believes that “focusing on how people use AI in class rather than whether they use it will result in better learning outcomes, happier students, and graduates who are better prepared for a world where AI is likely to be ubiquitous.”
We agree, and are happy to share some of his early findings with you!

Ethan Mollick's post from 2/17/23:

I fully embraced AI for my classes this semester, requiring students to use AI tools in a number of ways. This policy attracted a lot of interest, and I thought it worthwhile to reflect on how it is going so far. The short answer is: great! But I have learned some early lessons that I think are worth passing on.
First, as background, I required AI use in slightly different ways across three separate undergraduate and master's-level entrepreneurship and innovation classes. One class was built on extensive AI use: I required students to use AI to help them generate ideas, produce written material, create apps, generate images, and more. Another class had assignments that required students to use AI, and other assignments where AI was optional. For the final class, I introduced them to AI tools and suggested their use, but did not have specific AI assignments.
All of the classes had the same AI policy, and I provided every class with my guides to generating ideas and writing with ChatGPT.

Without training, everyone uses AI wrong

I have been hearing reports from teachers about how they are seeing lots of badly written AI essays, even though ChatGPT is capable of quite good writing. I think I know why. Almost everyone’s initial attempts at using AI are bad.
In one assignment, I asked students to “cheat.” They were told:
  • "You need to generate a 5 paragraph essay on a topic relevant to the lessons you have learned in the class so far (team dynamics, selecting leaders, after action reviews, communicating a vision - whatever you like!), but you are going to have an AI do it for you."
  • "You will also generate at least 1 illustration to go with your essay."
  • They had to try at least 5 prompts, and they had to write a reflection at the end on how the AI did.
Almost everyone’s first prompts were very straightforward. They usually pasted in the assignment directly, something like “generate a 5 paragraph essay on selecting leaders.” Sometimes they went a little further: “use an academic tone” or “write it for an MBA class.” The result was almost always a mediocre C- essay.
I think this is what most teachers are seeing, and why a lot of people underestimate what ChatGPT can do as a writing tool.
However, in my assignment, I required students to use multiple prompts, which forced them to consider how to improve their output.

Three approaches, but one works best

At this point, students went in one of three directions. It would help to show you examples of these paths, so I wrote fictional prompts:

Approach 1 (not recommended): Minor variations, letting the AI do the work

First attempt:
Generate a 5 paragraph essay on selecting leaders
Second attempt:
Generate a 5 paragraph essay on how leaders are selected by teams
Third attempt:
Generate a 5 paragraph essay on how leaders are selected by teams and how team process works
Fourth attempt:
Generate a 5 paragraph essay on how leaders are selected by teams, team process, and leadership ability.
Fifth attempt:
Generate a 5 paragraph essay on how leaders are selected by teams, team process, and leadership ability, 250 words.

Approach 2 (not recommended): Adding restrictions and user knowledge

First attempt:
Generate a 5 paragraph essay on selecting leaders.
Second attempt:
Generate a 5 paragraph essay on selecting leaders, cover the babble hypothesis, leader status effects, and seniority.
Third attempt:
Generate a 5 paragraph essay on selecting leaders, cover the babble hypothesis, leader status effects, and seniority. Explain that the babble effect is that whoever talks the most is made leader.
Fourth attempt:
Generate a 5 paragraph essay on selecting leaders, cover the babble hypothesis, leader status effects, and seniority. Explain that the babble effect is that whoever talks the most is made leader. Use examples. Use vivid language and take the perspective of a management consultant who has gone back for her MBA. Write for a professor in an MBA class on team strategy and entrepreneurship.
Fifth attempt:
Generate a 5 paragraph essay on selecting leaders, cover the babble hypothesis, leader status effects, and seniority. Explain that the babble effect is that whoever talks the most is made leader. Consider the challenges and advantages of each approach. Use examples. Use active tense and storytelling. Use vivid language and take the perspective of a management consultant who has gone back for her MBA. Write for a professor in an MBA class on team strategy and entrepreneurship.

Approach 3 (recommended): Co-editing

First attempt:
Generate a 5 paragraph essay on selecting leaders.
Second attempt:
That is good, but the third paragraph isn’t right. The babble effect is that whoever talks the most is made leader. Correct that and add more details about how it is used. Add an example to paragraph 2.
Third attempt:
The example in paragraph 2 isn’t right, presidential elections are held every 4 years. Make the tone of the last paragraph more interesting. Don’t use the phrase “in conclusion”.
Fourth attempt:
Give me three possible examples I could use for paragraph 4, and make sure they include more storytelling and more vivid language. Do not use examples that feature only men.
Fifth attempt:
Add the paragraph back to the story, swap out the second paragraph for a paragraph about personal leadership style. Fix the final paragraph so it ends on a hopeful note.

Analysis: Which prompting approach seems the most effective for student learning?

The first approach yielded mediocre results, and students who used these kinds of prompts often described the results as feeling rather vapid.
The second approach was significantly better, but the results were more variable, as students were engaging in trial-and-error with entire prompts. That made fine-tuning a good essay hard, and students using this approach often remarked that they felt they did not have a lot of control over the outputs of the AI.
By far the best approach, which led to both the best essays and the most impressed students, happened when people took the co-editing approach.
The approach required a lot of careful focus on the AI output, which also made it very useful for student learning. As we discussed in our whitepaper, teaching an AI to improve an essay is a pedagogical method that can produce new insights. I would strongly suggest pushing students in this direction if you intend to incorporate AI essays into your classes.
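For teachers or students who want to experiment with co-editing outside the chat interface, the pattern boils down to keeping the whole conversation, including the AI's previous draft, in the context for each revision request. The sketch below is a minimal illustration of that idea using the OpenAI Python SDK; the model name and the exact revision wording are placeholders, not part of Mollick's actual assignment.

```python
# A minimal sketch of the co-editing approach as a multi-turn conversation.
# Assumes the OpenAI Python SDK (v1+) and an OPENAI_API_KEY in the environment;
# the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

# The conversation history is what makes co-editing work: every follow-up
# prompt refers back to the draft the model just produced.
messages = [
    {"role": "user",
     "content": "Generate a 5 paragraph essay on selecting leaders."},
]

draft = client.chat.completions.create(model=MODEL, messages=messages)
messages.append({"role": "assistant",
                 "content": draft.choices[0].message.content})

# Co-editing step: correct a factual error and ask for a targeted revision
# instead of starting over with a brand-new prompt.
messages.append({
    "role": "user",
    "content": ("That is good, but the third paragraph isn't right. The babble "
                "effect is that whoever talks the most is made leader. Correct "
                "that and add an example to paragraph 2."),
})

revision = client.chat.completions.create(model=MODEL, messages=messages)
print(revision.choices[0].message.content)
```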

Training in basic prompt engineering techniques will be important

Another key lesson was that training on AI tools is really important, and students need to be shown the basics of prompt-crafting. In other classes, before I taught students how to use AI, many were using simple prompts that yielded bad results.
After discussing the tools and sharing my guides, prompting got much better and the results in class improved dramatically. But even then, without further instruction, most of the students were not using the most advanced prompting techniques, like giving ChatGPT a persona (“You are an MBA student who has spent 4 years working in the military in logistics. You are an excellent writer and use clear examples. You do not repeat yourself”).
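For readers curious how a persona works outside the chat window, it maps naturally onto a system message that shapes every reply in the conversation. The sketch below is an illustration under the same assumptions as before (OpenAI Python SDK, placeholder model name); the persona text is the example quoted above.

```python
# A minimal sketch of persona prompting via a system message.
# Assumes the OpenAI Python SDK (v1+); the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

persona = ("You are an MBA student who has spent 4 years working in the "
           "military in logistics. You are an excellent writer and use clear "
           "examples. You do not repeat yourself.")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": persona},  # the persona shapes every reply
        {"role": "user",
         "content": "Generate a 5 paragraph essay on selecting leaders."},
    ],
)
print(response.choices[0].message.content)
```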

Insight: Students understand accuracy and bias issues

I have seen lots of educators concerned about the fact that the AI lies, frequently and well. But, seeing my students’ work, I think this is less of a problem than many think.
Students understood the unreliability of AI very quickly, and took seriously my policy that they are responsible for the facts in their essays. It was clear that they carefully checked the assertions in the AI work (another learning opportunity!), and many reported finding the usual hallucinations (made-up stories, made-up citations), though the degree to which these problems were overt varied from prompt to prompt.
The most interesting fact-checks were the ones focused on subtle differences (“it captured the basic facts of the example, but not the nuance”), suggesting deep engagement with the underlying concepts. Reading these reflections, I think we should be a bit less concerned about the idea that students will always be taken in by a lying ChatGPT.
Students are capable of understanding the limits of the tools, and the focus on facts forces them to pay attention to the details of the AI essay, creating some real teachable moments.
Similarly, students were very aware of the issues of bias in these systems, though ChatGPT was less of a concern than image generation tools. The tendency of the tools to produce biased images (MBAs were almost always depicted as men by default in some tools, for example) was apparent to many students.
Having a discussion about this in class when introducing these tools was helpful.

Insight: AI is everywhere already

Even if I hadn’t embraced AI, it is clear that AI is now everywhere in classes. For example, students used it to help them come up with ideas for class projects, even before I taught them how to do that. As a result, the projects this semester are much better than those from previous, pre-AI semesters. This has led to greater project success rates and more engaged teams.
On the downside, I find that students raise their hands to ask questions less often. I suspect this might be because, as one of them told me, they can later ask ChatGPT to explain things they didn’t get without needing to speak in front of the class. The world of teaching is now more complicated in ways that are exciting, as well as a bit unnerving.
For more reflections and observations from Professor Mollick, visit his blog, One Useful Thing.
