
Learning with AI: Promising practices for students

Some tips for students working with AI-powered chatbots

AI-powered learning for students

Large Language Models offer learners a lot of power, and with this power comes an equal responsibility to use it thoughtfully and ethically. There are some great ways to use this technology to inspire and propel your learning, and some irresponsible ways that may actually be harmful. We’re here to help you strengthen your judgment of which are which!

Keep your critical thinking skills sharp!

Don’t trust! Verify!
Teachers and parents are rightly concerned that increasing reliance on AI-powered chatbots may lead to a decrease in students’ critical thinking skills or problem-solving abilities. The need to develop strong discretion, judgment, and reasoning skills has never been greater.
In addition to the obvious potential for letting an AI do your thinking for you, we know that AIs aren’t always right! They can give you inaccurate information, and they don’t have any judgment!
  • They sometimes calculate math incorrectly, or in an inefficient way.
  • When their training data doesn’t include sufficient information for them to answer correctly, they sometimes just make things up! This is sometimes called “hallucinating”. It can take the form of websites that don’t exist, or completely imaginary data.

How should I talk to an AI for best results?

Use a "genius in the room" mental model

This “mental model”, shared by our friends at OpenAI, is a helpful way to think about how to get good results when talking to an AI:
An illustration: a standing figure on the left, with the words "I have questions" above their head, holds a long list labeled "prompt." The list trails under a door in the center of the image to a crouching figure on the other side, with the words "I will give you words" above their head, who reads the instructions.
Caption: The "genius in the room" mental model to help with prompt engineering
  • Imagine you live next door to a genius who is familiar with almost everything that was ever written before the end of 2021. This genius also gets things wrong sometimes. The only way you can communicate with them is to slide a piece of paper under the door and ask for a reply.
  • The genius doesn’t know anything about you or the problem you’re trying to solve.
  • The genius can't see your face, doesn't know where you are, cannot read your emotions, doesn't have the unique knowledge you have, and has no idea what you are trying to do.
  • The genius only accepts questions when they are written on a piece of paper and slipped underneath the door.
Given that, how would you communicate with the genius?
Here are some best practices:
  • Explain the problem clearly: Remember, the genius is a totally clean slate and knows nothing about you or your problems!
  • Explain the structure of the output you want: Here are some examples: "Answer in a bulleted list," "Respond in fewer than 100 words," "Answer in the form of a limerick"
  • Explain the tone, style, or personality you want it to communicate with: For example, "Answer the question as a patient math teacher"
  • Give the genius any special tools they might need to meet your request: For example, provide any unique knowledge needed for the task.
The statement you are crafting is called a prompt, and the craft of writing prompts is called prompt engineering. If a model you are interacting with doesn't return what you want, revise how you ask and try again. It sometimes feels more natural to conclude, "oh, it isn't good at that," but often the model just needs more guidance about what you want.
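The best practices above can be sketched as a small helper that assembles a prompt piece by piece. This is just an illustration of the idea, not a real chatbot API; the function and parameter names (`build_prompt`, `output_format`, `tone`, `context`) are made up for this example.

```python
# A minimal sketch of the best practices above as a prompt builder.
# All names here are illustrative, not part of any real chatbot API.

def build_prompt(problem, output_format=None, tone=None, context=None):
    """Assemble a prompt covering the four best practices:
    problem, output structure, tone, and special knowledge."""
    parts = [f"Problem: {problem}"]  # explain the problem clearly
    if context:
        # give the genius any special knowledge they might need
        parts.append(f"Background you should know: {context}")
    if tone:
        # explain the tone, style, or personality you want
        parts.append(f"Respond as: {tone}")
    if output_format:
        # explain the structure of the output you want
        parts.append(f"Format your answer like this: {output_format}")
    return "\n".join(parts)

prompt = build_prompt(
    problem="Explain why the sum of two even numbers is always even.",
    output_format="a bulleted list of fewer than 100 words",
    tone="a patient math teacher",
    context="The student just learned that an even number is 2 times an integer.",
)
print(prompt)
```

Notice that leaving any field out still produces a usable prompt; you can start simple and add structure, tone, or background only when the first answer misses the mark.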

A word on academic honesty

There are all sorts of ways that ChatGPT and other AI-driven resources might be used that hurt your learning.
Here are two harmful ways that AI can be used—both of which could get you in a lot of trouble!
  • Using it to give you the answer to homework questions
  • Using it to help you write essays in a way that is not explicitly approved by your school’s academic honesty or anti-plagiarism policy
Schools take academic honesty very seriously! If you break the rules in order to give yourself an unfair advantage or make your work look better than it really is, it could not only lead to consequences like getting a zero on an assignment or even being suspended, but it also cheats you out of an opportunity to learn.

Stay sharp! The world needs your critical thinking skills more than ever!

When you start communicating with an AI, you need to stay on your toes for so many reasons. It knows a lot of things, but it doesn’t always know when it is wrong! In fact, it will sometimes double down on falsehoods.
Don’t trust it—review its output carefully and check every “fact” or claim.
