How to Write Better AI Prompts with the PARR Framework

Written By Mike McGee

Edited By Liz Eggleston

Last updated December 9, 2025

Course Report strives to create the most trustworthy content about coding bootcamps. Read more about Course Report’s Editorial Policy and How We Make Money.

If you’ve ever typed a question into an AI tool and wondered why the answer felt generic or off-base, you’re not alone. The difference between a mediocre response and a great one often comes down to how you prompt. Kat Kemner, an AI instructor at General Assembly, teaches a simple structure called the PARR framework – Persona, Action, Rules, Refine – that anyone can use to get clearer, more useful outputs from large language models like ChatGPT, Claude, or Gemini.

The PARR Framework: Persona, Action, Rules, Refine

Two people can ask the “same” question and get totally different responses based on how they prompt. PARR gives you a quick checklist to structure your request.

🏌️ Why It’s Called “PARR”

Kat calls this PARR on purpose: just like in golf, you probably won’t get a hole-in-one.

Even with a strong initial prompt, you usually won’t get a perfect answer right away. Each iteration gets you closer to the desired outcome.

The goal is to treat AI as a collaborator, not a vending machine:

  • Read the response

  • Tell the model what needs to be fixed, expanded, or refocused

  • Ask follow-up questions or give corrective feedback

  • Iterate until it’s right for your use case

Think of it like texting a colleague: you go back and forth until you’re aligned.

P – Persona

Start by deciding who you want the AI to be. Ask yourself: If I could get any expert in the world to do this task for me, who would it be? That expert becomes your persona.

Kat acknowledges that there are mixed viewpoints on using a persona, but she prefers to include one whenever the task requires creativity or innovation.

Examples of persona:

  • A world-class executive assistant

  • A senior product manager at a Fortune 500 company

  • A compassionate career coach who specializes in tech career changers

Naming the persona helps the model adopt the right tone, level of detail, and perspective.

A – Action

Next, clearly explain what action you want the AI to do.

This is where most prompts fall short. Large language models don’t have your context, so you need to spell it out:

  • Provide background information and constraints

  • Be specific about the task (summarize, rewrite, brainstorm, outline, critique, etc.)

  • Include relevant details rather than assuming the model will “fill in the gaps”

Kat’s advice: don’t worry about making this part “too long.” It’s usually better to over-explain than under-explain.

R – Rules

Add rules to shape the output.

Rules are the guardrails that tell the AI how to present its answer:

  • Desired length (e.g. 300 words, 5 bullet points)

  • Format (table, bullet list, step-by-step instructions)

  • Style choices (formal vs casual, audience level, tone)

  • Special instructions (cite sources, avoid jargon, focus on X and ignore Y)

Rules keep the output aligned with how you actually plan to use it.

R – Refine

Finally, test your prompt, review the output, and iterate. Just as you’re unlikely to hit a hole-in-one in golf, even a well-structured prompt rarely produces a perfect answer on the first try. Give the model corrective feedback and refine until the output fits your use case.
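If you work with a model through an API instead of a chat window, this back-and-forth is just a growing message list. Here’s a minimal sketch of the refine loop in Python, assuming the OpenAI client library (openai>=1.0); the model name, the opening prompt, and the feedback message are all placeholders, and any chat-style API follows the same pattern:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Keep the whole conversation in one list so the model can see its own
# earlier draft when you ask it to refine.
messages = [{
    "role": "user",
    "content": "Act as a senior product manager. Outline a one-page launch plan for a small beta release.",
}]
first = client.chat.completions.create(model="gpt-4o", messages=messages)
draft = first.choices[0].message.content

# "Texting a colleague": append the draft, then send corrective feedback.
messages.append({"role": "assistant", "content": draft})
messages.append({
    "role": "user",
    "content": "Good start, but cut it to 5 bullet points and focus only on week one.",
})
second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)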

A PARR Example in Action

Here’s how PARR might look in practice:

  • Persona: Act as a world-class executive assistant who is very detail-oriented and creates the best follow-up summaries possible.

  • Action: I’m going to give you notes from our brainstorming session. Clearly outline the next steps for everyone to take and organize the action items by owner.

  • Rules: Put the information in a table, then include bullet points under the table so people are clear on the next steps to take.

That single prompt sets expectations about expertise, task, and final format – which dramatically increases your chances of getting a usable output on the first try.
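If you build prompts in code, one easy way to keep the PARR structure visible is to hold each part in its own string and join them. Here’s a minimal Python sketch of the example above; the variable names are just for illustration, and meeting_notes stands in for your real notes:

meeting_notes = "..."  # paste the raw brainstorming-session notes here

# Persona: who the model should be.
persona = (
    "Act as a world-class executive assistant who is very detail-oriented "
    "and creates the best follow-up summaries possible."
)

# Action: the task, plus the context the model can't guess on its own.
action = (
    "I'm going to give you notes from our brainstorming session. "
    "Clearly outline the next steps for everyone to take and organize "
    "the action items by owner.\n\nNotes:\n" + meeting_notes
)

# Rules: guardrails for how to present the answer.
rules = (
    "Put the information in a table, then include bullet points under "
    "the table so people are clear on the next steps to take."
)

prompt = "\n\n".join([persona, action, rules])
print(prompt)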

Bonus Prompting Tips from Kat

On top of PARR, Kat shares a few simple tweaks that can level up your results:

  • Reduce hallucinations: Add a line like “If you don’t know the answer, say ‘I don’t know’ instead of making something up.” This nudges the model away from confident but incorrect responses.

  • Dial up creativity: After you get an initial answer, try: “Make your response 10 times more creative,” or “Push this idea further with unexpected but realistic examples.” Sometimes you just need to ask for “better” or “more interesting” output from the LLM.

These small additions help you avoid generic answers and get outputs that actually move your work forward.
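If you script your prompts, both tweaks are just extra messages in the conversation. Here’s a sketch, again assuming an OpenAI-style chat client; the model name is a placeholder and the message wording comes straight from the tips above:

from openai import OpenAI

client = OpenAI()
prompt = "..."  # your PARR prompt, e.g. the one assembled earlier

messages = [
    # Hallucination guard, sent as a standing rule.
    {"role": "system", "content": (
        "If you don't know the answer, say 'I don't know' "
        "instead of making something up."
    )},
    {"role": "user", "content": prompt},
]
first = client.chat.completions.create(model="gpt-4o", messages=messages)

# Creativity dial: push the initial answer further in a second turn.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user", "content": "Make your response 10 times more creative."})
second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)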

Find out more and read General Assembly reviews on Course Report. This article was produced by the Course Report team in partnership with General Assembly.


Written by

Mike McGee, Content Manager

Mike McGee is a tech entrepreneur and education storyteller with 14+ years of experience creating compelling narratives that drive real outcomes for career changers. As the co-founder of The Starter League, Mike helped pioneer the modern coding bootcamp industry by launching the first in-person beginner-focused program, helping 2,000+ people learn how to get tech jobs, build apps, and start companies.


Edited by

Liz Eggleston, CEO and Editor of Course Report

Liz Eggleston is co-founder of Course Report, the most complete resource for students choosing a coding bootcamp. Liz has dedicated her career to empowering passionate career changers to break into tech, providing valuable insights and guidance in the rapidly evolving field of tech education. At Course Report, Liz has built a trusted platform that helps thousands of students navigate the complex landscape of coding bootcamps.
