Kesslernity · M365 Copilot Field Guide
Prompt Hygiene
Copilot produces what you ask for. Ask precisely.
Most Copilot answers that feel wrong come from prompts that are incomplete. Four components make the difference: role, task, context, and format. All four together produce traceable, usable output.
Applies to: All Copilot Chat sessions
Updated: April 2026
Licence: M365 Copilot paid add-on
Works with: Every guide in this series
Published by: kesslernity.com
Before you start

Prompt hygiene applies to M365 Copilot Chat (paid add-on). The same principles apply in Copilot inside Word, PowerPoint, Excel, and Teams — the four-component structure works in every surface. Free Copilot Chat has no access to your M365 data regardless of prompt quality.

At a glance
The problem
Vague prompts produce vague answers.
Copilot won't stop to ask you to clarify; it answers with its best guess.
The fix
Four components: Role, Task, Context, Format.
Each one removes a guess Copilot has to make.
The payoff
Answers that are specific, scoped, and usable
without a second round of prompting.
Role
Tell Copilot what lens to use. Sets the expertise level, tone, and perspective for the answer.
  • Act as a project controller
  • Write this for a non-technical audience
  • Review this as a procurement manager would
Task
Start with a verb. Copilot needs a clear action to take — not a topic to discuss.
  • Summarise / Draft / Extract / Compare
  • Rewrite / List / Identify / Translate
  • Flag / Score / Group / Explain
Context
Supply what Copilot cannot infer: the specific source, the audience, the constraints, the deadline. Reference files with @.
  • from @Risk-Register.xlsx
  • for the ExCom, who haven't seen the data
  • focus only on overdue items
Format
Specify how the output should look. Without this, Copilot chooses — and often chooses wrong.
  • in a table with columns: owner, status, due date
  • in under 150 words
  • as three bullet points, no preamble
Prompt structure
[Task verb] + [What / Source] + [For whom / Constraints] + [Output format]
Example: "Summarise the open risks in @Q2-Risk-Register.xlsx for the project director, who needs a one-page brief before the gate review. List by severity. Exclude risks already closed. Under 200 words."
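The structure above is mechanical enough to sketch in code. This is a minimal illustration of how the four components slot together, not any Copilot API; the function name, parameters, and template wording are all hypothetical:

```python
# Hypothetical helper showing how Role, Task, Context, and Format
# assemble into one prompt. Illustrative only; not a Copilot API.
def build_prompt(task: str, context: str, audience: str = "",
                 fmt: str = "", role: str = "") -> str:
    """Assemble a Copilot prompt from the four components."""
    parts = []
    if role:
        parts.append(f"Act as {role}.")          # Role: the lens to use
    parts.append(f"{task} {context}".strip())    # Task verb + source/context
    if audience:
        parts.append(f"This is for {audience}.") # For whom
    if fmt:
        parts.append(fmt)                        # Output format / constraints
    # Join, ensuring each part ends with a full stop.
    return " ".join(p if p.endswith(".") else p + "." for p in parts)

prompt = build_prompt(
    task="Summarise the open risks in",
    context="@Q2-Risk-Register.xlsx",
    audience="the project director, who needs a one-page brief",
    fmt="List by severity. Exclude closed risks. Under 200 words",
)
print(prompt)
```

Each optional argument maps to one guess you are removing from Copilot; leave one out and the helper still produces a prompt, just a vaguer one.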
Summarising a report
Weak prompt
"Summarise this report."
Length and focus are Copilot's choice. May miss the section that matters. Hard to reuse.
Strong prompt
"Summarise @Q1-Performance-Report.pdf for the CFO in 5 bullet points. Focus on financial performance and delivery schedule variance only. No preamble."
Scoped to source, audience, length, and topic. Output is ready to forward.
Components used: Task · Context (@file) · Audience · Format
Drafting a follow-up email
Weak prompt
"Write an email to the client about the delay."
Copilot invents tone, length, and content. May over-commit to a new date or sound apologetic in the wrong register.
Strong prompt
"Draft a 3-sentence email to the client acknowledging the 2-week delay in the piping deliverable. Tone: professional, not defensive. Do not offer a new delivery date. Do not use the word 'unfortunately'."
Tone, length, and constraints are explicit. Negative instructions prevent the most common failures.
Components used: Task · Context (situation) · Constraints · Format + negative instructions
Extracting action items
Weak prompt
"What were the action items from yesterday's meeting?"
Copilot searches broadly. May return items from the wrong meeting or miss informal commitments.
Strong prompt
"Extract every action item from @Project sync Apr 9. For each: owner, description, due date if stated. Present as a table. Flag any item with no clear owner."
Sourced from one specific meeting. Structured output. Gaps are visible (no owner = flagged).
Components used: Task · Context (@meeting) · Format (table) · Edge case instruction
Patterns that fail
The most common weak prompts — and what to write instead
Fails
"Make it better."
Copilot has no definition of "better." It will change something — often the wrong thing.
Works
"Rewrite this to be more direct. Shorten each paragraph by half. Keep all numbers unchanged."
Specific action, specific constraint, specific exception.
Fails
"What are the main risks?"
No source. Copilot guesses which project, which period, which format you want.
Works
"List the top 5 risks in @Risk-Register.xlsx with a score above 12. Table format: risk, owner, score, status."
Source, filter, format — nothing to guess.
Fails
"Summarise my emails and tell me what to prioritise and draft replies and flag anything urgent."
Four tasks in one prompt. Copilot will attempt all four and do each only partially.
Works
"Summarise my unread emails from today. List by sender and subject only. No body text."
One task. Run a second prompt for the reply drafts once you see which messages need one.
Fails
"Analyse this data."
No source referenced. No analysis type specified. Copilot produces a generic description.
Works
"In @Budget-Apr.xlsx, identify any line items where actual spend exceeds budget by more than 10%. List them with the variance amount."
Source, metric, threshold, output — fully specified.