Prompting
How to write task prompts that get good results on the first try.
Your agent is good at acting on clear goals, less good at guessing what you meant. A few habits cut iteration cost.
Anatomy of a good prompt
A prompt that lands well usually has three parts:
Goal
What success looks like, in one sentence.
Constraints
Anything your agent shouldn't assume — where files live, which model, time budget, format.
Verification
What your agent should show when it's done, such as a summary table, a diff, or a dry-run log.
Example: weak vs strong
"Clean up my downloads folder."
What happens: your agent guesses what "clean up" means, picks a strategy, and you only discover afterwards that it wasn't the one you wanted.
"Group every file in ~/Downloads/ by file type into subfolders (Images, PDFs, Archives, Other). Don't touch anything modified in the last 24 hours. Print a summary table when done."
What happens: your agent has clear scope, clear rules, and a verification step. First try usually lands.
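The strong prompt above maps to a concrete procedure. A minimal Python sketch of the behavior it asks for — the extension map and `dry_run` default are illustrative assumptions, not your agent's actual implementation:

```python
import shutil
import time
from pathlib import Path

CATEGORIES = {
    ".jpg": "Images", ".png": "Images", ".gif": "Images",
    ".pdf": "PDFs",
    ".zip": "Archives", ".tar": "Archives", ".gz": "Archives",
}

def category_for(path: Path) -> str:
    # Map a file's extension to one of the four folders; default to "Other".
    return CATEGORIES.get(path.suffix.lower(), "Other")

def organize(downloads: Path, dry_run: bool = True) -> dict:
    cutoff = time.time() - 24 * 3600  # "don't touch anything modified in the last 24 hours"
    moved = {"Images": 0, "PDFs": 0, "Archives": 0, "Other": 0}
    for f in downloads.iterdir():
        if not f.is_file() or f.stat().st_mtime > cutoff:
            continue  # skip subfolders and recently modified files
        folder = category_for(f)
        if not dry_run:
            dest = downloads / folder
            dest.mkdir(exist_ok=True)
            shutil.move(str(f), str(dest / f.name))
        moved[folder] += 1
    return moved  # counts for the "summary table when done"
```

Every clause in the prompt shows up as a line of code, which is exactly why the strong version leaves nothing to guess.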
Patterns that work
Name files and apps directly
"Open Apple Notes and search for X" is unambiguous; "find that note" makes your agent guess.
Be explicit about side effects
"Don't send" / "don't delete" / "dry-run first" — your agent honours these.
Give it the format you want
"Reply as a bulleted list, max 5 items" → it stops there. "Reply with a JSON array" → you get JSON.
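A machine-readable format also makes the reply checkable before you use it. A small sketch, with a hypothetical agent reply, of validating a "JSON array, max 5 items" answer:

```python
import json

# Hypothetical reply to: "Reply with a JSON array, max 5 items."
reply = '["Reschedule standup", "Send the invoice", "Book flights"]'

items = json.loads(reply)  # raises an error if the format request was ignored
assert isinstance(items, list) and len(items) <= 5
```

If the agent drifts into prose, the parse fails loudly instead of silently feeding bad data downstream.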
Re-use the right skill
If your goal touches Shopify, mention Shopify in the prompt so your agent picks the Shopify skills instead of generic web tools.
Patterns that don't
The inverses of the above: vague references ("that note"), unstated side-effect rules, and no required output format all force your agent to guess.
When the result isn't right
Read the plan
The plan your agent drafted usually shows the mismatch before any tool is called. Reject the plan and tell it what's wrong.
Check the skill it picked
The wrong skill explains most wrong results. Mention the right domain in your prompt, or pin a specialized agent with only the relevant skills enabled.
Make repeated fixes permanent
If the same correction comes up across tasks, save it as a Rule so you don't have to repeat it.
ToShop Docs