System instructions
Persistent rules and context you attach to an assistant (Gem, custom GPT, Claude project, or API system role) so every turn inherits tone, format, must-nots, and CTAs without repeating them in each user message.
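In API terms, this is a message that loads before every user turn. A minimal sketch, assuming an OpenAI-style chat payload (the agency name, rules, and role labels are illustrative, not a specific vendor's schema):

```python
# Persistent rules live in one place; only the user message changes per turn.
SYSTEM_INSTRUCTIONS = (
    "You are a sourcing assistant for Acme Recruiting. "  # hypothetical agency
    "Tone: short, friendly. Never use the word 'rockstar'. "
    "Always end LinkedIn DMs with the booking link."
)

def build_messages(user_task: str) -> list[dict]:
    """Every turn inherits the system rules without retyping them."""
    return [
        {"role": "system", "content": SYSTEM_INSTRUCTIONS},
        {"role": "user", "content": user_task},
    ]

messages = build_messages("Draft a LinkedIn DM for a senior Python engineer.")
```

The same split holds in Gems, custom GPTs, and Claude projects: the instructions field plays the system role, and each chat message is the user turn.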
Michal Juhas · Last reviewed May 2, 2026
Who this is for
Sourcers and recruiters who are tired of retyping company context and want repeatable outreach and intake quality across the team.
In practice
- Start from one channel: nail LinkedIn DM rules in instructions, then add email subject lines and bodies in the same pack.
- Encode negatives: listing phrases you never want (fluff, jargon) often improves output faster than adding more positive style rules.
- Keep LLM tokens in mind: trim PDF dumps; prefer lean Markdown sources the model can scan reliably.
Where it breaks
Stale instructions after a rebrand, conflicting rules nobody reconciled, or secrets pasted where logs retain them. Automation nodes that reuse instructions without a human inbox for failures multiply those risks.
System instructions versus few-shot in one turn
| Layer | Role |
|---|---|
| System instructions | Stable voice, policies, CTAs, channel limits |
| User message | Task today (“role X, candidate Y”) |
| Few-shot examples | Optional fresh anchors when the req is new |
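The three layers in the table can be sketched as one message list, assuming an OpenAI-style role convention (the rules, example pair, and task text are illustrative):

```python
def build_turn(
    system_rules: str,
    examples: list[tuple[str, str]],
    task: str,
) -> list[dict]:
    """System rules persist, few-shot pairs anchor a new req, user carries today's task."""
    messages = [{"role": "system", "content": system_rules}]
    for prompt, ideal in examples:  # optional fresh anchors when the req is new
        messages.append({"role": "user", "content": prompt})
        messages.append({"role": "assistant", "content": ideal})
    messages.append({"role": "user", "content": task})
    return messages

turn = build_turn(
    "Short, friendly tone. No jargon.",
    [("DM for a data engineer", "Hi Sam, saw your Spark talk...")],
    "Role: ML engineer, candidate: Priya",
)
```

Dropping the examples list empties the middle layer, which is the normal case once a role family is well covered by the system instructions.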
Related on this site
- Glossary: Few-shot prompting, Markdown for AI, AI adoption ladder, AI-native
- Blog: How to write better AI prompts
- Tools: ChatGPT, Claude
- Workshops: Workshops
Frequently asked questions
What should recruiting teams put in system instructions?
Agency or employer basics, role families you hire for, tone (short versus formal), disallowed phrases, LinkedIn versus email rules, booking links, and how to cite uncertainty. Pair with few-shot prompting when you need fresh examples per req.
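A hedged sketch of what such a pack might look like, with every name and limit illustrative rather than prescriptive:

```markdown
# System instructions — Acme Recruiting (illustrative)
## Basics
- Agency: Acme Recruiting, EU tech roles
## Tone
- Short sentences; formal register only for executive search
## Disallowed phrases
- "rockstar", "ninja", "fast-paced environment"
## Channels
- LinkedIn DM: max 500 characters, one CTA, booking link last
- Email: subject under 55 characters
## Uncertainty
- If unsure about a fact, write "unverified" instead of guessing
```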
How is this different from a long chat prompt?
System instructions load before the user message and persist across turns in that assistant. A long one-off prompt is easy to lose or forget to paste. The AI adoption ladder treats systemizing as the step after basic chat.
Do system instructions reduce hallucinations?
They cut style drift and missing fields, but they do not guarantee facts. You still verify employers, dates, and policies. See hallucination for guardrails.
Where should the source of truth live?
Many teams maintain a master file in Markdown for AI that agents can read, then copy slices into vendor UIs. That supports diffs, reviews, and workflow automation that passes the same text into an API node.
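A minimal sketch of slicing such a master file into named sections that can be pasted into a vendor UI or passed to an API node; the `## heading`-per-section layout is an assumption about how the file is organized:

```python
import re

def split_sections(markdown: str) -> dict[str, str]:
    """Map each '## Heading' in a Markdown master file to its body text."""
    sections: dict[str, str] = {}
    current = None
    for line in markdown.splitlines():
        match = re.match(r"^## (.+)$", line)
        if match:
            current = match.group(1).strip()
            sections[current] = ""
        elif current is not None:
            sections[current] += line + "\n"
    return sections

master = "## Tone\nShort, friendly.\n## Disallowed\n- rockstar\n"
slices = split_sections(master)
```

Because the master file is plain text, the same function works in a review script, a CI check, or an automation node that injects one slice per channel.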
Which tools expose system-style fields?
Gemini Gems, custom GPTs, and Claude projects each accept persistent instructions in their setup screens, and the API system role covers automation. Keep the text identical across whichever surfaces your team uses.
Who should approve changes?
Recruiting plus legal or HRBP for anything candidate-facing or retention-sensitive. Treat updates like policy changes, not personal experiments.