Suggested Guidelines for Using GitHub Copilot in Technical Work
Purpose and Principles
GitHub Copilot is a powerful tool that can accelerate development, help you explore solutions, and reduce repetitive coding tasks. However, it works best when you know how to communicate effectively with it and when you maintain critical judgment about its suggestions.
This document provides suggested guidelines (not mandatory protocols) for using Copilot more effectively. These tips are designed to help you:
- Get better results faster — learn how to prompt Copilot effectively so you spend less time iterating.
- Maintain code quality — understand how to verify, refine, and improve Copilot’s suggestions.
- Work securely — avoid accidentally sharing sensitive data or introducing security risks.
- Build reusable workflows — create prompt patterns you can share with teammates and reuse across projects.
- Stay in control — use Copilot as an assistant, not a replacement for your engineering judgment.
These guidelines complement the mandatory protocols outlined in the Copilot Protocols document. While protocols define what you must do for code reviews and formal tasks, these guidelines help you work more efficiently day-to-day.
We suggest some prompts for applying these guidelines, but feel free to modify them as needed to fit your specific context and projects.
Where the suggested Prompt Files Live
The shared prompts are stored in the repository GPID-WB/copilot-prompts. The prompt files for these guidelines are in the folder guidelines/.
prompts/
  gpid-guide-complete-prompt.prompt.md
To include these prompts in your IDE, you can clone the repository and then import the relevant prompt files into your Copilot setup. See the installation instructions: One-Time Installation (Per Developer). Make sure to add the guidelines/ subfolder, as well as the protocols/ subfolder.
Think of these guidelines as best practices that will save you time and help you get the most value from Copilot without compromising quality or security.
Prompt engineering
One of the most important skills to develop when using GitHub Copilot, or any other AI agent, is knowing how to prompt. GitHub provides some guidelines on Prompt engineering for GitHub Copilot Chat, but here are the main points to take into consideration:
- Start general, then get specific
- Provide examples
- Break complex tasks into simpler tasks
- Reread your prompt and avoid ambiguity
- Indicate relevant code (select a file or highlight the specific piece of code)
- Experiment and iterate until you find a useful answer
- Keep the history relevant and delete requests that are no longer relevant
The Do’s and Don’ts when prompting
Useful action verbs / short commands (with example sentences):
- Analyze — “Analyze this function and list potential edge cases.”
- Explain — “Explain what this block of code does in plain language.”
- Refactor — “Refactor this function for readability and add comments.”
- Test — “Write unit tests for this function covering typical and edge cases.”
- Summarize — “Summarize the responsibilities of each module in one paragraph.”
- Optimize — “Optimize this loop for performance and explain changes.”
- Validate — “Validate input handling and add defensive checks.”
- Suggest — “Suggest alternative implementations that reduce memory usage.”
- Document — “Add Roxygen2-style documentation for these functions.”
- Critique — “Critique this code and point out risks or unclear logic.”
Best prompt structure (brief):
- Context: state which files, functions, or data the agent should consider.
- Task: give a clear action using a verb (from the list above).
- Constraints: list any requirements or restrictions (style, libraries, performance).
- Examples/output: show a small input/output example or the desired format for the answer.
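Put together, a complete prompt following this structure might look like the following (the file and function names are illustrative):

```text
Context: file R/clean_data.R, functions read_data() and clean_data().
Task: Refactor clean_data() for readability and add comments.
Constraints: keep the tidyverse style; do not add new package dependencies.
Example/output: return the full revised function, followed by a one-paragraph
summary of the changes.
```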
What to avoid when creating prompts:
- Vague or ambiguous instructions (avoid “fix this” without context).
- Overloading a single prompt with many unrelated tasks.
- Including sensitive data (secrets, passwords, personal data).
- Assuming the agent knows private or internal project details not provided in the prompt.
- Requests that rely on unstated external state (unspecified files, databases, or services).
Note on links and security: be cautious when including or following external links in prompts or Copilot responses—links may expose or lead to insecure resources and can present a security risk.
Suggested prompt:
/gpid-guide-complete-prompt
Latest GPID prompt file
You are an expert coding assistant. For all tasks, make sure you have the necessary context and information. If any of the following are not provided, ask for them before proceeding.
- Context: {FILES_OR_SNIPPET}.
- Task: {SHORT_ACTION_VERB + target}.
- Constraints: {LIBRARIES, STYLE, PERFORMANCE, ETC.}.
- Example I/O: {GIVEN_INPUT => EXPECTED_OUTPUT}.
Provide the following case example:
"Context: file utils.R (read_data, clean_data). Task: 'Write unit tests for clean_data'. Constraints: use testthat, cover NA and invalid types. Example I/O: input: a vector with NA => expect: handled gracefully."
If the user says to continue without the missing information, proceed with best-effort assumptions.
Security Considerations for Prompts
When creating prompts, follow these guidelines to protect organizational data and avoid security risks:
Never include sensitive data:
- Avoid pasting credentials, API keys, passwords, tokens, or authentication secrets.
- Do not include personally identifiable information (PII) or confidential business data.
- Exclude proprietary algorithms or internal business logic.
Sanitize code before sharing:
- Remove or redact hardcoded secrets, connection strings, or environment variables.
- Replace real identifiers with placeholders (e.g., <DB_NAME>, <API_ENDPOINT>).
- Avoid exposing internal directory structures, server names, or infrastructure details.
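As a sketch of what "sanitize before sharing" can look like in practice, a small helper (illustrative, not an official tool; extend the patterns to match your own stack) can redact common secret patterns before a snippet is pasted into a prompt:

```python
import re

# Illustrative patterns for common secrets; extend to match your stack.
SECRET_PATTERNS = [
    # key = "value" pairs for key-like names (api_key, token, password, secret)
    (re.compile(r'(?i)(api[_-]?key|token|password|secret)\s*[=:]\s*["\']?[^\s"\']+["\']?'),
     r'\1=<REDACTED>'),
    # database connection strings
    (re.compile(r'(?i)(postgres|mysql|mongodb)://[^\s"\']+'),
     '<CONNECTION_STRING>'),
]

def redact(snippet: str) -> str:
    """Replace likely secrets with placeholders before sharing a snippet."""
    for pattern, replacement in SECRET_PATTERNS:
        snippet = pattern.sub(replacement, snippet)
    return snippet

code = 'api_key = "sk-12345"\nurl = "postgres://user:pw@host/db"'
print(redact(code))
```

A quick pass like this is a safety net, not a guarantee; always review the snippet yourself before sharing it.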
Verify external resources:
- Do not blindly follow or execute code from links provided by Copilot.
- Validate suggested packages, libraries, or URLs against trusted sources before installation.
- Be aware that suggested resources may be outdated, compromised, or malicious.
Review generated code:
- Check for security vulnerabilities (SQL injection, command injection, unsafe file operations).
- Verify that code follows secure coding practices (input validation, error handling, least privilege).
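For example, one common issue to check for in generated database code is string-built SQL; a parameterized query (sketched here with Python's standard-library sqlite3 module) treats user input as data rather than as SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

user_input = "alice' OR '1'='1"  # a typical injection payload

# Unsafe: interpolating user input directly into the SQL string.
unsafe_query = f"SELECT name FROM users WHERE name = '{user_input}'"
# conn.execute(unsafe_query) would match every row here.

# Safe: let the driver bind the value as a parameter.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the payload is treated as a literal string, not SQL
```

The same principle applies in R (e.g., parameterized queries via DBI) or any other language Copilot generates database code for.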
Keep context minimal:
- Only provide the minimum code/context necessary to answer your question.
- Follow the organization's data classification and handling policies.
Suggested prompt:
/gpid-guide-security
Latest GPID prompt file
You are a security-aware reviewer. Before using any provided context, ask the user to confirm that all sensitive data has been removed or redacted.
Before acting, prompt the user: “Please confirm that you have removed credentials, API keys, tokens, passwords, PII, and other private information. Reply ‘yes’ to continue or upload a redacted version.”
Under no circumstances should you ever output or recreate secrets, credentials, or other sensitive information. If a user asks to recover or reveal secrets, refuse and explain why.
When code or links are provided:
- Do not execute external code or follow links automatically.
- Validate suggested packages/URLs conceptually and warn about untrusted or outdated sources.
- Flag potentially unsafe operations (e.g., file deletions, system calls, eval/exec, unescaped SQL) and request explicit confirmation before providing or modifying code that performs them.
If the user explicitly asks you to proceed without redaction, proceed only after emitting a clear security warning and listing the risks; then follow the user’s instruction but continue to avoid exposing secrets or recommending insecure actions.
Short checklist for the assistant (perform before acting):
- Confirm user redaction: request explicit confirmation (“Reply ‘yes’ to continue”) or an uploaded redacted version.
- Scan provided context for obvious secrets/placeholders (API keys, tokens, passwords, connection strings) and flag them with examples.
- Refuse to reveal secrets or to run/execute external code or links.
- Require explicit confirmation before producing or modifying code that performs unsafe operations (file deletions, system calls, eval/exec, unescaped SQL).
- Record the user’s confirmation and any residual risks or caveats before proceeding.
Reusable prompts
Like the prompts above, you can create reusable prompts that save time and can be shared with team members. For example, you can turn the prompts from the Protocols section into reusable prompts. VS Code gives a good explanation of how to create these in its Copilot Tips and Tricks.
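As an illustration, a minimal reusable prompt file (following the VS Code `.prompt.md` convention; the filename and content below are hypothetical) might look like this:

```markdown
---
description: "Write testthat unit tests for the selected R function"
---
Write unit tests for the selected function using testthat.
Cover typical inputs, NA values, and invalid types.
Return only the test file content.
```

Saving files like this in the shared repository lets the whole team invoke the same prompt with a slash command instead of retyping it.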
Interaction with GitHub Copilot
First Ask, then Agent: map and agree on the solution before making changes
Before asking Copilot to modify code (Agent mode), first use the Ask option to explore and agree on a solution for the overall task. Have Copilot propose a design and refine it until you and the agent share the same approach; this helps avoid unnecessary edits and keeps reviews small.
Once the design is mapped and agreed, switch to Agent mode and request the specific code changes. You can iterate on alternatives and trade-offs in Ask mode before implementing. This workflow reduces code churn and the amount of code you must inspect.
Provide Feedback to Improve Future Suggestions
Your feedback improves the quality and relevance of Copilot's suggestions: in-chat feedback steers the current conversation toward what works, and interface feedback (thumbs up/down) helps GitHub improve the product over time.
How to give feedback:
- Use thumbs up/down buttons on suggestions or responses when available in the Copilot interface.
- Explicitly tell Copilot when a response is helpful or unhelpful (e.g., “This solution works well” or “This approach doesn’t fit our requirements”).
- If a suggestion is incorrect or incomplete, explain why and ask for a revised version with specific corrections.
- Accept or reject inline code suggestions to signal which patterns align with your codebase.
Why feedback is important:
- Helps Copilot adapt to your coding style, preferred libraries, and project conventions over time.
- Improves the accuracy of future suggestions for similar tasks.
- Signals to the model which responses are most useful, contributing to better performance.
- Saves time by reducing iterations needed to get the right answer.
Providing clear, specific feedback (rather than just rejecting suggestions) yields the best results for improving Copilot’s future responses.
Document prompts used for key tasks
When using GitHub Copilot for tasks that will need review by team members, it is a good idea to record the key characteristics of the prompts you used. At a minimum, ask GitHub Copilot to capture:
- The context provided (if any): files, code snippets, or system state included with the prompt (what the agent “saw”).
- The Agent instruction: the explicit task or role you asked the agent to perform (for example, “write a unit test”, “refactor this function for readability”, or “explain this algorithm”).
Other useful prompt characteristics to document:
- Input data examples (small sample inputs and expected outputs).
- Any constraints or hard requirements (performance limits, libraries to use, coding style).
- Time or version metadata (date, version of Copilot/IDE/plugins if known).
Why this helps: keeping a short record of the prompts and their context makes it easier to reproduce results, re-run or refine prompts, and debug situations where generated code fails or causes errors. These notes become especially valuable during future development when tracking regressions or when onboarding colleagues who must understand the original intent.
Suggested prompt:
/gpid-guide-document-task
Latest GPID prompt file
You are a reporter/recorder assisting with development logging. Required inputs:
- `TASK_NAME` (short string)
- `TASK_DESCRIPTION` or `ORIGINAL_PROMPT` (text used to request the work)
- `CONTEXT` (files, snippets, sample input; include file paths)
- `PREVIOUS_LOG` (optional: full text of an earlier log to continue)
- `REDACTION_CONFIRMED` (optional flag: `"yes"` if the user has already redacted sensitive data)
Security & redaction (required first step)
- If `REDACTION_CONFIRMED` is missing or not `"yes"`, ask: "Please confirm that you have removed credentials, API keys, tokens, passwords, PII, and other private information. Reply 'yes' to continue or upload a redacted version."
- If the provided context looks like it contains secrets or connection strings, stop and request a redacted version. Do not proceed until the user confirms.
- Never output or recreate secrets.
Logging behaviour (start immediately after redaction is confirmed)
- If `PREVIOUS_LOG` is provided: integrate it as the authoritative history and append new entries. Do not overwrite past entries; annotate any corrections with a short note and timestamp.
- If `PREVIOUS_LOG` is not provided: create a new append-only log.
Log entry format (use for every incremental update)
- Header: `LOG | TASK_NAME | entry: N | date: YYYY-MM-DDTHH:MM:SSZ`
- Context: list the file paths used and a short excerpt (1-3 lines)
- Agent Instruction: a short summary of the prompt/instruction (1-2 sentences). Include the full prompt in a fenced block only for the first log entry, or when the user sets `INCLUDE_FULL_PROMPT: yes` or explicitly requests it.
- Actions Taken: bullet list of steps performed since the last entry
- Outcome summary: one paragraph summarizing results and current status
- Artifacts / Files changed: suggested paths and a brief diff summary
- Next steps: short list of recommended next actions (requires user agreement before saving)
- Where to store (suggested): `copilot_logs/TASK_NAME.md` (or an alternative path)
Operational rules
- Always include the current log entry number (N), incrementing from the last entry in `PREVIOUS_LOG` (start at 1 for new logs).
- Keep each entry concise and timestamped (use ISO 8601).
- When modifying code or files, list only the file paths changed (no code snippets or diffs).
- Ask clarifying questions when the task or context is ambiguous before proceeding.
Resume & continue commands (how the user controls the assistant)
- To append progress: the user provides new `CONTEXT` or instructions; the assistant creates the next `LOG` entry and returns it.
- To request a draft file for review: the user sends `GENERATE DRAFT FILE`; the assistant returns the assembled file content and a suggested path (not saved).
- To finalize the task and return the log-ready file: the user sends `FINALIZE` or `RETURN FINAL FILE`; the assistant:
  - Proposes next steps and asks: "Do you agree with these next steps? Reply 'yes' to continue or provide your preferred steps."
  - After the user confirms, asks: "Do you want to add any other items to the todo list for this log entry? Reply 'yes' with your items or 'no' to skip."
  - Before saving, asks: "Do you want to save the following todo items/next steps in the log? [list items]" and waits for user confirmation ("yes" or an edited list).
  - After confirmation, compiles the full log and any final artifact into a single markdown/text file.
  - Includes a header with TASK_NAME, TASK_DESCRIPTION, start and end dates, and a changelog.
  - Returns content ready for file inclusion and a suggested storage path (e.g., `copilot_logs/TASK_NAME.md`).
Example minimal log entry (for illustration):
LOG | Data-Cleanup | entry: 2 | date: 2026-01-05T14:28:00Z
Context:
- `R/clean_data.R` (excerpt: `clean_data <- function(df) { df |> ... }`)
Agent Instruction (summary):
Refactor `clean_data` to handle NA and factor levels; include unit tests. (Full prompt included in entry 1 or on request.)
Actions Taken:
- Added NA-handling to `clean_data`
- Wrote 4 `testthat` unit tests in `tests/test-clean_data.R`
Outcome summary:
- `clean_data` now handles NA and unexpected types; tests pass locally (4/4). Minor edge-case for empty inputs remains.
Artifacts / Files changed:
- `R/clean_data.R`
- `tests/test-clean_data.R`
Next steps:
- Review edge-case for empty inputs
- Run full package checks
Where to store:
- `copilot_logs/Data-Cleanup.md`
Return format
- For every assistant response that appends the log, return only the new log entry text (ready to paste into the log file) and a one-line suggested storage path.
- On `FINALIZE`, return the full assembled file content (complete log plus artifacts summary) ready for inclusion in `copilot_logs/`.
End of prompt.