# Exercise 08: Build a Reusable Skill

## Objective
Create a new reusable skill with a SKILL.md file. This tests understanding of the skill format, frontmatter, and how skills provide on-demand context to agents.
## Required Reading

- Practitioner README -- Skills section
- Agent Skills | Cursor Docs -- official Cursor docs on the SKILL.md format, skill discovery, and activation by description matching
- Agent Skills Guide -- SKILL.md format, dynamic context discovery
- Review the existing skill: `sandbox/.cursor/skills/jg-pipeline-artifact-io/SKILL.md`
Skills use the SKILL.md format in .cursor/skills/.
Claude Code uses the identical SKILL.md format in .claude/skills/. Skills are portable between Cursor and Claude Code with no changes -- just move the skill folder. The frontmatter fields (name, description) and body structure you create here work in both systems.
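Because the format is identical, porting a skill really is just a file copy. A minimal sketch (using a throwaway `jg-demo-skill` rather than a real skill from the repo):

```shell
# Create a throwaway skill to copy; in the real repo you would copy an
# existing folder such as sandbox/.cursor/skills/jg-pipeline-artifact-io.
mkdir -p demo/.cursor/skills/jg-demo-skill
printf -- '---\nname: jg-demo-skill\ndescription: "Demo."\n---\n' \
  > demo/.cursor/skills/jg-demo-skill/SKILL.md

# Porting to Claude Code is a plain copy -- no edits to SKILL.md needed.
mkdir -p demo/.claude/skills
cp -r demo/.cursor/skills/jg-demo-skill demo/.claude/skills/
```

After the copy, Claude Code discovers the skill from `.claude/skills/` exactly as Cursor does from `.cursor/skills/`.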
## Context
The sandbox has two existing skills (jg-pipeline-artifact-io and jg-benchmark-ops). You will add a third skill that teaches agents how to run and interpret the sandbox test suite.
## Tasks
- Create the skill directory:

  ```
  mkdir -p sandbox/.cursor/skills/jg-sandbox-test-runner
  ```

- Create `sandbox/.cursor/skills/jg-sandbox-test-runner/SKILL.md` with this frontmatter:

  ```
  ---
  name: jg-sandbox-test-runner
  description: "Run sandbox test suite and report results. Use when verifying sandbox code changes."
  ---
  ```

- Fill in the body with these sections:

  - `# JG Sandbox Test Runner` -- title
  - `## When to Use` -- use this skill when any agent needs to verify that sandbox code changes pass the test suite; typically invoked by jg-tester or jg-worker after implementation
  - `## Running Tests` -- how to execute: `cd sandbox && npm test`; expected output format; what exit code 0 vs. non-zero means
  - `## Interpreting Results` -- how to read Jest output: test count, pass/fail breakdown, error messages, stack traces
  - `## Writing Test Artifacts` -- after running tests, write results to `.pipeline/<issue-id>/test-result.json` with the schema `{ "verdict": "PASS"|"FAIL", "phase_1": { "tool": "jest", "exit_code": N, "summary": "..." } }`
  - `## Anti-patterns` -- do not skip tests for "simple" changes; do not mark the verdict as PASS if any test fails

- Verify the skill sits alongside the existing skills:

  ```
  ls sandbox/.cursor/skills/
  ```
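The run-then-record flow that the "Running Tests" and "Writing Test Artifacts" sections describe can be sketched in shell. Assumptions: `true` stands in for `cd sandbox && npm test` so the sketch runs standalone, and `demo-issue` is a hypothetical issue id.

```shell
# Prepare the artifact directory for a hypothetical issue id.
mkdir -p .pipeline/demo-issue

true; exit_code=$?                     # stand-in for: cd sandbox && npm test
if [ "$exit_code" -eq 0 ]; then verdict=PASS; else verdict=FAIL; fi

# Record the outcome using the schema from the "Writing Test Artifacts" section.
cat > .pipeline/demo-issue/test-result.json <<EOF
{
  "verdict": "$verdict",
  "phase_1": {
    "tool": "jest",
    "exit_code": $exit_code,
    "summary": "stand-in run; verdict $verdict"
  }
}
EOF
```

The key property to preserve from the Anti-patterns section: the verdict is derived mechanically from the exit code, never asserted by hand.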
## Validation

```
python3 docs/practitioner/tutorials/verify.py --exercise 08
```

Checks: the file exists, has valid frontmatter with `name:` and `description:`, the body has sufficient content, and it mentions test commands.
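These checks can be approximated in shell; this is only a rough sketch, and the actual `verify.py` logic may differ. It writes a throwaway `demo-verify/SKILL.md` so it runs standalone; in the repo, point `$skill` at `sandbox/.cursor/skills/jg-sandbox-test-runner/SKILL.md` instead.

```shell
# Throwaway SKILL.md so the sketch is self-contained.
skill=demo-verify/SKILL.md
mkdir -p demo-verify
printf -- '---\nname: jg-sandbox-test-runner\ndescription: "Run sandbox tests."\n---\n\n## Running Tests\nRun cd sandbox && npm test and check the exit code.\n' > "$skill"

test -f "$skill"                        # file exists
head -n 1 "$skill" | grep -q '^---$'    # frontmatter delimiter on line 1
grep -q '^name:' "$skill"               # name: field present
grep -q '^description:' "$skill"        # description: field present
grep -q 'npm test' "$skill"             # mentions the test command
echo "all checks passed"
```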
## Reflection

- How does the agent decide when to load this skill vs. the pipeline-artifact-io skill?
- What would you add to the `description` field to make activation more precise?
- Could this skill be shared across multiple projects? What would you change?
## Answer
SKILL.md frontmatter:

```
---
name: jg-sandbox-test-runner
description: "Run sandbox test suite and report results. Use when verifying sandbox code changes."
---
```
Body sections: "When to Use" (after implementation, by tester or worker), "Running Tests" (cd sandbox && npm test, exit codes), "Interpreting Results" (Jest output format), "Writing Test Artifacts" (test-result.json schema), "Anti-patterns" (never skip tests, never mark PASS on failure).
The description must be specific enough for accurate activation -- mentioning "sandbox" and "test suite" helps Cursor match it correctly.
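For instance, a more precise description (hypothetical wording, not from the exercise) might name the exact command, artifact path, and invoking agents so activation matches on those terms:

```
---
name: jg-sandbox-test-runner
description: "Run the sandbox Jest suite (cd sandbox && npm test), interpret
  exit codes, and write .pipeline/<issue-id>/test-result.json. Use after any
  sandbox code change, typically invoked by jg-tester or jg-worker."
---
```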