DevOps Skills
DojOps includes 38 built-in DevOps skills covering CI/CD, infrastructure-as-code, containers, monitoring, and system services. All skills follow a consistent pattern built on the BaseSkill<T> abstract class. A custom skill system lets you extend DojOps with your own skills via declarative .dops files.
Only the .dops v2 format is supported. The legacy v1 format and tool.yaml manifests have been removed.
Skill Overview
| Tool | Output Format | Output Files | Detector | Verifier |
|---|---|---|---|---|
| GitHub Actions | YAML (raw) | .github/workflows/ci.yml | Yes | Structure lint + actionlint |
| Terraform | HCL (raw) | main.tf, variables.tf | Yes | terraform validate |
| Packer | HCL2 (raw) | main.pkr.hcl, variables.pkr.hcl | Yes | packer validate |
| Kubernetes | YAML (raw) | K8s manifests | — | kubectl --dry-run |
| Helm | YAML (raw) | Chart.yaml, values.yaml | — | — |
| Ansible | YAML (raw) | {name}.yml | — | — |
| Docker Compose | YAML (raw) | docker-compose.yml | Yes | — |
| Dockerfile | Dockerfile (raw) | Dockerfile, .dockerignore | Yes | hadolint |
| Nginx | Nginx conf (raw) | nginx.conf | — | — |
| Makefile | Make syntax (raw) | Makefile | Yes | — |
| GitLab CI | YAML (raw) | .gitlab-ci.yml | Yes | Structure lint + yamllint |
| Prometheus | YAML (raw) | prometheus.yml, alert-rules.yml | — | — |
| Systemd | INI (raw) | {name}.service | — | — |
| Jenkinsfile | Groovy (raw) | Jenkinsfile | Yes | — |
| Pulumi | TypeScript/Python/Go/YAML (raw) | Pulumi programs | — | — |
| ArgoCD | YAML (raw) | ArgoCD Application manifests | — | — |
| CloudFormation | YAML/JSON (raw) | CloudFormation templates | — | — |
| Grafana | JSON (raw) | Grafana dashboard JSON | — | — |
| OpenTelemetry Collector | YAML (raw) | OTEL Collector configuration | — | — |
Skill Pattern
All 38 built-in skills are defined as .dops v2 skill files in packages/runtime/skills/. Each skill is processed by DopsRuntime, which compiles prompts, calls the LLM, and writes raw file content directly (no JSON→serialize step).
packages/runtime/skills/
github-actions.dops GitHub Actions workflow generator
terraform.dops Terraform HCL generator
packer.dops Packer machine image builder
kubernetes.dops Kubernetes manifest generator
helm.dops Helm chart generator
ansible.dops Ansible playbook generator
docker-compose.dops Docker Compose generator
dockerfile.dops Dockerfile generator
nginx.dops Nginx config generator
makefile.dops Makefile generator
gitlab-ci.dops GitLab CI pipeline generator
prometheus.dops Prometheus monitoring generator
systemd.dops Systemd service unit generator
jenkinsfile.dops Jenkinsfile pipeline generator

BaseSkill Abstract Class
abstract class BaseSkill<TInput> {
abstract name: string;
abstract inputSchema: z.ZodType<TInput>;
// Zod validation of raw input
validate(input: unknown): TInput;
// LLM generation - returns structured data
abstract generate(input: TInput): Promise<Result>;
// Optional: write generated files to disk
execute?(input: TInput): Promise<void>;
// Optional: validate generated output with external tools
verify?(data: unknown): Promise<VerificationResult>;
}

Verifier (verifier.ts)
Optional validation of generated output. Six tools implement verification:
| Tool | Verification Method | Verification Command / Check |
|---|---|---|
| Terraform | External binary (terraform) | terraform validate |
| Packer | External binary (packer) | packer init . && packer validate . |
| Dockerfile | External binary (hadolint) | hadolint Dockerfile |
| Kubernetes | External binary (kubectl) | kubectl --dry-run=client |
| GitHub Actions | Structure lint + external binary (actionlint) | Checks on trigger, jobs, runs-on, step run/uses + actionlint workflow.yml |
| GitLab CI | Structure lint + external binary (yamllint) | Checks job script, stages array, stage references + yamllint -d relaxed |
Verification runs by default in CLI commands; use --skip-verify to skip external binary checks. Built-in structure lints always run.
Auto-Install of Missing Binaries
When a verification command requires an external binary that is not installed, DojOps automatically installs it into the sandboxed toolchain (~/.dojops/toolchain/) before running verification. You don’t need to manually install terraform, hadolint, actionlint, or other verification tools; DojOps handles it transparently on first use.
{entryFile} Placeholder
Verification commands in .dops skills support the {entryFile} placeholder, which is replaced at runtime with the path to the primary output file. This lets skills define verification commands that reference the generated file dynamically:
verification:
  command: hadolint {entryFile}

The {entryFile} value is resolved from the first entry in the skill’s files array.
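The substitution can be sketched as follows. This is an illustrative implementation only; the function name resolveVerificationCommand is hypothetical, and the real DopsRuntime logic may differ:

```typescript
interface FileSpec {
  path: string;
}

// Resolve {entryFile} (and any other {var} placeholders) in a
// verification command. {entryFile} comes from the first files entry.
function resolveVerificationCommand(
  command: string,
  files: FileSpec[],
  vars: Record<string, string>,
): string {
  const entryFile = files[0]?.path ?? "";
  return command
    .replace(/\{entryFile\}/g, entryFile)
    .replace(/\{(\w+)\}/g, (m: string, name: string) => vars[name] ?? m);
}
```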
Existing Config Auto-Detection
All skills auto-detect existing config files and switch to update mode when found. Each tool knows its output file path and reads existing content automatically:
| Tool | Auto-Detect Path |
|---|---|
| GitHub Actions | {projectPath}/.github/workflows/ci.yml |
| Terraform | {projectPath}/main.tf |
| Kubernetes | {outputPath}/{appName}.yaml |
| Helm | {outputPath}/{chartName}/values.yaml |
| Ansible | {outputPath}/{playbookName}.yml |
| Docker Compose | {projectPath}/docker-compose.yml (+ .yaml, compose.yml variants) |
| Dockerfile | {outputPath}/Dockerfile then {projectPath}/Dockerfile |
| Nginx | {outputPath}/nginx.conf |
| Makefile | {projectPath}/Makefile |
| GitLab CI | {projectPath}/.gitlab-ci.yml |
| Prometheus | {outputPath}/prometheus.yml |
| Systemd | {outputPath}/{serviceName}.service |
| Jenkinsfile | {projectPath}/Jenkinsfile |
Behavior:
- If existingContent is explicitly passed in the input, it takes priority over auto-detection
- Otherwise, the tool reads the file at the auto-detect path using readExistingConfig() (from @dojops/sdk)
- Files larger than 50KB are skipped (returns null)
- The generate() output includes isUpdate: boolean so callers (CLI, planner) can distinguish create vs update
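Taken together, the detection rules can be sketched roughly like this. This is a simplified stand-in for readExistingConfig() from @dojops/sdk, not its actual source:

```typescript
import { existsSync, readFileSync, statSync } from "node:fs";

// Files larger than 50KB are skipped, per the rules above.
const MAX_SIZE = 50 * 1024;

// Returns the existing config content, or null when the file is
// missing or too large (in which case the skill stays in create mode).
function readExistingConfig(path: string): string | null {
  if (!existsSync(path)) return null;
  if (statSync(path).size > MAX_SIZE) return null;
  return readFileSync(path, "utf8");
}
```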
Atomic File Writes
All tool execute() methods use atomicWriteFileSync() from @dojops/sdk. This writes to a temporary file first, then atomically renames it to the target path using fs.renameSync (POSIX atomic rename). This prevents corrupted or partial files if the process crashes mid-write.
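The temp-file-then-rename pattern can be sketched as below. This is an illustrative stand-in for atomicWriteFileSync() in @dojops/sdk, which may differ in detail:

```typescript
import { randomBytes } from "node:crypto";
import { renameSync, writeFileSync } from "node:fs";
import { dirname, join } from "node:path";

function atomicWriteFileSync(target: string, content: string): void {
  // Write to a temp file in the same directory: rename is only atomic
  // within a single filesystem.
  const tmp = join(dirname(target), `.${randomBytes(6).toString("hex")}.tmp`);
  writeFileSync(tmp, content, "utf8");
  // POSIX rename atomically replaces the target, so a crash mid-write
  // leaves either the old file or the new one, never a partial file.
  renameSync(tmp, target);
}
```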
Backup Before Overwrite
When execute() writes to a file that already exists, it creates a .bak backup first using backupFile() from @dojops/sdk. For example:
- main.tf → main.tf.bak
- .github/workflows/ci.yml → .github/workflows/ci.yml.bak
Backups are only created when updating existing files, not when creating new ones. The .bak files are used by dojops rollback to restore the original content.
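The only-when-updating rule can be sketched as follows; a simplified stand-in for backupFile() from @dojops/sdk, not its actual source:

```typescript
import { copyFileSync, existsSync } from "node:fs";

// Copy an existing file to <path>.bak before overwrite.
// Returns the backup path, or null when the target is a new file
// (no backup is created in that case).
function backupFile(path: string): string | null {
  if (!existsSync(path)) return null;
  const bak = `${path}.bak`;
  copyFileSync(path, bak); // e.g. main.tf -> main.tf.bak
  return bak;
}
```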
File Tracking
All tool execute() methods return filesWritten and filesModified arrays in the SkillOutput:
- filesWritten: all files written during execution (both new and updated)
- filesModified: files that existed before and were overwritten (have .bak backups)
This metadata flows through the executor into audit entries and execution logs, enabling precise rollback (delete new files, restore .bak for modified files).
Idempotent YAML Output
All YAML generators use shared dump options with sortKeys: true for deterministic output. Running the same generation twice produces identical YAML, eliminating diff noise from key reordering.
GitHub Actions uses a custom key sort function that preserves the conventional top-level key order (name → on → permissions → env → jobs) while sorting all other keys alphabetically.
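A comparator implementing that ordering might look like the sketch below (illustrative only; the actual sortKeys function passed to js-yaml is internal to the GitHub Actions skill):

```typescript
// Conventional GitHub Actions top-level key order, per the text above.
const TOP_LEVEL_ORDER = ["name", "on", "permissions", "env", "jobs"];

// Conventional keys keep their fixed order; all other keys sort
// alphabetically after them, giving deterministic YAML output.
function workflowKeySort(a: string, b: string): number {
  const ia = TOP_LEVEL_ORDER.indexOf(a);
  const ib = TOP_LEVEL_ORDER.indexOf(b);
  if (ia !== -1 && ib !== -1) return ia - ib;
  if (ia !== -1) return -1;
  if (ib !== -1) return 1;
  return a.localeCompare(b);
}
```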
VerificationResult:
interface VerificationResult {
valid: boolean;
issues: VerificationIssue[];
}
interface VerificationIssue {
severity: "error" | "warning" | "info";
message: string;
line?: number;
file?: string;
}

Skill Details
GitHub Actions
Generates GitHub Actions workflow files (.github/workflows/ci.yml).
- Serialization: js-yaml
- Detector: Finds existing workflow files in .github/workflows/
- Verifier: Built-in structure lint validates the on trigger, the jobs section, runs-on per job (skipped for reusable workflow jobs with uses), and step run/uses presence. Optional binary verification via actionlint (skipped if not installed)
- Output: Complete workflow YAML with jobs, steps, triggers
Terraform
Generates Terraform HCL configurations.
- Serialization: Custom HCL builder
- Detector: Detects existing .tf files
- Verifier: terraform validate, checks HCL syntax and provider requirements
- Output: main.tf (resources), variables.tf (input variables)
Packer
Generates Packer HCL2 machine image templates.
- Serialization: HCL2 (raw)
- Detector: Detects existing .pkr.hcl files
- Verifier: packer init . && packer validate ., checks HCL2 syntax and plugin requirements
- Output: main.pkr.hcl (build blocks), variables.pkr.hcl (input variables), source.pkr.hcl (source blocks)
Kubernetes
Generates Kubernetes manifests (Deployments, Services, ConfigMaps, etc.).
- Serialization: js-yaml
- Verifier: kubectl --dry-run=client, validates manifest structure
- Output: YAML manifests
Helm
Generates Helm chart structures.
- Serialization: js-yaml
- Output: Chart.yaml, values.yaml
Ansible
Generates Ansible playbooks.
- Serialization: js-yaml
- Output: {name}.yml playbook
Docker Compose
Generates Docker Compose configurations.
- Serialization: js-yaml
- Detector: Checks for existing docker-compose.yml
- Output: docker-compose.yml
Dockerfile
Generates optimized Dockerfiles with multi-stage builds.
- Serialization: Custom string builder
- Detector: Checks for existing Dockerfile
- Verifier: hadolint, lints Dockerfile for best practices
- Output: Dockerfile, .dockerignore
Nginx
Generates Nginx server configurations.
- Serialization: Custom string builder
- Output: nginx.conf
Makefile
Generates Makefiles with proper tab indentation.
- Serialization: Custom string builder (with tabs)
- Detector: Checks for existing Makefile
- Output: Makefile
GitLab CI
Generates GitLab CI pipeline configurations.
- Serialization: js-yaml
- Detector: Checks for existing .gitlab-ci.yml
- Verifier: Built-in structure lint validates that stages is an array, jobs have script (or trigger/extends), and stage references are declared; hidden jobs (. prefix) are skipped. Optional binary verification via yamllint -d relaxed (skipped if not installed)
- Output: .gitlab-ci.yml
Prometheus
Generates Prometheus monitoring and alerting configurations.
- Serialization: js-yaml
- Output: prometheus.yml, alert-rules.yml
Systemd
Generates systemd service unit files.
- Serialization: Custom string builder (INI format)
- Output: {name}.service
Jenkinsfile
Generates Jenkins pipeline files (declarative or scripted).
- Serialization: Custom string builder (Groovy)
- Detector: Checks for existing Jenkinsfile
- Output: Jenkinsfile
Creating a New Skill
To add a new built-in skill, create a .dops file in packages/runtime/skills/ following the v2 format. See the Contributing guide for the full step-by-step pattern.
DOPS Skill Format
Built-in skills are defined as .dops skill files, a declarative format combining YAML frontmatter with markdown prompt sections. The @dojops/runtime package processes these skills through DopsRuntime.
Frontmatter Sections
All sections are defined in YAML between --- delimiters:
| Section | Required | Description |
|---|---|---|
| meta | Yes | Skill name, version, description, author, license, tags, repository |
| context | Yes | Technology context, output guidance, best practices, Context7 libs |
| files | Yes | Output file specs (path templates, format) |
| scope | No | Write boundary, explicit list of allowed write paths |
| risk | No | Skill risk self-classification (LOW / MEDIUM / HIGH + rationale) |
| execution | No | Mutation semantics (mode, deterministic, idempotent flags) |
| update | No | Structured update behavior (strategy, inputSource, injectAs) |
| detection | No | Existing file detection paths for auto-update mode |
| verification | No | Structural rules + optional binary verification command |
| permissions | No | Filesystem, child_process, and network permission declarations |
Context Block
The context block replaces v1’s input and output sections. It provides technology context and generation guidance to the LLM:
context:
  technology: "GitHub Actions"
  fileFormat: yaml
  outputGuidance: "Generate a complete GitHub Actions workflow YAML file..."
  bestPractices:
    - "Use matrix strategy for multi-version testing"
    - "Pin action versions with full SHA hashes"
  context7Libraries:
    - name: "github-actions"
      query: "workflow syntax and configuration"

| Field | Type | Description |
|---|---|---|
| technology | string | Technology name (e.g. “GitHub Actions”, “Terraform”) |
| fileFormat | string | Output file format (e.g. yaml, hcl, raw) |
| outputGuidance | string | Instructions for the LLM on what to generate |
| bestPractices | string[] | Best practices injected into the prompt |
| context7Libraries | array | Context7 library references for documentation augmentation |
Prompt Variables
v2 prompts support additional template variables:
| Variable | Source | Description |
|---|---|---|
| {outputGuidance} | context.outputGuidance | Generation instructions from the context block |
| {bestPractices} | context.bestPractices | Numbered list of best practices |
| {context7Docs} | Context7 API (runtime) | Documentation fetched via DocProvider |
| {projectContext} | Project scanner (runtime) | Detected project context information |
The DocProvider interface enables Context7 integration for v2 tools, fetching relevant documentation at runtime based on context7Libraries entries.
File Spec Fields
Each entry in the files array defines an output file:
| Field | Type | Default | Description |
|---|---|---|---|
| path | string | — | Output path (supports {var} templates) |
| format | string | raw | Output format (always raw, LLM generates content directly) |
| conditional | boolean | — | Only write if LLM produces content for this file |
The LLM generates raw file content directly, and DopsRuntime strips code fences via stripCodeFences() before writing.
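The fence stripping can be sketched as below; a simplified stand-in for stripCodeFences(), which may handle more edge cases than shown here:

```typescript
// If the LLM wrapped its output in a markdown code fence
// (```yaml ... ```), return only the inner content; otherwise
// return the trimmed content unchanged.
function stripCodeFences(content: string): string {
  const trimmed = content.trim();
  const match = trimmed.match(/^```[\w-]*\n([\s\S]*?)\n?```$/);
  return match ? match[1] : trimmed;
}
```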
Scope: Write Boundary Enforcement
The scope section declares which files a skill is allowed to write. Paths use the same {var} template syntax as files[].path:
scope:
  write: ["{outputPath}/main.tf", "{outputPath}/variables.tf"]

At execution time, resolved file paths are validated against the expanded scope patterns. Writes to paths not in scope.write are rejected with an error. Path traversal (..) in scope patterns is rejected at parse time.
When scope is omitted, the skill can write to any path.
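The execution-time check can be sketched as below (hypothetical helper name; the real validation, including {var} pattern expansion, may differ):

```typescript
// Reject any write whose resolved path is not covered by the
// skill's expanded scope.write list. An undefined scope means
// the skill may write anywhere.
function assertInScope(resolvedPath: string, scopeWrite?: string[]): void {
  if (scopeWrite === undefined) return; // no scope declared
  if (!scopeWrite.includes(resolvedPath)) {
    throw new Error(`Write to ${resolvedPath} is outside scope.write`);
  }
}
```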
Risk: Skill Self-Classification
Skills declare their own risk level:
risk:
  level: MEDIUM
  rationale: "Infrastructure changes may affect cloud resources"

| Level | Typical Use Cases |
|---|---|
| LOW | CI/CD, monitoring, build automation (github-actions, makefile) |
| MEDIUM | Infrastructure, containers, deployments (terraform, k8s) |
| HIGH | Production resources, IAM, security configurations |
Default when not declared: LOW with rationale “No risk classification declared”. The risk level is exposed via DopsRuntime.metadata.riskLevel for use by planners and approval workflows.
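The documented default can be sketched as (illustrative helper; field names follow the risk block above):

```typescript
type RiskLevel = "LOW" | "MEDIUM" | "HIGH";

interface Risk {
  level: RiskLevel;
  rationale: string;
}

// When a skill declares no risk block, fall back to the documented
// default: LOW with a standard rationale.
function resolveRisk(declared?: Risk): Risk {
  return declared ?? { level: "LOW", rationale: "No risk classification declared" };
}
```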
Execution: Mutation Semantics
execution:
  mode: generate        # "generate" or "update"
  deterministic: false  # same input always produces same output?
  idempotent: true      # safe to re-run without side effects?

All fields have defaults: mode: "generate", deterministic: false, idempotent: false.
Update: Structured Update Behavior
update:
  strategy: replace          # "replace" or "preserve_structure"
  inputSource: file          # where existing content comes from
  injectAs: existingContent  # variable name for existing content in prompts

When strategy is preserve_structure, the prompt compiler injects additional instructions to maintain the existing configuration’s organization. The injectAs field controls the variable name used in update prompts (default: existingContent).
Markdown Sections
After the closing --- delimiter, markdown sections define prompts:
- ## Prompt (required): Main generation prompt with {var} template substitution
- ## Keywords (required): Comma-separated keywords for agent routing
Built-in Skill Risk Levels
| Skill | Risk | Rationale |
|---|---|---|
| terraform | MEDIUM | Infrastructure changes may affect cloud resources |
| packer | MEDIUM | Machine image builds may affect cloud resources |
| kubernetes | MEDIUM | Cluster configuration changes affect running services |
| helm | MEDIUM | Chart changes affect Kubernetes deployments |
| dockerfile | MEDIUM | Build image changes may affect production runtime |
| docker-compose | LOW | Compose changes are local development configurations |
| ansible | MEDIUM | Playbook changes execute on remote hosts |
| nginx | MEDIUM | Web server config changes affect traffic routing |
| systemd | MEDIUM | Service unit changes affect system processes |
| github-actions | LOW | CI/CD workflow changes require PR review |
| gitlab-ci | LOW | CI/CD pipeline changes require MR review |
| makefile | LOW | Build automation changes are local |
| prometheus | LOW | Monitoring config changes are observable |
| jenkinsfile | LOW | CI/CD pipeline changes require PR review |
Custom Skill System
DojOps supports custom skills via the @dojops/skill-registry custom skill system. Custom skills are discovered automatically and behave exactly like built-in skills: they go through the same Planner, Executor, verification, and audit pipeline.
Custom Skill Discovery
Custom skills are discovered from two locations (in priority order):
- Project skills: .dojops/skills/<name>.dops (highest priority)
- Global skills: ~/.dojops/skills/<name>.dops
Custom skill discovery happens automatically on every command; no manual registration is needed.
Custom Skill CLI Commands
# List all discovered custom skills (global + project)
dojops skills list
# Validate a skill
dojops skills validate .dojops/skills/my-skill.dops
# Scaffold a .dops skill (uses AI when provider is configured)
dojops skills init my-skill
# Publish a .dops skill to DojOps Hub (requires DOJOPS_HUB_TOKEN)
dojops skills publish my-skill.dops
dojops skills publish my-skill.dops --changelog "Added Docker support"
# Install a .dops skill from DojOps Hub
dojops skills install nginx-config
dojops skills install nginx-config --version 1.0.0 --global
# Search the DojOps Hub for skills
dojops skills search docker
dojops skills search terraform --limit 5
dojops skills search k8s --output json

Hub Integration
The publish and install commands connect to the DojOps Hub, a skill marketplace where users share .dops skills.
Authentication Setup
Publishing skills to the Hub requires an API token. Tokens follow the GitHub PAT model: shown once at creation and stored as SHA-256 hashes.
1. Sign in to the Hub: Go to hub.dojops.ai and sign in with your GitHub account.
2. Generate a token: Navigate to Settings → API Tokens (/settings/tokens), or click “Settings” in the navbar.
3. Create a named token: Give your token a descriptive name (e.g. “My laptop”, “CI/CD pipeline”) and choose an expiration:
| Expiration | Duration | Use Case |
|---|---|---|
| 1 month | 30 days | Short-lived tasks |
| 3 months | 90 days | Regular development |
| No expiration | Never | CI/CD pipelines |
4. Copy the token: The raw token (format: dojops_ + 40 hex chars) is displayed once. Copy it immediately; you won’t be able to see it again.
5. Set the environment variable:
# Add to your shell profile (~/.bashrc, ~/.zshrc)
export DOJOPS_HUB_TOKEN="dojops_a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2"

You can manage up to 10 tokens per account. View active tokens and their last-used timestamps, and revoke compromised tokens, from the Settings page at any time.
Publishing a Skill
The publish flow validates locally, computes a SHA-256 hash for integrity, and uploads to the Hub.
# Publish a .dops file
dojops skills publish my-skill.dops
# Publish with a changelog message
dojops skills publish my-skill.dops --changelog "Added Docker support"
# Publish by skill name (looks up in .dojops/skills/)
dojops skills publish my-skill

What happens during publish:
- Local validation: The .dops file is parsed and validated against the .dops v2 spec (frontmatter, sections, Zod schemas)
- SHA-256 hash: The CLI computes a SHA-256 hash of the file as a publisher attestation
- Upload: The file and hash are sent to the Hub via POST /api/packages with Authorization: Bearer <token>
- Server verification: The Hub recomputes the hash and compares it against the client-provided hash. Mismatches are rejected
- Storage: The Hub stores the file and the publisher’s hash for download integrity verification
Example output:
◇ Validated: my-tool v1.0.0
◇ SHA256: a1b2c3d4e5f6...
┌ Published new skill
│ Name: my-tool
│ Version: v1.0.0
│ Slug: my-tool
│ SHA256: a1b2c3d4e5f6...
│ URL: https://hub.dojops.ai/packages/my-tool
└

Publishing a new version of an existing skill uses the same command; the Hub detects the existing package and adds the new version:
# Update version in .dops frontmatter, then:
dojops skills publish my-skill.dops --changelog "v1.1.0: Added Redis support"

Installing a Skill
The install flow downloads the skill, verifies its integrity against the publisher’s hash, and places it in your skills directory.
# Install latest version (project-local)
dojops skills install nginx-config
# Install a specific version
dojops skills install nginx-config --version 1.0.0
# Install globally (~/.dojops/skills/)
dojops skills install nginx-config --global

What happens during install:
- Fetch metadata: The CLI queries GET /api/packages/<slug> to resolve the latest version (unless --version is specified)
- Download: The .dops file is downloaded from GET /api/download/<slug>/<version> with the publisher’s SHA-256 hash in the X-Checksum-Sha256 response header
- Integrity check: The CLI recomputes the SHA-256 hash locally and compares it against the publisher’s hash. Mismatches abort the install with a tampering warning
- Validation: The downloaded file is parsed and validated as a .dops skill
- Write: The file is saved to .dojops/skills/<name>.dops (project) or ~/.dojops/skills/<name>.dops (global)
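The client-side integrity check can be sketched with Node’s crypto module (hypothetical helper, not the CLI’s actual code):

```typescript
import { createHash } from "node:crypto";

// Recompute SHA-256 over the downloaded bytes and compare with the
// publisher's hash from the X-Checksum-Sha256 header; any mismatch
// aborts the install.
function verifyChecksum(fileContent: Buffer, publisherSha256: string): void {
  const actual = createHash("sha256").update(fileContent).digest("hex");
  if (actual !== publisherSha256) {
    throw new Error(
      `SHA256 integrity check failed! Publisher: ${publisherSha256} Download: ${actual}`,
    );
  }
}
```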
Example output:
◇ Downloading nginx-config v1.0.0...
┌ Skill installed
│ Name: nginx-config
│ Version: v1.0.0
│ Path: .dojops/skills/nginx-config.dops
│ Scope: project
│ SHA256: f9e8d7c6b5a4...
│ Verify: OK - matches publisher hash
└

Integrity failure example (file tampered with):
✖ SHA256 integrity check failed! The downloaded file does not match the publisher's hash.
Publisher: f9e8d7c6b5a4...
Download: 0000aaaa1111...
This may indicate the file was tampered with. Aborting install.

Environment Variables
| Variable | Description | Default |
|---|---|---|
| DOJOPS_HUB_URL | Hub API base URL | https://hub.dojops.ai |
| DOJOPS_HUB_TOKEN | API token for publishing (generated at /settings/tokens) | — |
Full Example: Creating and Publishing a Skill
Here’s a complete walkthrough: creating a .dops skill from scratch, publishing it to the Hub, and installing it.
1. Create the .dops file (docker-compose-generator.dops):
---
meta:
  name: docker-compose-generator
  version: "1.0.0"
  description: "Generates Docker Compose files for multi-service applications with networking, volumes, and health checks"
  author: your-username
  license: MIT
  tags: [docker, compose, containers, devops]
context:
  technology: "Docker Compose"
  fileFormat: yaml
  outputGuidance: "Generate a complete, production-ready docker-compose.yml for the requested services."
  bestPractices:
    - "Use specific image tags, never latest"
    - "Add health checks for databases and caches"
    - "Use named volumes for persistent data"
    - "Create a dedicated bridge network"
    - "Set restart policies to unless-stopped"
files:
  - path: "docker-compose.yml"
    format: raw
scope:
  write: ["docker-compose.yml"]
risk:
  level: LOW
  rationale: "Only generates a docker-compose.yml file, no execution"
execution:
  mode: generate
  deterministic: false
  idempotent: true
permissions:
  filesystem: project
  child_process: none
  network: none
---
## Prompt
Generate a production-ready docker-compose.yml for the requested services.
{outputGuidance}
{bestPractices}
## Keywords
docker, compose, docker-compose, containers, services, volumes, networks, health-check, multi-container, orchestration

2. Set up authentication:
export DOJOPS_HUB_TOKEN="dojops_a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2"

3. Publish:
dojops skills publish docker-compose-generator.dops --changelog "Initial release: Docker Compose generator with health checks, volumes, and networking"

Expected output:
◇ Validated: docker-compose-generator v1.0.0
◇ SHA256: 7f3a2b1c9e8d...
┌ Published new skill
│ Name: docker-compose-generator
│ Version: v1.0.0
│ Slug: docker-compose-generator
│ SHA256: 7f3a2b1c9e8d...
│ URL: https://hub.dojops.ai/packages/docker-compose-generator
└

4. Install and use (anyone can do this):
# Install the skill
dojops skills install docker-compose-generator
# Use it
dojops "Generate a docker-compose for node-api, postgres, redis"

5. Publish a new version: bump meta.version in the frontmatter and publish again:
# After updating version to 1.1.0 in the .dops file:
dojops skills publish docker-compose-generator.dops --changelog "v1.1.0: Added MongoDB and RabbitMQ examples"

Skill Policy
Control which custom skills are allowed via .dojops/policy.yaml:
# Only allow specific skills
allowedSkills:
  - my-skill
  - another-skill

# Block specific skills (takes precedence over allowedSkills)
blockedSkills:
  - untrusted-skill

Skill Isolation
Custom skills are sandboxed with the same guardrails as built-in skills, plus additional controls:
- Verification command whitelist: Only 34 known DevOps binaries are allowed (terraform, packer, kubectl, helm, ansible-lint, ansible-playbook, docker, hadolint, yamllint, jsonlint, shellcheck, tflint, kubeval, conftest, checkov, trivy, kube-score, polaris, nginx, promtool, systemd-analyze, make, actionlint, caddy, haproxy, nomad, podman, fluentd, opa, vault, circleci, npx, tsc, cfn-lint). Non-whitelisted commands are rejected at runtime
- Permission enforcement: The permissions.child_process field must be "required" for verification commands to execute. Omitted or "none" means the command is silently skipped (default-safe)
- Path traversal prevention: File paths in files[].path and detector.path cannot contain .. segments, preventing writes outside the project directory
- Execution guardrails: Custom skills execute through the same SafeExecutor pipeline as built-in skills, inheriting maxFileSize (1MB default), timeoutMs (30s default), DevOps write allowlist enforcement, and per-file audit logging
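The .. rejection rule for files[].path and detector.path can be sketched as below (hypothetical helper; the actual validation may differ):

```typescript
// Split on both slash styles and reject any ".." path segment,
// which would otherwise allow writes outside the project directory.
function hasTraversal(p: string): boolean {
  return p.split(/[\\/]/).includes("..");
}
```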
Custom Skill Audit Trail
Custom skill executions include additional audit metadata:
- toolType: "custom" distinguishes the execution from built-in skills
- toolSource: "global" | "project" records where the custom skill was discovered
- toolVersion: the skill version from the frontmatter
- toolHash: SHA-256 hash of the .dops file for integrity verification
- systemPromptHash: SHA-256 hash of the custom skill’s system prompt for reproducibility tracking