DojOps Tool Specification v1 (Deprecated)
Deprecated: This specification describes the legacy `tool.yaml` manifest format. New tools should use the `.dops` file format instead, which is the default when running `dojops tools init` and the only format accepted by DojOps Hub. The `.dops` format includes richer metadata (`scope`, `risk`, `execution`, `update`) not available in `tool.yaml`. Legacy `tool.yaml` tools are still supported for backward compatibility but will not receive new features.
Status: FROZEN (Legacy)
Spec version: spec: 1
Effective from: v1.x
This document defines the v1 custom tool contract for DojOps using tool.yaml manifests. The spec is frozen — no breaking changes will be made under spec: 1. See Compatibility Promise for evolution rules.
Table of Contents
- Overview
- Spec Version
- Directory Structure
- Discovery Paths
- Manifest Schema
- Input Schema
- Output Schema
- Verification Command Whitelist
- Tool Policy
- Security Constraints
- Tool Lifecycle
- Compatibility Promise
- Appendix A: Full Manifest Example
- Appendix B: Input Schema Example
1. Overview
DojOps custom tools extend the system with user-defined DevOps tools beyond the 12 built-in ones. Each custom tool is a declarative package consisting of:
- A manifest (`tool.yaml`) defining the tool’s identity, LLM generation strategy, file outputs, and optional verification
- An input schema (`input.schema.json`) defining the tool’s input contract via JSON Schema
- An optional output schema for structured LLM output enforcement
Custom tools are discovered from disk, validated against this spec, converted to runtime DevOpsTool-compatible objects, and registered alongside built-in tools in the ToolRegistry.
2. Spec Version
Every manifest MUST declare `spec: 1`. This integer field gates validation and compatibility:

```yaml
spec: 1
```

The spec version is validated as `z.number().int().min(1).max(1)`. Future versions (`spec: 2`, etc.) will be handled by separate schema branches.
3. Directory Structure
A custom tool directory contains:
```
my-tool/
  tool.yaml            # Required: manifest file
  input.schema.json    # Required: JSON Schema for tool inputs
  output.schema.json   # Optional: JSON Schema for structured LLM output
```

The `tool.yaml` references schema files via relative paths:

```yaml
inputSchema: "input.schema.json"
outputSchema: "output.schema.json"  # optional
```

4. Discovery Paths
Custom tools are discovered from two locations:
| Location | Path | Priority |
|---|---|---|
| Global | ~/.dojops/tools/<tool-name>/ | Lower |
| Project | .dojops/tools/<tool-name>/ | Higher (overrides global) |
Discovery rules:
- Global tools are loaded first from `~/.dojops/tools/`
- Project tools are loaded from `.dojops/tools/` relative to the project root
- If both locations contain a tool with the same `name`, the project tool wins
- Each subdirectory is checked for a `tool.yaml` file
- Directories without `tool.yaml` are silently skipped
- Invalid manifests are silently skipped (no crash)
- Tools with missing input schema files are silently skipped
The `HOME` environment variable (or `USERPROFILE` on Windows) determines the global directory.
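The override behavior above can be sketched as a simple map merge. This is an illustrative stand-in, not the actual DojOps loader; the type and function names here are hypothetical.

```typescript
// Hypothetical sketch of discovery precedence: global tools load first,
// then project tools overwrite any same-named global entry.
type DiscoveredTool = { name: string; dir: string };

function mergeDiscovered(
  globalTools: DiscoveredTool[],
  projectTools: DiscoveredTool[],
): Map<string, DiscoveredTool> {
  const merged = new Map<string, DiscoveredTool>();
  // Global tools are registered first...
  for (const t of globalTools) merged.set(t.name, t);
  // ...then project tools, which win on name collisions.
  for (const t of projectTools) merged.set(t.name, t);
  return merged;
}
```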
5. Manifest Schema
The tool.yaml file MUST conform to the following schema:
Top-level fields
| Field | Type | Required | Constraints | Description |
|---|---|---|---|---|
spec | integer | Yes | 1 (exactly) | Spec version |
name | string | Yes | 1-64 chars, /^[a-z0-9-]+$/ | Tool identifier (lowercase, hyphens) |
version | string | Yes | min 1 char | Semantic version string |
type | string | Yes | "tool" (literal) | Tool type |
description | string | Yes | 1-500 chars | Human-readable description |
inputSchema | string | Yes | min 1 char | Relative path to JSON Schema file |
outputSchema | string | No | min 1 char | Relative path to output JSON Schema file |
tags | string[] | No | — | Discovery/categorization tags |
generator | object | Yes | — | LLM generation configuration |
files | array | Yes | min 1 entry | Output file definitions |
verification | object | No | — | External verification command |
detector | object | No | — | Existing file detection |
permissions | object | No | — | Capability declarations |
generator object
| Field | Type | Required | Description |
|---|---|---|---|
strategy | string | Yes | Must be "llm" |
systemPrompt | string | Yes | System prompt sent to the LLM (min 1 char) |
updateMode | boolean | No | Enable update-existing-config mode |
existingDelimiter | string | No | Delimiter for existing content injection |
files array entries
| Field | Type | Required | Constraints | Description |
|---|---|---|---|---|
path | string | Yes | min 1 char, no .. traversal | Output file path (supports {input} templates) |
serializer | enum | Yes | yaml, json, hcl, ini, toml, raw | Serialization format |
Path traversal prevention: File paths are validated to reject any `..` segment. The check splits the path on both `/` and `\` and rejects it if any segment equals `..`.
verification object
| Field | Type | Required | Description |
|---|---|---|---|
command | string | Yes | Shell command to validate output (min 1 char) |
detector object
| Field | Type | Required | Constraints | Description |
|---|---|---|---|---|
path | string | Yes | min 1 char, no .. traversal | Path to detect existing configs |
permissions object
| Field | Type | Required | Values | Default behavior |
|---|---|---|---|---|
filesystem | enum | No | "project", "global" | No restriction |
network | enum | No | "none", "inherit" | No restriction |
child_process | enum | No | "none", "required" | Treated as "none" |
6. Input Schema
The input.schema.json file uses standard JSON Schema (draft-07 compatible subset). It is converted to a runtime Zod schema at tool load time.
Supported JSON Schema types
| JSON Schema type | Zod equivalent | Notes |
|---|---|---|
string | z.string() | Supports description, default |
number | z.number() | Supports description, default |
integer | z.number().int() | Supports description, default |
boolean | z.boolean() | Supports description, default |
array | z.array(...) | Supports items, description, default |
object | z.object(...) | Supports properties, required, description |
enum | z.enum(...) | Values are cast to strings |
Property handling
- Properties listed in `required` are mandatory; others are optional (unless they have a `default`)
- Objects without `properties` become `z.record(z.string(), z.unknown())`
- The `description` field is preserved via `.describe()`
- The `default` field is preserved via `.default()`
- Nested objects and arrays are recursively converted
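The required/default rules above can be illustrated with a simplified validator. The real loader converts the JSON Schema into a Zod schema; this plain-TypeScript stand-in only demonstrates presence checking and default application, and its names are hypothetical.

```typescript
// Simplified illustration (not the actual Zod conversion): apply declared
// defaults, then fail if a required property is still missing.
type ObjectSchema = {
  properties: Record<string, { default?: unknown }>;
  required?: string[];
};

function applyDefaultsAndCheck(
  schema: ObjectSchema,
  input: Record<string, unknown>,
): Record<string, unknown> {
  const out: Record<string, unknown> = { ...input };
  for (const [key, prop] of Object.entries(schema.properties)) {
    if (out[key] === undefined && prop.default !== undefined) {
      out[key] = prop.default; // mirrors .default() behavior
    }
    if (out[key] === undefined && schema.required?.includes(key)) {
      throw new Error(`missing required property: ${key}`);
    }
  }
  return out;
}
```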
7. Output Schema
The optional output.schema.json follows the same JSON Schema subset as input schemas. When present, it is passed as the schema field on the LLMRequest, enabling structured JSON output from providers that support it.
If no output schema is provided, the LLM response is parsed as raw JSON or treated as a string.
8. Verification Command Whitelist
Custom tool verification commands are restricted to the following 16 binaries:
```
terraform    kubectl    helm        ansible-lint
docker       hadolint   yamllint    jsonlint
shellcheck   tflint     kubeval     conftest
checkov      trivy      kube-score  polaris
```

Verification execution rules (3-tier check):

- No command defined (`verification` absent or `command` empty) → verification passes (no-op)
- `child_process` permission not `"required"` → verification passes (never executes the command)
- Command not in whitelist → verification fails with an error listing allowed binaries
- Command in whitelist AND `child_process: "required"` → command is executed with a 30-second timeout
The binary check matches: exact binary name, or binary name followed by a space or tab (to allow arguments).
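The matching rule can be expressed as a small predicate. This is a sketch of the described behavior, not the actual implementation.

```typescript
// Sketch of the whitelist match described above: a command passes only if
// it equals an allowed binary exactly, or starts with that binary followed
// by a space or tab (so arguments are permitted, but prefixes like
// "terraformer" are not).
const WHITELIST = [
  "terraform", "kubectl", "helm", "ansible-lint",
  "docker", "hadolint", "yamllint", "jsonlint",
  "shellcheck", "tflint", "kubeval", "conftest",
  "checkov", "trivy", "kube-score", "polaris",
];

function isWhitelisted(command: string): boolean {
  return WHITELIST.some(
    (bin) =>
      command === bin ||
      command.startsWith(bin + " ") ||
      command.startsWith(bin + "\t"),
  );
}
```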
9. Tool Policy
Project owners can control which custom tools are allowed via .dojops/policy.yaml:
```yaml
# Allow only specific tools (allowlist mode)
allowedTools:
  - my-terraform-tool
  - my-k8s-tool

# Block specific tools (blocklist mode)
blockedTools:
  - untrusted-tool
```

Policy rules (evaluated in order):
- If `blockedTools` includes the tool name → denied
- If `allowedTools` is set and non-empty → only listed tools are allowed
- Otherwise → allowed (default-open)
The policy file is loaded from .dojops/policy.yaml relative to the project root. Missing or malformed policy files result in default-open behavior.
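The evaluation order above can be sketched as a single decision function. The type and function names are hypothetical; the real loader parses `.dojops/policy.yaml` before making this decision.

```typescript
// Sketch of the policy decision order: blocklist first, then allowlist
// mode, otherwise default-open (also used for missing/malformed policies).
type ToolPolicy = { allowedTools?: string[]; blockedTools?: string[] };

function isToolAllowed(policy: ToolPolicy | undefined, name: string): boolean {
  if (!policy) return true; // missing or malformed policy → default-open
  if (policy.blockedTools?.includes(name)) return false; // blocklist wins
  if (policy.allowedTools && policy.allowedTools.length > 0) {
    return policy.allowedTools.includes(name); // allowlist mode
  }
  return true; // default-open
}
```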
10. Security Constraints
Path traversal prevention
All file paths in the manifest (files[].path and detector.path) are validated to reject path traversal:
```
Rejected: ../../../etc/passwd
Rejected: foo/../../bar
Allowed:  output/config.yaml
Allowed:  {name}.yaml
```

The check: `!path.split(/[/\\]/).includes("..")`
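Wrapped as a standalone predicate, the check reads:

```typescript
// The traversal check quoted above: split on both separators and reject
// the path if any segment is exactly "..".
function isSafePath(path: string): boolean {
  return !path.split(/[/\\]/).includes("..");
}
```

Note that this also catches Windows-style separators, e.g. `foo\..\bar`.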
Child process isolation
- Verification commands only execute when `permissions.child_process` is explicitly set to `"required"`
- Without this permission, verification silently passes (default-safe)
- Even with permission, only whitelisted binaries are allowed
Tool hash integrity
Each custom tool has a SHA-256 hash computed from its tool.yaml content. This hash is:
- Stored in `ToolSource.toolHash` at discovery time
- Pinned into `PlanState` tasks at plan creation time
- Validated on `--resume` and `--replay` to detect tool modifications
- Only covers `tool.yaml` — changes to `input.schema.json` do not affect the hash
System prompt hash
Each CustomTool exposes a systemPromptHash (SHA-256 of generator.systemPrompt). This enables:
- Per-task reproducibility tracking in plans
- Replay validation to detect prompt drift between plan creation and execution
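Both integrity hashes described above are plain SHA-256 digests. A minimal sketch using Node's `node:crypto`, with hypothetical helper names:

```typescript
import { createHash } from "node:crypto";

// Sketch: both hashes are SHA-256 hex digests over raw text.
function sha256Hex(content: string): string {
  return createHash("sha256").update(content, "utf8").digest("hex");
}

// toolHash covers only the tool.yaml content; edits to input.schema.json
// do not change it. systemPromptHash covers generator.systemPrompt.
const toolHash = (toolYaml: string) => sha256Hex(toolYaml);
const systemPromptHash = (prompt: string) => sha256Hex(prompt);
```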
11. Tool Lifecycle
```
Discovery
    |
    v
Manifest Validation (Zod schema)
    |
    v
Schema Loading (input.schema.json → Zod, optional output.schema.json → Zod)
    |
    v
Hash Computation (SHA-256 of tool.yaml)
    |
    v
Policy Check (.dojops/policy.yaml → allowed/blocked)
    |
    v
Registration (CustomTool created → added to ToolRegistry)
    |
    v
Execution
    |-- validate(input) → Zod safeParse
    |-- generate(input) → LLM call with systemPrompt + optional existingContent
    |-- verify(data) → 3-tier check → optional whitelisted command
    +-- execute(input) → generate + serialize + write files (with .bak backup on update)
```

Key behaviors during execution:
- Update mode: When `generator.updateMode` is true and `detector.path` exists, existing file content is read and appended to the system prompt
- Input `existingContent`: Can also be passed directly via input fields
- File writing: Output is serialized using the configured `serializer` format
- Backup: On update (existing content detected), a `.bak` copy is created before overwriting
- Template paths: File paths support `{key}` placeholders replaced from input values
12. Compatibility Promise
Under spec: 1, the following changes are permitted without a spec version bump:
| Change | Allowed? |
|---|---|
| Add new optional manifest fields | Yes |
| Add new serializer formats | Yes |
| Add new verification binaries to whitelist | Yes |
| Add new permission types (optional) | Yes |
| Remove or rename existing fields | No — requires spec: 2 |
| Change field types or constraints | No — requires spec: 2 |
| Remove serializer formats | No — requires spec: 2 |
| Remove verification binaries | No — requires spec: 2 |
| Change discovery paths | No — requires spec: 2 |
| Change hash algorithm | No — requires spec: 2 |
Appendix A: Full Manifest Example
```yaml
spec: 1
name: my-terraform-module
version: "1.2.0"
type: tool
description: "Generates Terraform modules for AWS infrastructure"
inputSchema: "input.schema.json"
outputSchema: "output.schema.json"
tags:
  - terraform
  - aws
  - infrastructure
generator:
  strategy: llm
  systemPrompt: |
    You are a Terraform expert. Generate a complete Terraform module
    for the requested AWS infrastructure. Use best practices:
    - Use variables for all configurable values
    - Include proper resource tagging
    - Follow the standard module structure
    - Include outputs for key resource attributes
  updateMode: true
files:
  - path: "modules/{moduleName}/main.tf"
    serializer: hcl
  - path: "modules/{moduleName}/variables.tf"
    serializer: hcl
  - path: "modules/{moduleName}/outputs.tf"
    serializer: hcl
verification:
  command: "terraform validate"
detector:
  path: "modules/{moduleName}/main.tf"
permissions:
  filesystem: project
  child_process: required
```

Appendix B: Input Schema Example
```json
{
  "type": "object",
  "properties": {
    "moduleName": {
      "type": "string",
      "description": "Name of the Terraform module to generate"
    },
    "provider": {
      "type": "string",
      "enum": ["aws", "gcp", "azure"],
      "description": "Cloud provider"
    },
    "resources": {
      "type": "array",
      "items": {
        "type": "string"
      },
      "description": "List of resource types to include"
    },
    "region": {
      "type": "string",
      "default": "us-east-1",
      "description": "Cloud region"
    },
    "enableMonitoring": {
      "type": "boolean",
      "default": false,
      "description": "Whether to include CloudWatch monitoring"
    }
  },
  "required": ["moduleName", "provider", "resources"]
}
```