DevOps Tools
DojOps includes 12 built-in DevOps tools covering CI/CD, infrastructure-as-code, containers, monitoring, and system services. All tools follow a consistent pattern built on the `BaseTool<T>` abstract class. A custom tool system additionally lets you extend DojOps with your own tools via declarative `tool.yaml` manifests.
Tool Overview
| Tool | Serialization | Output Files | Detector | Verifier |
|---|---|---|---|---|
| GitHub Actions | YAML | .github/workflows/ci.yml | Yes | Structure lint |
| Terraform | HCL | main.tf, variables.tf | Yes | terraform validate |
| Kubernetes | YAML | K8s manifests | — | kubectl --dry-run |
| Helm | YAML | Chart.yaml, values.yaml | — | — |
| Ansible | YAML | {name}.yml | — | — |
| Docker Compose | YAML | docker-compose.yml | Yes | — |
| Dockerfile | Dockerfile syntax | Dockerfile, .dockerignore | Yes | hadolint |
| Nginx | Nginx conf | nginx.conf | — | — |
| Makefile | Make syntax (tabs) | Makefile | Yes | — |
| GitLab CI | YAML | .gitlab-ci.yml | Yes | Structure lint |
| Prometheus | YAML | prometheus.yml, alert-rules.yml | — | — |
| Systemd | INI | {name}.service | — | — |
Tool Pattern
Every tool follows the same file structure:
```
tool-name/
  schemas.ts     Zod input/output schemas (includes optional existingContent field)
  detector.ts    (optional) Filesystem detection of project context
  generator.ts   LLM call with structured schema -> serialization (YAML/HCL/custom);
                 accepts an optional existingContent param to switch between
                 "generate new" and "update existing" prompts
  verifier.ts    (optional) External tool validation
  *-tool.ts      BaseTool subclass: generate() auto-detects existing files,
                 verify() validates, execute() backs up + writes
  index.ts       Barrel exports
  *.test.ts      Vitest tests
```

BaseTool Abstract Class
```typescript
abstract class BaseTool<TInput> {
  abstract name: string;
  abstract inputSchema: z.ZodType<TInput>;

  // Zod validation of raw input
  validate(input: unknown): TInput;

  // LLM generation — returns structured data
  abstract generate(input: TInput): Promise<Result>;

  // Optional: write generated files to disk
  execute?(input: TInput): Promise<void>;

  // Optional: validate generated output with external tools
  verify?(data: unknown): Promise<VerificationResult>;
}
```

Schemas (schemas.ts)
Each tool defines Zod schemas for its input and output:
- Input schema — Validates user/planner input before generation. All input schemas include an optional `existingContent` field for passing existing config content to update/enhance
- Output schema — Validates LLM response structure before serialization
Detector (detector.ts)
Optional filesystem detector that identifies project context:
- `GitHubActionsDetector` — Finds existing workflow files
- `TerraformDetector` — Detects `.tf` files
- `DockerfileDetector` — Checks for existing Dockerfile
- `DockerComposeDetector` — Checks for `docker-compose.yml`
- `MakefileDetector` — Checks for existing Makefile
- `GitLabCIDetector` — Checks for `.gitlab-ci.yml`
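A detector is typically a thin filesystem check. As an illustration, a hypothetical `detectMakefile` helper in the spirit of `MakefileDetector` (not the actual implementation):

```typescript
import { existsSync } from "node:fs";
import { join } from "node:path";

// Hypothetical detector sketch: reports whether an existing config
// file is present in the project directory.
export interface DetectionResult {
  found: boolean;
  path?: string;
}

export function detectMakefile(projectPath: string): DetectionResult {
  const candidate = join(projectPath, "Makefile");
  return existsSync(candidate)
    ? { found: true, path: candidate }
    : { found: false };
}
```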
Generator (generator.ts)
Calls the LLM with a structured Zod schema, parses the response, and serializes to the target format (YAML, HCL, Dockerfile syntax, INI, etc.).
All generators accept an optional `existingContent?: string` parameter. When provided:
- The system prompt switches from “Generate a new config” to “Update the existing config. Preserve existing structure and settings.”
- The user prompt appends the existing content in a delimited block (`--- EXISTING CONFIGURATION ---`)
- This enables tools to enhance existing configs rather than replacing them from scratch
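The two-mode prompt assembly can be sketched as a pure function (hypothetical `buildPrompts` helper; each tool's real generator builds its own base prompt):

```typescript
// Hypothetical sketch of the generate-vs-update prompt split.
// basePrompt stands in for each tool's own prompt builder output.
export function buildPrompts(
  basePrompt: string,
  existingContent?: string,
): { system: string; prompt: string } {
  const isUpdate = Boolean(existingContent);
  const system = isUpdate
    ? "Update the existing config. Preserve existing structure and settings."
    : "Generate a new config from scratch.";
  const prompt = isUpdate
    ? `${basePrompt}\n\n--- EXISTING CONFIGURATION ---\n${existingContent}\n--- END ---`
    : basePrompt;
  return { system, prompt };
}
```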
Verifier (verifier.ts)
Optional validation of generated output. Five tools implement verification:
| Tool | Verification Method | Verification Command / Check |
|---|---|---|
| Terraform | External binary (terraform) | terraform validate |
| Dockerfile | External binary (hadolint) | hadolint Dockerfile |
| Kubernetes | External binary (kubectl) | kubectl --dry-run=client |
| GitHub Actions | Built-in structure lint | Checks on trigger, jobs, runs-on, step run/uses |
| GitLab CI | Built-in structure lint | Checks job script, stages array, stage references |
Verification runs by default in CLI commands. Use --skip-verify to disable. External binary checks gracefully skip if the binary is not installed. Built-in verifiers always run.
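A built-in structure lint is ordinary object checking, no external binary required. A simplified sketch in the spirit of the GitHub Actions verifier (hypothetical `lintWorkflow`; the real checks are more thorough):

```typescript
// Illustrative structure lint: checks 'on', 'jobs', and per-job
// 'runs-on' (skipped for reusable-workflow jobs with 'uses').
interface VerificationIssue {
  severity: "error" | "warning" | "info";
  message: string;
}
interface VerificationResult {
  valid: boolean;
  issues: VerificationIssue[];
}

export function lintWorkflow(doc: Record<string, unknown>): VerificationResult {
  const issues: VerificationIssue[] = [];
  if (!("on" in doc)) {
    issues.push({ severity: "error", message: "missing 'on' trigger" });
  }
  const jobs = doc.jobs as Record<string, Record<string, unknown>> | undefined;
  if (!jobs || Object.keys(jobs).length === 0) {
    issues.push({ severity: "error", message: "missing 'jobs' section" });
  } else {
    for (const [name, job] of Object.entries(jobs)) {
      // Reusable-workflow jobs ('uses') do not need a runner.
      if (!job["runs-on"] && !job.uses) {
        issues.push({ severity: "error", message: `job '${name}' is missing runs-on` });
      }
    }
  }
  return { valid: !issues.some(i => i.severity === "error"), issues };
}
```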
Existing Config Auto-Detection
All 12 tools auto-detect existing config files and switch to update mode when found. Each tool knows its output file path and reads existing content automatically:
| Tool | Auto-Detect Path |
|---|---|
| GitHub Actions | {projectPath}/.github/workflows/ci.yml |
| Terraform | {projectPath}/main.tf |
| Kubernetes | {outputPath}/{appName}.yaml |
| Helm | {outputPath}/{chartName}/values.yaml |
| Ansible | {outputPath}/{playbookName}.yml |
| Docker Compose | {projectPath}/docker-compose.yml (+ .yaml, compose.yml variants) |
| Dockerfile | {outputPath}/Dockerfile then {projectPath}/Dockerfile |
| Nginx | {outputPath}/nginx.conf |
| Makefile | {projectPath}/Makefile |
| GitLab CI | {projectPath}/.gitlab-ci.yml |
| Prometheus | {outputPath}/prometheus.yml |
| Systemd | {outputPath}/{serviceName}.service |
Behavior:
- If `existingContent` is explicitly passed in the input, it takes priority over auto-detection
- Otherwise, the tool reads the file at the auto-detect path using `readExistingConfig()` (from `@dojops/sdk`)
- Files larger than 50KB are skipped (returns `null`)
- The `generate()` output includes `isUpdate: boolean` so callers (CLI, planner) can distinguish create vs update
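The precedence above can be sketched as follows (hypothetical `resolveExistingContent`; the real logic lives in `readExistingConfig()` from `@dojops/sdk`):

```typescript
import { readFileSync, statSync } from "node:fs";

const MAX_SIZE = 50 * 1024; // files larger than 50KB are skipped

// Hypothetical sketch: explicit existingContent wins; otherwise read
// the auto-detect path; null means "generate new".
export function resolveExistingContent(
  explicit: string | undefined,
  autoDetectPath: string,
): string | null {
  if (explicit !== undefined) return explicit;
  try {
    if (statSync(autoDetectPath).size > MAX_SIZE) return null;
    return readFileSync(autoDetectPath, "utf8");
  } catch {
    return null; // file absent: generate-new mode
  }
}
```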
Atomic File Writes
All tool execute() methods use atomicWriteFileSync() from @dojops/sdk. This writes to a temporary file first, then atomically renames it to the target path using fs.renameSync (POSIX atomic rename). This prevents corrupted or partial files if the process crashes mid-write.
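The pattern behind `atomicWriteFileSync()` can be sketched as follows (hypothetical `atomicWrite`; the temp-file naming scheme is an assumption):

```typescript
import { renameSync, writeFileSync } from "node:fs";

// Sketch of write-temp-then-rename: a crash before renameSync leaves
// the target file untouched, so readers never see a partial write.
export function atomicWrite(targetPath: string, content: string): void {
  const tmpPath = `${targetPath}.tmp-${process.pid}`;
  writeFileSync(tmpPath, content, "utf8");
  renameSync(tmpPath, targetPath); // POSIX rename is atomic on one filesystem
}
```

Note the temp file must live on the same filesystem as the target, since `rename(2)` is only atomic within a single filesystem.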
Backup Before Overwrite
When execute() writes to a file that already exists, it creates a .bak backup first using backupFile() from @dojops/sdk. For example:
- `main.tf` → `main.tf.bak`
- `.github/workflows/ci.yml` → `.github/workflows/ci.yml.bak`
Backups are only created when updating existing files, not when creating new ones. The .bak files are used by dojops rollback to restore the original content.
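A minimal sketch of this behavior (hypothetical `backupIfExists`; the real `backupFile()` lives in `@dojops/sdk`):

```typescript
import { copyFileSync, existsSync } from "node:fs";

// Sketch of backup-before-overwrite: only files that already exist
// get a .bak copy; new files return null (nothing to restore).
export function backupIfExists(path: string): string | null {
  if (!existsSync(path)) return null;
  const bakPath = `${path}.bak`;
  copyFileSync(path, bakPath);
  return bakPath;
}
```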
File Tracking
All tool execute() methods return filesWritten and filesModified arrays in the ToolOutput:
- `filesWritten` — All files written during execution (both new and updated)
- `filesModified` — Files that existed before and were overwritten (have `.bak` backups)
This metadata flows through the executor into audit entries and execution logs, enabling precise rollback (delete new files, restore .bak for modified files).
Idempotent YAML Output
All YAML generators use shared dump options with sortKeys: true for deterministic output. Running the same generation twice produces identical YAML, eliminating diff noise from key reordering.
GitHub Actions uses a custom key sort function that preserves the conventional top-level key order (name → on → permissions → env → jobs) while sorting all other keys alphabetically.
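Such a comparator can be sketched as follows (hypothetical `workflowKeySort`; js-yaml's `dump()` accepts a comparator function for its `sortKeys` option):

```typescript
// Sketch of a sortKeys comparator: pins the conventional workflow
// key order, then sorts all remaining keys alphabetically.
const TOP_LEVEL_ORDER = ["name", "on", "permissions", "env", "jobs"];

export function workflowKeySort(a: string, b: string): number {
  const ia = TOP_LEVEL_ORDER.indexOf(a);
  const ib = TOP_LEVEL_ORDER.indexOf(b);
  if (ia !== -1 && ib !== -1) return ia - ib; // both pinned: keep convention
  if (ia !== -1) return -1; // pinned keys come before everything else
  if (ib !== -1) return 1;
  return a.localeCompare(b); // everything else: alphabetical
}
```

It would be passed as `dump(doc, { sortKeys: workflowKeySort })`; because the comparator is deterministic, repeated dumps of the same document are byte-identical.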
VerificationResult:

```typescript
interface VerificationResult {
  valid: boolean;
  issues: VerificationIssue[];
}

interface VerificationIssue {
  severity: "error" | "warning" | "info";
  message: string;
  line?: number;
  file?: string;
}
```

Tool Details
GitHub Actions
Generates GitHub Actions workflow files (.github/workflows/ci.yml).
- Serialization: `js-yaml`
- Detector: Finds existing workflow files in `.github/workflows/`
- Verifier: Built-in structure lint — validates `on` trigger, `jobs` section, `runs-on` per job (skipped for reusable workflow jobs with `uses`), step `run`/`uses` presence
- Output: Complete workflow YAML with jobs, steps, triggers
Terraform
Generates Terraform HCL configurations.
- Serialization: Custom HCL builder
- Detector: Detects existing `.tf` files
- Verifier: `terraform validate` — checks HCL syntax and provider requirements
- Output: `main.tf` (resources), `variables.tf` (input variables)
Kubernetes
Generates Kubernetes manifests (Deployments, Services, ConfigMaps, etc.).
- Serialization: `js-yaml`
- Verifier: `kubectl --dry-run=client` — validates manifest structure
- Output: YAML manifests
Helm
Generates Helm chart structures.
- Serialization: `js-yaml`
- Output: `Chart.yaml`, `values.yaml`
Ansible
Generates Ansible playbooks.
- Serialization: `js-yaml`
- Output: `{name}.yml` playbook
Docker Compose
Generates Docker Compose configurations.
- Serialization: `js-yaml`
- Detector: Checks for existing `docker-compose.yml`
- Output: `docker-compose.yml`
Dockerfile
Generates optimized Dockerfiles with multi-stage builds.
- Serialization: Custom string builder
- Detector: Checks for existing Dockerfile
- Verifier: `hadolint` — lints Dockerfile for best practices
- Output: `Dockerfile`, `.dockerignore`
Nginx
Generates Nginx server configurations.
- Serialization: Custom string builder
- Output: `nginx.conf`
Makefile
Generates Makefiles with proper tab indentation.
- Serialization: Custom string builder (with tabs)
- Detector: Checks for existing Makefile
- Output: `Makefile`
GitLab CI
Generates GitLab CI pipeline configurations.
- Serialization: `js-yaml`
- Detector: Checks for existing `.gitlab-ci.yml`
- Verifier: Built-in structure lint — validates `stages` is an array, jobs have `script` (or `trigger`/`extends`), stage references are declared, hidden jobs (`.` prefix) are skipped
- Output: `.gitlab-ci.yml`
Prometheus
Generates Prometheus monitoring and alerting configurations.
- Serialization: `js-yaml`
- Output: `prometheus.yml`, `alert-rules.yml`
Systemd
Generates systemd service unit files.
- Serialization: Custom string builder (INI format)
- Output: `{name}.service`
Creating a New Tool
To add a new tool to DojOps:
1. Create the directory: `packages/tools/src/my-tool/`

2. Define schemas (`schemas.ts`):

```typescript
import { z } from "@dojops/sdk";

export const MyToolInputSchema = z.object({
  /* tool-specific fields */
  existingContent: z
    .string()
    .optional()
    .describe(
      "Existing config file content to update/enhance. If omitted, tool auto-detects existing files.",
    ),
});

export const MyToolOutputSchema = z.object({ /* ... */ });
```

3. Implement the generator (`generator.ts`):

```typescript
export async function generateMyTool(input, provider, existingContent?: string) {
  const isUpdate = !!existingContent;
  const system = isUpdate
    ? "Update the existing config. Preserve existing structure and settings."
    : "Generate a new config from scratch.";
  const prompt = isUpdate
    ? `${buildPrompt(input)}\n\n--- EXISTING CONFIGURATION ---\n${existingContent}\n--- END ---`
    : buildPrompt(input);
  const response = await provider.generate({ system, prompt, schema: MyToolOutputSchema });
  return parseAndValidate(response.content, MyToolOutputSchema);
}
```

4. Create the tool class (`my-tool.ts`):

```typescript
import { BaseTool, readExistingConfig, backupFile, atomicWriteFileSync } from "@dojops/sdk";

export class MyTool extends BaseTool<MyToolInput> {
  name = "my-tool";
  inputSchema = MyToolInputSchema;

  async generate(input) {
    const existingContent = input.existingContent ?? readExistingConfig(outputPath);
    const isUpdate = !!existingContent;
    const result = await generateMyTool(input, this.provider, existingContent);
    return { success: true, data: { ...result, isUpdate } };
  }

  async execute(input) {
    const result = await this.generate(input);
    if (result.data.isUpdate) backupFile(outputPath);
    atomicWriteFileSync(outputPath, result.data.content);
    return {
      ...result,
      filesWritten: [outputPath],
      filesModified: result.data.isUpdate ? [outputPath] : [],
    };
  }
}
```

5. Add optional detector (`detector.ts`) and verifier (`verifier.ts`).

6. Export from index (`index.ts`) and add to the tools barrel export in `packages/tools/src/index.ts`.

7. Write tests (`my-tool.test.ts`) — include tests for auto-detection, update mode prompts, and backup creation.
DOPS Module Format
Built-in tools can also be defined as .dops module files — a declarative format combining YAML frontmatter with markdown prompt sections. The @dojops/runtime package processes these modules through a unified runtime engine (DopsRuntime).
Frontmatter Sections
All sections are defined in YAML between --- delimiters:
| Section | Required | Description |
|---|---|---|
| `dops` | Yes | Version identifier (`v1`) |
| `meta` | Yes | Tool name, version, description, optional icon URL |
| `input` | No | Input field definitions with types, constraints, defaults |
| `output` | Yes | JSON Schema for LLM output validation |
| `files` | Yes | Output file specs (path templates, format, serialization options) |
| `scope` | No | Write boundary — explicit list of allowed write paths |
| `risk` | No | Tool risk self-classification (LOW / MEDIUM / HIGH + rationale) |
| `execution` | No | Mutation semantics (mode, deterministic, idempotent flags) |
| `update` | No | Structured update behavior (strategy, inputSource, injectAs) |
| `detection` | No | Existing file detection paths for auto-update mode |
| `verification` | No | Structural rules + optional binary verification command |
| `permissions` | No | Filesystem, child_process, and network permission declarations |
Scope — Write Boundary Enforcement
The scope section declares which files a tool is allowed to write. Paths use the same {var} template syntax as files[].path:
```yaml
scope:
  write: ["{outputPath}/main.tf", "{outputPath}/variables.tf"]
```

At execution time, resolved file paths are validated against the expanded scope patterns. Writes to paths not in `scope.write` are rejected with an error. Path traversal (`..`) in scope patterns is rejected at parse time.
When scope is omitted, the tool can write to any path (backward compatible with existing tools).
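The enforcement can be sketched as template expansion plus an exact-match check (hypothetical `expandTemplate`/`isWriteAllowed` helpers; the real matcher may support richer patterns):

```typescript
// Expand {var} placeholders in a scope pattern, e.g.
// "{outputPath}/main.tf" with { outputPath: "infra" } -> "infra/main.tf".
export function expandTemplate(
  pattern: string,
  vars: Record<string, string>,
): string {
  return pattern.replace(/\{(\w+)\}/g, (_, name) => vars[name] ?? `{${name}}`);
}

// A resolved write path is allowed only if it matches an expanded
// scope entry; an omitted scope allows any path (backward compatible).
export function isWriteAllowed(
  resolvedPath: string,
  scopeWrite: string[] | undefined,
  vars: Record<string, string>,
): boolean {
  if (!scopeWrite) return true;
  return scopeWrite.some(p => expandTemplate(p, vars) === resolvedPath);
}
```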
Risk — Tool Self-Classification
Tools declare their own risk level:
```yaml
risk:
  level: MEDIUM
  rationale: "Infrastructure changes may affect cloud resources"
```

| Level | Typical Use Cases |
|---|---|
| LOW | CI/CD, monitoring, build automation (github-actions, makefile) |
| MEDIUM | Infrastructure, containers, deployments (terraform, k8s) |
| HIGH | Production resources, IAM, security configurations |
Default when not declared: LOW with rationale “No risk classification declared”. The risk level is exposed via DopsRuntime.metadata.riskLevel for use by planners and approval workflows.
Execution — Mutation Semantics
```yaml
execution:
  mode: generate        # "generate" or "update"
  deterministic: false  # same input always produces same output?
  idempotent: true      # safe to re-run without side effects?
```

All fields have defaults: `mode: "generate"`, `deterministic: false`, `idempotent: false`.
Update — Structured Update Behavior
```yaml
update:
  strategy: replace           # "replace" or "preserve_structure"
  inputSource: file           # where existing content comes from
  injectAs: existingContent   # variable name for existing content in prompts
```

When `strategy` is `preserve_structure`, the prompt compiler injects additional instructions to maintain the existing configuration’s organization. The `injectAs` field controls the variable name used in `## Update Prompt` sections (default: `existingContent`).
Meta Icon — Marketplace Display
```yaml
meta:
  name: terraform
  version: 1.0.0
  description: "Generates Terraform configurations"
  icon: "https://registry.dojops.ai/icons/terraform.svg"
```

The `icon` field is optional. When present, it must be an HTTPS URL (max 2048 characters). This URL is used by the DojOps marketplace to display a tool icon alongside the tool name.
Markdown Sections
After the closing --- delimiter, markdown sections define prompts:
- `## Prompt` (required) — Main generation prompt with `{var}` template substitution
- `## Update Prompt` (optional) — Used instead of `## Prompt` when updating existing content
- `## Examples` (optional) — Example outputs for the LLM
- `## Constraints` (optional) — Rules the LLM must follow
- `## Keywords` (required) — Comma-separated keywords for agent routing
Built-in Module Risk Levels
| Tool | Risk | Rationale |
|---|---|---|
| terraform | MEDIUM | Infrastructure changes may affect cloud resources |
| kubernetes | MEDIUM | Cluster configuration changes affect running services |
| helm | MEDIUM | Chart changes affect Kubernetes deployments |
| dockerfile | MEDIUM | Build image changes may affect production runtime |
| docker-compose | MEDIUM | Service stack changes affect running containers |
| ansible | MEDIUM | Playbook changes execute on remote hosts |
| nginx | MEDIUM | Web server config changes affect traffic routing |
| systemd | MEDIUM | Service unit changes affect system processes |
| github-actions | LOW | CI/CD workflow changes require PR review |
| gitlab-ci | LOW | CI/CD pipeline changes require MR review |
| makefile | LOW | Build automation changes are local |
| prometheus | LOW | Monitoring config changes are observable |
Custom Tool System
DojOps supports custom tools via the @dojops/tool-registry custom tool system. Custom tools are discovered automatically and behave exactly like built-in tools — they go through the same Planner, Executor, verification, and audit pipeline.
Custom Tool Discovery
Custom tools are discovered from two locations:
- Global: `~/.dojops/tools/<name>/tool.yaml`
- Project: `.dojops/tools/<name>/tool.yaml` (overrides global if same name)
Custom tool discovery happens automatically on every command — no manual registration needed.
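The override precedence can be sketched as a name-keyed merge where project entries shadow global ones (hypothetical `mergeDiscovered` helper, not the registry's actual API):

```typescript
// Sketch of two-location discovery: tools are keyed by name and
// project-local entries replace same-named global entries.
export interface DiscoveredTool {
  name: string;
  source: "global" | "project";
}

export function mergeDiscovered(
  global: DiscoveredTool[],
  project: DiscoveredTool[],
): DiscoveredTool[] {
  const byName = new Map<string, DiscoveredTool>();
  for (const t of global) byName.set(t.name, t);
  for (const t of project) byName.set(t.name, t); // project overrides global
  return [...byName.values()];
}
```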
Tool Manifest (tool.yaml)
Each custom tool is a directory containing a tool.yaml manifest and a JSON Schema file:
```yaml
spec: 1
name: my-tool
version: 1.0.0
type: tool
description: "Generates configuration for My Tool"
inputSchema: input.schema.json
generator:
  strategy: llm
  systemPrompt: |
    You are a My Tool configuration expert.
    Generate valid My Tool configuration based on the user's requirements.
  updateMode: true
  existingDelimiter: "--- EXISTING CONFIGURATION ---"
files:
  - path: my-tool.yml
    serializer: yaml
verification:
  command: my-tool validate --config my-tool.yml
detector:
  path: my-tool.yml
tags:
  - configuration
  - my-tool
permissions:
  filesystem: project
  network: none
  child_process: required
```

Input Schema (input.schema.json)
A JSON Schema file defining the custom tool’s input parameters:
```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "name": { "type": "string", "description": "Application name" },
    "environment": {
      "type": "string",
      "enum": ["development", "staging", "production"],
      "default": "development"
    }
  },
  "required": ["name"]
}
```

JSON Schema inputs are converted to Zod schemas at runtime, ensuring full compatibility with the Planner’s `zodSchemaToText()` and the Executor’s validation pipeline.
Custom Tool CLI Commands
```bash
# List all discovered custom tools (global + project)
dojops tools list

# Validate a tool manifest
dojops tools validate .dojops/tools/my-tool/

# Scaffold a new custom tool with template files
dojops tools init my-tool

# Publish a .dops tool to DojOps Hub (requires DOJOPS_HUB_TOKEN)
dojops tools publish my-tool.dops
dojops tools publish my-tool.dops --changelog "Added Docker support"

# Install a .dops tool from DojOps Hub
dojops tools install nginx-config
dojops tools install nginx-config --version 1.0.0 --global

# Search the DojOps Hub for tools
dojops tools search docker
dojops tools search terraform --limit 5
dojops tools search k8s --output json
```

Hub Integration
The publish and install commands connect to the DojOps Hub — a tool marketplace where users share .dops tools.
Authentication Setup
Publishing tools to the Hub requires an API token. Tokens follow the GitHub PAT model — shown once at creation, stored as SHA-256 hashes.
1. Sign in to the Hub — Go to hub.dojops.ai and sign in with your GitHub account.
2. Generate a token — Navigate to Settings → API Tokens (/settings/tokens), or click “Settings” in the navbar.
3. Create a named token — Give your token a descriptive name (e.g. “My laptop”, “CI/CD pipeline”) and choose an expiration:
| Expiration | Duration | Use Case |
|---|---|---|
| 1 month | 30 days | Short-lived tasks |
| 3 months | 90 days | Regular development |
| No expiration | Never | CI/CD pipelines |
4. Copy the token — The raw token (format: dojops_ + 40 hex chars) is displayed once. Copy it immediately — you won’t be able to see it again.
5. Set the environment variable:
```bash
# Add to your shell profile (~/.bashrc, ~/.zshrc)
export DOJOPS_HUB_TOKEN="dojops_a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2"
```

You can manage up to 10 tokens per account. View active tokens, their last-used timestamps, and revoke compromised tokens from the Settings page at any time.
Publishing a Tool
The publish flow validates locally, computes a SHA-256 hash for integrity, and uploads to the Hub.
```bash
# Publish a .dops file
dojops tools publish my-tool.dops

# Publish with a changelog message
dojops tools publish my-tool.dops --changelog "Added Docker support"

# Publish by tool name (looks up in .dojops/tools/)
dojops tools publish my-tool
```

What happens during publish:
1. Local validation — The `.dops` file is parsed and validated against the v1 spec (frontmatter, sections, Zod schemas)
2. SHA-256 hash — The CLI computes a SHA-256 hash of the file as a publisher attestation
3. Upload — The file and hash are sent to the Hub via `POST /api/packages` with `Authorization: Bearer <token>`
4. Server verification — The Hub recomputes the hash and compares it against the client-provided hash. Mismatches are rejected
5. Storage — The Hub stores the file and the publisher’s hash for download integrity verification
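The attestation itself is plain SHA-256 over the file bytes, sketched here with Node's standard `crypto` module (hypothetical helper names):

```typescript
import { createHash } from "node:crypto";

// Publisher side: hash the .dops file bytes as an attestation.
export function sha256Hex(data: string | Buffer): string {
  return createHash("sha256").update(data).digest("hex");
}

// Consumer side: recompute and compare against the publisher's hash.
export function verifyDownload(fileBytes: Buffer, publisherHash: string): boolean {
  return sha256Hex(fileBytes) === publisherHash;
}
```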
Example output:
```
◇ Validated: my-tool v1.0.0
◇ SHA256: a1b2c3d4e5f6...
┌ Published new tool
│ Name: my-tool
│ Version: v1.0.0
│ Slug: my-tool
│ SHA256: a1b2c3d4e5f6...
│ URL: https://hub.dojops.ai/packages/my-tool
└
```

Publishing a new version of an existing tool uses the same command — the Hub detects the existing package and adds the new version:
```bash
# Update version in .dops frontmatter, then:
dojops tools publish my-tool.dops --changelog "v1.1.0: Added Redis support"
```

Installing a Tool
The install flow downloads the tool, verifies its integrity against the publisher’s hash, and places it in your tools directory.
```bash
# Install latest version (project-local)
dojops tools install nginx-config

# Install a specific version
dojops tools install nginx-config --version 1.0.0

# Install globally (~/.dojops/tools/)
dojops tools install nginx-config --global
```

What happens during install:
1. Fetch metadata — The CLI queries `GET /api/packages/<slug>` to resolve the latest version (unless `--version` is specified)
2. Download — The `.dops` file is downloaded from `GET /api/download/<slug>/<version>` with the publisher’s SHA-256 hash in the `X-Checksum-Sha256` response header
3. Integrity check — The CLI recomputes the SHA-256 hash locally and compares it against the publisher’s hash. Mismatches abort the install with a tampering warning
4. Validation — The downloaded file is parsed and validated as a `.dops` module
5. Write — The file is saved to `.dojops/tools/<name>.dops` (project) or `~/.dojops/tools/<name>.dops` (global)
Example output:
```
◇ Downloading nginx-config v1.0.0...
┌ Tool installed
│ Name: nginx-config
│ Version: v1.0.0
│ Path: .dojops/tools/nginx-config.dops
│ Scope: project
│ SHA256: f9e8d7c6b5a4...
│ Verify: OK — matches publisher hash
└
```

Integrity failure example (file tampered with):
```
✖ SHA256 integrity check failed! The downloaded file does not match the publisher's hash.
  Publisher: f9e8d7c6b5a4...
  Download:  0000aaaa1111...
This may indicate the file was tampered with. Aborting install.
```

Environment Variables
| Variable | Description | Default |
|---|---|---|
| `DOJOPS_HUB_URL` | Hub API base URL | `https://hub.dojops.ai` |
| `DOJOPS_HUB_TOKEN` | API token for publishing (generated at `/settings/tokens`) | — |
Full Example: Creating and Publishing a Tool
Here’s a complete walkthrough — creating a .dops tool from scratch, publishing it to the Hub, and installing it.
1. Create the .dops file — docker-compose-generator.dops:
---
dops: v1
kind: tool
meta:
name: docker-compose-generator
version: 1.0.0
description: "Generates Docker Compose files for multi-service applications with networking, volumes, and health checks"
author: your-username
license: MIT
tags: [docker, compose, containers, devops]
input:
fields:
projectPath:
type: string
required: true
description: "Directory to write docker-compose.yml"
services:
type: string
required: true
description: "Comma-separated list of services (e.g. node-api, postgres, redis)"
output:
type: object
required: [version, services]
properties:
version:
type: string
services:
type: object
volumes:
type: object
networks:
type: object
files:
- path: "{projectPath}/docker-compose.yml"
format: yaml
source: llm
options:
sortKeys: false
indent: 2
noRefs: true
permissions:
filesystem: write
child_process: none
network: none
scope:
write: ["{projectPath}/docker-compose.yml"]
risk:
level: LOW
rationale: "Only generates a docker-compose.yml file, no execution"
execution:
mode: generate
deterministic: false
idempotent: true
---
## Prompt
You are a Docker Compose expert. Generate a production-ready docker-compose.yml
for the requested services.
Follow these best practices:
- Use specific image tags, never `latest`
- Add health checks for databases and caches
- Use named volumes for persistent data
- Create a dedicated bridge network
- Set restart policies to `unless-stopped`
- Add resource limits (memory)
- Use environment variables for configuration
- Expose only necessary ports
Respond with valid JSON matching the output schema.
## Examples
Input: "node-api, postgres, redis"
Output:
```json
{
"version": "3.8",
"services": {
"node-api": {
"build": { "context": ".", "dockerfile": "Dockerfile" },
"ports": ["3000:3000"],
"depends_on": {
"postgres": { "condition": "service_healthy" },
"redis": { "condition": "service_healthy" }
},
"restart": "unless-stopped",
"networks": ["app-network"]
},
"postgres": {
"image": "postgres:16-alpine",
"volumes": ["pgdata:/var/lib/postgresql/data"],
"healthcheck": {
"test": ["CMD-SHELL", "pg_isready -U app -d appdb"],
"interval": "10s",
"timeout": "5s",
"retries": 5
},
"restart": "unless-stopped",
"networks": ["app-network"]
},
"redis": {
"image": "redis:7-alpine",
"volumes": ["redisdata:/data"],
"healthcheck": {
"test": ["CMD", "redis-cli", "ping"],
"interval": "10s",
"timeout": "5s",
"retries": 5
},
"restart": "unless-stopped",
"networks": ["app-network"]
}
},
"volumes": { "pgdata": {}, "redisdata": {} },
"networks": { "app-network": { "driver": "bridge" } }
}
```

## Constraints
- Maximum 10 services per composition
- Never hardcode production secrets — use placeholder values
- Always include health checks for databases and caches
- Use Alpine-based images where available
## Keywords
docker, compose, docker-compose, containers, services, volumes, networks, health-check, multi-container, orchestration
**2. Set up authentication:**
```bash
export DOJOPS_HUB_TOKEN="dojops_a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2"
```

**3. Publish:**
```bash
dojops tools publish docker-compose-generator.dops --changelog "Initial release: Docker Compose generator with health checks, volumes, and networking"
```

Expected output:
```
◇ Validated: docker-compose-generator v1.0.0
◇ SHA256: 7f3a2b1c9e8d...
┌ Published new tool
│ Name: docker-compose-generator
│ Version: v1.0.0
│ Slug: docker-compose-generator
│ SHA256: 7f3a2b1c9e8d...
│ URL: https://hub.dojops.ai/packages/docker-compose-generator
└
```

**4. Install and use** (anyone can do this):
```bash
# Install the tool
dojops tools install docker-compose-generator

# Use it
dojops "Generate a docker-compose for node-api, postgres, redis"
```

**5. Publish a new version** — bump `meta.version` in the frontmatter and publish again:
```bash
# After updating version to 1.1.0 in the .dops file:
dojops tools publish docker-compose-generator.dops --changelog "v1.1.0: Added MongoDB and RabbitMQ examples"
```

Tool Policy
Control which custom tools are allowed via .dojops/policy.yaml:
```yaml
# Only allow specific tools
allowedTools:
  - my-tool
  - another-tool

# Block specific tools (takes precedence over allowedTools)
blockedTools:
  - untrusted-tool
```

Tool Isolation
Custom tools are sandboxed with the same guardrails as built-in tools, plus additional controls:
- Verification command whitelist — Only 16 known DevOps binaries are allowed (terraform, kubectl, helm, ansible-lint, docker, hadolint, yamllint, jsonlint, shellcheck, tflint, kubeval, conftest, checkov, trivy, kube-score, polaris). Non-whitelisted commands are rejected at runtime
- Permission enforcement — The `permissions.child_process` field must be `"required"` for verification commands to execute. Omitted or `"none"` means the command is silently skipped (default-safe)
- Path traversal prevention — File paths in `files[].path` and `detector.path` cannot contain `..` segments, preventing writes outside the project directory
- Execution guardrails — Custom tools execute through the same `SafeExecutor` pipeline as built-in tools, inheriting `maxFileSize` (1MB default), `timeoutMs` (30s default), DevOps write allowlist enforcement, and per-file audit logging
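The whitelist check can be sketched as matching the command's first token against the allowed binaries (hypothetical `isVerificationCommandAllowed` helper; the real implementation may also validate arguments):

```typescript
// Sketch of the verification-command whitelist: only the binary
// (first whitespace-separated token) is checked against the 16
// known DevOps tools listed above.
const ALLOWED_BINARIES = new Set([
  "terraform", "kubectl", "helm", "ansible-lint", "docker", "hadolint",
  "yamllint", "jsonlint", "shellcheck", "tflint", "kubeval", "conftest",
  "checkov", "trivy", "kube-score", "polaris",
]);

export function isVerificationCommandAllowed(command: string): boolean {
  const binary = command.trim().split(/\s+/)[0];
  return ALLOWED_BINARIES.has(binary);
}
```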
Tool Specification
The v1 custom tool contract is documented and frozen in Tool Specification v1. This spec covers manifest schema, discovery paths, input/output schemas, verification command whitelist, security constraints, and the compatibility promise.
Custom Tool Audit Trail
Custom tool executions include additional audit metadata:
- `toolType: "custom"` — distinguishes from built-in tools
- `toolSource: "global" | "project"` — where the custom tool was discovered
- `toolVersion` — version from the manifest
- `toolHash` — SHA-256 hash of tool directory for integrity verification
- `systemPromptHash` — SHA-256 hash of the custom tool’s system prompt for reproducibility tracking
Supported Serializers
| Serializer | Description |
|---|---|
| `yaml` | YAML via `js-yaml` |
| `json` | JSON via `JSON.stringify` (indented) |
| `raw` | Passthrough string |
| `hcl` | Falls back to `raw` (v1) |
| `ini` | Falls back to `raw` (v1) |
| `toml` | Falls back to `raw` (v1) |
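The dispatch can be sketched as a lookup table with a `raw` fallback (hedged sketch with hypothetical names; the `yaml` entry is omitted here to keep the example dependency-free, since the real implementation delegates to js-yaml):

```typescript
// Hypothetical serializer dispatch: known formats map to functions,
// everything else (hcl/ini/toml in v1) falls back to raw passthrough.
type Serializer = (value: unknown) => string;

const serializers: Record<string, Serializer> = {
  json: v => JSON.stringify(v, null, 2), // indented JSON
  raw: v => String(v),                   // passthrough string
};

export function serialize(format: string, value: unknown): string {
  const fn = serializers[format] ?? serializers.raw;
  return fn(value);
}
```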