
Configuration

DojOps supports six LLM providers with flexible configuration via CLI flags, environment variables, config files, and named profiles.


Supported Providers

| Provider       | DOJOPS_PROVIDER | Required Env Var    | Default Model              | SDK                 |
|----------------|-----------------|---------------------|----------------------------|---------------------|
| OpenAI         | openai          | OPENAI_API_KEY      | gpt-4o-mini                | openai              |
| Anthropic      | anthropic       | ANTHROPIC_API_KEY   | claude-sonnet-4-5-20250929 | @anthropic-ai/sdk   |
| Ollama         | ollama          | (none — local)      | llama3                     | ollama              |
| DeepSeek       | deepseek        | DEEPSEEK_API_KEY    | deepseek-chat              | openai (compatible) |
| Gemini         | gemini          | GEMINI_API_KEY      | gemini-2.5-flash           | @google/genai       |
| GitHub Copilot | github-copilot  | (OAuth Device Flow) | gpt-4o                     | openai (compatible) |

Provider Management

DojOps includes a dedicated provider command for managing LLM providers — adding, removing, switching, and listing them.

Adding Providers

# Add your first provider (auto-set as default)
dojops provider add openai --token sk-...

# Add a second provider (preserves existing default)
dojops provider add anthropic --token sk-ant-...

# Ollama (local, no token needed)
dojops provider add ollama

Switching Providers

# Interactive picker — shows only configured providers
dojops provider switch

# Direct — set by name
dojops provider default anthropic

# Shortcut flag
dojops provider --as-default openai

Listing Providers

# Table view (default)
dojops provider

# JSON output
dojops provider list --output json

Output shows configured (*) vs unconfigured (o) providers, with the default marked:

┌ Providers ──────────────────────────────────────┐
│ * anthropic           sk-***ntx                 │
│ * openai (default)    sk-***ojx  model: gpt-4o  │
│ o deepseek            (not set)                 │
│ o gemini              (not set)                 │
│ * ollama              (local)                   │
└─────────────────────────────────────────────────┘

Removing Providers

dojops provider remove openai

If the removed provider was the default, DojOps clears the default and suggests an alternative.


Configuration Methods

Interactive Setup

dojops config

The interactive wizard:

  1. Selects a provider from the list
  2. Prompts for the API key (skipped for Ollama and GitHub Copilot)
  3. For GitHub Copilot: runs the OAuth Device Flow automatically if not already authenticated
  4. Fetches available models from the provider’s API via listModels()
  5. Shows an interactive model picker
  6. Asks whether to switch the default provider (if one already exists)
  7. Offers to configure another provider

Environment Variables

export DOJOPS_PROVIDER=openai
export OPENAI_API_KEY=sk-...
export DOJOPS_MODEL=gpt-4o        # Optional model override
export DOJOPS_TEMPERATURE=0.7     # Optional temperature override
export DOJOPS_API_PORT=3000       # API server port (default: 3000)

CLI Flags

dojops --provider=anthropic "Create a Terraform config"
dojops --model=gpt-4o "Create a Kubernetes deployment"
dojops --temperature=0.2 "Create a Terraform config"

Config File

DojOps saves configuration to ~/.dojops/config.json:

{
  "provider": "openai",
  "model": "gpt-4o-mini",
  "defaultTemperature": 0.7,
  "token": "sk-..."
}

Configuration Precedence

Values are resolved in order (first match wins):

  • Provider: --provider flag > $DOJOPS_PROVIDER > config file > "openai" (default)
  • Model: --model flag > $DOJOPS_MODEL > config file > provider default
  • Temperature: --temperature flag > $DOJOPS_TEMPERATURE > config file > undefined (provider default)
  • Token: $OPENAI_API_KEY (etc.) > config file token

Note: `apply --replay` forces temperature=0 regardless of other settings.
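The "first match wins" rule maps naturally onto shell default expansion. The sketch below is illustrative only, not DojOps source code; `FLAG_MODEL` stands in for the `--model` flag and `CFG_MODEL` for the config-file value, and both names are hypothetical:

```shell
# Illustrative sketch — FLAG_MODEL and CFG_MODEL are made-up stand-ins for
# the --model flag and the value stored in ~/.dojops/config.json.
resolve_model() {
  # ${x:-y} falls through to y when x is unset or empty: first match wins.
  printf '%s\n' "${FLAG_MODEL:-${DOJOPS_MODEL:-${CFG_MODEL:-gpt-4o-mini}}}"
}

DOJOPS_MODEL=gpt-4o resolve_model                     # env var beats config/default: gpt-4o
FLAG_MODEL=o1-mini DOJOPS_MODEL=gpt-4o resolve_model  # flag beats env var: o1-mini
```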

Model Selection

Each provider ships with a sensible default, but you can choose any model your provider supports:

# Interactive: fetches models from provider API, shows picker
dojops config

# Set directly in config
dojops config --model=gpt-4o

# One-off override (doesn't change saved config)
dojops --model=deepseek-reasoner "Analyze this Terraform plan"

Dynamic Model Discovery

When running dojops config, DojOps calls the provider’s listModels() API to fetch available models:

  • OpenAI — Lists models from the OpenAI API
  • Anthropic — Lists supported Claude models
  • Ollama — Lists locally installed models
  • DeepSeek — Lists available DeepSeek models
  • Gemini — Lists available Gemini models
  • GitHub Copilot — Lists models available to your Copilot subscription tier

Profiles

Named profiles let you switch between different provider/environment configurations:

Create a Profile

# Save current config as a named profile
dojops config profile create staging

Use a Profile

# Switch to a profile (updates active config)
dojops config profile use staging

# One-off profile override (doesn't change active config)
dojops --profile=staging "Create an S3 bucket"

List Profiles

dojops config profile list

Example: Multi-Environment Setup

# Set up development profile (local Ollama)
export DOJOPS_PROVIDER=ollama
dojops config
dojops config profile create dev

# Set up production profile (OpenAI)
export DOJOPS_PROVIDER=openai
export OPENAI_API_KEY=sk-prod-...
dojops config --model=gpt-4o
dojops config profile create prod

# Switch between them
dojops config profile use dev    # Uses local Ollama
dojops config profile use prod   # Uses OpenAI GPT-4o

Environment Variables Reference

| Variable                       | Description                                | Default                |
|--------------------------------|--------------------------------------------|------------------------|
| DOJOPS_PROVIDER                | LLM provider name                          | openai                 |
| DOJOPS_MODEL                   | Model override                             | Provider default       |
| DOJOPS_TEMPERATURE             | Temperature override                       | Provider default       |
| OPENAI_API_KEY                 | OpenAI API key                             |                        |
| ANTHROPIC_API_KEY              | Anthropic API key                          |                        |
| DEEPSEEK_API_KEY               | DeepSeek API key                           |                        |
| GEMINI_API_KEY                 | Google Gemini API key                      |                        |
| DOJOPS_API_PORT                | API server port                            | 3000                   |
| OLLAMA_HOST                    | Ollama server URL                          | http://localhost:11434 |
| OLLAMA_TLS_REJECT_UNAUTHORIZED | TLS cert verification for Ollama           | true                   |
| GITHUB_COPILOT_TOKEN           | GitHub OAuth token (skip device flow)      |                        |
| DOJOPS_HUB_URL                 | DojOps Hub API base URL                    | https://hub.dojops.ai  |
| DOJOPS_HUB_TOKEN               | API token for hub publishing (generate one)|                        |

Ollama Setup

Ollama runs locally and doesn’t require an API key:

# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model
ollama pull llama3

# Configure DojOps
export DOJOPS_PROVIDER=ollama
dojops "Create a Dockerfile for Node.js"

By default, DojOps connects to Ollama at http://localhost:11434. To use a remote or custom Ollama server:

# Interactive setup (prompts for URL + TLS settings)
dojops config

# Or set via environment variable
export OLLAMA_HOST=https://ollama.corp.internal:8443

# For self-signed certificates behind proxies
export OLLAMA_TLS_REJECT_UNAUTHORIZED=false

The Ollama host URL is resolved with this priority: OLLAMA_HOST env > ~/.dojops/config.json > http://localhost:11434.
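This host resolution follows the same fall-through pattern as the general configuration precedence. A minimal sketch, where `CFG_OLLAMA_HOST` is a hypothetical stand-in for the value stored in ~/.dojops/config.json:

```shell
# Sketch only — CFG_OLLAMA_HOST is a made-up name for the config-file entry,
# not a variable DojOps actually reads.
resolve_ollama_host() {
  printf '%s\n' "${OLLAMA_HOST:-${CFG_OLLAMA_HOST:-http://localhost:11434}}"
}

resolve_ollama_host   # prints the first value that is set, else the default
```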

GitHub Copilot Setup

GitHub Copilot uses an OAuth Device Flow instead of a static API key. You need an active GitHub Copilot subscription (Pro, Pro+, Business, or Enterprise).

# Authenticate via Device Flow (opens browser)
dojops auth login --provider github-copilot

# Or use the provider command
dojops provider add github-copilot

# Or configure interactively
dojops config
# Select "github-copilot" → Device Flow runs automatically → model picker shown

The Device Flow works as follows:

  1. DojOps requests a one-time device code from GitHub
  2. You open https://github.com/login/device in your browser and enter the code
  3. DojOps polls GitHub for authorization
  4. Once approved, DojOps exchanges the OAuth token for a short-lived Copilot JWT
  5. The JWT is cached and auto-refreshed before each API call (~30 min expiry)
  6. DojOps fetches available models and shows an interactive picker to select the default model

Tokens are stored in ~/.dojops/copilot-token.json (mode 0600). The Copilot API is OpenAI-compatible, so you can use models like gpt-4o, gpt-4o-mini, claude-3.5-sonnet, o1-mini, etc. depending on your subscription tier. After authentication, all three commands (auth login, provider add, and config) will present a model picker so you can choose your preferred model immediately.

For CI/CD, you can set GITHUB_COPILOT_TOKEN to a GitHub OAuth token (ghu_xxx) to skip the interactive Device Flow.
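As one possible shape for this, a hypothetical GitHub Actions step is sketched below; the secret name COPILOT_OAUTH_TOKEN is an assumption for illustration, not something DojOps defines:

```yaml
# Hypothetical CI step — the secret name COPILOT_OAUTH_TOKEN is illustrative.
- name: Generate a Terraform config with DojOps
  env:
    DOJOPS_PROVIDER: github-copilot
    GITHUB_COPILOT_TOKEN: ${{ secrets.COPILOT_OAUTH_TOKEN }}
  run: dojops "Create a Terraform config"
```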


Viewing Configuration

# Show current config
dojops config show

# Show system health and config
dojops doctor

# Inspect detailed config state
dojops inspect config

.env File

For development, create a .env file in the project root:

# .env
DOJOPS_PROVIDER=openai
OPENAI_API_KEY=sk-...
DOJOPS_MODEL=gpt-4o-mini
DOJOPS_API_PORT=3000

See .env.example in the repository for a template.