Inference Gateway CLI

The Inference Gateway CLI (infer) is a powerful Go-based command-line tool that provides comprehensive access to the Inference Gateway, including interactive chat, autonomous agents, Computer Use tools, and development workflows.

Current Version: v0.97.0 (Breaking changes expected until stable)

Key Features

  • 🚀 Zero-Configuration Setup - Add API keys and start chatting
  • 🤖 Autonomous Agent Mode - Delegate complex tasks with iterative execution
  • 🖥️ Computer Use Tools - GUI automation with screenshot, mouse, and keyboard control
  • 🛠️ Rich Tool Integration - File operations, code search, web access, GitHub integration
  • 🔒 Smart Safety System - Configurable approval workflow with diff visualization
  • 🎨 Beautiful TUI - Scrollable interface with syntax highlighting and multiple themes
  • 🌐 Web Terminal - Browser-based interface with tabbed sessions
  • 💰 Cost Tracking - Real-time token usage and cost calculation

Installation

Terminal
# Latest version
curl -fsSL https://raw.githubusercontent.com/inference-gateway/cli/main/install.sh | bash

# Specific version
curl -fsSL https://raw.githubusercontent.com/inference-gateway/cli/main/install.sh | bash -s -- --version v0.97.0

# Custom directory
curl -fsSL https://raw.githubusercontent.com/inference-gateway/cli/main/install.sh | bash -s -- --install-dir $HOME/.local/bin

Go Install

Terminal
go install github.com/inference-gateway/cli@latest

Manual Download

Download binaries from the GitHub releases page. Binaries are signed with Cosign for verification.

Build from Source

Terminal
git clone https://github.com/inference-gateway/cli.git
cd cli
go build -o infer

Quick Start

[Screenshot: Inference Gateway TUI interface]

Terminal
# Initialize configuration
infer init

# Generate AGENTS.md documentation for AI agents (recommended for new projects)
infer chat
> /init

# Check gateway status
infer status

# Start interactive chat
infer chat

# Launch web terminal
infer chat --web

# Autonomous agent mode
infer agent "Analyze this codebase and suggest improvements"

# Get help
infer --help

Generating AGENTS.md

For new projects, use the /init shortcut to automatically generate an AGENTS.md file. This file provides structured documentation that helps AI agents understand your project:

Terminal
infer chat
> /init

The agent will:

  1. Analyze your project structure with the Tree tool
  2. Examine configuration files, build systems, and documentation
  3. Generate comprehensive AGENTS.md including:
    • Project overview and technologies
    • Architecture and structure
    • Development environment setup
    • Key commands (build, test, lint, run)
    • Testing instructions
    • Project conventions and coding standards
    • Important files and configurations

This documentation helps other AI agents (and developers) quickly understand how to work with your project.

Core Commands

Command | Description | Key Features
infer init | Initialize project configuration | Creates .infer/config.yaml with defaults
infer status | Check gateway health | Shows resource usage and connectivity
infer chat | Interactive chat TUI | Streaming, scrolling, tool expansion, mode switching
infer chat --web | Web-based terminal | Browser interface, tabbed sessions, remote access
infer agent <task> | Autonomous task execution | Background operation, task planning, validation
infer config <cmd> | Configuration management | Model, tools, safety, sandbox settings

Chat Interface Features

Navigation:

  • Shift + Arrow Down/Up: Scroll chat history
  • Ctrl+R: Toggle tool result expansion
  • Shift+Tab: Cycle agent modes (Standard → Plan → Auto-Accept)
  • Ctrl+K: Toggle model thinking blocks

Capabilities:

  • Real-time streaming with syntax highlighting
  • Mouse wheel and keyboard scrolling
  • Model switching during conversation
  • Tool result inspection
  • Cost tracking in status bar
  • Collapsible thinking blocks

Agent Modes

Toggle between modes anytime during chat using Shift+Tab.

Mode | Tools | Approval | Best For
Standard (Default) | All configured | Required for Write/Edit/Delete/Bash | General development, collaborative coding
Plan (Read-Only) | Read, Grep, Tree only | None | Code reviews, architecture analysis, planning
Auto-Accept (YOLO) | All configured | None - immediate execution | Trusted environments, rapid prototyping, automation

Standard Mode

Full tool access with safety controls and approval prompts for sensitive operations.

Terminal
infer chat
> "Refactor the authentication module to use environment variables"
# Agent analyzes code, proposes changes, requests approval before modifying

Plan Mode

Analysis and planning without execution. Safe exploration of unfamiliar codebases.

Terminal
infer chat
# Press Shift+Tab to switch to Plan Mode
> "How should I implement user authentication with JWT tokens?"
# Agent explores code structure and provides detailed plan

Auto-Accept Mode

Zero approval prompts for maximum speed. Use with caution in version-controlled environments.

Terminal
infer chat
# Press Shift+Tab twice to switch to Auto-Accept Mode
> "Run the test suite, fix all failing tests, and commit the changes"
# Agent executes everything immediately

⚠️ Important for Auto-Accept: Ensure clean git working tree and backups.

Computer Use

GUI automation and visual understanding capabilities for interacting with applications and desktop environments.

Display Server Support

Automatic display server detection - no configuration needed:

Platform | Supported Servers | Notes
macOS | Quartz (native), X11 (XQuartz) | Quartz automatically detected and used
Linux | X11, Wayland | Auto-detection handles both protocols

Display server type is automatically detected at runtime. No manual configuration required.
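
To confirm which display server a Linux session is using, the standard environment variables can be checked (generic shell commands, not part of infer):

Terminal
# X11 sessions set DISPLAY; Wayland sessions set WAYLAND_DISPLAY
echo $DISPLAY
echo $WAYLAND_DISPLAY
echo $XDG_SESSION_TYPE  # typically "x11" or "wayland"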

Computer Use Tools

Tool | Description | Key Capabilities
GetLatestScreenshot | Capture screen regions | Streaming mode, region selection, circular buffer, JPEG format (configurable quality)
MouseMove | Control cursor position | Absolute coordinates, relative movement
MouseClick | Perform click actions | Left/right/middle clicks, double-click support
MouseScroll | Scroll content | Vertical and horizontal scrolling
KeyboardType | Type text and keys | Plain text, key combinations (Ctrl+C, Cmd+V), configurable typing delay
GetFocusedApp | Identify active app | Returns focused application name
ActivateApp | Switch applications | Focus and activate specific apps

Screenshot Tool Features

Streaming Mode:

  • Maintains circular buffer of recent screenshots
  • Configurable buffer size (default: 5)
  • Configurable capture interval (default: 3 seconds)
  • Efficient memory management
  • Fast access to recent captures

Image Optimization:

  • Automatic resolution scaling (max: 1920x1080, target: 1024x768)
  • JPEG compression with configurable quality (default: 85%)
  • Reduces bandwidth and storage requirements
  • Optional capture overlay for debugging

Region Selection:

  • Full screen capture
  • Custom region coordinates (x, y, width, height)
  • Multiple monitor support
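
A specific region can be requested in plain language during chat (illustrative prompt; exact wording is flexible):

Terminal
infer chat
> "Take a screenshot of the region at x=0, y=0, width=800, height=600 and describe the dialog"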

Floating Window

Real-time visualization of agent activity:

YAML
computer_use:
  floating_window:
    enabled: true
    respawn_on_close: true # Auto-restart if closed
    position: top-right # top-left, top-right, bottom-left, bottom-right
    always_on_top: true # Keep window above other apps

Features:

  • Always-on-top overlay
  • Shows agent actions in real-time
  • Configurable position
  • Auto-respawn option if accidentally closed
  • Non-intrusive design
  • Available on all platforms with GUI support

Configuration

YAML
computer_use:
  enabled: true
  floating_window:
    enabled: true
    respawn_on_close: true
    position: top-right
    always_on_top: true
  screenshot:
    enabled: true
    max_width: 1920 # Maximum capture width
    max_height: 1080 # Maximum capture height
    target_width: 1024 # Target resize width
    target_height: 768 # Target resize height
    format: jpeg # jpeg or png
    quality: 85 # JPEG quality (1-100)
    streaming_enabled: true
    capture_interval: 3 # Seconds between captures
    buffer_size: 5 # Number of screenshots to buffer
    temp_dir: '' # Temporary storage directory
    log_captures: false # Log each capture
    show_overlay: true # Show capture overlay
  rate_limit:
    enabled: true
    max_actions_per_minute: 60
    window_seconds: 60
  tools:
    mouse_move:
      enabled: true
    mouse_click:
      enabled: true
    mouse_scroll:
      enabled: true
    keyboard_type:
      enabled: true
      max_text_length: 1000
      typing_delay_ms: 100
    get_focused_app:
      enabled: true
    activate_app:
      enabled: true

Safety and Rate Limiting

Rate Limiting:

  • Default: 60 actions per minute
  • Prevents runaway automation
  • Configurable threshold

Safety Controls:

  • Approval prompts in Standard Mode
  • Auto-approve in YOLO mode
  • Activity logging for audit trails
  • Command execution monitoring

Best Practices:

  • Use Standard Mode for initial exploration
  • Enable logging for debugging
  • Set appropriate rate limits
  • Monitor activity logs
  • Test in safe environments first

Example Use Cases

Terminal
infer chat
> "Take a screenshot and analyze the error dialog"
> "Click the Submit button in the center of the screen"
> "Type 'Hello World' and press Enter"
> "Switch to the Terminal app and run ls command"
> "Find the Save button and click it"

Tools & Capabilities

When tools are enabled, LLMs have access to a comprehensive suite of tools spanning multiple categories.

Tool Categories

Category | Tools | Description
File System | Read, Write, Edit, MultiEdit, Delete, Tree | File operations with safety controls
Code Search | Grep (ripgrep-powered) | Fast code search with regex support
Web | WebSearch, WebFetch | Internet research with caching (15-min TTL)
Development | Bash (whitelisted), GitHub API | Command execution, repository integration
Task Management | TodoWrite | Track complex workflows and planning
A2A Integration | A2A_QueryAgent, A2A_SubmitTask, A2A_QueryTask | Delegate to specialized agents
Computer Use | GetLatestScreenshot, MouseMove, MouseClick, MouseScroll, KeyboardType, GetFocusedApp, ActivateApp | GUI automation and visual understanding

File System Tools

  • Read: Read file contents with line ranges
  • Write: Create or overwrite files (requires approval)
  • Edit: Modify files with string replacement (requires approval)
  • MultiEdit: Batch edit multiple files
  • Delete: Remove files (requires approval)
  • Tree: Directory structure visualization
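
In chat, these tools are invoked by describing the task rather than naming the tool; the prompts below are illustrative:

Terminal
infer chat
> "Read the first 50 lines of main.go"
> "Show the directory structure under ./internal"
> "Rename the variable cfg to config in config.go"
# Write/Edit/Delete requests trigger an approval prompt with a diff preview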

Security Features

  • Command Whitelisting: Only approved patterns allowed for Bash tool
  • Approval Prompts: Safety confirmations for Write/Edit/Delete/Bash
  • Path Protection: Sensitive directories automatically excluded (.git/, *.env, .infer/)
  • Sandbox Controls: Restrict tool operations to allowed directories
  • Domain Whitelisting: Control web fetch access
  • Diff Preview: Visual diff before file modifications

Tool Configuration

Terminal
# Enable/disable tools
infer config tools enable
infer config tools disable

# Safety settings
infer config tools safety enable
infer config tools safety disable
infer config tools safety status

# Sandbox management
infer config tools sandbox add /protected/path
infer config tools sandbox remove /protected/path
infer config tools sandbox list

Configuration

Configuration is resolved from layered sources (project and user config files, environment variables, command-line flags, and built-in defaults), with precedence from highest to lowest:

Configuration Precedence

Priority | Source | Example
1 (Highest) | Environment Variables | INFER_GATEWAY_URL, INFER_AGENT_MODEL
2 | Command Line Flags | --model, --debug
3 | Project Config | .infer/config.yaml
4 | User Config | ~/.infer/config.yaml
5 (Lowest) | Built-in Defaults | Internal defaults
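
For example, the documented INFER_AGENT_MODEL variable overrides whatever model the project or user config sets, for a single run:

Terminal
# .infer/config.yaml may define a default model, but the environment variable takes precedence
INFER_AGENT_MODEL="openai/gpt-4o" infer chat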

Key Configuration Areas

Gateway Settings:

  • Gateway URL and API key
  • Timeout and retry configuration
  • OCI image for auto-running gateway
  • Model filtering (include/exclude lists)

Agent Configuration:

  • Default model for operations
  • System prompts (main and plan mode)
  • System reminders interval
  • Max turns and tokens
  • Parallel tool execution (default: 5 concurrent)

Tool Settings:

  • Enable/disable individual tools
  • Approval requirements per tool
  • Command whitelists and patterns
  • Sandbox directories
  • Protected paths

Storage Backends:

  • SQLite (default) - local file storage
  • PostgreSQL - shared database for teams
  • Redis - high-performance caching
  • In-memory - temporary sessions

Conversation Features:

  • Automatic history with search
  • AI-generated titles
  • Token optimization and compaction
  • Export/import capabilities
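
A minimal .infer/config.yaml sketch covering these areas is shown below. Only agent.max_concurrent_tools, tools.bash.whitelist, and the computer_use block appear elsewhere on this page; the remaining key names are assumptions, so verify the exact structure with infer config show:

YAML
# Illustrative sketch - confirm key names with `infer config show`
gateway:
  url: http://localhost:8080 # assumed key (mirrors INFER_GATEWAY_URL)
  api_key: your-api-key # assumed key (mirrors INFER_GATEWAY_API_KEY)
agent:
  model: openai/gpt-4o # assumed key (set via `infer config agent set-model`)
  max_concurrent_tools: 5
tools:
  bash:
    whitelist:
      commands: [ls, pwd, tree, git]
computer_use:
  enabled: true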

Essential Environment Variables

Terminal
export INFER_GATEWAY_URL="http://localhost:8080"
export INFER_GATEWAY_API_KEY="your-api-key"
export INFER_AGENT_MODEL="openai/gpt-4"
export INFER_LOGGING_DEBUG="true"
export GITHUB_TOKEN="your-github-token"

Configuration Commands

Terminal
# Initialize configuration
infer config init

# Agent settings
infer config agent set-model openai/gpt-4
infer config agent set-system "You are a helpful coding assistant"

# View current configuration
infer config show

# Reset to defaults
infer config reset

See the full configuration reference for detailed options.

Shortcuts

The CLI provides built-in shortcuts and supports custom user-defined shortcuts.

Built-in Shortcuts

Shortcut | Description | Example
/init | Generate AGENTS.md documentation | /init
/init-github-action | Setup GitHub Action integration | /init-github-action
/git <cmd> | Git operations | /git status, /git commit, /git push
/scm <cmd> | GitHub operations | /scm pr-create, /scm issue view 123
/a2a | View connected A2A agents | /a2a

Git Shortcuts

Terminal
# Execute git commands
/git status
/git branch

# AI-generated commit message
/git commit

# Push to remote
/git push origin main

SCM (GitHub) Shortcuts

Terminal
# List GitHub issues
/scm issues

# View issue details
/scm issue 123

# Create pull request with AI-powered plan
/scm pr-create

GitHub Action Setup

The /init-github-action shortcut launches an interactive wizard that sets up AI-powered issue automation using GitHub Apps and the infer-action GitHub Action. It walks you through creating a GitHub App, managing its credentials, configuring repository secrets, and generating workflows that respond to @infer mentions in issues.

Key Features:

  • Interactive wizard for creating or configuring GitHub Apps
  • Supports both personal and organization repositories
  • Automatic workflow file generation in .github/workflows/
  • Private key management with interactive file picker
  • GitHub App reusability across multiple repositories
  • Auto-opens browser with pre-filled app creation forms
  • Multi-step guided setup process

Prerequisites:

  • GitHub account with repository access
  • Admin permissions for creating GitHub Apps (required for organization repositories)
  • Downloaded private key file (.pem) from GitHub (after app creation)

Usage:

Terminal
infer chat
> /init-github-action

Wizard Flow:

  1. Check Existing Configuration: Detects if a GitHub App is already configured
  2. App ID Input: Enter existing App ID or create a new GitHub App
  3. Private Key Selection: Interactive file picker to select your .pem private key file
  4. Repository Configuration: Configure repository secrets and permissions
  5. Workflow Creation: Automatically generates GitHub Action workflow files

Creating a New GitHub App:

When creating a new app, the wizard opens GitHub with pre-configured settings:

  • App Name: infer-bot (customizable)
  • Required Permissions:
    • Contents: Write access
    • Pull Requests: Write access
    • Issues: Write access
    • Metadata: Read access
  • Webhooks: Disabled by default (can be enabled later if needed)

Steps for First-Time Setup:

  1. Run /init-github-action in chat mode
  2. Choose to create a new GitHub App
  3. Browser opens with pre-filled GitHub App creation form
  4. Complete the app creation on GitHub
  5. Download the private key (.pem file) from GitHub
  6. Return to CLI and enter the App ID shown on GitHub
  7. Use the file picker to select your downloaded .pem file
  8. Wizard creates workflow files in .github/workflows/

Reusing GitHub Apps:

The same GitHub App can be reused across multiple repositories:

Terminal
cd another-project
infer chat
> /init-github-action
# Enter the same App ID and use the same private key file

Generated Workflow Files:

The wizard creates a GitHub Action workflow at .github/workflows/infer.yml that will:

  • Trigger on issue events (opened, edited) and issue comments
  • Generate GitHub App tokens for authentication
  • Execute AI-powered agents via the @infer mention trigger
  • Support multiple LLM providers (OpenAI, Anthropic, DeepSeek, etc.)
  • Provide full repository access (issues, contents, pull requests)

Example Generated Workflow:

YAML
name: Infer

on:
  issues:
    types: [opened, edited]
  issue_comment:
    types: [created]

permissions:
  issues: write
  contents: write
  pull-requests: write

jobs:
  infer:
    runs-on: ubuntu-24.04
    steps:
      - name: Generate GitHub App Token
        id: generate-token
        uses: actions/create-github-app-token@v2
        with:
          app-id: ${{ secrets.INFER_APP_ID }}
          private-key: ${{ secrets.INFER_APP_PRIVATE_KEY }}
          owner: ${{ github.repository_owner }}

      - name: Checkout Repository
        uses: actions/checkout@v5
        with:
          token: ${{ steps.generate-token.outputs.token }}

      - name: Run Infer Agent
        uses: inference-gateway/infer-action@main # pin to a released version
        with:
          github-token: ${{ steps.generate-token.outputs.token }}
          trigger-phrase: '@infer'
          model: 'deepseek/deepseek-chat'
          max-turns: 50
          anthropic-api-key: ${{ secrets.ANTHROPIC_API_KEY }}
          openai-api-key: ${{ secrets.OPENAI_API_KEY }}
          google-api-key: ${{ secrets.GOOGLE_API_KEY }}
          deepseek-api-key: ${{ secrets.DEEPSEEK_API_KEY }}

Repository Secrets Configuration:

After running the wizard, configure these secrets in your GitHub repository settings:

  • INFER_APP_ID - Your GitHub App ID
  • INFER_APP_PRIVATE_KEY - Your GitHub App private key (.pem file contents)
  • Provider API keys (ANTHROPIC_API_KEY, OPENAI_API_KEY, etc.)
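
If the GitHub CLI (gh) is installed, these secrets can also be set from the terminal instead of the web UI (the values and key path below are placeholders):

Terminal
gh secret set INFER_APP_ID --body "123456"
gh secret set INFER_APP_PRIVATE_KEY < path/to/private-key.pem
gh secret set OPENAI_API_KEY --body "sk-..."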

Usage in Issues:

Once configured, mention @infer in any issue or issue comment to activate the agent:

@infer Please analyze this bug and suggest a fix

For more information on the infer-action GitHub Action, see the GitHub Action documentation.

Custom Shortcuts

Create YAML files in .infer/shortcuts/ directory. Shortcuts support three types:

1. Simple Commands

Execute a single command:

YAML
# .infer/shortcuts/simple.yaml
shortcuts:
  - name: hello
    description: 'Say hello'
    command: echo
    args:
      - 'Hello from Inference Gateway!'
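
Usage: /hello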

2. Shortcuts with Subcommands

Group related commands under a parent shortcut:

YAML
# .infer/shortcuts/dev.yaml
shortcuts:
  - name: dev
    description: 'Development operations'
    command: bash
    subcommands:
      - name: test
        description: 'Run all tests'
        args:
          - -c
          - 'go test ./...'

      - name: build
        description: 'Build the project'
        args:
          - -c
          - 'go build -o app .'

Usage: /dev test, /dev build

3. AI-Powered Snippets

Use LLM to generate dynamic content based on command output. The snippet.prompt can reference JSON fields from command output using {fieldName} placeholders, and snippet.template uses {llm} for the AI-generated response:

YAML
# .infer/shortcuts/ai-commit.yaml
shortcuts:
  - name: ai-commit
    description: 'AI-generated commit message'
    command: bash
    args:
      - -c
      - |
        diff=$(git diff --cached)
        jq -n --arg diff "$diff" '{"diff": $diff}'
    snippet:
      prompt: "Generate commit message for:\n{diff}"
      template: '!git commit -m "{llm}"'

The command must output JSON. Fields are accessible in the prompt template via {fieldName} syntax. The LLM response is accessible via {llm} in the template.
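
The same pattern extends to other workflows. Below is a hypothetical shortcut that summarizes recent commits; the name and prompt are illustrative, reusing the schema (and the ! command-template convention) from the example above:

YAML
# .infer/shortcuts/summarize-log.yaml
shortcuts:
  - name: summarize-log
    description: 'Summarize the last 10 commits'
    command: bash
    args:
      - -c
      - |
        log=$(git log --oneline -10)
        jq -n --arg log "$log" '{"log": $log}'
    snippet:
      prompt: "Summarize these commits in two sentences:\n{log}"
      template: '!echo "{llm}"'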

Advanced Features

Cost Tracking

Real-time token usage and cost calculation displayed in the status bar.

Features:

  • Per-model pricing calculation
  • Cumulative session costs
  • Input and output token tracking
  • Status bar indicator (💰 $0.0234)
  • Custom pricing support

View Costs:

Terminal
# Costs displayed in status bar during chat
infer chat
# Status bar shows: 💰 $0.0234 | Model: openai/gpt-4o

# Export conversation with cost details
infer conversation export <conversation-id>

Model Thinking Visualization

Collapsible thinking blocks for models that support thinking (Claude, o1, etc.).

Features:

  • Collapsible blocks with first sentence preview
  • Ctrl+K keyboard shortcut to toggle
  • Theme-aware styling
  • Performance optimization (long thinking blocks collapsed by default)

Usage:

Terminal
infer chat
# Ask complex question requiring reasoning
> "Design a scalable microservices architecture for e-commerce"
# Model's thinking process displayed in collapsible blocks
# Press Ctrl+K to expand/collapse thinking

Conversation Management

Storage Backends:

  • SQLite (default): .infer/conversations.db
  • PostgreSQL: Shared team database
  • Redis: High-performance caching
  • In-memory: Temporary sessions

Features:

  • Automatic conversation history
  • AI-generated titles (batch: 10 messages)
  • Search across conversations
  • Export to JSON/Markdown
  • Token optimization with compaction

Commands:

Terminal
# List conversations
infer conversation list

# Show conversation
infer conversation show <id>

# Export conversation
infer conversation export <id>

# Delete conversation
infer conversation delete <id>

MCP Integration

Connect to Model Context Protocol servers for extended capabilities. MCP provides stateless tool execution for external services like databases, file systems, and APIs.

Setup:

Initialize project to create .infer/mcp.yaml:

Terminal
infer init

Configure MCP servers in .infer/mcp.yaml:

YAML
enabled: true
connection_timeout: 30
discovery_timeout: 30
liveness_probe_enabled: true
liveness_probe_interval: 10

servers:
  # Auto-start MCP server in container (recommended)
  - name: 'demo-server'
    enabled: true
    run: true
    oci: 'mcp-demo-server:latest'
    description: 'Demo MCP server'

  # Connect to external MCP server
  - name: 'filesystem'
    url: 'http://localhost:3000/sse'
    enabled: true
    description: 'File system operations'
    exclude_tools:
      - 'delete_file'

CLI Commands:

Terminal
# Add auto-start MCP server
infer mcp add my-server --run --oci=my-mcp:latest

# List MCP servers
infer mcp list

# Toggle server
infer mcp toggle my-server

# Remove server
infer mcp remove my-server

Using MCP Tools:

MCP tools appear as MCP_<server>_<tool> in chat. Example:

Terminal
infer chat
> "Use the MCP_demo-server_get_time tool to get current time"

See MCP documentation for detailed integration guide and server development.

A2A Integration

Delegate specialized tasks to Agent-to-Agent compatible agents.

Setup:

Terminal
# Initialize agents configuration
infer agents init

# Add remote agent
infer agents add calendar-agent http://calendar.example.com

# Add local agent with Docker
infer agents add my-agent http://localhost:8081 --oci ghcr.io/myorg/agent:latest --run

# List agents
infer agents list

# View agent details
infer agents show calendar-agent

Usage:

Terminal
infer chat
> "Schedule a meeting tomorrow at 2 PM using the calendar agent"
> /a2a  # View connected agents

See A2A documentation for creating custom agents.

Parallel Tool Execution

Execute up to 5 tools concurrently for improved performance.

Configuration:

YAML
agent:
  max_concurrent_tools: 5 # Default: 5

Benefits:

  • Faster multi-file operations
  • Concurrent web fetches
  • Parallel code searches
  • Reduced total execution time
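
For example, a request that touches several files at once benefits because the independent Read calls can run concurrently (illustrative prompt):

Terminal
infer chat
> "Read go.mod, main.go, and README.md, then summarize how the project is structured"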

Workflows

Bug Investigation and Fix

Terminal
infer chat
# Shift+Tab to Plan Mode
> "Analyze bug in issue #123 and create fix plan"

# Shift+Tab to Standard Mode
> "Implement the fix according to the plan"

# Test and commit
> "Run test suite to verify"
> "/git commit"

Feature Development

Terminal
infer chat
> "Read CONTRIBUTING.md and understand project structure"

# Shift+Tab to Plan Mode
> "Design implementation for user profile feature with avatar upload"

# Shift+Tab twice to Auto-Accept Mode
> "Implement the user profile feature according to the plan"

# Shift+Tab to Standard Mode
> "Review changes and run all tests"

Code Review and Refactoring

Terminal
infer chat
# Plan Mode for analysis
> "Review authentication module for security issues and code quality"

# Standard Mode for implementation
> "Refactor based on recommendations, prioritize security issues"

GitHub Issue Resolution

Terminal
infer agent "Fix the bug described in GitHub issue #456"

# Agent autonomously:
# 1. Fetches issue details
# 2. Analyzes relevant code
# 3. Implements fix
# 4. Runs tests
# 5. Creates commit referencing issue

Best Practices

For Beginners

  • Start with Plan Mode for unfamiliar code
  • Always work in git repositories
  • Review diff visualizations before approving
  • Begin with simple tasks

For Power Users

  • Use Auto-Accept for trusted, repetitive tasks
  • Create custom shortcuts for frequent commands
  • Combine with scripts for automation
  • Leverage A2A for specialized workflows

Performance Tips

  • Be specific with file paths and function names
  • Use Grep to narrow down relevant files first
  • Break large tasks into smaller subtasks
  • Provide context with references

Safety

  • Review diffs before approving modifications
  • Run tests after significant changes
  • Have backups before extensive Auto-Accept usage
  • Whitelist only trusted commands
  • Add sensitive directories to protected paths

Security

Command Whitelisting

Bash tool only executes whitelisted commands and patterns:

YAML
tools:
  bash:
    whitelist:
      commands: [ls, pwd, tree, git]
      patterns:
        - ^git status$
        - ^git branch.*$
        - ^npm test$
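
With this whitelist, a request that maps to an approved command or pattern runs, while anything else is rejected before execution (illustrative prompts):

Terminal
infer chat
> "Run git status"
# Matches ^git status$ - executed
> "Run rm -rf build/"
# Not whitelisted - the Bash tool refuses to execute it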

Protected Paths

Automatically excluded from tool access:

  • .git/ - Repository data
  • *.env - Environment files
  • .infer/ - Configuration directory
  • Custom paths via sandbox config

Approval Workflow

Enable safety confirmations:

Terminal
infer config tools safety enable

The CLI then prompts for approval before executing Write/Edit/Delete/Bash operations, with a real-time diff preview for file modifications.

Troubleshooting

Connection Issues

Terminal
# Check configuration
infer config show

# Verify gateway status
infer status

# Debug mode
infer --debug chat

Permission Issues

Terminal
# Check configuration directory
ls -la ~/.infer/

# Reset configuration
infer config reset

# Re-initialize
infer init

Tool Execution Problems

Terminal
# Check tool status
infer config tools status

# Validate whitelist
infer config tools validate

# Enable debug logging
export INFER_LOGGING_DEBUG=true
infer agent "your task"

Computer Use Issues

Terminal
# Verify display server
echo $DISPLAY  # Linux/X11

# Check permissions (macOS)
# System Preferences > Security & Privacy > Accessibility

# Test screenshot
infer chat
> "Take a screenshot and describe what you see"

Command Reference

Command | Description
infer init | Initialize project configuration
infer status | Check gateway health and resource usage
infer chat | Interactive chat session (TUI)
infer chat --web | Web-based terminal interface
infer agent <task> | Autonomous task execution
infer config <subcommand> | Configuration management
infer agents <subcommand> | A2A agent management
infer conversation <subcommand> | Conversation history management
infer --version | Show version information
infer --help | Display help information

Support and Resources

The CLI is actively developed with regular updates and new features. Check the repository for the latest releases and announcements.