Asoba Ona Terminal

Troubleshooting Guide

Common issues and solutions for Ona Terminal

This comprehensive guide covers common problems and their solutions when using Ona Terminal’s multi-provider AI architecture and MCP servers.


Quick Diagnostic Commands

Start troubleshooting with these essential commands:

# Check system status
ona-terminal status

# Verify server health
ona-terminal servers --health

# Test connectivity
ona-terminal ask "Test system connectivity and show available providers"

# Show configuration
ona-terminal config show

# Enable debug mode
export LOG_LEVEL=DEBUG
ona-terminal --debug ask "Debug test request"

Installation Issues

Command Not Found Error

Problem: ona-terminal: command not found

Cause: The installation directory is not in your PATH.

Solutions:

# Method 1: Check if ona-terminal exists
ls -la ~/.local/bin/ona-terminal

# If it exists, add to PATH:
export PATH=$PATH:$HOME/.local/bin

# Method 2: Make it permanent
echo 'export PATH=$PATH:$HOME/.local/bin' >> ~/.bashrc
source ~/.bashrc

# Method 3: Use full path directly
~/.local/bin/ona-terminal --help

# Method 4: Reinstall with a different method
pip install --user -e .

Python Version Issues

Problem: ERROR: Python 3.10+ required

Cause: Ona Terminal requires Python 3.10 or higher for FastMCP compatibility.

Solutions:

# Check your Python version
python3 --version

# Install with specific Python version
python3.10 -m pip install -e .

# If Python 3.10+ not available, install it:

# Ubuntu/Debian:
sudo apt update
sudo apt install python3.10 python3.10-venv python3-pip

# macOS with Homebrew:
brew install python@3.10

# RHEL/CentOS Stream (package names vary by release; use the newest available):
sudo dnf install python3.11 python3.11-pip

# Create virtual environment with correct Python version
python3.10 -m venv ona-terminal-env
source ona-terminal-env/bin/activate
pip install -e .

Dependency Conflicts

Problem: Package installation fails due to conflicting dependencies

Solutions:

# Method 1: Use virtual environment (recommended)
python3.10 -m venv ona-terminal-env
source ona-terminal-env/bin/activate
pip install -e .

# Method 2: Clear pip cache
pip cache purge
pip install --force-reinstall -e .

# Method 3: Use pip-tools for dependency management
pip install pip-tools
pip-compile requirements.in
pip-sync requirements.txt

# Method 4: Fresh installation
pip uninstall ona-terminal
pip install -e . --no-cache-dir

Configuration Problems

AWS Credentials Issues

Problem: AWS credentials not configured or NoCredentialsError

Diagnosis:

# Test AWS credentials
aws sts get-caller-identity

# Check credentials files
ls -la ~/.aws/
cat ~/.aws/credentials
cat ~/.aws/config

# Check environment variables
env | grep AWS

Solutions:

# Method 1: AWS CLI configuration
aws configure
# Enter: Access Key ID, Secret Access Key, Region (us-east-1)

# Method 2: Environment variables
export AWS_ACCESS_KEY_ID=your_access_key
export AWS_SECRET_ACCESS_KEY=your_secret_key
export AWS_DEFAULT_REGION=us-east-1

# Method 3: AWS profile
aws configure --profile ona-terminal
export AWS_PROFILE=ona-terminal

# Method 4: IAM roles (for EC2/ECS)
# Attach appropriate IAM role with Bedrock permissions

# Verify configuration
ona-terminal ask "Test AWS Bedrock connectivity"

Required AWS Permissions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:ListFoundationModels"
      ],
      "Resource": "*"
    }
  ]
}
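
As a quick local sanity check, a parsed copy of this policy can be verified for the two required actions. The `policy_grants` helper below is hypothetical, not part of Ona Terminal, and is simplified: it ignores explicit Deny statements and Resource scoping.

```python
# Hypothetical helper: check that a parsed IAM policy allows the Bedrock
# actions this guide requires. Simplified: ignores Deny and Resource.
import fnmatch
import json

REQUIRED_ACTIONS = ("bedrock:InvokeModel", "bedrock:ListFoundationModels")

def policy_grants(policy: dict, actions=REQUIRED_ACTIONS) -> bool:
    """Return True if every required action is allowed by some statement."""
    for action in actions:
        def allowed_by(stmt):
            if stmt.get("Effect") != "Allow":
                return False
            pats = stmt.get("Action", [])
            pats = [pats] if isinstance(pats, str) else pats
            # IAM action patterns may contain '*' wildcards, e.g. "bedrock:*"
            return any(fnmatch.fnmatchcase(action, p) for p in pats)
        if not any(allowed_by(s) for s in policy.get("Statement", [])):
            return False
    return True

policy = json.loads('''{
  "Version": "2012-10-17",
  "Statement": [{"Effect": "Allow",
                 "Action": ["bedrock:InvokeModel", "bedrock:ListFoundationModels"],
                 "Resource": "*"}]
}''')
print(policy_grants(policy))  # True
```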

GitHub Token Problems

Problem: GitHub integration not working

Diagnosis:

# Test GitHub token
curl -H "Authorization: token $GITHUB_TOKEN" https://api.github.com/user

# Check token permissions
curl -H "Authorization: token $GITHUB_TOKEN" https://api.github.com/user/repos

# Verify token scopes
curl -H "Authorization: token $GITHUB_TOKEN" -I https://api.github.com/user | grep -i scope

Solutions:

# Set GitHub token with proper permissions
export GITHUB_TOKEN=ghp_your_github_personal_access_token

# Required scopes (when creating a classic token):
# ✓ repo (full repository access; also covers creating/updating issues)
# ✓ read:org (for organization repositories)
# ✓ read:user (user information)

# Test integration
ona-terminal ask "List my GitHub repositories"
ona-terminal ask "Test GitHub connectivity"
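
For classic tokens, GitHub echoes granted scopes in the `X-OAuth-Scopes` response header shown in the diagnosis step (fine-grained tokens do not return this header). A small sketch comparing that header value against the required set; `missing_scopes` is illustrative, not an Ona Terminal function:

```python
# Sketch: diff the X-OAuth-Scopes header value (classic tokens only)
# against the scopes this guide requires.
REQUIRED_SCOPES = {"repo", "read:org", "read:user"}

def missing_scopes(header_value: str) -> set:
    """header_value is the raw X-OAuth-Scopes value, e.g. 'repo, read:org'."""
    granted = {s.strip() for s in header_value.split(",") if s.strip()}
    return REQUIRED_SCOPES - granted

print(missing_scopes("repo, read:org"))             # {'read:user'}
print(missing_scopes("repo, read:org, read:user"))  # set()
```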

Custom Model Configuration

Problem: Custom models not detected

Diagnosis:

# Check if model server is reachable
curl http://your-server:8000/status

# Check environment variables
echo $MISTRAL_STATUS_URL
echo $MISTRAL_FALLBACK_IP
echo $AI_PROVIDER_STRATEGY

# Test from Ona Terminal
ona-terminal servers --health
ona-terminal ask "Show available AI providers"

Solutions:

# Method 1: Basic configuration
export MISTRAL_STATUS_URL="http://your-server:8000/status"
export MISTRAL_FALLBACK_IP="your-server-ip"

# Method 2: Advanced configuration
export MISTRAL_ENABLED="true"
export AI_PROVIDER_STRATEGY="cost_optimized"
export AI_DEFAULT_PROVIDER="auto"

# Method 3: Load balancing setup
export MISTRAL_SERVER_IPS="10.0.1.50,10.0.1.51,10.0.1.52"

# Test detection and routing
ona-terminal ask "Generate simple Terraform configuration"
ona-terminal ask "Show me which provider was used and the cost breakdown"
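
One plausible reading of the `MISTRAL_SERVER_IPS` setting is simple round-robin rotation; the sketch below is illustrative only and may not match Ona Terminal's actual balancing logic:

```python
# Illustrative only: round-robin rotation over MISTRAL_SERVER_IPS.
import itertools
import os

ips = os.environ.get("MISTRAL_SERVER_IPS", "10.0.1.50,10.0.1.51,10.0.1.52").split(",")
ring = itertools.cycle(ips)

def next_server() -> str:
    """Return the next server IP in rotation."""
    return next(ring)

print([next_server() for _ in range(4)])
# e.g. ['10.0.1.50', '10.0.1.51', '10.0.1.52', '10.0.1.50']
```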

AI Model Issues

Model Access Denied

Problem: AccessDeniedException when calling Bedrock models

Cause: Your AWS account doesn’t have access to the requested model.

Solutions:

# Method 1: Check available models in your region
aws bedrock list-foundation-models --region us-east-1 --output table

# Method 2: Request model access in AWS Console:
# 1. Go to AWS Bedrock Console
# 2. Navigate to "Model access"
# 3. Request access to desired models
# 4. Wait for approval (can take several hours)

# Method 3: Use available models
ona-terminal ask "What Bedrock models are available in my account?"

# Method 4: Configure alternative models
ona-terminal config set ai_models.preferred_models '["anthropic.claude-3-haiku-20240307-v1:0"]'

Throttling Errors

Problem: ThrottlingException: Too many requests

Cause: Hitting API rate limits.

Solutions:

# Method 1: Enable rate limiting in configuration
export AI_MAX_RETRIES=5
export AI_RETRY_DELAY=2
export AI_BACKOFF_MULTIPLIER=2

# Method 2: Use custom models to reduce Bedrock usage
export AI_PROVIDER_STRATEGY="cost_optimized"
export MISTRAL_STATUS_URL="http://your-server:8000/status"

# Method 3: Implement request batching
ona-terminal ask "Analyze all Python files in src/ directory in a single request"

# Method 4: Monitor usage
ona-terminal ask "Show my AI usage and rate limit status"
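
Assuming `AI_RETRY_DELAY` is the initial delay in seconds and `AI_BACKOFF_MULTIPLIER` scales each subsequent attempt (the exact semantics are an assumption, not documented behavior), the Method 1 settings produce this schedule:

```python
# Assumed semantics: delay before retry N is base_delay * multiplier**N.
def retry_delays(max_retries: int, base_delay: float, multiplier: float) -> list:
    """Exponential backoff schedule for max_retries attempts."""
    return [base_delay * multiplier ** attempt for attempt in range(max_retries)]

# With AI_MAX_RETRIES=5, AI_RETRY_DELAY=2, AI_BACKOFF_MULTIPLIER=2:
print(retry_delays(5, 2, 2))  # [2, 4, 8, 16, 32]
```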

High AI Costs

Problem: AI generation costs are too high

Solutions:

# Method 1: Enable aggressive cost optimization
export AI_PROVIDER_STRATEGY="cost_optimized"
export AI_COST_BUDGET="1.00"  # Maximum per request
export AI_PREFER_LIGHT_MODELS="true"

# Method 2: Use custom models for routine tasks
export MISTRAL_STATUS_URL="http://your-server:8000/status"
export MISTRAL_FALLBACK_IP="your-server-ip"

# Method 3: Monitor and optimize
ona-terminal ask "Show cost breakdown for my last 10 requests"
ona-terminal ask "Suggest ways to reduce my AI costs"

# Method 4: Batch similar requests
ona-terminal ask "Analyze all infrastructure files and create comprehensive report"
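
If `AI_COST_BUDGET` acts as a per-request cap as described in Method 1, client-side enforcement could look like the sketch below (the enforcement logic is illustrative; Ona Terminal's internal handling may differ):

```python
# Sketch: treat AI_COST_BUDGET as a hard per-request spending cap.
import os

def within_budget(estimated_cost: float) -> bool:
    """True if the estimated request cost fits under AI_COST_BUDGET."""
    budget = float(os.environ.get("AI_COST_BUDGET", "1.00"))
    return estimated_cost <= budget

print(within_budget(0.42))  # True with the default $1.00 budget
print(within_budget(1.50))  # False
```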

Cost Optimization Strategies:

  1. Route routine tasks to custom models (cost_optimized strategy)
  2. Prefer lightweight models for simple requests (AI_PREFER_LIGHT_MODELS)
  3. Batch similar requests into a single prompt
  4. Cap per-request spend with AI_COST_BUDGET
  5. Review cost breakdowns regularly and tune the routing strategy


Provider Integration Issues

Custom Provider Not Loading

Problem: Custom provider shows as unavailable

Diagnosis:

# Test provider directly (Python snippet; run inside the Ona Terminal environment)
from ona_terminal.servers.ai_models.providers.manager import ProviderManager
from ona_terminal.config.loader import ConfigLoader

config = ConfigLoader().load_config()
manager = ProviderManager(config)

status = manager.get_provider_status()
print(status)

Solutions:

# Method 1: Check provider configuration
cat configs/default.yaml | grep -A 10 "providers:"

# Method 2: Verify server is running
systemctl status your-model-server  # if using systemd
docker ps | grep your-model        # if using Docker
ps aux | grep your-model           # check process

# Method 3: Check server logs
tail -f /var/log/your-model-server.log
journalctl -u your-model-server -f

# Method 4: Test server health directly
curl -v http://your-server:8000/status
curl -X POST http://your-server:8000/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "test", "max_tokens": 10}'

Provider Routing Issues

Problem: Tasks not routing to expected provider

Diagnosis:

# Test routing logic (Python snippet)
from ona_terminal.servers.ai_models.routing import ModelRouter

config = {"ai_models": {"fallback_strategy": "cost_optimized"}}
router = ModelRouter(config)

task = {
    "language": "terraform",
    "complexity": "medium", 
    "description": "test task"
}

provider, model = router.select_provider_and_model(task)
print(f"Routed to: {provider} / {model}")

Solutions:

# Method 1: Update routing configuration in configs/default.yaml
ai_models:
  routing:
    languages:
      terraform: "mistral"  # Force terraform to custom model
      python: "auto"        # Let system decide
      rust: "bedrock"       # Complex languages use Bedrock
    
    complexity:
      low: "auto"           # Allow cost optimization
      medium: "auto"        # Balanced selection
      high: "bedrock"       # High complexity uses quality models
      
    custom_rules:
      - condition: "description contains 'security'"
        provider: "bedrock"
      - condition: "language == 'terraform' AND complexity == 'high'"
        provider: "bedrock"

# Method 2: Override provider selection
ona-terminal ask "Using Bedrock models: analyze this code for security"
ona-terminal ask "Using custom models: generate Terraform configuration"

# Method 3: Test routing decisions
ona-terminal ask "Show me which provider would be used for terraform code generation"
ona-terminal ask "Explain the routing decision for my last request"
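
The routing table above reads as a precedence chain: custom rules first, then complexity, then language. A sketch of that logic follows; the precedence order is an assumption, and the real ModelRouter may evaluate rules differently:

```python
# Sketch of the routing rules from the YAML above (precedence assumed).
def route(task: dict) -> str:
    """Return the provider name a task would route to."""
    lang = task.get("language", "")
    complexity = task.get("complexity", "medium")
    desc = task.get("description", "")

    # Custom rules take priority
    if "security" in desc:
        return "bedrock"
    if lang == "terraform" and complexity == "high":
        return "bedrock"

    # Complexity-based routing
    if complexity == "high":
        return "bedrock"

    # Language-based routing ("auto" defers to cost optimization)
    languages = {"terraform": "mistral", "python": "auto", "rust": "bedrock"}
    return languages.get(lang, "auto")

print(route({"language": "terraform", "complexity": "medium", "description": "test task"}))  # mistral
print(route({"language": "python", "description": "security review"}))  # bedrock
```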

Performance Issues

Slow Response Times

Problem: AI requests take too long

Diagnosis:

# Enable performance monitoring
export LOG_LEVEL=DEBUG
time ona-terminal ask "Simple test request"

# Check system resources
htop
free -h
df -h

# Test network latency
ping api.anthropic.com
ping bedrock.us-east-1.amazonaws.com
curl -w "@curl-format.txt" -o /dev/null -s "https://bedrock.us-east-1.amazonaws.com"  # requires a local curl-format.txt timing template

Solutions:

# Method 1: Increase timeouts
export AI_TIMEOUT=120
export AI_CONNECT_TIMEOUT=30

# Method 2: Use faster models for simple tasks
export AI_PROVIDER_STRATEGY="performance_first"
export AI_PREFER_LIGHT_MODELS="true"

# Method 3: Enable caching
export AI_CACHE_ENABLED=true
export AI_CACHE_TTL=3600

# Method 4: Optimize custom model server
# - Use GPU acceleration
# - Enable model caching
# - Add connection pooling

Performance Optimization Tips:

# Custom model server optimizations
import torch

# Enable CUDA if available
device = "cuda" if torch.cuda.is_available() else "cpu"

# Use optimized inference (assumes `model` and `inputs` from your serving code)
with torch.no_grad():
    outputs = model.generate(inputs, do_sample=False, num_beams=1)

# Implement caching (key on the prompt string itself so lru_cache can hash it)
from functools import lru_cache

@lru_cache(maxsize=128)
def generate_with_cache(prompt):
    return model.generate(prompt)

Memory Issues

Problem: High memory usage or out-of-memory errors

Diagnosis:

# Check memory usage
free -h
ps aux --sort=-%mem | head -10

# Monitor Ona Terminal memory usage
ps -p $(pgrep -f ona-terminal) -o pid,ppid,cmd,%mem,%cpu

# Check for memory leaks
valgrind --leak-check=full ona-terminal ask "test"

Solutions:

# Method 1: Limit concurrent operations
export AI_MAX_CONCURRENT=1
export AI_BATCH_SIZE=1

# Method 2: Disable caching if memory constrained
export AI_CACHE_ENABLED=false

# Method 3: Use lighter models
export AI_PREFER_LIGHT_MODELS=true
export AI_MAX_TOKENS=512

# Method 4: Increase system memory or swap
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

Connection Timeouts

Problem: Requests timing out

Solutions:

# Method 1: Increase timeouts
export AI_TIMEOUT=120
export AI_CONNECT_TIMEOUT=30
export AI_READ_TIMEOUT=60

# Method 2: Check network connectivity
ping api.anthropic.com
nslookup bedrock.us-east-1.amazonaws.com
traceroute bedrock.us-east-1.amazonaws.com

# Method 3: Use fallback providers
export AI_ENABLE_FALLBACK=true
export AI_FALLBACK_TIMEOUT=30

# Method 4: Configure retry logic
export AI_MAX_RETRIES=3
export AI_RETRY_DELAY=5

GitHub Integration Issues

Repository Access Issues

Problem: Can’t access repositories or create issues

Diagnosis:

# Test repository access
curl -H "Authorization: token $GITHUB_TOKEN" \
  https://api.github.com/repos/your-org/your-repo

# Check organization membership
curl -H "Authorization: token $GITHUB_TOKEN" \
  https://api.github.com/user/memberships/orgs

# Test specific permissions
curl -H "Authorization: token $GITHUB_TOKEN" \
  https://api.github.com/repos/your-org/your-repo/issues

Solutions:

# Method 1: Ensure token has correct permissions
# Required scopes when creating a classic GitHub token:
# ✓ repo (full repository access; includes creating and updating issues)
# ✓ read:org (for organization repositories)

# Method 2: For private repositories, ensure token access
# Go to GitHub → Settings → Developer settings → Personal access tokens
# Ensure token has access to the private repository

# Method 3: For organization repositories
# Organization may need to approve the token
# Go to Organization → Settings → Third-party access

# Method 4: Test with specific repository
ona-terminal ask "Analyze repository structure for your-org/your-repo"

Webhook Issues

Problem: GitHub webhooks not working

Solutions:

# Method 1: Check webhook configuration
# In GitHub repository settings:
# - Webhooks section
# - Ensure webhook URL is accessible from GitHub
# - Verify webhook secret matches configuration

# Method 2: Test webhook endpoint
curl -X POST your-webhook-url/github \
  -H "Content-Type: application/json" \
  -H "X-GitHub-Event: push" \
  -d '{"test": "payload"}'

# Method 3: Check webhook logs
tail -f /var/log/webhook.log
journalctl -u webhook-service -f

# Method 4: Validate webhook configuration
ona-terminal config show --section github
ona-terminal ask "Test GitHub webhook connectivity"
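
Independent of Ona Terminal, GitHub signs each webhook payload with HMAC-SHA256 using the configured secret and sends the digest in the `X-Hub-Signature-256` header; verifying it server-side confirms the secret matches and the payload is untampered:

```python
# Verify a GitHub webhook signature (X-Hub-Signature-256 header).
import hashlib
import hmac

def verify_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Compare the expected HMAC-SHA256 digest against the header value."""
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels
    return hmac.compare_digest(expected, signature_header)

body = b'{"test": "payload"}'
sig = "sha256=" + hmac.new(b"my-secret", body, hashlib.sha256).hexdigest()
print(verify_signature(b"my-secret", body, sig))  # True
print(verify_signature(b"wrong-secret", body, sig))  # False
```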

Advanced Troubleshooting

Debug Mode and Logging

Enable comprehensive debugging:

# Method 1: Environment variables
export LOG_LEVEL=DEBUG
export ONA_TERMINAL_DEBUG=true

# Method 2: Command line flags
ona-terminal --debug ask "your query"
ona-terminal --verbose status

# Method 3: Detailed system status
ona-terminal status --verbose --debug

Log Analysis

# Collect comprehensive logs
mkdir troubleshooting-logs

# System information
uname -a > troubleshooting-logs/system.txt
python3 --version >> troubleshooting-logs/system.txt
pip list > troubleshooting-logs/packages.txt
env | grep -E "(AWS|GITHUB|MISTRAL|AI_)" > troubleshooting-logs/env.txt

# Ona Terminal logs
ona-terminal --debug status > troubleshooting-logs/status.log 2>&1
ona-terminal --debug servers --health > troubleshooting-logs/servers.log 2>&1

# Configuration (remove sensitive data)
ona-terminal config show > troubleshooting-logs/config.yaml
sed -E -i 's|[A-Za-z0-9+/=]{20,}|***REDACTED***|g' troubleshooting-logs/config.yaml

# Test logs
ona-terminal --debug ask "Test request for troubleshooting" > troubleshooting-logs/test.log 2>&1
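
Where GNU sed is unavailable (the BSD sed shipped with macOS handles `-i` differently), the same redaction step can be done in Python:

```python
# Python equivalent of the sed redaction step above.
import re

def redact(text: str) -> str:
    """Mask long base64-looking runs that may be credentials."""
    return re.sub(r"[A-Za-z0-9+/=]{20,}", "***REDACTED***", text)

print(redact("aws_secret_key = abcdEFGH1234abcdEFGH1234"))
# aws_secret_key = ***REDACTED***
```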

Network Diagnostics

# Test connectivity to all services
echo "Testing AWS Bedrock..."
curl -I https://bedrock.us-east-1.amazonaws.com

echo "Testing GitHub API..."
curl -I https://api.github.com

echo "Testing custom model server..."
curl -I http://your-server:8000/status

# DNS resolution
nslookup api.anthropic.com
nslookup bedrock.us-east-1.amazonaws.com

# Port connectivity
nc -zv api.anthropic.com 443
nc -zv your-server 8000

# SSL certificate check
openssl s_client -connect api.anthropic.com:443 -servername api.anthropic.com

Performance Profiling

# Profile Ona Terminal performance
import cProfile
import pstats
from ona_terminal.client.manager import MCPClientManager

def profile_ona_terminal():
    client = MCPClientManager()

    # Profile a typical operation
    pr = cProfile.Profile()
    pr.enable()

    result = client.call_server_tool(
        "ai-models-server",
        "generate_code",
        {"description": "Create a Python function", "language": "python"}
    )

    pr.disable()

    # Save profile results
    pr.dump_stats('ona-terminal_profile.stats')

    # Print top time consumers
    stats = pstats.Stats('ona-terminal_profile.stats')
    stats.sort_stats('cumulative').print_stats(10)

if __name__ == "__main__":
    profile_ona_terminal()

Error Code Reference

Error Code | Description                    | Solution
-----------|--------------------------------|---------------------------------------------
E001       | Configuration file not found   | Check config path and file existence
E002       | Invalid provider configuration | Validate YAML syntax and required fields
E003       | Model access denied            | Check AWS/API credentials and permissions
E004       | Provider unavailable           | Check network connectivity and server status
E005       | Rate limit exceeded            | Reduce request frequency or upgrade plan
E006       | Timeout error                  | Increase timeout settings or check network
E007       | Memory limit exceeded          | Reduce concurrent operations or add memory
E008       | Invalid input format           | Check input data format and encoding
E009       | Authentication failed          | Verify credentials and permissions
E010       | Resource not found             | Check resource path and availability
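
For scripting around logged error codes, the table above can be turned into a lookup (the codes and solutions are taken directly from this guide):

```python
# Error-code table from this guide as a lookup for scripts.
SOLUTIONS = {
    "E001": "Check config path and file existence",
    "E002": "Validate YAML syntax and required fields",
    "E003": "Check AWS/API credentials and permissions",
    "E004": "Check network connectivity and server status",
    "E005": "Reduce request frequency or upgrade plan",
    "E006": "Increase timeout settings or check network",
    "E007": "Reduce concurrent operations or add memory",
    "E008": "Check input data format and encoding",
    "E009": "Verify credentials and permissions",
    "E010": "Check resource path and availability",
}

def suggest(code: str) -> str:
    """Map a logged error code to the suggested fix."""
    return SOLUTIONS.get(code, "Unknown error code; rerun with --debug for details")

print(suggest("E005"))  # Reduce request frequency or upgrade plan
```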

Performance Benchmarks

If performance differs significantly from your baseline, check:

  1. Network latency to API endpoints
  2. System resources (CPU, memory, disk)
  3. Provider configuration and availability
  4. Request complexity and size
  5. Concurrent request limits

Getting Support

Self-Help Resources

# Built-in help
ona-terminal --help
ona-terminal config --help
ona-terminal servers --help

# System diagnostics
ona-terminal status --verbose
ona-terminal servers --health --debug
ona-terminal config validate

Community Support

Professional Support

Reporting Issues

When reporting issues, please include:

# Generate comprehensive diagnostic report
ona-terminal --debug status > diagnostic-report.txt 2>&1
echo "--- Environment ---" >> diagnostic-report.txt
env | grep -E "(AWS|GITHUB|MISTRAL|AI_)" >> diagnostic-report.txt
echo "--- System ---" >> diagnostic-report.txt
uname -a >> diagnostic-report.txt
python3 --version >> diagnostic-report.txt
pip list | grep -E "(asoba|fastmcp)" >> diagnostic-report.txt

Include this diagnostic report when asking for help to expedite troubleshooting.


This troubleshooting guide covers the most common issues. For complex problems or enterprise deployments, please contact our support team with the diagnostic information above.


Get Help & Stay Updated

Contact Support

For technical assistance, feature requests, or any other questions, please reach out to our dedicated support team.
