Ona Terminal CLI Deployment
Complete deployment guide for the Ona Terminal CLI tool and related services.
Overview
This guide covers deploying the Ona Terminal CLI tool and its supporting infrastructure. The Ona Terminal is an AI-powered command-line interface for energy asset management, forecasting, and automation.
Current Production State
Last Verified: 2025-01-10 (Status: PRODUCTION VERIFIED - Version 1.6.0)
Regional Architecture Overview
- af-south-1: Primary production environment (15 Lambda functions)
- us-east-1: RAG services and global monitoring (3 Lambda functions)
- eu-central-1: Status unknown - not verified
af-south-1 (Primary Production) - ✅ ACTIVE
Lambda Functions
The platform uses containerized Lambda functions for various services including:
- Data ingestion (historical and real-time)
- ML model training
- Data processing and interpolation
- Weather data collection
- Authentication and authorization
- Logging and monitoring
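As a sketch of how the deployed functions can be enumerated and grouped by service, the snippet below lists Lambda functions in the primary region with boto3 and buckets them by keyword. The keyword-to-service mapping is illustrative, not the platform's actual naming convention:

```python
def group_by_service(function_names):
    """Group Lambda function names by a keyword -> service mapping (illustrative)."""
    services = {"ingest": "data-ingestion", "train": "model-training",
                "weather": "weather", "auth": "authentication"}
    grouped = {}
    for name in function_names:
        for keyword, service in services.items():
            if keyword in name.lower():
                grouped.setdefault(service, []).append(name)
    return grouped

def list_production_functions(region="af-south-1"):
    """List deployed Lambda function names in the given region."""
    import boto3  # requires configured AWS credentials
    paginator = boto3.client("lambda", region_name=region).get_paginator("list_functions")
    return [fn["FunctionName"] for page in paginator.paginate() for fn in page["Functions"]]
```

Running `group_by_service(list_production_functions())` gives a quick per-service inventory when auditing a region.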
API Gateway REST APIs
The platform uses AWS API Gateway to expose RESTful endpoints:
- Unified API endpoint: https://api.asoba.co (all services are accessible through this base URL)
- Regional deployment in af-south-1
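Requests against the unified endpoint pass the API key in the `x-api-key` header, as in the health-check examples later in this guide. A minimal Python sketch using only the standard library (the helper names are illustrative):

```python
import urllib.request

BASE_URL = "https://api.asoba.co"

def build_request(path: str, api_key: str) -> tuple[str, dict]:
    """Build the URL and headers for a unified-API call."""
    url = f"{BASE_URL}/{path.lstrip('/')}"
    return url, {"x-api-key": api_key}

def check_health(api_key: str) -> int:
    """GET /health and return the HTTP status code."""
    url, headers = build_request("/health", api_key)
    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status
```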
S3 Buckets
The platform uses S3 buckets for data storage:
- Input data bucket for historical and real-time data
- Output data bucket for processed results and models
- Client-facing bucket for user-accessible outputs
- Logs bucket for CloudFront logs
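A typical upload to the input bucket can be sketched as below. The bucket name and the date-partitioned key layout are assumptions for illustration; substitute the actual bucket and layout for your account:

```python
from datetime import date

# Hypothetical bucket name -- substitute the actual input bucket for your account.
INPUT_BUCKET = "ona-input-data"

def object_key(asset_id: str, day: date, kind: str = "historical") -> str:
    """Build a date-partitioned key such as historical/site-01/2025/01/10.csv."""
    return f"{kind}/{asset_id}/{day:%Y/%m/%d}.csv"

def upload_readings(local_path: str, asset_id: str, day: date) -> str:
    """Upload a local CSV to the input bucket and return the S3 key."""
    import boto3  # requires configured AWS credentials
    key = object_key(asset_id, day)
    boto3.client("s3").upload_file(local_path, INPUT_BUCKET, key)
    return key
```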
DynamoDB Tables
The platform uses DynamoDB for metadata storage:
- API key management table
- User and session data tables
- Configuration and metadata tables
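A metadata lookup against the API-key table can be sketched as follows; the table name and key attribute are illustrative, not the platform's actual schema:

```python
def item_key(key_id: str) -> dict:
    """Primary-key dict for the API-key table (attribute name is illustrative)."""
    return {"api_key_id": key_id}

def lookup_api_key(key_id: str, table_name: str = "ona-api-keys"):
    """Fetch API-key metadata from DynamoDB (table name is illustrative)."""
    import boto3  # requires configured AWS credentials
    table = boto3.resource("dynamodb", region_name="af-south-1").Table(table_name)
    return table.get_item(Key=item_key(key_id)).get("Item")
```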
us-east-1 (Global Services) - ✅ ACTIVE
Lambda Functions (3 Deployed)
| Function Name | Purpose | Status |
|---------------|---------|--------|
| cloudwatch-monitoring-agent-api | CloudWatch monitoring | ✅ Active |
| ona-front-end-prod-api-reques-RulePriorityFunction-* | ALB rules | ✅ Active |
| ona-front-end-prod-api-reque-EnvControllerFunction-* | Environment control | ✅ Active |
Security Considerations
Best Practices
Before deploying to production, ensure you have:
- Secrets Management: Use AWS Secrets Manager or Parameter Store for sensitive configuration
- Security Scanning: Implement pre-commit security hooks and automated vulnerability scanning
- Access Control: Follow least-privilege access principles
- Monitoring: Set up security monitoring and alerting
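As a minimal sketch of the secrets-management practice above, a JSON secret can be fetched from AWS Secrets Manager at startup instead of hardcoding credentials. The secret name and payload layout here are hypothetical:

```python
import json

def parse_secret(raw: str) -> dict:
    """Decode the JSON SecretString payload."""
    return json.loads(raw)

def get_secret(secret_id: str, region: str = "af-south-1") -> dict:
    """Fetch and decode a JSON secret from AWS Secrets Manager."""
    import boto3  # requires configured AWS credentials
    client = boto3.client("secretsmanager", region_name=region)
    return parse_secret(client.get_secret_value(SecretId=secret_id)["SecretString"])

# Usage (secret name is hypothetical):
# api_key = get_secret("ona-terminal/prod")["ONA_API_KEY"]
```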
Production Security
- Use HTTPS for all communications
- Implement rate limiting
- Set up monitoring and alerting
- Regular security updates
- Access logging and audit trails
Service Coverage Analysis
Fully Deployed Services ✅ (95% Coverage)
- Data Ingestion: Historical + Real-time (ingestHistoricalLoadData, ingestNowcastData)
- Data Processing: Interpolation + Weather (6 Lambda functions)
- Model Training: trainForecaster with API Gateway
- Authentication: Auth0 integration operational
Partially Deployed Services ⚠️ (40-60% Coverage)
- ML Inference: Code exists, SageMaker endpoints failed
- Forecast Generation: Core module ready, no API Gateway
- User Management: Auth0 only, missing full CRUD
Missing Services ❌ (0-20% Coverage)
- Dispatch Optimization: Code exists, no Lambda deployment
- Market Price Forecasting: Code exists, no API deployment
- Freemium Services: 3 Lambda functions implemented but not deployed
Quick Start
Prerequisites
- Python 3.10+: CRITICAL - Use python3.10, not python3 (system version is 3.9)
- AWS CLI: Configured with appropriate credentials and permissions
- Internet Access: Required for AWS Bedrock API calls and package installation
- Storage: Local storage for upload tracking (~/.asoba/uploads/)
- Memory: Minimum 4GB RAM for local development
- Network: Access to AWS services in target regions
Development Installation
# Install Ona Terminal CLI
pip3.10 install ona-terminal
# Verify installation
ona --version
# Configure API key
ona configure --api-key YOUR_API_KEY
Production Installation
# Install with production dependencies
pip3.10 install ona-terminal[production]
# Set up environment variables
export ONA_API_KEY="your-production-api-key"
export ONA_REGION="af-south-1"
export ONA_ENVIRONMENT="production"
# Test connection
ona status
Docker Deployment
Local Development
# Build development image
docker build -f docker/Dockerfile.dev -t ona-terminal-dev .
# Run with volume mounting
docker run -it --rm \
-v $(pwd):/app \
-p 8000:8000 \
-e ONA_API_KEY=your-dev-key \
ona-terminal-dev
Production Docker
# Build production image
docker build -f docker/Dockerfile -t ona-terminal .
# Run production container
docker run -d \
-p 8000:8000 \
-e ONA_API_KEY=your-production-key \
-e ONA_REGION=af-south-1 \
--name ona-terminal-prod \
ona-terminal
Docker Compose
# docker-compose.yml
version: '3.8'
services:
ona-terminal:
build: .
ports:
- "8000:8000"
environment:
- ONA_API_KEY=${ONA_API_KEY}
- ONA_REGION=${ONA_REGION}
volumes:
- ./data:/app/data
restart: unless-stopped
Systemd Service Deployment
Create Service File
# /etc/systemd/system/ona-terminal.service
[Unit]
Description=Ona Terminal CLI Service
After=network.target
[Service]
Type=simple
User=ona
WorkingDirectory=/opt/ona-terminal
Environment=ONA_API_KEY=your-api-key
Environment=ONA_REGION=af-south-1
ExecStart=/usr/local/bin/ona serve --host 0.0.0.0 --port 8000
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
Deploy Service
# Create user
sudo useradd -r -s /bin/false ona
# Create directory
sudo mkdir -p /opt/ona-terminal
sudo chown ona:ona /opt/ona-terminal
# Install service
sudo systemctl daemon-reload
sudo systemctl enable ona-terminal
sudo systemctl start ona-terminal
# Check status
sudo systemctl status ona-terminal
PolicyAnalyst LLM Deployment (AWS EC2)
Prerequisites
- AWS EC2 instance with GPU (g5.2xlarge recommended)
- NVIDIA drivers and CUDA toolkit
- Python 3.10+ environment
Step-by-Step Setup
# Update system
sudo apt update && sudo apt upgrade -y
# Install Python 3.10
sudo apt install -y python3.10 python3.10-venv python3.10-distutils
curl -sS https://bootstrap.pypa.io/get-pip.py | python3.10
# Install NVIDIA drivers
sudo apt install -y nvidia-driver-535
# Install CUDA toolkit
wget https://developer.download.nvidia.com/compute/cuda/11.8.0/local_installers/cuda_11.8.0_520.61.05_linux.run
sudo sh cuda_11.8.0_520.61.05_linux.run --silent --driver --toolkit --samples
# Set environment variables
echo 'export PATH=/usr/local/cuda-11.8/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda-11.8/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc
# Install vLLM and Ona Terminal
pip3.10 install vllm ona-terminal
# Test GPU
nvidia-smi
python3.10 -c "import torch; print(torch.cuda.is_available())"
Load Model
# Create model directory
mkdir -p /home/ubuntu/models
cd /home/ubuntu/models
# Download and load model
python3.10 -c "
from vllm import LLM
llm = LLM(model='mistralai/Mistral-7B-Instruct-v0.2', gpu_memory_utilization=0.9)
print('Model loaded successfully')
"
Monitoring and Logging
CloudWatch Integration
# Install CloudWatch agent
sudo apt install -y amazon-cloudwatch-agent
# Configure monitoring
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
# Start agent
sudo systemctl enable amazon-cloudwatch-agent
sudo systemctl start amazon-cloudwatch-agent
Health Checks
# Check service health
curl http://localhost:8000/health
# Check API endpoints
curl -H "x-api-key: YOUR_API_KEY" \
https://api.asoba.co/health
# Monitor logs
sudo journalctl -u ona-terminal -f
Security Considerations
Before Deployment
- Address Critical Security Issues:
- Remove all hardcoded credentials
- Fix 14 dependency vulnerabilities
- Reduce wildcard imports from 950+ to <50
- Implement AWS safety protocols
- Implement Secrets Management:
- Use AWS Secrets Manager or Parameter Store
- Rotate API keys regularly
- Implement least-privilege access
- Security Scanning:
- Set up pre-commit security hooks
- Implement automated vulnerability scanning
- Regular security audits
Troubleshooting
Common Issues
API Connection Errors
# Check API key
ona configure --list
# Test API connectivity
curl -H "x-api-key: YOUR_API_KEY" \
https://api.asoba.co/health
# Check AWS credentials
aws sts get-caller-identity
GPU Issues (PolicyAnalyst)
# Check NVIDIA drivers
nvidia-smi
# Check CUDA installation
nvcc --version
# Test PyTorch GPU
python3.10 -c "import torch; print(torch.cuda.is_available())"
Service Issues
# Check service status
sudo systemctl status ona-terminal
# View logs
sudo journalctl -u ona-terminal -f
# Restart service
sudo systemctl restart ona-terminal
Performance Issues
# Monitor system resources
htop
nvidia-smi # For GPU monitoring
df -h # Disk usage
# Check API performance
ab -n 100 -c 10 -H "x-api-key: YOUR_API_KEY" \
https://api.asoba.co/health
Support
- 📧 Technical Support: support@asoba.co
- 💬 Discord Community: Join our Discord
- 📖 API Reference: Complete API documentation
- 🔗 Integration Guide: SDK and webhook integration
Get Help & Stay Updated
Contact Support
For technical assistance, feature requests, or any other questions, please reach out to our dedicated support team.