Edge Layer
The Edge Layer hosts Predictive AI models on low-power compute devices attached directly to energy assets, performing short-term forecasts and maintaining local data resilience.
Overview
The Edge Layer is the foundation of the Ona Platform’s distributed intelligence architecture. It runs on low-power compute devices attached directly to energy assets, enabling real-time forecasting and fault prediction at the source. This layer ensures operational continuity even when connectivity to the central control layer is interrupted.
Key Capabilities:
- Hosts Predictive AI models for short-term forecasting (0-48 hours)
- Performs local model execution on each node
- Stores 48 hours of data locally for resilience
- Operates independently during connectivity loss
- Provides real-time predictions for each asset node
Forecasting Capabilities
Short-Term Forecasts (0-48 Hours)
The Edge Layer generates energy production forecasts with a horizon of up to 48 hours. These forecasts are critical for:
- Operational Planning: Enabling operators to anticipate energy production
- Grid Integration: Supporting dispatch decisions and grid stability
- Maintenance Scheduling: Optimizing maintenance windows based on predicted production
- Financial Planning: Supporting energy trading and revenue forecasting
Forecast API Endpoint
GET /forecast
Generates an energy production forecast for a customer's assets.
Query Parameters:
- customer_id (required): Customer identifier
- site_id (optional): Specific site identifier
- horizon_hours (optional): Forecast horizon in hours (default: 48)
Example Request:
curl -X GET "https://api.asoba.co/forecast?customer_id=demo-customer&horizon_hours=48" \
-H "X-API-Key: your-api-key"
Example Response:
{
"forecast_id": "fc-demo-12345678",
"customer_id": "demo-customer",
"site_id": "demo-site-cape-town-01",
"generated_at": "2025-01-15T12:00:00Z",
"horizon_hours": 48,
"forecasts": [
{
"timestamp": "2025-01-15T13:00:00Z",
"predicted_power_kw": 1250.5,
"confidence_interval": {
"lower": 1180.2,
"upper": 1320.8
},
"weather_conditions": {
"temperature_c": 25.3,
"irradiance_w_m2": 850.2,
"wind_speed_m_s": 3.2
}
}
],
"metadata": {
"model_version": "v2.1.0",
"training_data_points": 8760,
"model_accuracy": 0.94
}
}
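A response like the one above can be consumed with a few lines of client code. The sketch below (plain Python, no SDK assumed) parses the sample payload and pulls out each point's prediction and confidence bounds:

```python
import json

# Sample payload matching the example response above (truncated to one point)
payload = json.loads("""
{
  "forecast_id": "fc-demo-12345678",
  "horizon_hours": 48,
  "forecasts": [
    {
      "timestamp": "2025-01-15T13:00:00Z",
      "predicted_power_kw": 1250.5,
      "confidence_interval": {"lower": 1180.2, "upper": 1320.8}
    }
  ]
}
""")

def summarize_forecast(response):
    """Return (timestamp, predicted_kw, lower, upper) for each forecast point."""
    return [
        (
            f["timestamp"],
            f["predicted_power_kw"],
            f["confidence_interval"]["lower"],
            f["confidence_interval"]["upper"],
        )
        for f in response["forecasts"]
    ]

for ts, kw, lo, hi in summarize_forecast(payload):
    print(f"{ts}: {kw} kW (CI {lo}-{hi})")
```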
Lambda Services
The Edge Layer is powered by several AWS Lambda functions that handle forecasting and data processing:
Forecasting API Service
ona-forecastingApi-prod
- Memory: 3008MB
- Timeout: 60 seconds
- Purpose: Generates energy production forecasts using trained ML models
- Capabilities:
- Loads trained LSTM models from S3
- Processes nowcast data for forecast generation
- Integrates weather forecast data
- Returns structured forecast responses with confidence intervals
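As an illustration only (the service's actual implementation is not shown here), a forecast Lambda handler of this shape might validate the query parameters and return a structured response; the model loading and inference step is stubbed out:

```python
import json

def handler(event, context=None):
    """Illustrative sketch of a forecast Lambda handler (not the real service)."""
    params = event.get("queryStringParameters") or {}
    customer_id = params.get("customer_id")
    if not customer_id:
        return {"statusCode": 400,
                "body": json.dumps({"error": "customer_id is required"})}
    horizon = int(params.get("horizon_hours", 48))
    # In production the handler would load the customer's LSTM model from S3,
    # merge nowcast and weather data, and run inference; stubbed here.
    forecasts = [{"hour": h, "predicted_power_kw": None} for h in range(horizon)]
    return {"statusCode": 200,
            "body": json.dumps({"customer_id": customer_id,
                                "horizon_hours": horizon,
                                "forecasts": forecasts})}

resp = handler({"queryStringParameters": {"customer_id": "demo-customer"}})
print(resp["statusCode"])   # → 200
```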
Interpolation Service
ona-interpolationService-prod
- Memory: 3008MB
- Timeout: 900 seconds (15 minutes)
- Purpose: Fills data gaps and enriches time series data
- Capabilities:
- ML-powered gap filling using adaptive multi-output methods
- Data enrichment with weather features
- Performance metrics calculation (RMSE, MAE, R²)
- Handles missing telemetry data gracefully
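The production service uses ML-based adaptive methods; as a simplified stand-in, the sketch below fills interior gaps by linear interpolation and edge gaps with the nearest observed value:

```python
def fill_gaps(series):
    """Fill None gaps: linear interpolation inside, nearest value at the edges."""
    filled = list(series)
    known = [i for i, v in enumerate(filled) if v is not None]
    if not known:
        raise ValueError("series has no observed values")
    for i in range(len(filled)):
        if filled[i] is None:
            prev = max((k for k in known if k < i), default=None)
            nxt = min((k for k in known if k > i), default=None)
            if prev is None:
                filled[i] = filled[nxt]          # leading gap: copy forward
            elif nxt is None:
                filled[i] = filled[prev]         # trailing gap: copy backward
            else:
                t = (i - prev) / (nxt - prev)    # interior gap: interpolate
                filled[i] = filled[prev] + t * (filled[nxt] - filled[prev])
    return filled

print(fill_gaps([1.0, None, None, 4.0]))  # → [1.0, 2.0, 3.0, 4.0]
```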
Data Standardization Service
ona-dataStandardizationService-prod
- Memory: 1024MB
- Timeout: 300 seconds
- Purpose: Normalizes and standardizes data from various OEM sources
- Capabilities:
- Converts OEM-specific formats to standardized schema
- Handles multiple inverter manufacturers (Huawei, Enphase, etc.)
- Validates data quality and completeness
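Conceptually, standardization is a field-mapping step. The sketch below uses hypothetical field names (the real OEM schemas differ) to show the mapping-plus-validation pattern:

```python
# Hypothetical OEM field mappings -- actual schemas differ per manufacturer
OEM_FIELD_MAPS = {
    "huawei": {"active_power": "power_kw", "collect_time": "timestamp"},
    "enphase": {"wNow": "power_kw", "readingTime": "timestamp"},
}

def standardize(record, oem):
    """Rename OEM-specific fields to the standard schema and check completeness."""
    mapping = OEM_FIELD_MAPS[oem]
    out = {std: record[src] for src, std in mapping.items() if src in record}
    missing = set(mapping.values()) - set(out)
    if missing:
        raise ValueError(f"record missing required fields: {missing}")
    return out

print(standardize({"wNow": 1250.5, "readingTime": "2025-01-15T13:00:00Z"}, "enphase"))
```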
Data Storage and Resilience
Local Data Buffer
Each edge node maintains a 48-hour local data buffer to ensure:
- Resilience: Continued operation during connectivity loss
- Performance: Reduced latency for local predictions
- Reliability: No data loss during network interruptions
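Assuming a 5-minute telemetry cadence (an assumption, not a documented figure), a 48-hour rolling buffer can be sketched with a fixed-size deque that evicts the oldest readings automatically:

```python
from collections import deque

SAMPLE_INTERVAL_MIN = 5            # assumed telemetry cadence
BUFFER_HOURS = 48
MAX_POINTS = BUFFER_HOURS * 60 // SAMPLE_INTERVAL_MIN  # 576 readings

class LocalBuffer:
    """Fixed-size rolling buffer: oldest readings are evicted automatically."""
    def __init__(self):
        self._points = deque(maxlen=MAX_POINTS)

    def append(self, reading):
        self._points.append(reading)

    def window(self):
        return list(self._points)

buf = LocalBuffer()
for i in range(MAX_POINTS + 10):   # overflow by 10 readings
    buf.append({"seq": i})
print(len(buf.window()))           # → 576 (oldest 10 evicted)
```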
Cloud Storage (S3)
Input Bucket: sa-api-client-input
- observations/: Raw sensor data from assets
- nowcast/: Real-time data for forecasting
- historical/: Historical data for model training
Output Bucket: sa-api-client-output
- forecasts/: Generated forecast results
- models/: Trained ML model artifacts
- training_data/: Processed training datasets
DynamoDB Tables
ona-platform-locations
- Stores location and customer data
- Supports geographic data queries
- Enables location-based weather data retrieval
ona-platform-weather-cache
- Caches weather data to reduce API calls
- Improves forecast generation performance
- Reduces external API dependencies
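The caching pattern can be sketched in-process; in production the platform backs this with the DynamoDB table above, but the hit/miss logic is the same:

```python
import time

class WeatherCache:
    """In-memory TTL cache sketch; the platform backs this with DynamoDB."""
    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self._store = {}

    def get(self, location, fetch):
        entry = self._store.get(location)
        now = time.monotonic()
        if entry and now - entry[0] < self.ttl:
            return entry[1]                 # cache hit: skip the external API
        value = fetch(location)             # cache miss: call the weather API
        self._store[location] = (now, value)
        return value

calls = []
def fake_fetch(loc):
    calls.append(loc)
    return {"temperature_c": 25.3}

cache = WeatherCache(ttl_seconds=60)
cache.get("cape-town", fake_fetch)
cache.get("cape-town", fake_fetch)   # served from cache
print(len(calls))                    # → 1 external call, not 2
```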
Local Model Execution
Model Deployment
Predictive AI models are deployed to edge devices and execute locally:
- LSTM Models: Long Short-Term Memory networks for time series forecasting
- Model Versioning: Supports multiple model versions for A/B testing
- Model Updates: Can be updated remotely without device downtime
- Resource Efficiency: Optimized for low-power compute devices
Model Performance
Typical Model Metrics:
- Training RMSE: 0.072-0.082
- Validation RMSE: 0.089-0.096
- MAPE: 5.4-6.8%
- SMAPE: 6.8-8.2%
- Model Accuracy: 94%+
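These metrics follow their standard definitions; the sketch below computes RMSE, MAPE, and SMAPE for a toy series (the values here are illustrative, not platform benchmarks):

```python
import math

def rmse(actual, predicted):
    """Root mean squared error."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mape(actual, predicted):
    """Mean absolute percentage error (%)."""
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def smape(actual, predicted):
    """Symmetric mean absolute percentage error (%)."""
    return 100 * sum(2 * abs(p - a) / (abs(a) + abs(p))
                     for a, p in zip(actual, predicted)) / len(actual)

actual = [100.0, 200.0, 300.0]
predicted = [110.0, 190.0, 315.0]
print(round(rmse(actual, predicted), 2))  # → 11.9
print(round(mape(actual, predicted), 2))  # → 6.67
```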
Model Artifacts
Models are stored in S3 with the following structure:
s3://sa-api-client-output/customer_tailored/{customer_id}/models/{version}/
├── model.h5 # Trained model weights
├── encoders.pkl # Feature encoders
└── config.json # Model configuration
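Given that layout, the artifact keys for a given customer and model version can be derived programmatically, e.g.:

```python
def model_prefix(customer_id, version):
    """S3 prefix for a customer's trained model artifacts."""
    return f"s3://sa-api-client-output/customer_tailored/{customer_id}/models/{version}/"

ARTIFACTS = ("model.h5", "encoders.pkl", "config.json")

def artifact_keys(customer_id, version):
    """Full S3 URIs for each artifact in the model directory."""
    prefix = model_prefix(customer_id, version)
    return [prefix + name for name in ARTIFACTS]

print(artifact_keys("demo-customer", "v2.1.0")[0])
# → s3://sa-api-client-output/customer_tailored/demo-customer/models/v2.1.0/model.h5
```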
Resilience and Independence
Graceful Degradation
Each edge node operates independently if the central connection fails:
- Forecasting Continues: Local models continue generating predictions
- Fault Detection Active: Anomaly detection runs locally
- Data Queuing: Decisions and data are queued for transmission
- Automatic Recovery: Queued data transmits automatically when connectivity is restored
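The queue-and-flush behavior can be sketched as a small uplink wrapper: payloads are buffered while offline and drained automatically on reconnect (illustrative only; the transmit callable is a placeholder):

```python
class EdgeUplink:
    """Queue payloads while offline; flush automatically when connectivity returns."""
    def __init__(self, transmit):
        self.transmit = transmit   # callable that sends one payload upstream
        self.online = False
        self.queue = []

    def send(self, payload):
        if self.online:
            self.transmit(payload)
        else:
            self.queue.append(payload)          # buffer during the outage

    def set_online(self, online):
        self.online = online
        if online:
            while self.queue:                   # automatic recovery: drain queue
                self.transmit(self.queue.pop(0))

sent = []
uplink = EdgeUplink(transmit=sent.append)
uplink.send({"seq": 1})          # offline: queued
uplink.set_online(True)          # reconnect: queued data transmits
uplink.send({"seq": 2})          # online: sent immediately
print(len(sent))                 # → 2
```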
Connectivity Requirements
- Minimum: Intermittent connectivity sufficient for data synchronization
- Optimal: Continuous connectivity for real-time updates
- Fallback: 48-hour local buffer ensures operation during outages
Integration with Control Layer
The Edge Layer communicates with the Control Layer through:
- Encrypted Channels: TLS 1.3 encryption for all communications
- Certificate-Based Authentication: Secure device authentication
- API Gateway: Centralized API endpoint management
- Data Synchronization: Automatic sync when connectivity is restored
Performance Characteristics
Latency
- Forecast Generation: < 5 seconds per node
- Data Processing: < 1 second per data point
- Model Loading: < 2 seconds (cached after first load)
Throughput
- Forecasts per Hour: 1000+ per node
- Data Points Processed: 10,000+ per minute
- Concurrent Requests: 20+ simultaneous forecasts
Resource Usage
- CPU: Optimized for low-power ARM processors
- Memory: 512MB-3GB depending on model complexity
- Storage: 48-hour buffer requires ~100MB per node
Next Steps
- Control Layer - Learn about the central coordination layer
- Interface Layer - Explore the user interface and dashboards
- User Guide - Get started with Ona Platform deployment
Get Help & Stay Updated
Contact Support
For technical assistance, feature requests, or any other questions, please reach out to our dedicated support team.
- Email Support
- Join Our Discord