Asoba Ona Documentation

Edge Layer

The Edge Layer hosts Predictive AI models on low-power compute devices attached directly to energy assets, performing short-term forecasts and maintaining local data resilience.


Overview

The Edge Layer is the foundation of the Ona Platform’s distributed intelligence architecture. It runs on low-power compute devices attached directly to energy assets, enabling real-time forecasting and fault prediction at the source. This layer ensures operational continuity even when connectivity to the central control layer is interrupted.

Key Capabilities:

- Short-term energy production forecasting (0-48 hours) at the asset
- Fault prediction executed locally on low-power compute hardware
- A 48-hour local data buffer for data resilience
- Continued operation when connectivity to the Control Layer is interrupted


Forecasting Capabilities

Short-Term Forecasts (0-48 Hours)

The Edge Layer generates energy production forecasts with a horizon of up to 48 hours. Because these forecasts are computed locally on the edge device, they remain available for operational planning even when connectivity to the Control Layer is interrupted.

Forecast API Endpoint

GET /forecast

Generate energy production forecast for a customer’s assets.

Query Parameters:

- customer_id (string, required): Identifier of the customer whose assets are being forecast.
- horizon_hours (integer, optional): Forecast horizon in hours, up to a maximum of 48.

Example Request:

curl -X GET "https://api.asoba.co/forecast?customer_id=demo-customer&horizon_hours=48" \
  -H "X-API-Key: your-api-key"

Example Response:

{
  "forecast_id": "fc-demo-12345678",
  "customer_id": "demo-customer",
  "site_id": "demo-site-cape-town-01",
  "generated_at": "2025-01-15T12:00:00Z",
  "horizon_hours": 48,
  "forecasts": [
    {
      "timestamp": "2025-01-15T13:00:00Z",
      "predicted_power_kw": 1250.5,
      "confidence_interval": {
        "lower": 1180.2,
        "upper": 1320.8
      },
      "weather_conditions": {
        "temperature_c": 25.3,
        "irradiance_w_m2": 850.2,
        "wind_speed_m_s": 3.2
      }
    }
  ],
  "metadata": {
    "model_version": "v2.1.0",
    "training_data_points": 8760,
    "model_accuracy": 0.94
  }
}
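The request and response above can be exercised with a short client script. The following is a minimal sketch using only the Python standard library; the endpoint, header, and response fields come from the examples above, while the helper names are illustrative:

```python
import json
import urllib.parse
import urllib.request

API_BASE = "https://api.asoba.co"  # base URL from the example request above

def build_forecast_url(customer_id, horizon_hours=48):
    """Build the GET /forecast URL with its query parameters."""
    query = urllib.parse.urlencode(
        {"customer_id": customer_id, "horizon_hours": horizon_hours}
    )
    return f"{API_BASE}/forecast?{query}"

def get_forecast(customer_id, api_key, horizon_hours=48):
    """Call GET /forecast; requires network access and a valid API key."""
    req = urllib.request.Request(
        build_forecast_url(customer_id, horizon_hours),
        headers={"X-API-Key": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def summarize(forecast):
    """Reduce a forecast response to a few headline numbers."""
    powers = [p["predicted_power_kw"] for p in forecast["forecasts"]]
    return {
        "site_id": forecast["site_id"],
        "points": len(powers),
        "peak_kw": max(powers),
    }
```

Applied to the example response above, summarize() reports the site, the number of hourly points, and the peak predicted power.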

Lambda Services

The Edge Layer is powered by several AWS Lambda functions that handle forecasting and data processing:

Forecasting API Service

ona-forecastingApi-prod — serves forecast requests for the GET /forecast endpoint.

Interpolation Service

ona-interpolationService-prod — fills gaps in incoming telemetry before it is used for forecasting.

Data Standardization Service

ona-dataStandardizationService-prod — normalizes raw asset data into a common format.
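For testing or batch runs, these functions can also be invoked directly with the AWS SDK. A hedged sketch, assuming standard Lambda invocation; the payload shape is illustrative and not documented here:

```python
import json

# Lambda function names listed above; the payload shape below is an assumption.
FUNCTIONS = {
    "forecast": "ona-forecastingApi-prod",
    "interpolate": "ona-interpolationService-prod",
    "standardize": "ona-dataStandardizationService-prod",
}

def build_payload(customer_id, horizon_hours=48):
    """Illustrative event payload; the real event schema is not documented here."""
    return json.dumps({"customer_id": customer_id, "horizon_hours": horizon_hours})

def invoke(service, payload):
    """Invoke one of the Edge Layer Lambdas (requires AWS credentials)."""
    import boto3  # imported lazily so the pure helpers above work without AWS

    client = boto3.client("lambda")
    resp = client.invoke(FunctionName=FUNCTIONS[service], Payload=payload)
    return json.load(resp["Payload"])
```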


Data Storage and Resilience

Local Data Buffer

Each edge node maintains a 48-hour local data buffer to ensure:

- Forecasting continues during connectivity outages
- Telemetry collected while the link to the Control Layer is down is not lost
- Buffered data can be synchronized once connectivity is restored
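The retention behavior of such a buffer can be sketched with SQLite, which is a common choice on low-power devices. The schema and class below are illustrative, not the platform's actual implementation; only the 48-hour window comes from the text above:

```python
import sqlite3
import time

BUFFER_SECONDS = 48 * 3600  # 48-hour retention window, as described above

class LocalBuffer:
    """Minimal sketch of a rolling 48-hour telemetry buffer (schema is illustrative)."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS telemetry "
            "(ts REAL, power_kw REAL, synced INTEGER DEFAULT 0)"
        )

    def record(self, power_kw, ts=None):
        """Store one reading and prune anything older than the retention window."""
        ts = time.time() if ts is None else ts
        self.db.execute(
            "INSERT INTO telemetry (ts, power_kw) VALUES (?, ?)", (ts, power_kw)
        )
        self.db.execute("DELETE FROM telemetry WHERE ts < ?", (ts - BUFFER_SECONDS,))
        self.db.commit()

    def unsynced(self):
        """Rows still waiting to be uploaded once connectivity returns."""
        return self.db.execute(
            "SELECT ts, power_kw FROM telemetry WHERE synced = 0 ORDER BY ts"
        ).fetchall()
```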

Cloud Storage (S3)

Input Bucket: sa-api-client-input

Output Bucket: sa-api-client-output
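Working with these buckets from Python might look like the following sketch. The output-bucket prefix follows the layout shown in the Model Artifacts section; the input bucket's key layout is not documented here, so the key is caller-supplied:

```python
INPUT_BUCKET = "sa-api-client-input"
OUTPUT_BUCKET = "sa-api-client-output"

def model_prefix(customer_id, version):
    """Key prefix for model artifacts, per the layout in the Model Artifacts section."""
    return f"customer_tailored/{customer_id}/models/{version}/"

def upload_input(local_path, key):
    """Upload a data file to the input bucket (requires AWS credentials).

    The input bucket's key layout is not documented here, so the key is
    left to the caller.
    """
    import boto3  # imported lazily so model_prefix works without AWS installed

    boto3.client("s3").upload_file(local_path, INPUT_BUCKET, key)
```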

DynamoDB Tables

ona-platform-locations

ona-platform-weather-cache


Local Model Execution

Model Deployment

Predictive AI models are deployed to edge devices and execute locally, so inference does not depend on a round trip to the cloud.

Model Performance

Typical model metrics (model version, number of training data points, accuracy) are reported in the metadata block of each forecast response, as shown in the example above.

Model Artifacts

Models are stored in S3 with the following structure:

s3://sa-api-client-output/customer_tailored/{customer_id}/models/{version}/
├── model.h5              # Trained model weights
├── encoders.pkl          # Feature encoders
└── config.json           # Model configuration
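An edge device that has pulled these artifacts can verify the download and load them. A sketch under two stated assumptions: model.h5 is a Keras model and encoders.pkl is a pickled object, as the file names suggest but the text does not confirm:

```python
import json
import os
import pickle

# The three files from the S3 layout shown above.
REQUIRED_ARTIFACTS = ("model.h5", "encoders.pkl", "config.json")

def missing_artifacts(local_dir):
    """Return the artifact files from the layout above that are absent locally."""
    return [
        name
        for name in REQUIRED_ARTIFACTS
        if not os.path.exists(os.path.join(local_dir, name))
    ]

def load_artifacts(local_dir):
    """Load all three artifacts; assumes a Keras model and a pickled encoder object."""
    from tensorflow import keras  # heavy dependency kept local to this function

    model = keras.models.load_model(os.path.join(local_dir, "model.h5"))
    with open(os.path.join(local_dir, "encoders.pkl"), "rb") as f:
        encoders = pickle.load(f)
    with open(os.path.join(local_dir, "config.json")) as f:
        config = json.load(f)
    return model, encoders, config
```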

Resilience and Independence

Graceful Degradation

Each edge node operates independently if the central connection fails:

- Local forecasting continues using on-device models and the 48-hour data buffer
- Outbound data is queued locally while the link is down
- Buffered data is synchronized with the Control Layer once connectivity returns
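This fallback behavior can be sketched as a small cache wrapper: try the live source, and on a connectivity error serve the last good result as long as it is within the 48-hour window. The class and names are illustrative, not the platform's actual implementation:

```python
import time

class ForecastCache:
    """Sketch of graceful degradation: serve the last good forecast when the
    Control Layer is unreachable (structure is illustrative)."""

    def __init__(self, fetch, max_age_s=48 * 3600):
        self.fetch = fetch          # callable that talks to the Control Layer
        self.max_age_s = max_age_s  # matches the 48-hour local buffer
        self._cached = None
        self._cached_at = None

    def get(self, now=None):
        now = time.time() if now is None else now
        try:
            self._cached = self.fetch()
            self._cached_at = now
        except OSError:
            # Connectivity lost: fall back to the cached forecast, but only
            # while it is still inside the retention window.
            if self._cached is None or now - self._cached_at > self.max_age_s:
                raise
        return self._cached
```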

Connectivity Requirements


Integration with Control Layer

The Edge Layer communicates with the Control Layer to synchronize buffered data, receive deployed model updates, and report forecasts.


Performance Characteristics

Latency

Throughput

Resource Usage


Next Steps


Get Help & Stay Updated

Contact Support

For technical assistance, feature requests, or any other questions, please reach out to our dedicated support team.

Email Support · Join Our Discord

Subscribe to Updates
