Asoba Zorora Documentation

Getting Started with Nehanda

Nehanda v1 is a specialized 7B parameter language model fine-tuned for intelligence assessment, signal detection, and global systems analysis. There are two ways to run it: locally using LM Studio or in the cloud using Hugging Face Inference Endpoints.

This guide walks you through both options, starting with getting access to the model.

Prerequisites

Step 1: Request Access

Nehanda v1 is a gated model. You need whitelist access before you can download or deploy it.

  1. Visit the model page at asoba/nehanda-v1-7b on Hugging Face.
  2. Request Whitelist Access using the Google Form.
  3. Once approved, you will be able to download the model weights or deploy to an inference endpoint.

The Nehanda v1 model page on Hugging Face — click Deploy > Inference Endpoints to deploy to the cloud
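
Once you are approved, you can optionally confirm access from a script before setting up LM Studio or an endpoint. A minimal check with huggingface_hub, assuming you are logged in via huggingface-cli login or have HF_TOKEN set:

from huggingface_hub import HfApi
from huggingface_hub.utils import GatedRepoError

api = HfApi()
try:
    # Succeeds only if your account can see the gated repository.
    info = api.model_info("asoba/nehanda-v1-7b")
    print(f"Access granted: {info.id}")
except GatedRepoError:
    print("The model is gated and your whitelist request has not been approved yet.")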


Option A: Run Locally with LM Studio

For local inference on your own hardware. This uses the GGUF quantized version of Nehanda (~4.4 GB), which runs on most machines with a modern GPU or Apple Silicon.

Download the Model

  1. Open LM Studio and go to My Models.
  2. Search for nehanda in the search bar.
  3. Select asoba/nehanda-v1-7b-GGUF from the results.
  4. Download the Q4_K_M quantization (4.37 GB), which is recommended for the best balance of quality and performance.

The Nehanda v1 GGUF model in LM Studio — select Q4_K_M for the best quality/performance balance
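
If you prefer to fetch the weights outside LM Studio, you can pull the GGUF file directly with huggingface_hub. A minimal sketch; the filename below is an assumption, so check the repository's file listing for the exact Q4_K_M artifact name:

from huggingface_hub import hf_hub_download

# Downloads the quantized weights to the local Hugging Face cache and returns the path.
path = hf_hub_download(
    repo_id="asoba/nehanda-v1-7b-GGUF",
    filename="nehanda-v1-7b-Q4_K_M.gguf",  # hypothetical filename, verify on the repo
)
print(path)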

Load the Model

  1. Once downloaded, Nehanda will appear in your My Models list.

Nehanda v1 appears in your local models list after download

  2. Click Load Model to open the load settings.
  3. Set the recommended parameters:
    • Context Length: 4096 tokens
    • GPU Offload: 32 layers (full offload)
    With these settings, estimated memory usage is ~5 GB of GPU memory.
  4. Click Load Model to start.

Load model settings — 4096 context length and full GPU offload recommended
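
These settings correspond to standard llama.cpp parameters, which can be useful if you ever load the GGUF outside LM Studio. A rough sketch with llama-cpp-python (not required for this guide), assuming it is installed and using a hypothetical local path for the weights:

from llama_cpp import Llama

llm = Llama(
    model_path="nehanda-v1-7b-Q4_K_M.gguf",  # hypothetical path to the downloaded GGUF
    n_ctx=4096,        # Context Length: 4096 tokens
    n_gpu_layers=32,   # GPU Offload: 32 layers (full offload)
)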

Start a Chat

Once loaded, click Use in New Chat. Use the following system prompt for intelligence analysis:

You are an intelligence assessment specialist. Your role is to analyze
provided documents for indicators of structural shifts, regulatory
capture, and network dependencies. Always cite specific evidence from
the provided context. State clearly when information is insufficient
to draw a conclusion.

Example query:

Analyze the following report for indicators of regulatory capture
and identify any entities with undisclosed financial dependencies.
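
If you would rather script against the model than use the chat UI, LM Studio can also expose an OpenAI-compatible local server (enabled from its server/Developer view, default port 1234). A minimal sketch, assuming that server is running and Nehanda is the loaded model; the exact model identifier string may differ on your machine:

import requests

SYSTEM_PROMPT = (
    "You are an intelligence assessment specialist. Your role is to analyze "
    "provided documents for indicators of structural shifts, regulatory "
    "capture, and network dependencies. Always cite specific evidence from "
    "the provided context. State clearly when information is insufficient "
    "to draw a conclusion."
)

response = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "nehanda-v1-7b",  # use the identifier LM Studio shows for the loaded model
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Analyze the following report for indicators of regulatory capture and identify any entities with undisclosed financial dependencies."},
        ],
        "temperature": 0.3,
    },
    timeout=300,
)
print(response.json()["choices"][0]["message"]["content"])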

Option B: Deploy to HF Inference Endpoints

For production or team use. Runs on cloud GPUs with full API access, auto-scaling, and scale-to-zero billing.

Create the Endpoint

  1. Go to Hugging Face Inference Endpoints.
  2. Click New Endpoint and search for nehanda.
  3. Select asoba/nehanda-v1-7b from the Hub Models results.

Search for "nehanda" and select asoba/nehanda-v1-7b from Hub Models

Configure Hardware

  1. Select the following configuration:
    • Cloud Provider: Amazon Web Services
    • GPU: Nvidia L40S (1x GPU, 48 GB VRAM)
    • Region: us-east-1 (N. Virginia)
    • Cost: ~$1.80/hour per running replica
    • Authentication: Private (recommended)
  2. Enable Scale-to-zero — the endpoint will automatically stop after 1 hour of inactivity, so you only pay while it’s running.
  3. Click Create Endpoint.

Recommended configuration: AWS, Nvidia L40S, Private authentication, scale-to-zero enabled
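
The same configuration can also be created from Python with huggingface_hub instead of the web UI. A hedged sketch; the instance_size and instance_type strings are assumptions, so copy the exact values shown for the L40S tier in the Inference Endpoints UI:

from huggingface_hub import create_inference_endpoint

endpoint = create_inference_endpoint(
    "nehanda-v1-7b",                   # endpoint name of your choice
    repository="asoba/nehanda-v1-7b",
    framework="pytorch",
    task="text-generation",
    accelerator="gpu",
    vendor="aws",
    region="us-east-1",
    instance_size="x1",                # assumed size label for 1x L40S
    instance_type="nvidia-l40s",       # assumed instance identifier, verify in the UI
    type="private",
    min_replica=0,                     # allows scale-to-zero
    scale_to_zero_timeout=60,          # minutes of inactivity before stopping
)
print(endpoint.status)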

Wait for Initialization

  1. The endpoint will take a few minutes to start. The status will show Initializing while the model weights are loaded.
  2. Once the status changes to Running, copy the Endpoint URL — you will need it for API calls.

The endpoint initializing — once Running, copy the Endpoint URL for API calls
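
You can also watch for readiness programmatically rather than refreshing the dashboard. A small sketch with huggingface_hub, assuming the endpoint was named nehanda-v1-7b:

from huggingface_hub import get_inference_endpoint

endpoint = get_inference_endpoint("nehanda-v1-7b")
endpoint.wait()          # blocks until the endpoint reports a running status
print(endpoint.url)      # the Endpoint URL to use in API calls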

Make Your First API Call

Once the endpoint is running, you can send requests using cURL or Python.

cURL:

curl https://your-endpoint-url.endpoints.huggingface.cloud/v1/chat/completions \
  -H "Authorization: Bearer $HF_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "asoba/nehanda-v1-7b",
    "messages": [
      {
        "role": "system",
        "content": "You are an intelligence assessment specialist."
      },
      {
        "role": "user",
        "content": "Analyze the following for indicators of regulatory capture..."
      }
    ],
    "max_tokens": 2048,
    "temperature": 0.3
  }'

Python (with huggingface_hub):

from huggingface_hub import InferenceClient

client = InferenceClient(
    model="https://your-endpoint-url.endpoints.huggingface.cloud",
    token="hf_your_token_here",
)

response = client.chat.completions.create(
    messages=[
        {
            "role": "system",
            "content": "You are an intelligence assessment specialist.",
        },
        {
            "role": "user",
            "content": "Analyze the following for indicators of regulatory capture...",
        },
    ],
    max_tokens=2048,
    temperature=0.3,
)

print(response.choices[0].message.content)

Replace your-endpoint-url with the Endpoint URL from the dashboard, and hf_your_token_here with your Hugging Face API token.
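
For long-form analyses you may prefer to stream tokens as they are generated rather than wait for the full completion. A sketch reusing the same client as above; streaming here is assumed to follow the OpenAI-style interface:

stream = client.chat.completions.create(
    messages=[
        {"role": "system", "content": "You are an intelligence assessment specialist."},
        {"role": "user", "content": "Analyze the following for indicators of regulatory capture..."},
    ],
    max_tokens=2048,
    temperature=0.3,
    stream=True,
)

# Each chunk carries an incremental delta; print tokens as they arrive.
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="", flush=True)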


Next Steps