Asoba Zorora Documentation

Nehanda v1

Nehanda v1 is a specialized 7B-parameter language model fine-tuned for intelligence assessment, signal detection, and global systems analysis.

Built on the Mistral-7B architecture, Nehanda departs from standard “chat” behaviors to focus on forensic analysis. It is designed to trace multi-hop citations, detect operator signatures in noisy datasets, and provide evidence-based assessments of geopolitical and financial networks.

Named after the ancestral spirit of resistance and prophecy, Nehanda is built to see through hegemonic narratives and expose the structural realities beneath complex data.

Purpose & Capabilities

Unlike general-purpose LLMs optimized for fluency, Nehanda is optimized for provenance and structure. It is trained to reject fabrication and explicitly state when information is unknown.

Core Functions

  - Multi-hop citation tracing: following claims back through chained sources to their origin.
  - Operator-signature detection: surfacing recurring actor fingerprints in noisy datasets.
  - Evidence-based assessment: mapping geopolitical and financial networks from verifiable sources.
  - Explicit uncertainty handling: rejecting fabrication and stating plainly when information is unknown.

Integration with Zorora

Nehanda v1 is the default synthesis engine for the Zorora intelligence platform.

When operating within Zorora, Nehanda drives the synthesis layer for the /search and /research commands. It does not just summarize search results; it acts as an analyst that:

  1. Ingests the raw context curated by Zorora’s search tools.
  2. Triages the information based on credibility and relevance.
  3. Synthesizes a final answer that highlights information gaps, conflicting accounts, and consensus points (a sketch of this flow follows below).
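A minimal Python sketch of that triage-and-synthesize flow is shown below. Every name in it (Document, triage, build_synthesis_prompt, the credibility field) is an illustrative assumption, not part of the Zorora API; step 1 is represented by the docs argument arriving pre-curated from the search tools.

from dataclasses import dataclass

@dataclass
class Document:
    source: str
    credibility: float  # 0.0-1.0 score, assumed to be assigned by Zorora's search tools
    text: str

def triage(docs: list[Document], min_credibility: float = 0.5) -> list[Document]:
    """Step 2: keep only credible documents, ordered most credible first."""
    kept = [d for d in docs if d.credibility >= min_credibility]
    return sorted(kept, key=lambda d: d.credibility, reverse=True)

def build_synthesis_prompt(query: str, docs: list[Document]) -> str:
    """Step 3: frame a prompt that asks Nehanda to highlight gaps,
    conflicts, and consensus rather than merely summarize."""
    context = "\n\n".join(
        f"[{d.source} | credibility={d.credibility:.2f}]\n{d.text}" for d in docs
    )
    return (
        "You are an intelligence assessment specialist.\n"
        "### Instruction:\n"
        "Using only the context below, answer the query. Highlight information "
        "gaps, conflicting accounts, and consensus points. State explicitly "
        "when information is unknown.\n"
        f"### Query:\n{query}\n\n### Context:\n{context}"
    )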

Example Workflow

When a user runs a /research query in Zorora:

“Map the financial dependencies between the new energy consortium in Malta and verified state-owned entities.”

Zorora retrieves the raw documents, and Nehanda performs the analysis—flagging specific shell company structures or regulatory anomalies that match patterns learned during its “Systems Analysis” training phase.
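Continuing the sketch above, an end-to-end call for this query could look like the following. zorora_search is a purely hypothetical stand-in for the platform's retrieval step, not a documented Zorora API:

def zorora_search(query: str) -> list[Document]:
    """Hypothetical stand-in for step 1: Zorora's raw-document retrieval."""
    raise NotImplementedError("illustration only, not a documented Zorora API")

query = ("Map the financial dependencies between the new energy consortium "
         "in Malta and verified state-owned entities.")
prompt = build_synthesis_prompt(query, triage(zorora_search(query)))
# The prompt is then passed to Nehanda for generation (see Download & Usage below).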

Model Details

  - Base architecture: Mistral-7B
  - Parameters: 7B
  - Specialization: intelligence assessment, signal detection, and global systems analysis
  - Hugging Face model ID: asoba/nehanda-v1-7b

Download & Usage

The model weights are available for internal deployment via Hugging Face.

Download Nehanda v1 (Hugging Face)

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "asoba/nehanda-v1-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load in 4-bit to fit the 7B model on a single GPU. Passing load_in_4bit
# directly to from_pretrained is deprecated; use BitsAndBytesConfig instead.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)

# Example Intelligence Prompt
prompt = """You are an intelligence assessment specialist.
### Instruction:
Analyze the provided cable for indicators of regulatory capture.
...
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))