Asoba Zorora Documentation

Research Workflow Guide

Deep dive into Zorora’s 6-phase research pipeline and deep research capabilities.

Overview

Zorora’s deep research workflow searches across academic databases, web sources, and newsroom articles, then synthesizes findings with credibility scoring and citation graphs. The workflow is designed to provide comprehensive, well-sourced answers to research questions.

Research Results with Citations Screenshot
(Placeholder - Add screenshot showing research results with citations, credibility scores, and citation graph)

6-Phase Research Pipeline

Phase 1: Parallel Source Aggregation

What happens: Academic databases, web sources, and newsroom articles are all queried in parallel for the research question.

Output: Raw sources from all three categories
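
Conceptually, Phase 1 fans the query out to all three source categories at once. A minimal sketch of that pattern, using hypothetical stand-in fetchers (search_academic, search_web, and search_news are illustrations, not real Zorora functions):

from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for Zorora's internal per-category fetchers
def search_academic(query): return [{"title": f"paper on {query}", "category": "academic"}]
def search_web(query): return [{"title": f"page on {query}", "category": "web"}]
def search_news(query): return [{"title": f"article on {query}", "category": "news"}]

def aggregate_sources(query):
    """Query all three categories in parallel and merge the raw results."""
    fetchers = [search_academic, search_web, search_news]
    with ThreadPoolExecutor(max_workers=3) as pool:
        batches = list(pool.map(lambda fetch: fetch(query), fetchers))
    return [source for batch in batches for source in batch]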

Phase 2: Citation Following

What happens: Citations found in the aggregated sources are followed, pulling in additional related sources.

Depth Levels: How far citations are followed is governed by the selected research depth (see Research Depth Levels below).

Output: Extended source set with citation relationships
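
Conceptually, citation following is a bounded breadth-first walk over each source's citations. A sketch under assumptions (the citations field and the fetch_source helper are hypothetical, not part of Zorora's documented API):

from collections import deque

def fetch_source(source_id):
    # Hypothetical lookup; Zorora would retrieve the cited work here
    return {"id": source_id, "citations": []}

def follow_citations(initial_sources, max_depth):
    """Breadth-first walk over cited works, bounded by the research depth."""
    seen = {s["id"] for s in initial_sources}
    queue = deque((s, 0) for s in initial_sources)
    extended = list(initial_sources)
    while queue:
        source, depth = queue.popleft()
        if depth >= max_depth:
            continue
        for cited_id in source.get("citations", []):
            if cited_id not in seen:
                seen.add(cited_id)
                cited = fetch_source(cited_id)
                extended.append(cited)
                queue.append((cited, depth + 1))
    return extended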

Phase 3: Cross-Referencing

What happens: Claims are compared across sources and grouped, counting how many independent sources agree on each claim.

Output: Grouped claims with agreement counts
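
One way to picture cross-referencing: normalize each extracted claim, group identical claims, and count how many distinct sources back each group. A simplified sketch (claim extraction itself is assumed to have happened upstream):

from collections import defaultdict

def cross_reference(claims):
    """Group (source_id, claim_text) pairs and count per-claim agreement."""
    groups = defaultdict(set)
    for source_id, text in claims:
        groups[text.strip().lower()].add(source_id)
    return [{"claim": c, "agreement": len(ids)} for c, ids in groups.items()]

claims = [(1, "Solar capacity grew in 2023"), (2, "solar capacity grew in 2023")]
print(cross_reference(claims))  # [{'claim': 'solar capacity grew in 2023', 'agreement': 2}]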

Phase 4: Credibility Scoring

What happens: Every source is scored for credibility on a 0.0-1.0 scale.

Credibility Categories: High (0.7-1.0), Medium (0.4-0.7), Low (0.0-0.4).

Output: Sources with credibility scores and categories
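
How the underlying score is computed is internal to Zorora, but the category boundaries follow directly from the ranges listed under Credibility Scores below:

def credibility_category(score):
    """Map a 0.0-1.0 credibility score onto the documented categories."""
    if score >= 0.7:
        return "High"
    if score >= 0.4:
        return "Medium"
    return "Low"

print(credibility_category(0.85))  # High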

Phase 5: Citation Graph Building

What happens: The citation relationships collected in Phase 2 are assembled into a graph linking sources to the works they cite.

Output: Citation graph structure
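
A citation graph can be represented as an adjacency map from each source to the sources it cites. A minimal sketch, reusing the hypothetical citations field from the Phase 2 example:

def build_citation_graph(sources):
    """Adjacency map: source id -> ids of the other known sources it cites."""
    known = {s["id"] for s in sources}
    return {s["id"]: [c for c in s.get("citations", []) if c in known] for s in sources}

sources = [{"id": "a", "citations": ["b"]}, {"id": "b", "citations": []}]
print(build_citation_graph(sources))  # {'a': ['b'], 'b': []}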

Phase 6: Synthesis

What happens: The scored, cross-referenced findings are synthesized into a final answer with inline citations.

Output: Final synthesis with citations
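
Purely as an illustration of how synthesis can consume the earlier phases' output, a numbered, citation-ready context might be assembled along these lines (the exact prompting, and the title/score keys, are assumptions rather than Zorora's actual internals):

def build_synthesis_context(question, sources):
    """Number each source so the synthesis can cite it as [1], [2], ..."""
    lines = [f"Question: {question}", "Sources:"]
    for i, s in enumerate(sources, start=1):
        lines.append(f"[{i}] {s['title']} (credibility: {s['score']:.2f})")
    return "\n".join(lines)

print(build_synthesis_context("example question", [{"title": "Example paper", "score": 0.82}]))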

Using the Research Workflow

Terminal Interface

Automatic Detection:

[1] ⚙ > What are the latest developments in large language model architectures?

The system automatically detects research intent and executes the deep research workflow.

Force Research:

[2] ⚙ > /search latest developments in renewable energy policy

The /search command runs the deep research workflow explicitly, bypassing automatic intent detection.

Web UI

  1. Open http://localhost:5000
  2. Enter research question
  3. Select depth level (Quick/Balanced/Thorough)
  4. Click “Start Research”
  5. View results with synthesis, sources, and credibility scores

API (Programmatic Access)

from engine.research_engine import ResearchEngine

engine = ResearchEngine()

# Run the full 6-phase pipeline; depth=1 is the Quick level
state = engine.deep_research("Your research question", depth=1)

# The returned state carries the final synthesis and the source count
print(state.synthesis)
print(f"Total sources: {state.total_sources}")

Research Depth Levels

Quick (depth=1)

When to use:

What it does:

Time: ~25-35 seconds

Balanced (depth=2)

When to use:

What it does:

Time: ~35-50 seconds

Status: Coming soon

Thorough (depth=3)

When to use:

What it does:

Time: ~50-70 seconds

Status: Coming soon

Research Storage

Automatic Storage

Research is automatically saved to:

SQLite Database:

JSON Files:

Accessing Saved Research

Python:

from engine.research_engine import ResearchEngine

engine = ResearchEngine()
# Search past research
results = engine.search_research(query="LLM architectures", limit=10)
# Load specific research
research_data = engine.load_research(results[0]['research_id'])

Web UI API:

# Get research history (quote the URL so the shell does not interpret "?")
curl "http://localhost:5000/api/research/history?limit=10"

# Get specific research
curl "http://localhost:5000/api/research/<research_id>"
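
The same two endpoints can be called from Python with the third-party requests library; since the response fields are not documented here, this sketch just prints the JSON as returned:

import requests

BASE = "http://localhost:5000"

# Get research history (most recent 10 entries)
history = requests.get(f"{BASE}/api/research/history", params={"limit": 10}).json()
print(history)

# Get a specific research record (substitute a real id from the history)
research_id = "<research_id>"
record = requests.get(f"{BASE}/api/research/{research_id}").json()
print(record)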

Understanding Results

Synthesis

The synthesis provides: A comprehensive, well-sourced answer to the research question, with citations back to the underlying sources.

Sources

Each source includes: Its credibility score and category, plus its citation relationships in the graph.

Credibility Scores

High (0.7-1.0):

Medium (0.4-0.7):

Low (0.0-0.4):
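
In practice the categories are most useful for filtering. Keeping only High-credibility sources might look like this (the list-of-dicts shape with a credibility key is an assumption for illustration):

# Hypothetical source entries; the real stored shape may differ
sources = [
    {"title": "Peer-reviewed study", "credibility": 0.85},
    {"title": "Personal blog post", "credibility": 0.30},
]

high = [s for s in sources if s["credibility"] >= 0.7]
print([s["title"] for s in high])  # ['Peer-reviewed study']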

Best Practices

Writing Research Queries

Be Specific: Narrow, concrete questions (e.g., "latest developments in large language model architectures") return better sources than broad ones ("tell me about AI").

Include Context: Mention the timeframe, domain, or region you care about so sources can be matched to it.

Use Research Keywords: Phrases such as "latest developments", "research on", or "evidence for" make research intent easier for automatic detection to recognize.

Choosing Depth Levels

Quick: Fastest turnaround (~25-35 seconds); currently the only available level.

Balanced: Broader research (~35-50 seconds); listed as coming soon.

Thorough: Most exhaustive (~50-70 seconds); listed as coming soon.

Interpreting Results

Check Credibility: Give the most weight to High (0.7-1.0) sources and treat Low (0.0-0.4) sources as leads to verify rather than evidence.

Follow Citations: Use the citation graph to see which sources support one another and where a claim originates.

Consider Context: A synthesis is only as current and as broad as its sources; check dates and coverage before relying on a claim.

Troubleshooting

Research Not Triggering

Problem: Query doesn’t trigger research workflow

Solution: Force the workflow with the /search command, or rephrase the query so the research intent is explicit.

Slow Research

Problem: Research takes too long

Solution: Use the Quick depth level, which completes in roughly 25-35 seconds; higher depth levels follow more citations and take longer.

No Sources Found

Problem: Research returns no sources

Solution: Broaden or rephrase the query; very narrow or very recent topics may have little published coverage yet.

Low Credibility Scores

Problem: All sources have low credibility

Solution: Low scores may reflect the available coverage rather than an error; rephrase toward topics with academic or established sources, and cross-check claims against the higher-scoring results.

See Also