Asoba Zorora Documentation

Multi-Source Analysis Use Case

Cross-reference claims across academic papers, web sources, and newsroom articles to verify information and identify consensus or disagreement.

Scenario

You’re researching “renewable energy policy developments in 2025” and need to verify claims across academic papers, web sources, and newsroom articles, then identify where those sources agree or disagree.

Multi-Source Analysis Example Screenshot
(Placeholder - Add screenshot showing cross-referenced claims and credibility scores)

Step-by-Step Guide

Step 1: Start Research Query

Terminal:

zorora
[1] ⚙ > What are the latest renewable energy policy developments in 2025?

Web UI:

  1. Open http://localhost:5000
  2. Enter query: “latest renewable energy policy developments in 2025”
  3. Select depth: Balanced (for thorough cross-referencing)
  4. Click “Start Research”

API:

from engine.research_engine import ResearchEngine

engine = ResearchEngine()
state = engine.deep_research(
    "latest renewable energy policy developments in 2025",
    depth=2  # Balanced depth for cross-referencing
)

Step 2: Review Sources

Source Types:

Source Credibility:

Review Sources:

# Group sources by type
academic_sources = [s for s in state.sources_checked if s.source_type == "academic"]
web_sources = [s for s in state.sources_checked if s.source_type == "web"]
newsroom_sources = [s for s in state.sources_checked if s.source_type == "newsroom"]

print(f"Academic: {len(academic_sources)}")
print(f"Web: {len(web_sources)}")
print(f"Newsroom: {len(newsroom_sources)}")

Step 3: Analyze Cross-References

Grouped Claims:

Review Findings:

# High agreement claims (consensus)
high_agreement = [f for f in state.findings if f.agreement_count >= 5]
print(f"Consensus claims: {len(high_agreement)}")

# Low agreement claims (disagreement)
low_agreement = [f for f in state.findings if f.agreement_count <= 2]
print(f"Disputed claims: {len(low_agreement)}")

Step 4: Verify Claims

Check High-Agreement Claims:

for finding in high_agreement:
    print(f"Claim: {finding.claim}")
    print(f"Agreement: {finding.agreement_count} sources")
    print(f"Sources: {finding.sources}")
    # Verify by checking source URLs

Investigate Disagreements:

for finding in low_agreement:
    print(f"Claim: {finding.claim}")
    print(f"Agreement: {finding.agreement_count} sources")
    print(f"Sources: {finding.sources}")
    # Investigate why sources disagree

Step 5: Evaluate Credibility

Credibility Analysis:

# High credibility sources
high_cred = [s for s in state.sources_checked if s.credibility_score >= 0.7]
print(f"High credibility sources: {len(high_cred)}")

# Medium credibility sources
med_cred = [s for s in state.sources_checked if 0.4 <= s.credibility_score < 0.7]
print(f"Medium credibility sources: {len(med_cred)}")

# Low credibility sources
low_cred = [s for s in state.sources_checked if s.credibility_score < 0.4]
print(f"Low credibility sources: {len(low_cred)}")
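Beyond counting bands, it can help to see how credibility differs by source type. The helper below is a sketch using only the attributes shown above (`source_type`, `credibility_score`), not an engine API:

```python
from collections import defaultdict

def credibility_by_type(sources):
    """Average credibility per source type; pass state.sources_checked."""
    by_type = defaultdict(list)
    for s in sources:
        by_type[s.source_type].append(s.credibility_score)
    return {t: sum(scores) / len(scores) for t, scores in by_type.items()}
```

For example, `credibility_by_type(state.sources_checked)` might show academic sources averaging higher than web sources, which is useful context when weighing disputed claims.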

Step 6: Synthesize Analysis

Synthesis Includes:

Access Synthesis:

print(state.synthesis)
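To keep the synthesis alongside the consensus and disputed claims from Steps 3-4, you can write everything to a Markdown report. This helper is a sketch, not part of the engine; it assumes only the `state` attributes used earlier (`synthesis`, plus `findings` with `claim` and `agreement_count`):

```python
def write_report(state, path="analysis_report.md"):
    """Save the synthesis plus consensus/disputed claims as Markdown.
    (Helper sketch; not an engine API.)"""
    lines = ["# Multi-Source Analysis Report", "", state.synthesis, ""]
    lines.append("## Consensus claims (5+ sources)")
    for f in state.findings:
        if f.agreement_count >= 5:
            lines.append(f"- {f.claim} ({f.agreement_count} sources)")
    lines.append("")
    lines.append("## Disputed claims (2 or fewer sources)")
    for f in state.findings:
        if f.agreement_count <= 2:
            lines.append(f"- {f.claim} ({f.agreement_count} sources)")
    with open(path, "w", encoding="utf-8") as fh:
        fh.write("\n".join(lines) + "\n")
```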

Example Analysis

Claim: “Solar energy costs decreased 50% in 2024”

Cross-Reference Results:

Claim: “Battery storage is cost-effective for grid-scale applications”

Cross-Reference Results:

Claim: “Offshore wind is the future of renewable energy”

Cross-Reference Results:

Best Practices

Source Evaluation

Prioritize High-Credibility Sources:

Verify Claims:

Consider Context:

Cross-Reference Analysis

Identify Consensus:

Investigate Disagreements:

Credibility Assessment

High Credibility (0.7-1.0):

Medium Credibility (0.4-0.7):

Low Credibility (0.0-0.4):
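The bands above mirror the filters used in Step 5 and can be expressed as a small helper (illustrative only, not an engine function):

```python
def credibility_band(score):
    """Map a credibility_score to the bands listed above
    (same cutoffs as the Step 5 filters)."""
    if score >= 0.7:
        return "high"
    if score >= 0.4:
        return "medium"
    return "low"
```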

Advanced Usage

Comparative Analysis

Compare Multiple Queries:

[1] ⚙ > What are renewable energy policies in Europe?
[2] ⚙ > What are renewable energy policies in North America?
[3] ⚙ > What are renewable energy policies in Asia?

Compare Results:
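One way to compare results across the regional queries (a sketch; `shared_claims` is not an engine API) is to reduce each run's findings to a set of claim strings and intersect them:

```python
def shared_claims(claims_by_region):
    """Claims that appear in every region's findings.

    `claims_by_region` maps a region name to a set of claim strings,
    e.g. {f.claim for f in state.findings} for that region's run.
    """
    return set.intersection(*claims_by_region.values())
```

Claims in the intersection are developments reported across all regions; claims unique to one region can be found with set difference.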

Trend Analysis

Track Over Time:

# Research same topic multiple times
[1] ⚙ > What are renewable energy policy developments in Q1 2025?
[2] ⚙ > What are renewable energy policy developments in Q2 2025?

Analyze Trends:
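To analyze trends between the two runs, diff their claim sets. A minimal sketch, assuming each run's `state.findings` has been reduced to a set of claim strings:

```python
def claim_changes(earlier, later):
    """New, dropped, and persisting claims between two runs."""
    return {
        "new": later - earlier,         # appeared in the later run only
        "dropped": earlier - later,     # no longer reported
        "persisting": earlier & later,  # reported in both quarters
    }
```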

Source Credibility Tracking

Monitor Source Credibility:

# Track average credibility over past runs; `research_history` holds
# the IDs of previously saved research sessions.
for research_id in research_history:
    research_data = engine.load_research(research_id)
    sources = research_data['sources_checked']
    if not sources:
        continue  # avoid division by zero for runs with no sources
    avg_credibility = sum(s['credibility_score'] for s in sources) / len(sources)
    print(f"{research_id}: {avg_credibility:.2f}")

Troubleshooting

No Consensus Found

Problem: All claims have low agreement

Solution:
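If nothing clears the five-source consensus bar used in Step 3, one option (a sketch, not a built-in feature) is to lower the threshold and look for partial consensus before re-running the query at greater depth:

```python
def consensus_at(findings, threshold):
    """Claims meeting a given agreement threshold (Step 3 uses 5)."""
    return [f for f in findings if f.agreement_count >= threshold]
```

`consensus_at(state.findings, 3)` surfaces claims with moderate agreement; if even that is empty, re-running the query with a higher `depth` checks more sources.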

Conflicting Sources

Problem: Sources strongly disagree

Solution:

Low Credibility Scores

Problem: All sources have low credibility

Solution:

See Also