
Agent-Native · Concurrent DAG · LLM

Empowering the New Era of Agent-Native Programming

Say goodbye to verbose glue code, complex prompt concatenation, and fragile JSON parsing. Nexa elevates intent routing, multi-agent collaboration, and pipeline streaming into core syntax, enabling you to build hardcore LLM concurrency graphs with ultimate elegance.


🔥 Core Advantage: Code Comparison

Nexa simplifies complex multi-agent collaboration into elegant declarative syntax. The examples below contrast Nexa with the traditional approach:

Example 1: Agent Definition & Invocation
🐍 Traditional Python + LangChain (12 lines)
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema import StrOutputParser

# Define chain
llm = ChatOpenAI(model="gpt-4", temperature=0.7)
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a professional English-Chinese translator"),
    ("human", "{input}")
])
chain = prompt | llm | StrOutputParser()

# Invoke
result = chain.invoke({"input": "Hello, World!"})
print(result)
✨ Nexa (4 lines)
agent Translator {
    role: "English-Chinese Translator",
    model: "gpt-4"
}

result = Translator.run("Hello, World!")
Key Advantage: From 12 lines down to 4, with no need to understand Chain, PromptTemplate, StrOutputParser, or other framework concepts. Agent definition is configuration; invocation is execution. (67% reduction)
---
Example 2: Pipeline Orchestration
🐍 Traditional Python + LangChain (18 lines)
import asyncio
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model="gpt-4")

async def pipeline(topic: str):
    # Step 1: Writing
    writer_prompt = f"Write an article about {topic}"
    draft = await llm.ainvoke(writer_prompt)

    # Step 2: Review
    reviewer_prompt = f"Review and identify issues: {draft.content}"
    review = await llm.ainvoke(reviewer_prompt)

    # Step 3: Polish
    editor_prompt = f"Polish based on review:\nDraft: {draft.content}\nReview: {review.content}"
    final = await llm.ainvoke(editor_prompt)

    return final.content

result = asyncio.run(pipeline("Artificial Intelligence"))
✨ Nexa (5 lines)
agent Writer { role: "Writer", prompt: "Write articles" }
agent Reviewer { role: "Reviewer", prompt: "Review articles" }
agent Editor { role: "Editor", prompt: "Polish articles" }

flow main {
    result = "Artificial Intelligence" >> Writer >> Reviewer >> Editor;
}
Key Advantage: The pipeline operator >> makes data flow crystal clear, with no need to manually pass intermediate variables or manage async context. The compiler automatically optimizes execution order. (72% reduction)
---
Example 3: Intent Routing
🐍 Traditional Python + re (17 lines)
import re
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model="gpt-4")

def route_request(user_input: str):
    # Hand-written regex - fragile and hard to maintain
    if re.search(r'weather|temperature|forecast', user_input, re.I):
        prompt = f"Answer weather question: {user_input}"
        return llm.invoke(prompt).content
    elif re.search(r'news|headline|latest', user_input, re.I):
        prompt = f"Answer news question: {user_input}"
        return llm.invoke(prompt).content
    elif re.search(r'translate|translation', user_input, re.I):
        prompt = f"Translate: {user_input}"
        return llm.invoke(prompt).content
    else:
        return llm.invoke(f"General chat: {user_input}").content

result = route_request("What's the weather like in Beijing?")
✨ Nexa (10 lines)
agent WeatherBot { role: "Weather Assistant" }
agent NewsBot { role: "News Assistant" }
agent Translator { role: "Translation Assistant" }
agent ChatBot { role: "Chat Assistant" }

flow main {
    result = match user_input {
        intent("Check weather") => WeatherBot.run(user_input),
        intent("Check news") => NewsBot.run(user_input),
        intent("Translate content") => Translator.run(user_input),
        _ => ChatBot.run(user_input)
    };
}
Key Advantage: intent() semantic matching replaces fragile regular expressions, using embedding vectors for semantic similarity matching, which is smarter and more flexible. (41% reduction)
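The idea behind intent() can be sketched in a few lines of plain Python: embed each intent description and the user input, then route to the handler whose description is most similar. The bag-of-words embed() below is a deliberately crude stand-in for a real embedding model, and the 0.1 fallback threshold is illustrative, not Nexa's actual implementation:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: bag-of-words counts (a real system uses a neural model)
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def route(user_input: str, intents: dict) -> str:
    # Score every intent description against the input, pick the best match
    scores = {name: cosine(embed(desc), embed(user_input))
              for name, desc in intents.items()}
    best, score = max(scores.items(), key=lambda kv: kv[1])
    return best if score > 0.1 else "ChatBot"  # fallback, like `_ =>`

intents = {
    "WeatherBot": "check the weather forecast temperature",
    "NewsBot": "check the latest news headlines",
    "Translator": "translate content between languages",
}
print(route("what is the weather forecast today", intents))  # WeatherBot
```

Swapping the toy embed() for a real embedding model is all it takes to turn this sketch into genuine semantic routing; the control flow stays identical.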
---
Example 4: Concurrent DAG Execution
🐍 Traditional Python + asyncio (23 lines)
import asyncio
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model="gpt-4")

async def researcher(task: str, name: str):
    prompt = f"{name} analysis: {task}"
    return {name: await llm.ainvoke(prompt)}

async def parallel_research(topic: str):
    # Execute 3 researchers in parallel
    tasks = [
        researcher(topic, "Tech Researcher"),
        researcher(topic, "Market Researcher"),
        researcher(topic, "Finance Researcher")
    ]
    results = await asyncio.gather(*tasks)

    # Summarize results
    combined = "\n".join([str(r) for r in results])
    summary_prompt = f"Summarize these reports:\n{combined}"
    final = await llm.ainvoke(summary_prompt)

    return final.content

result = asyncio.run(parallel_research("AI Industry Outlook"))
✨ Nexa (6 lines)
agent TechResearcher { role: "Tech Researcher" }
agent MarketResearcher { role: "Market Researcher" }
agent FinanceResearcher { role: "Finance Researcher" }
agent Summarizer { role: "Report Summarizer" }

flow main {
    result = "AI Industry Outlook" 
        |>> [TechResearcher, MarketResearcher, FinanceResearcher] 
        &>> Summarizer;
}
Key Advantage: The DAG operators |>> (fan-out) and &>> (merge) implement concurrent orchestration in a single line, with no need to understand asyncio, gather, or coroutines. (74% reduction)

📊 Code Volume Comparison Summary

| Scenario | Traditional | Nexa | Reduction |
| --- | --- | --- | --- |
| Agent Definition & Invocation | 12 lines | 4 lines | 67% |
| Pipeline Orchestration | 18 lines | 5 lines | 72% |
| Intent Routing | 17 lines | 10 lines | 41% |
| Concurrent DAG Execution | 23 lines | 6 lines | 74% |
| Average | 17.5 lines | 6.25 lines | 64% |

🆕 v1.0-alpha Revolutionary Update: The AVM Era

Nexa v1.0-alpha introduces the revolutionary Agent Virtual Machine (AVM) — a high-performance, securely isolated agent execution engine written in Rust:

🦀 Rust AVM Foundation

Transitioning from Python script transpilation to a standalone compiled Agent Virtual Machine written in Rust:

| Feature | Description |
| --- | --- |
| High-Performance Bytecode Interpreter | Native execution of compiled Nexa bytecode |
| Complete Compiler Frontend | Lexer → Parser → AST → Bytecode |
| 110+ Tests | Full-pipeline test coverage ensuring stability |

🔒 WASM Security Sandbox

Introducing WebAssembly in AVM to provide strong isolation for external tool execution:

  • wasmtime Integration - High-performance WASM runtime
  • Permission Grading - Four-tier model: None/Standard/Elevated/Full
  • Resource Limits - Constraints on memory, CPU, and execution time
  • Audit Logs - Complete operation audit tracking
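The four-tier permission model can be sketched as an ordered enum: an operation is allowed only if the tool's grant meets or exceeds the operation's requirement. The tier names come from the list above, but which capabilities each tier covers here is an assumption for illustration, not the sandbox's documented policy:

```python
from enum import IntEnum

class Permission(IntEnum):
    # Four-tier model: higher value = more capability (tier scopes assumed)
    NONE = 0      # pure computation only
    STANDARD = 1  # plus read-only filesystem access
    ELEVATED = 2  # plus network access
    FULL = 3      # unrestricted

REQUIRED = {
    "compute": Permission.NONE,
    "read_file": Permission.STANDARD,
    "http_get": Permission.ELEVATED,
    "spawn_process": Permission.FULL,
}

def allowed(tool_permission: Permission, operation: str) -> bool:
    # A tool may perform an operation if its grant meets the requirement
    return tool_permission >= REQUIRED[operation]

print(allowed(Permission.STANDARD, "read_file"))  # True
print(allowed(Permission.STANDARD, "http_get"))   # False
```

Using an ordered enum keeps the check to a single comparison, which is why graded (rather than flat) permission models are cheap to enforce on every sandbox call.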

⚡ Smart Scheduler

Dynamic allocation of concurrency resources at the AVM layer based on system load:

  • Priority Queue - Task scheduling based on Agent priority
  • Load Balancing - Strategies: RoundRobin, LeastLoaded, Adaptive
  • DAG Topological Sorting - Automatic dependency resolution and parallelism analysis
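The "DAG Topological Sorting" bullet is standard Kahn's algorithm grouped into levels: every node in a level has all its dependencies satisfied, so the whole level can run concurrently. This is a minimal sketch of that analysis, not the AVM's actual scheduler code:

```python
from collections import defaultdict

def parallel_levels(edges):
    """Kahn's algorithm, grouped into levels a scheduler can run in parallel."""
    indeg = defaultdict(int)
    succ = defaultdict(list)
    nodes = set()
    for a, b in edges:  # edge (a, b): a must finish before b starts
        succ[a].append(b)
        indeg[b] += 1
        nodes |= {a, b}
    level = [n for n in nodes if indeg[n] == 0]
    levels = []
    while level:
        levels.append(sorted(level))
        nxt = []
        for n in level:
            for m in succ[n]:
                indeg[m] -= 1
                if indeg[m] == 0:  # all dependencies of m are done
                    nxt.append(m)
        level = nxt
    return levels

# The fan-out/merge graph from Example 4:
edges = [("input", "Tech"), ("input", "Market"), ("input", "Finance"),
         ("Tech", "Summarizer"), ("Market", "Summarizer"), ("Finance", "Summarizer")]
print(parallel_levels(edges))
# [['input'], ['Finance', 'Market', 'Tech'], ['Summarizer']]
```

The middle level contains the three researchers, which is exactly the parallelism the |>> operator exposes in Example 4.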

📄 Vector Virtual Memory Paging

AVM manages memory, automatically performing vectorized swapping of conversation history:

  • LRU/LFU/Hybrid Eviction Policies - Intelligent page replacement
  • Embedding Similarity Search - Loading based on semantic relevance
  • Transparent Page Loading - Seamless memory management
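Of the three eviction policies listed, LRU is the simplest to sketch: keep pages in access order and evict the least recently used one when the table is full. This is the general technique only; the AVM's page table, and its LFU/hybrid policies, are not shown:

```python
from collections import OrderedDict

class LRUPageCache:
    """Minimal LRU page table: when full, evict the least recently used page."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.pages = OrderedDict()  # page_id -> content, in access order

    def access(self, page_id, load=lambda pid: f"<page {pid}>"):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)     # mark as recently used
        else:
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)  # evict the LRU page
            self.pages[page_id] = load(page_id) # transparent page load
        return self.pages[page_id]

cache = LRUPageCache(capacity=2)
cache.access("turn-1")
cache.access("turn-2")
cache.access("turn-1")    # refresh turn-1
cache.access("turn-3")    # evicts turn-2, the least recently used
print(list(cache.pages))  # ['turn-1', 'turn-3']
```

An embedding-similarity policy replaces the "least recently used" criterion with "least semantically relevant to the current query", but the load/evict skeleton stays the same.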

Performance Comparison

| Metric | Python Transpiler | Rust AVM |
| --- | --- | --- |
| Compile Time | ~100ms | ~5ms |
| Startup Time | ~500ms | ~10ms |
| Memory Usage | ~100MB | ~10MB |
| Concurrent Agents | ~100 | ~10000 |

🚀 v1.0.1 - v1.0.4 Continuous Evolution

Since the v1.0-alpha release, Nexa has been rapidly iterating with powerful language features:

🔀 v1.0.1-beta: Traditional Control Flow & Python Escape Hatch

Providing more flexible programming capabilities for Agent development:

| Feature | Description |
| --- | --- |
| if / else if / else | Traditional conditional statements |
| for each | Collection iteration loop |
| while | Conditional loop |
| break / continue | Loop control statements |
| python! """...""" | Python code embedding escape hatch |
// Traditional control flow example
tasks = ["task1", "task2", "task3"];
for each task in tasks {
    if task == "critical" {
        HighPriorityAgent.run(task);
    } else {
        NormalAgent.run(task);
    }
}

// Python escape hatch example
result = python! """
    import json, statistics
    data = json.loads(raw_data)
    return statistics.mean(data)
"""

🎯 v1.0.2-beta: Semantic Types

Revolutionary type system that makes types carry semantic constraints:

// Types are not just formats, but include semantic meaning
type Email = string @ "valid email address format"
type PositiveInt = int @ "must be greater than 0"

protocol UserProfile {
    name: UserName,
    email: Email  // Automatically validates email format
}

🐄 v1.0.3-beta: COW Memory & Work-Stealing

Providing foundational support for advanced reasoning patterns:

| Feature | Description |
| --- | --- |
| COW Memory | O(1) state branching, enabling Tree-of-Thoughts |
| Work-Stealing Scheduler | Efficient concurrent scheduling based on the Actor model |
// Tree-of-Thought exploration
agent Thinker {
    memory: "cow"  // Enable COW memory
}

// Multi-path reasoning
branch1 = Thinker.run(problem) |>> "technical perspective";
branch2 = Thinker.run(problem) |>> "business perspective";
best = branch1 && branch2;  // Consensus merge

🐍 v1.0.4-beta: Python SDK COW Agent State

Python SDK adds COW Agent state management, enabling cross-language state branching:

# Using COW in Python SDK
from nexa import CowAgent

agent = CowAgent("analyzer")
branch1 = agent.branch()  # O(1) branch creation
branch2 = agent.branch()
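What O(1) branching buys can be illustrated with a tiny copy-on-write mapping: a branch layers an empty dict over the parent's state, so creating it costs nothing regardless of how large the conversation history is, and the branch's writes never touch the parent. CowAgent's real internals are not shown here; this is the general technique:

```python
from collections import ChainMap

class CowState:
    """Copy-on-write state: branches share the parent's layers by reference."""
    def __init__(self, maps=None):
        self._chain = ChainMap({}, *(maps or []))

    def branch(self):
        # O(1): the child reuses all parent layers, adding one empty layer
        return CowState(self._chain.maps)

    def __setitem__(self, key, value):
        self._chain[key] = value   # writes go to the child's own top layer

    def __getitem__(self, key):
        return self._chain[key]    # reads fall through to shared layers

root = CowState()
root["topic"] = "AI"
b1, b2 = root.branch(), root.branch()
b1["angle"] = "technical"
b2["angle"] = "business"
print(root["topic"], b1["angle"], b2["angle"])  # AI technical business
```

Each branch sees the shared "topic" but keeps its own "angle", which is exactly the isolation Tree-of-Thoughts exploration needs between reasoning paths.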

🎯 More Core Features

Beyond code simplicity, Nexa provides these powerful language-level features:

Strong Type Protocol Constraints (protocol & implements)

No more uncontrollable model string outputs! Native support for contract-based programming:

protocol ReviewResult {
    score: "int",
    summary: "string"
}

agent Reviewer implements ReviewResult { 
    prompt: "Review the code..."
}

Semantic Control Flow (loop until)

Control loop termination with natural language:

loop {
    draft = Writer.run(feedback);
    feedback = Critic.run(draft);
} until ("Article quality is excellent")

Native Test Framework (test & assert)

test "Translation test" {
    result = Translator.run("Hello, World!");
    assert "Contains Chinese translation" against result;
}

🎯 Design Philosophy: Write Flows, Not Glue

Developers reading this documentation have likely endured wrangling model hallucinations through verbose HTTP requests and nested if-else statements in general-purpose languages.

Nexa treats language-model prediction as a native computational primitive, confining its uncertainty within syntactic boundaries.

Comparison with Traditional Frameworks

| Feature | Traditional Python/LangChain | Nexa |
| --- | --- | --- |
| Agent Definition | Instantiate class + config dict | Native agent keyword |
| Flow Orchestration | Manual calls + state management | flow + pipeline operators |
| Intent Routing | if-else + regex | match + intent semantic matching |
| Output Constraints | Hand-written JSON Schema | protocol declarative constraints |
| Concurrency Control | asyncio + locks | DAG operators with auto-scheduling |
| Error Retry | try-except + loops | Built-in auto-retry mechanism |
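The "Error Retry" row compresses a familiar pattern. What a built-in retry mechanism replaces is roughly this try-except loop with exponential backoff; the attempt count and delay here are illustrative defaults, not Nexa's documented configuration:

```python
import time

def with_retry(fn, attempts=3, base_delay=0.01):
    """Call fn, retrying on failure with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: propagate the last error
            time.sleep(base_delay * 2 ** attempt)

calls = {"n": 0}
def flaky():
    # Simulates a transient model failure that succeeds on the third try
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient model error")
    return "ok"

print(with_retry(flaky))  # ok (succeeds on the third attempt)
```

Folding this boilerplate into the runtime is what lets every Agent invocation retry transparently instead of each call site repeating the loop.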

📚 Learning Path

Getting Started

  1. Quickstart - Master Nexa basics in 30 minutes
  2. Basic Syntax - Deep dive into all Agent properties
  3. Examples - View real-world code for various scenarios

Advanced Learning

  1. Advanced Features - DAG operators, concurrent processing
  2. Syntax Extensions - Advanced Protocol usage
  3. Best Practices - Enterprise development experience

Deep Dive

  1. Compiler Design - Full pipeline from AST to bytecode
  2. Architecture Evolution - Rust/WASM technology roadmap

Troubleshooting


🌟 Start Your Nexa Journey
