# Getting Started with VEX

Get up and running with VEX in under 5 minutes.
## Prerequisites
- Rust 1.75+ (stable toolchain)
- Git
## Installation

### Add to Your Project
```toml
# Cargo.toml
[dependencies]
vex-core = { git = "https://github.com/provnai/vex" }
vex-adversarial = { git = "https://github.com/provnai/vex" }
vex-llm = { git = "https://github.com/provnai/vex" }
```
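As written, these Git dependencies track the repository's default branch, so a fresh build can pick up newer commits. For reproducible builds you can pin each crate to a specific commit with Cargo's `rev` key (the SHA below is a placeholder, not a real release):

```toml
# Pin to a known commit for reproducible builds.
# Replace <commit-sha> with an actual commit from the repository.
[dependencies]
vex-core = { git = "https://github.com/provnai/vex", rev = "<commit-sha>" }
```

Cargo also accepts `branch = "..."` or `tag = "..."` in place of `rev` if the repository publishes release tags.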
### Clone and Build
```bash
git clone https://github.com/provnai/vex.git
cd vex
cargo build --workspace --release
```
## Quick Example
```rust
use vex_core::{Agent, AgentConfig};
use vex_llm::MockProvider;

#[tokio::main]
async fn main() {
    // Create an agent
    let agent = Agent::new(AgentConfig {
        name: "Researcher".to_string(),
        role: "You are a helpful research assistant".to_string(),
        max_depth: 3,
        spawn_shadow: true,
    });

    // Use with an LLM provider
    let llm = MockProvider::smart();
    let response = llm.ask("What is quantum computing?").await.unwrap();
    println!("Response: {}", response);
}
```
## Running the Demos
```bash
# Set up API key (optional; falls back to the mock provider if not set)
export DEEPSEEK_API_KEY="sk-..."

# Run the research agent demo
cargo run -p vex-demo

# Run the fraud detection demo
cargo run -p vex-demo --bin fraud-detector

# Interactive chat
cargo run -p vex-demo --bin interactive
```
## Running the API Server
```bash
export VEX_JWT_SECRET="your-32-char-secret-here"
cargo run -p vex-api
# Server starts on 0.0.0.0:3000
```
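The secret must be at least 32 characters long. One convenient way to generate a suitably random value (assuming `openssl` is available on your system) is:

```shell
# 32 random bytes, hex-encoded, yields a 64-character secret
export VEX_JWT_SECRET="$(openssl rand -hex 32)"
echo "${#VEX_JWT_SECRET}"  # prints 64
```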
## Environment Variables
| Variable | Purpose | Required |
|---|---|---|
| `DEEPSEEK_API_KEY` | DeepSeek LLM API key | For DeepSeek |
| `OPENAI_API_KEY` | OpenAI API key | For OpenAI |
| `OLLAMA_URL` | Ollama server URL | For Ollama |
| `VEX_JWT_SECRET` | JWT secret (32+ chars) | For API |
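These variables are read at runtime. As an illustration of the pattern (the names below match the table, but the selection logic itself is a sketch, not VEX's actual implementation — that lives in `vex-llm`), an application might pick a backend based on which variables are set, falling back to the mock provider:

```rust
use std::env;

/// Illustrative provider selection: prefer whichever backend is
/// configured, falling back to the mock provider when nothing is set.
fn pick_provider() -> &'static str {
    if env::var("DEEPSEEK_API_KEY").is_ok() {
        "deepseek"
    } else if env::var("OPENAI_API_KEY").is_ok() {
        "openai"
    } else if env::var("OLLAMA_URL").is_ok() {
        "ollama"
    } else {
        "mock"
    }
}

fn main() {
    println!("provider: {}", pick_provider());
}
```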
## Next Steps
- API Reference — Full Rustdoc documentation
- Examples — More code examples
- Architecture — System design