Crate vex_llm

§VEX LLM

LLM provider integrations for VEX agents.

§Supported Backends

Provider   Type      Key Required
--------   -------   ----------------
DeepSeek   API       DEEPSEEK_API_KEY
Mistral    API       MISTRAL_API_KEY
OpenAI     API       OPENAI_API_KEY
Ollama     Local     None
Mock       Testing   None

§Quick Start

use vex_llm::{MockProvider, LlmProvider};

#[tokio::main]
async fn main() {
    // Use mock provider for testing
    let llm = MockProvider::smart();
     
    // Ask a question
    let response = llm.ask("What is quantum computing?").await.unwrap();
    println!("{}", response);
}
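
The examples unwrap the result for brevity. A minimal sketch of handling the error branch instead, assuming ask returns a Result whose error case is the re-exported LlmError:

use vex_llm::{MockProvider, LlmProvider};

#[tokio::main]
async fn main() {
    let llm = MockProvider::smart();

    // Assumption: a failed call surfaces as the crate's LlmError;
    // here it is logged rather than unwrapped.
    match llm.ask("What is quantum computing?").await {
        Ok(response) => println!("{}", response),
        Err(err) => eprintln!("LLM call failed: {:?}", err),
    }
}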

§With DeepSeek

use vex_llm::{DeepSeekProvider, LlmProvider};

#[tokio::main]
async fn main() {
    let api_key = std::env::var("DEEPSEEK_API_KEY").unwrap();
    let llm = DeepSeekProvider::new(api_key);

    let response = llm.ask("Explain Merkle trees").await.unwrap();
    println!("{}", response);
}

§With Mistral

use vex_llm::{MistralProvider, LlmProvider};

#[tokio::main]
async fn main() {
    let api_key = std::env::var("MISTRAL_API_KEY").unwrap();
    let llm = MistralProvider::small(&api_key); // or large(), medium(), codestral()

    let response = llm.ask("Explain Merkle trees").await.unwrap();
    println!("{}", response);
}
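
§With Ollama

Ollama runs locally and needs no API key (see the table above). The constructor below is an assumption; a minimal sketch, with the real signature documented in the ollama module:

use vex_llm::{OllamaProvider, LlmProvider};

#[tokio::main]
async fn main() {
    // Hypothetical constructor: check the ollama module for the actual one.
    let llm = OllamaProvider::new("llama3");

    let response = llm.ask("Explain Merkle trees").await.unwrap();
    println!("{}", response);
}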

§Rate Limiting

use vex_llm::{RateLimiter, RateLimitConfig};

#[tokio::main]
async fn main() {
    let limiter = RateLimiter::new(RateLimitConfig::default());

    // Check whether a request from this caller is allowed
    limiter.try_acquire("user123").await.unwrap();
}
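
Unwrapping panics as soon as a caller goes over its quota. A minimal sketch of handling that case gracefully instead, assuming try_acquire returns the re-exported RateLimitError once the limit is exceeded:

use vex_llm::{RateLimiter, RateLimitConfig};

#[tokio::main]
async fn main() {
    let limiter = RateLimiter::new(RateLimitConfig::default());

    // Assumption: try_acquire yields Err(RateLimitError) once "user123"
    // has exhausted the quota configured in RateLimitConfig.
    match limiter.try_acquire("user123").await {
        Ok(_) => println!("request allowed"),
        Err(e) => eprintln!("rate limited: {:?}", e),
    }
}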

Re-exports§

pub use cached_provider::CachedProvider;
pub use cached_provider::LlmCacheConfig;
pub use config::ConfigError;
pub use config::LlmConfig;
pub use config::VexConfig;
pub use deepseek::DeepSeekProvider;
pub use metrics::global_metrics;
pub use metrics::Metrics;
pub use metrics::MetricsSnapshot;
pub use metrics::Span;
pub use metrics::Timer;
pub use mistral::MistralProvider;
pub use mock::MockProvider;
pub use ollama::OllamaProvider;
pub use openai::OpenAIProvider;
pub use provider::LlmError;
pub use provider::LlmProvider;
pub use provider::LlmRequest;
pub use provider::LlmResponse;
pub use rate_limit::RateLimitConfig;
pub use rate_limit::RateLimitError;
pub use rate_limit::RateLimitedProvider;
pub use rate_limit::RateLimiter;
pub use resilient_provider::CircuitState;
pub use resilient_provider::LlmCircuitConfig;
pub use resilient_provider::ResilientProvider;
pub use streaming_tool::StreamConfig;
pub use streaming_tool::StreamingTool;
pub use streaming_tool::ToolChunk;
pub use streaming_tool::ToolStream;
pub use tool::Capability;
pub use tool::Tool;
pub use tool::ToolDefinition;
pub use tool::ToolRegistry;
pub use tool_error::ToolError;
pub use tool_executor::ToolExecutor;
pub use tool_result::ToolResult;
pub use tools::CalculatorTool;
pub use tools::DateTimeTool;
pub use tools::HashTool;
pub use tools::JsonPathTool;
pub use tools::RegexTool;
pub use tools::UuidTool;

Modules§

cached_provider
Cached LLM provider wrapper using Moka
config
Configuration management for VEX
deepseek
DeepSeek LLM provider (OpenAI-compatible API)
mcp
MCP (Model Context Protocol) client integration
metrics
Metrics and tracing for VEX
mistral
Mistral AI LLM provider (OpenAI-compatible API)
mock
Mock LLM provider for testing
ollama
Ollama LLM provider for local inference
openai
OpenAI LLM provider
provider
LLM Provider trait and common types
rate_limit
Rate limiting for LLM API calls
resilient_provider
Resilient LLM provider wrapper with circuit breaker pattern
streaming_tool
Streaming tool support for long-running operations
tool
Tool definitions and execution framework for LLM function calling
tool_error
Structured error types for tool execution
tool_executor
Tool Executor with Merkle audit integration
tool_result
Tool execution result with cryptographic verification
tools
Built-in tools for VEX agents