§VEX LLM
LLM provider integrations for VEX agents.
§Supported Backends
| Provider | Type | API Key (env var) |
|----------|------|-------------------|
| DeepSeek | API | DEEPSEEK_API_KEY |
| Mistral | API | MISTRAL_API_KEY |
| OpenAI | API | OPENAI_API_KEY |
| Ollama | Local | None |
| Mock | Testing | None |
§Quick Start
```rust
use vex_llm::{MockProvider, LlmProvider};

#[tokio::main]
async fn main() {
    // Use mock provider for testing
    let llm = MockProvider::smart();

    // Ask a question
    let response = llm.ask("What is quantum computing?").await.unwrap();
    println!("{}", response);
}
```
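Every backend implements the same LlmProvider trait, so calling code can stay provider-agnostic. The helper below is a minimal sketch: it assumes ask is a method of the LlmProvider trait taking &self (as the examples in this crate suggest) and that the error type implements Display.

```rust
use vex_llm::{LlmProvider, MockProvider};

// Works with any backend from the table above: Mock, DeepSeek, Mistral, ...
// Sketch only: assumes `ask(&self, &str)` comes from the `LlmProvider` trait
// and that its error type implements `std::fmt::Display`.
async fn summarize(llm: &impl LlmProvider, topic: &str) -> String {
    llm.ask(&format!("Give a one-paragraph summary of {topic}"))
        .await
        .unwrap_or_else(|e| format!("LLM request failed: {e}"))
}

#[tokio::main]
async fn main() {
    let llm = MockProvider::smart();
    println!("{}", summarize(&llm, "Merkle trees").await);
}
```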
§With DeepSeek
```rust
use vex_llm::{DeepSeekProvider, LlmProvider};
#[tokio::main]
async fn main() {
    let api_key = std::env::var("DEEPSEEK_API_KEY").unwrap();
    let llm = DeepSeekProvider::new(api_key);
    let response = llm.ask("Explain Merkle trees").await.unwrap();
    println!("{}", response);
}
```
§With Mistral
```rust
use vex_llm::{MistralProvider, LlmProvider};
#[tokio::main]
async fn main() {
    let api_key = std::env::var("MISTRAL_API_KEY").unwrap();
    let llm = MistralProvider::small(&api_key); // or large(), medium(), codestral()
    let response = llm.ask("Explain Merkle trees").await.unwrap();
    println!("{}", response);
}
```
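§With OpenAI
The OpenAI backend from the table above follows the same pattern. This is only a sketch: the OpenAIProvider::new constructor is assumed by analogy with DeepSeekProvider::new and is not taken from this crate's examples; check the openai module docs for the exact API.

```rust
use vex_llm::{OpenAIProvider, LlmProvider};
#[tokio::main]
async fn main() {
    let api_key = std::env::var("OPENAI_API_KEY").unwrap();
    // Assumed constructor, mirroring DeepSeekProvider::new
    let llm = OpenAIProvider::new(api_key);
    let response = llm.ask("Explain Merkle trees").await.unwrap();
    println!("{}", response);
}
```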
§Rate Limiting
```rust
use vex_llm::{RateLimiter, RateLimitConfig};
#[tokio::main]
async fn main() {
    let limiter = RateLimiter::new(RateLimitConfig::default());
    // Check if a request from this caller is allowed
    limiter.try_acquire("user123").await.unwrap();
}
```
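To enforce limits transparently instead of calling try_acquire by hand, the crate also re-exports RateLimitedProvider. The constructor used below (a wrapped provider plus a RateLimitConfig) is an assumption for illustration; consult the rate_limit module docs for the real signature.

```rust
use vex_llm::{LlmProvider, MockProvider, RateLimitConfig, RateLimitedProvider};
#[tokio::main]
async fn main() {
    // Hypothetical constructor: wrap any provider with a rate-limit policy
    let llm = RateLimitedProvider::new(MockProvider::smart(), RateLimitConfig::default());
    let response = llm.ask("What is quantum computing?").await.unwrap();
    println!("{}", response);
}
```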
§Re-exports
pub use config::ConfigError;
pub use config::LlmConfig;
pub use config::VexConfig;
pub use deepseek::DeepSeekProvider;
pub use metrics::global_metrics;
pub use metrics::Metrics;
pub use metrics::MetricsSnapshot;
pub use metrics::Span;
pub use metrics::Timer;
pub use mistral::MistralProvider;
pub use mock::MockProvider;
pub use ollama::OllamaProvider;
pub use openai::OpenAIProvider;
pub use provider::LlmError;
pub use provider::LlmProvider;
pub use provider::LlmRequest;
pub use provider::LlmResponse;
pub use rate_limit::RateLimitConfig;
pub use rate_limit::RateLimitError;
pub use rate_limit::RateLimitedProvider;
pub use rate_limit::RateLimiter;
pub use tool::ToolDefinition;
§Modules
- config - Configuration management for VEX
- deepseek - DeepSeek LLM provider (OpenAI-compatible API)
- metrics - Metrics and tracing for VEX
- mistral - Mistral AI LLM provider (OpenAI-compatible API)
- mock - Mock LLM provider for testing
- ollama - Ollama LLM provider for local inference
- openai - OpenAI LLM provider
- provider - LLM Provider trait and common types
- rate_limit - Rate limiting for LLM API calls
- tool - Tool definitions for LLM function calling