Without autopatching, you would have to wrap every call to `client.chat.completions.create()` (or its equivalent) in a `@weave.op`-decorated function or add manual instrumentation, which is tedious and easy to get wrong.
Weave instead automatically intercepts (patches) supported LLM client libraries. Your application code stays unchanged: you use the provider SDK as usual, and each request is recorded as a Weave Call. You get full tracing with minimal setup.
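As a minimal sketch of what this looks like with the OpenAI integration (the project name and model below are placeholders, and it assumes `weave` and `openai` are installed and API credentials are configured):

```python
import weave
from openai import OpenAI

# Initializing Weave enables autopatching for supported client libraries.
weave.init("my-llm-project")  # placeholder project name

# Use the provider SDK exactly as you normally would.
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": "Say hello."}],
)

# The request above is recorded as a Weave Call; no decorator was needed.
print(response.choices[0].message.content)
```

Note that the only Weave-specific line is `weave.init(...)`; the rest is unmodified provider SDK code.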
## LLM Providers
LLM providers are the vendors that offer API access to large language models. Weave integrates with these providers to log and trace interactions with their APIs:

- W&B Inference Service
- Amazon Bedrock
- Anthropic
- Cerebras
- Cohere
- Groq
- Hugging Face Hub
- LiteLLM
- Microsoft Azure
- MistralAI
- NVIDIA NIM
- OpenAI
- OpenRouter
- Together AI
## Frameworks
Frameworks orchestrate the execution pipelines of AI applications, providing tools and abstractions for building complex workflows. Weave integrates with these frameworks to trace the entire pipeline:

- OpenAI Agents SDK
- LangChain
- LlamaIndex
- DSPy
- Instructor
- CrewAI
- Smolagents
- PydanticAI
- Google Agent Development Kit (ADK)
- AutoGen
- Verdict
- TypeScript SDK
- Agno
- Koog
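For frameworks or libraries not covered by autopatching, the manual instrumentation mentioned above remains available: decorating a function with `@weave.op` records each invocation as a Weave Call. A minimal sketch (the project name and function are placeholders, assuming `weave` is installed and credentials are configured):

```python
import weave

weave.init("my-llm-project")  # placeholder project name

# For code Weave does not patch automatically, decorate it yourself.
@weave.op
def my_pipeline_step(prompt: str) -> str:
    # ... call an unsupported client or custom pipeline logic here ...
    return prompt.upper()  # placeholder logic

# Each call to the decorated function is recorded as a Weave Call,
# including its inputs and outputs.
my_pipeline_step("hello")
```

Autopatched integrations and `@weave.op`-decorated functions can be mixed freely; both appear in the same trace tree.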