# Migration Guide
Migrate to AgentiCraft from LangChain, CrewAI, or custom LLM integrations with minimal disruption.
## Why Migrate
AgentiCraft focuses on infrastructure, not abstractions. You get:
- Unified provider interface — one API for 14 LLM providers
- Built-in resilience — circuit breakers, rate limiting, key rotation out of the box
- Cost optimization — automatic cost-aware routing across providers
- Type safety — Pydantic v2 models and Protocol-based interfaces
- No lock-in — standard Python, no custom DSLs or proprietary formats
## From Direct API Calls
If you're calling provider SDKs directly:
### Before

```python
from openai import AsyncOpenAI

client = AsyncOpenAI(api_key="sk-...")

response = await client.chat.completions.create(
    model="gpt-5-mini",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```

### After
```python
from agenticraft_llm import LLMProviderConfig, create_provider

config = LLMProviderConfig(provider="openai", model="gpt-5-mini")
provider = create_provider(config)

response = await provider.complete(
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.content)
```

The benefit: swap `provider="openai"` for `provider="anthropic"` and your code still works.
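For instance, pointing the same calling code at Anthropic is a configuration-only change (a sketch reusing the model name from the routing example later in this guide):

```python
from agenticraft_llm import LLMProviderConfig, create_provider

# Same calling code as above; only the configuration changes.
provider = create_provider(
    LLMProviderConfig(provider="anthropic", model="claude-sonnet-4-6")
)
```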
## From LangChain
### LLM Calls
LangChain wraps providers in ChatModel classes. AgentiCraft uses a simpler provider pattern:
```python
# LangChain
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-5-mini")
result = await llm.ainvoke("Hello")

# AgentiCraft
from agenticraft_llm import LLMProviderConfig, create_provider

provider = create_provider(LLMProviderConfig(provider="openai", model="gpt-5-mini"))
result = await provider.complete(messages=[{"role": "user", "content": "Hello"}])
```

### Chains
LangChain chains map to sequential function calls. No special abstraction needed:
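The `format_prompt` and `parse_response` functions in the comparison below are not AgentiCraft APIs; they are ordinary Python you write yourself, standing in for the prompt template and output parser. A hypothetical sketch (names and the JSON shape are illustrative):

```python
import json

def format_prompt(input_data: dict) -> list[dict]:
    """Build the chat message list -- replaces a LangChain prompt template."""
    return [
        {"role": "system", "content": "Answer in JSON with a 'summary' key."},
        {"role": "user", "content": input_data["input"]},
    ]

def parse_response(content: str) -> dict:
    """Extract structured output from the reply -- replaces an output parser."""
    try:
        return json.loads(content)
    except json.JSONDecodeError:
        # Fall back to wrapping the raw text
        return {"summary": content.strip()}
```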
```python
# LangChain
chain = prompt | llm | parser
result = await chain.ainvoke({"input": "..."})

# AgentiCraft — just use async functions
response = await provider.complete(messages=format_prompt(input_data))
result = parse_response(response.content)
```

## From CrewAI
CrewAI's agent/task model maps to AgentiCraft's provider + routing layer:
```python
# CrewAI
from crewai import Agent, Task, Crew

agent = Agent(role="Researcher", llm="gpt-5.4")
task = Task(description="Research topic X", agent=agent)
crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()

# AgentiCraft — focus on the LLM infrastructure
from agenticraft_llm import CostAwareRouter, LLMProviderConfig

router = CostAwareRouter(
    providers=[
        LLMProviderConfig(provider="openai", model="gpt-5.4"),
        LLMProviderConfig(provider="anthropic", model="claude-sonnet-4-6"),
    ]
)

# Your agent logic is plain Python — no framework required
result = await router.complete(messages=research_prompt)
```

## Migration Checklist
- Run `uv add agenticraft-llm agenticraft-types`
- Replace direct provider SDK calls with `create_provider()`
- Configure key pool for each provider
- Add circuit breaker and rate limiting
- Set up cost-aware routing if using multiple providers
- Remove old provider SDK dependencies (optional — they're still used internally)
- Update tests to use AgentiCraft's provider interface
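For the last step, code written against the provider interface can be tested without a real provider at all: anything with an async `complete(messages=...)` method returning an object with a `.content` attribute will do. A hypothetical sketch (the stub classes are illustrative, not part of the library):

```python
import asyncio

# Hypothetical test stand-ins: a stub response and provider that echo
# the last user message, so application code can be tested offline.
class StubResponse:
    def __init__(self, content: str) -> None:
        self.content = content

class StubProvider:
    async def complete(self, messages: list) -> StubResponse:
        return StubResponse(f"echo: {messages[-1]['content']}")

async def summarize(provider, text: str) -> str:
    # Example application code written against the provider interface
    response = await provider.complete(
        messages=[{"role": "user", "content": text}]
    )
    return response.content

result = asyncio.run(summarize(StubProvider(), "Hello"))
print(result)  # echo: Hello
```

In production you pass a real provider from `create_provider()`; in tests you pass the stub. No mocking framework is required.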
## Next Steps
- Getting Started — full installation guide
- Architecture Overview — understand the package structure
- LLM API Reference — complete provider configuration