Security Best Practices
Authentication, secret management, secure agent communication, and operational security for AgentiCraft deployments.
API Key Management
Never Hardcode Keys
Store API keys in a secrets manager. Environment variables are acceptable as a fallback, but never put keys in code:
# Bad - hardcoded
config = LLMProviderConfig(provider="openai", api_key="sk-abc123...")
# Better - environment variable
import os
config = LLMProviderConfig(provider="openai", api_key=os.environ["OPENAI_API_KEY"])
# Best - secrets manager
from your_secret_manager import get_secret
config = LLMProviderConfig(provider="openai", api_key=get_secret("openai-api-key"))
Key Rotation
Rotate API keys regularly. The key pool supports zero-downtime rotation:
from agenticraft_llm import KeyPoolManager
pool = KeyPoolManager(
keys=["sk-new-key", "sk-old-key"],
strategy="failover", # Use new key first, fall back to old
)
# After verifying the new key works, remove the old one
pool.remove_key("sk-old-key")
Key Scoping
Use the minimum permissions required. Most LLM providers support scoped API keys:
- OpenAI — Create project-specific keys with model restrictions
- Anthropic — Use workspace-scoped keys
- Google — Use service accounts with IAM roles
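Scoped keys pair naturally with per-provider configuration. A minimal sketch of keeping each scoped key in its own secret, assuming the environment-variable names here (they are illustrative, not AgentiCraft conventions):

```python
import os

# One narrowly-scoped key per provider, stored separately.
# These variable names are illustrative.
SCOPED_KEY_VARS = {
    "openai": "OPENAI_PROJECT_KEY",         # project-specific key
    "anthropic": "ANTHROPIC_WORKSPACE_KEY", # workspace-scoped key
}

def load_scoped_key(provider: str) -> str:
    """Fetch the scoped key for a provider, failing loudly if it is missing."""
    var = SCOPED_KEY_VARS[provider]
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Missing scoped key {var} for provider {provider!r}")
    return key
```

Failing loudly on a missing key is deliberate: a silently empty key tends to surface later as a confusing 401 from the provider.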
Network Security
TLS Everywhere
All AgentiCraft HTTP clients use TLS by default. Do not disable certificate verification:
# Never do this in production
config = LLMProviderConfig(provider="openai", verify_ssl=False) # Don't do this
Request Validation
Validate inputs before sending to LLM providers:
from agenticraft_types import CompletionRequest
# Pydantic validates the request shape
request = CompletionRequest(
model="gpt-5-mini",
messages=[{"role": "user", "content": user_input}],
max_tokens=1000,
)
Agent Communication Security
When building multi-agent systems:
- Validate all inter-agent messages — don't trust agent outputs without validation
- Scope agent permissions — each agent should only access the tools and data it needs
- Log agent actions — maintain an audit trail of agent decisions
- Set output limits — configure max_tokens to prevent runaway generation
- Monitor for prompt injection — validate that agent inputs don't contain adversarial prompts
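A minimal sketch of the first and last points — checking an inter-agent message's shape and screening it for obvious injection markers. The schema and patterns below are illustrative, not part of AgentiCraft, and real deployments need far more robust detection:

```python
import re

# Illustrative injection markers; a real filter needs a broader, maintained list
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal (the )?system prompt", re.IGNORECASE),
]

def validate_agent_message(message: dict) -> dict:
    """Validate the shape and screen the content of an inter-agent message."""
    if not isinstance(message.get("content"), str):
        raise ValueError("message content must be a string")
    if not message.get("sender"):
        raise ValueError("message must identify its sender")
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(message["content"]):
            raise ValueError("message flagged as possible prompt injection")
    return message
```

Rejecting a message outright is the simplest policy; depending on the system, you might instead quarantine it for review or strip the offending span.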
Rate Limiting
Protect your infrastructure from excessive usage:
from agenticraft_llm import TokenBucketRateLimiter
limiter = TokenBucketRateLimiter(
rate=100, # 100 requests per second
burst=20, # Allow bursts of 20
)
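Conceptually, the limiter above works like a token bucket: tokens refill at a steady rate up to a burst capacity, and each request spends one. An illustrative sketch of the algorithm, not `TokenBucketRateLimiter`'s actual implementation:

```python
import time

class TokenBucket:
    """Minimal token bucket: refills `rate` tokens/sec, holds at most `burst`."""

    def __init__(self, rate: float, burst: int):
        self.rate = rate
        self.capacity = burst
        self.tokens = float(burst)      # start full, so bursts work immediately
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```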
# Use with router
router = CostAwareRouter(
providers=[...],
rate_limiter=limiter,
)
Operational Security
- Audit logging — Log all LLM API calls with timestamps, models used, and token counts (never log prompt content in production)
- Cost alerts — Set budget alerts per provider to catch runaway costs
- Access control — Restrict who can modify provider configurations and API keys
- Incident response — Have a playbook for provider outages and key compromises
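For the audit-logging point, an entry can capture call metadata without the prompt itself. A sketch using the standard library; the field names are illustrative:

```python
import json
import logging
import time

audit_log = logging.getLogger("llm_audit")

def audit_llm_call(model: str, prompt: str, completion_tokens: int) -> dict:
    """Record metadata about an LLM call — never the prompt content itself."""
    entry = {
        "timestamp": time.time(),
        "model": model,
        "prompt_chars": len(prompt),  # size only, not content
        "completion_tokens": completion_tokens,
    }
    audit_log.info(json.dumps(entry))
    return entry
```

Logging sizes and counts instead of content keeps the audit trail useful for cost and incident analysis while keeping user data out of your log pipeline.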
Next Steps
- Production Deployment — deployment checklists
- Migration Guide — secure migration from other frameworks
- Types API Reference — error handling and validation