Framework Support
Khaos supports major frameworks through a single @khaosagent entrypoint. You keep your framework code; Khaos handles instrumentation, telemetry, faults, and security testing.
Supported Frameworks
| Framework | Sync | Async | Streaming |
|---|---|---|---|
| OpenAI SDK | Yes | Yes | Yes |
| Anthropic SDK | Yes | Yes | Yes |
| Google Gemini | Yes | Yes | Yes |
| Mistral SDK | Yes | Yes | Yes |
| Cohere SDK | Yes | - | Yes |
| LangChain | Yes | Yes | Yes |
| LangGraph | Yes | Yes | Yes |
| CrewAI | Yes | Yes | - |
| AutoGen | Yes | Yes | - |
| LlamaIndex | Yes | Yes | Yes |
| Instructor | Yes | Yes | - |
How It Works
Khaos uses runtime shimming to intercept LLM API calls from your decorated handler (see the sketch after this list):
- Decorator entrypoint - Your handler receives user messages and returns a response
- Shim injection - Lightweight wrappers capture API calls during a Khaos run
- Telemetry collection - Prompts, completions, tokens, latency, costs
- Security testing - Adversarial inputs are injected through the same channel
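Conceptually, a shim wraps the SDK call for the duration of a run, records what it sees, and passes the call through unchanged. The sketch below is illustrative only, not Khaos internals; shim and record are hypothetical names:
import functools
import time

def shim(create_fn, record):
    # Wrap an SDK method so every invocation is recorded before returning.
    @functools.wraps(create_fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        response = create_fn(*args, **kwargs)  # the original SDK call
        record({
            "model": kwargs.get("model"),
            "latency_s": time.perf_counter() - start,
            "usage": getattr(response, "usage", None),
        })
        return response
    return wrapper

# A run could then patch the client method in place, e.g.:
# client.chat.completions.create = shim(client.chat.completions.create, records.append)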
# Just run your agent - Khaos handles the rest
khaos discover
khaos run <agent-name> --sync
OpenAI
Full support for the OpenAI Python SDK including chat completions, function calling, and streaming.
# agent.py
from khaos import khaosagent
from openai import OpenAI
client = OpenAI()
@khaosagent(name="openai-agent", version="1.0.0", framework="openai")
def handle(message):
    prompt = (message.get("payload") or {}).get("text", "")
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return {"text": response.choices[0].message.content}
khaos discover
khaos run openai-agent --sync
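The table above also lists async support. A minimal async variant might look like the sketch below, assuming @khaosagent accepts async handlers; the agent name openai-agent-async is illustrative:
# agent_async.py
from khaos import khaosagent
from openai import AsyncOpenAI
client = AsyncOpenAI()
@khaosagent(name="openai-agent-async", version="1.0.0", framework="openai")
async def handle(message):
    prompt = (message.get("payload") or {}).get("text", "")
    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return {"text": response.choices[0].message.content}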
Anthropic
Full support for the Anthropic Python SDK including Claude models and streaming.
# agent.py
from khaos import khaosagent
from anthropic import Anthropic
client = Anthropic()
@khaosagent(name="anthropic-agent", version="1.0.0", framework="anthropic")
def handle(message):
    prompt = (message.get("payload") or {}).get("text", "")
    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return {"text": response.content[0].text}
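Streaming is listed as supported as well. A sketch using the SDK's messages.stream helper, assuming Khaos's shims capture streamed deltas the same way; the agent name is illustrative:
@khaosagent(name="anthropic-agent-streaming", version="1.0.0", framework="anthropic")
def handle_streaming(message):
    prompt = (message.get("payload") or {}).get("text", "")
    chunks = []
    with client.messages.stream(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    ) as stream:
        for text in stream.text_stream:  # text deltas as they arrive
            chunks.append(text)
    return {"text": "".join(chunks)}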
Google Gemini
Full support for the Google Generative AI (Gemini) SDK including content generation and streaming.
# agent.py
from khaos import khaosagent
import google.generativeai as genai
genai.configure(api_key="...")
model = genai.GenerativeModel("gemini-1.5-pro")
@khaosagent(name="gemini-agent", version="1.0.0", framework="google")
def handle(message):
    prompt = (message.get("payload") or {}).get("text", "")
    response = model.generate_content(prompt)
    return {"text": response.text}
Mistral
Full support for the Mistral AI SDK including chat completions and streaming.
# agent.py
from khaos import khaosagent
from mistralai import Mistral
client = Mistral(api_key="...")
@khaosagent(name="mistral-agent", version="1.0.0", framework="mistral")
def handle(message):
    prompt = (message.get("payload") or {}).get("text", "")
    response = client.chat.complete(
        model="mistral-large-latest",
        messages=[{"role": "user", "content": prompt}],
    )
    return {"text": response.choices[0].message.content}
Cohere
Support for the Cohere SDK including chat and streaming responses.
# agent.py
from khaos import khaosagent
import cohere
client = cohere.Client("...")
@khaosagent(name="cohere-agent", version="1.0.0", framework="cohere")
def handle(message):
    prompt = (message.get("payload") or {}).get("text", "")
    response = client.chat(
        model="command-r-plus",
        message=prompt,
    )
    return {"text": response.text}
LangChain
Support for LangChain chains, agents, and tools. Khaos captures all LLM calls made through LangChain's model interfaces.
# agent.py
from khaos import khaosagent
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4o")
@khaosagent(name="langchain-agent", version="1.0.0", framework="langchain")
def handle(message):
    prompt = (message.get("payload") or {}).get("text", "")
    response = llm.invoke(prompt)
    return {"text": response.content}
CrewAI
Support for CrewAI multi-agent orchestration. Khaos captures all LLM calls across crew members.
# agent.py
from khaos import khaosagent
from crewai import Agent, Task, Crew
researcher = Agent(
    role="Researcher",
    goal="Research the topic",
    backstory="You are an expert researcher.",
)
task = Task(
    description="Research AI safety",
    expected_output="A summary of AI safety research",
    agent=researcher,
)
crew = Crew(agents=[researcher], tasks=[task])
@khaosagent(name="crewai-agent", version="1.0.0", framework="crewai")
def handle(message):
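    # This minimal example ignores the incoming message and runs a fixed task.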
    result = crew.kickoff()
    return {"text": str(result)}
AutoGen
Support for Microsoft AutoGen conversation patterns.
# agent.py
from khaos import khaosagent
import autogen
config_list = [{"model": "gpt-4o", "api_key": "..."}]
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
)
@khaosagent(name="autogen-agent", version="1.0.0", framework="autogen")
def handle(message):
    prompt = (message.get("payload") or {}).get("text", "")
    user_proxy.initiate_chat(assistant, message=prompt)
    # Get the last response from the conversation
    last_message = assistant.last_message()
    return {"text": last_message["content"] if last_message else ""}
LlamaIndex
Support for LlamaIndex RAG pipelines and query engines.
# agent.py
from khaos import khaosagent
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms.openai import OpenAI
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
@khaosagent(name="llamaindex-agent", version="1.0.0", framework="llamaindex")
def handle(message):
    prompt = (message.get("payload") or {}).get("text", "")
    response = query_engine.query(prompt)
    return {"text": str(response)}
Instructor
Support for Instructor structured output extraction.
# agent.py
from khaos import khaosagent
import instructor
from openai import OpenAI
from pydantic import BaseModel
class User(BaseModel):
    name: str
    age: int
client = instructor.from_openai(OpenAI())
@khaosagent(name="instructor-agent", version="1.0.0", framework="instructor")
def handle(message):
    prompt = (message.get("payload") or {}).get("text", "")
    user = client.chat.completions.create(
        model="gpt-4o",
        response_model=User,
        messages=[{"role": "user", "content": prompt}],
    )
    return {"text": f"Name: {user.name}, Age: {user.age}"}
Automatic Output Extraction
Khaos automatically extracts output text from framework-native objects. You don't need to manually convert responses—just return what's natural:
# LangChain - return the AIMessage directly
@khaosagent(name="langchain-agent", version="1.0.0")
def handle(message):
    prompt = (message.get("payload") or {}).get("text", "")
    response = llm.invoke(prompt)
    return response  # AIMessage - Khaos extracts .content automatically
# CrewAI - return the crew output
@khaosagent(name="crewai-agent", version="1.0.0")
def handle(message):
    result = crew.kickoff()
    return result  # CrewOutput - Khaos extracts .raw automatically
# Or use any common key name
return {"content": "..."} # OpenAI-style
return {"result": "..."} # Tool-style
return {"answer": "..."} # Q&A-style
Khaos recognizes these common output keys: content, text, message, value, result, response, output, answer.
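For intuition, the extraction could work like the sketch below; this is an illustrative reimplementation, not Khaos's actual code:
# Illustrative only: a key-priority extractor over common output shapes.
COMMON_KEYS = ("content", "text", "message", "value", "result", "response", "output", "answer")

def extract_output(value):
    if isinstance(value, str):
        return value
    if isinstance(value, dict):
        for key in COMMON_KEYS:
            if key in value:
                return extract_output(value[key])
    for attr in ("content", "raw", "text"):  # framework-native objects (AIMessage, CrewOutput, ...)
        if hasattr(value, attr):
            return extract_output(getattr(value, attr))
    return str(value)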
Telemetry Captured
For all frameworks, Khaos captures the following (an example record is sketched after the list):
- Request data - Model, messages, parameters
- Response data - Completions, tool calls, finish reasons
- Token usage - Prompt tokens, completion tokens, total
- Latency - Time to first token, total duration
- Cost - USD cost based on token pricing
- Tool calls - Function calls and their results
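Concretely, a single captured call might serialize to a record like the one below; the field names and values are illustrative, not the exact Khaos schema:
# Hypothetical shape of one captured LLM call.
telemetry_record = {
    "request": {"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]},
    "response": {"completion": "Hi there!", "finish_reason": "stop", "tool_calls": []},
    "usage": {"prompt_tokens": 8, "completion_tokens": 4, "total_tokens": 12},
    "latency": {"time_to_first_token_s": 0.21, "total_s": 0.63},
    "cost_usd": 0.00011,
}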
Running Framework Agents
After adding the @khaosagent decorator, discover and run your agent:
# Discover agents in your project
khaos discover
# Run evaluation
khaos run <agent-name> --eval quickstart
# Run with cloud sync
khaos run <agent-name> --eval quickstart --sync
See @khaosagent Decorator for all decorator parameters and advanced patterns.