# OpenAI Adapter
AgenticAssure includes a built-in adapter for OpenAI’s Chat Completions API with function calling support. This adapter handles the translation between AgenticAssure’s scenario format and OpenAI’s API, including tool call extraction and token usage tracking.
## What It Provides

The `OpenAIAdapter` wraps OpenAI's `chat.completions.create` endpoint and returns a fully populated `AgentResult` with:

- `output`: The assistant's text response (from `message.content`).
- `tool_calls`: Any function calls the model made, parsed into `ToolCall` objects with name and arguments.
- `latency_ms`: Wall-clock time for the API call.
- `token_usage`: Prompt and completion token counts from the API response.
- `raw_response`: The full API response as a dictionary, useful for debugging.
This adapter is designed as a starter and reference implementation. It covers the most common use case: a single-turn chat completion with optional function calling. If your agent involves multi-turn conversations, Assistants API threads, streaming, or custom orchestration logic, you will likely need to write a custom adapter. See the Writing Adapters guide.
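To make the field mapping above concrete, here is a minimal sketch of how a raw Chat Completions response could be translated into those fields. The response dictionary mirrors OpenAI's documented response shape (abbreviated), but the extraction code below is a paraphrase of the field descriptions on this page, not the library's actual implementation:

```python
import json

# A response dict in the shape returned by OpenAI's Chat Completions API
# (abbreviated; real responses carry more fields such as id and model).
raw_response = {
    "choices": [{
        "message": {
            "content": "",
            "tool_calls": [{
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "arguments": '{"location": "Paris"}',
                },
            }],
        },
    }],
    "usage": {"prompt_tokens": 42, "completion_tokens": 7},
}

message = raw_response["choices"][0]["message"]

# content may be empty (or None) when the model only requested tool calls
output = message.get("content") or ""

# function arguments arrive as a JSON string and must be parsed
tool_calls = [
    {
        "name": tc["function"]["name"],
        "arguments": json.loads(tc["function"]["arguments"]),
    }
    for tc in message.get("tool_calls") or []
]

token_usage = raw_response["usage"]

print(tool_calls[0]["name"])       # get_weather
print(tool_calls[0]["arguments"])  # {'location': 'Paris'}
```

Note that `message.get("tool_calls")` is guarded with `or []`: plain text responses omit the key entirely, so the extraction must tolerate its absence.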
## Installation
The OpenAI adapter requires the `openai` Python package. Install it with the optional dependency group:

```bash
pip install agenticassure[openai]
```

Or install the `openai` package separately:

```bash
pip install agenticassure openai
```

## Configuration and Usage
### Basic Usage (No Tools)
```python
from agenticassure.adapters.openai import OpenAIAdapter

adapter = OpenAIAdapter(
    model="gpt-4",
    system_prompt="You are a helpful customer support agent.",
)
```

The adapter uses the `OPENAI_API_KEY` environment variable by default (via the `openai` library's standard behavior). You can also pass the key explicitly:

```python
adapter = OpenAIAdapter(
    model="gpt-4",
    api_key="sk-...",
)
```

### With Tool Definitions
To test function calling, pass your tool definitions in OpenAI’s tool format:
```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name",
                    }
                },
                "required": ["location"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "lookup_order",
            "description": "Look up an order by ID",
            "parameters": {
                "type": "object",
                "properties": {
                    "order_id": {
                        "type": "string",
                        "description": "The order ID",
                    }
                },
                "required": ["order_id"],
            },
        },
    },
]

adapter = OpenAIAdapter(
    model="gpt-4",
    system_prompt="You are a helpful assistant.",
    tools=tools,
)
```

### Constructor Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `model` | `str` | `"gpt-4"` | The OpenAI model to use (e.g., `"gpt-4"`, `"gpt-4o"`, `"gpt-3.5-turbo"`). |
| `tools` | `list[dict]` | `None` | Tool/function definitions in OpenAI's tool format. |
| `system_prompt` | `str` | `None` | System message prepended to the conversation. |
| `api_key` | `str` | `None` | OpenAI API key. If not provided, uses the `OPENAI_API_KEY` environment variable. |
| `**kwargs` | | | Additional keyword arguments passed directly to `chat.completions.create` (e.g., `temperature`, `max_tokens`). |
### Passing Additional API Parameters
Any extra keyword arguments are forwarded to the OpenAI API call:
```python
adapter = OpenAIAdapter(
    model="gpt-4",
    temperature=0.0,  # Deterministic outputs
    max_tokens=500,   # Limit response length
    top_p=1.0,
)
```

### Using with the CLI
Create a wrapper class that can be instantiated without arguments:
```python
# myproject/adapter.py
from agenticassure.adapters.openai import OpenAIAdapter


class MyOpenAIAgent(OpenAIAdapter):
    def __init__(self):
        super().__init__(
            model="gpt-4",
            system_prompt="You are a customer support agent for Acme Corp.",
            tools=[
                {
                    "type": "function",
                    "function": {
                        "name": "lookup_order",
                        "description": "Look up order status",
                        "parameters": {
                            "type": "object",
                            "properties": {
                                "order_id": {"type": "string"}
                            },
                            "required": ["order_id"],
                        },
                    },
                }
            ],
            temperature=0.0,
        )
```

Then reference it from the CLI or config file:

```bash
agenticassure run scenarios/ --adapter myproject.adapter.MyOpenAIAgent
```

Or in `agenticassure.yaml`:

```yaml
adapter: myproject.adapter.MyOpenAIAgent
```

## Important Notes
- **Single-turn only:** The adapter sends a single user message and returns the response. It does not handle multi-turn conversations or tool result submission. If the model requests a function call, the adapter captures it in `tool_calls` but does not execute the function or send the result back.
- **Tool calls without text output:** When a model decides to call a function, it may return an empty `content` field. In this case, `output` will be an empty string. The `passfail` scorer will mark the scenario as failed due to empty output unless you are only checking `expected_tools`.
- **API key management:** In CI environments, set the `OPENAI_API_KEY` environment variable via your CI system's secrets management. Do not hardcode API keys.
- **Cost awareness:** Each scenario execution makes at least one API call. Monitor your usage, especially when running large suites or using expensive models.
- **Customization:** If you need to handle tool execution, multi-turn conversations, or any logic beyond a single chat completion, write a custom adapter. The `OpenAIAdapter` source code is a good starting point; see the Writing Adapters guide.
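If you do go the custom-adapter route to handle tool execution, the conversation bookkeeping for one tool round-trip looks roughly like the sketch below. It only builds the follow-up message list in the standard Chat Completions format (the assistant turn carrying `tool_calls` echoed back, then one `role: "tool"` message per result); the `append_tool_results` helper and `executor` callable are hypothetical names for illustration, and the second API call that would consume `messages` is omitted:

```python
import json


def append_tool_results(messages, assistant_message, executor):
    """Execute each requested tool and append the results in the message
    format the Chat Completions API expects on the follow-up request.

    `executor` is a hypothetical callable: (name, arguments) -> result.
    """
    # The assistant turn that requested the calls must be echoed back first.
    messages.append(assistant_message)
    for tc in assistant_message.get("tool_calls", []):
        result = executor(tc["function"]["name"],
                          json.loads(tc["function"]["arguments"]))
        messages.append({
            "role": "tool",
            "tool_call_id": tc["id"],       # ties the result to the request
            "content": json.dumps(result),  # tool output goes back as a string
        })
    return messages


# Example: one pending lookup_order call, executed with a stub.
assistant_message = {
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": {"name": "lookup_order",
                     "arguments": '{"order_id": "A-123"}'},
    }],
}

messages = append_tool_results(
    [{"role": "user", "content": "Where is order A-123?"}],
    assistant_message,
    executor=lambda name, args: {"status": "shipped"},
)
# messages now ends with a role="tool" entry and is ready to send back
# through chat.completions.create to obtain the model's final answer.
```

A custom adapter built this way would loop: call the API, execute any requested tools, append the results, and repeat until the model responds with plain text.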