LangChain Adapter
AgenticAssure includes a built-in adapter for LangChain’s AgentExecutor. This adapter translates between AgenticAssure’s scenario format and LangChain’s agent execution interface, capturing tool calls, reasoning traces, and latency from intermediate steps.
What It Provides
The LangChainAdapter wraps a LangChain AgentExecutor instance and returns a populated AgentResult with:
- `output`: The agent’s final answer (from the `"output"` key in the result).
- `tool_calls`: Each intermediate tool invocation, parsed into `ToolCall` objects with the tool name, arguments, and result.
- `reasoning_trace`: A list of strings showing each tool call and its observation, useful for debugging agent behavior.
- `latency_ms`: Wall-clock time for the full agent execution.
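Concretely, a populated result can be pictured as a simple record. A minimal sketch, where the dataclasses below are illustrative stand-ins for AgenticAssure’s real `AgentResult` and `ToolCall` classes (whose definitions may differ):

```python
from dataclasses import dataclass, field

# Illustrative stand-ins; field names come from the list above.
@dataclass
class ToolCall:
    name: str
    arguments: dict
    result: str

@dataclass
class AgentResult:
    output: str
    tool_calls: list = field(default_factory=list)
    reasoning_trace: list = field(default_factory=list)
    latency_ms: float = 0.0

result = AgentResult(
    output="Order ORD-1: shipped, arriving Tuesday",
    tool_calls=[ToolCall("lookup_order", {"input": "ORD-1"}, "shipped")],
    reasoning_trace=["Tool: lookup_order -> shipped"],
    latency_ms=1250.0,
)
print(result.tool_calls[0].name)  # lookup_order
```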
This adapter is designed as a starter and reference implementation. It works with the standard AgentExecutor pattern in LangChain. If your agent uses LangGraph, custom chains, or non-standard output formats, you will likely need to write a custom adapter. See the Writing Adapters guide.
Installation
The LangChain adapter requires the `langchain` package. Install it with the optional dependency group:

```shell
pip install agenticassure[langchain]
```

Or install the `langchain` package separately:

```shell
pip install agenticassure langchain
```

Depending on your LLM provider, you may also need provider-specific packages like `langchain-openai`, `langchain-anthropic`, or others.
Configuration and Usage
Creating the Adapter
The LangChainAdapter takes a pre-configured AgentExecutor instance. You build the agent using LangChain’s standard patterns and then wrap it:

```python
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_openai import ChatOpenAI
from langchain.tools import Tool
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

from agenticassure.adapters.langchain import LangChainAdapter

# 1. Define your tools
tools = [
    Tool(
        name="lookup_order",
        description="Look up an order by order ID",
        func=lambda order_id: f"Order {order_id}: shipped, arriving Tuesday",
    ),
    Tool(
        name="search_kb",
        description="Search the knowledge base for an answer",
        func=lambda query: f"KB result for '{query}': See article #42",
    ),
]

# 2. Create the LLM and prompt
llm = ChatOpenAI(model="gpt-4", temperature=0)
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful customer support agent."),
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

# 3. Build the agent and executor
agent = create_openai_tools_agent(llm, tools, prompt)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    return_intermediate_steps=True,  # Required for tool call capture
)

# 4. Wrap with AgenticAssure adapter
adapter = LangChainAdapter(agent_executor=agent_executor)
```

Important: return_intermediate_steps=True
You must set return_intermediate_steps=True on the AgentExecutor for the adapter to capture tool calls. Without this flag, tool_calls and reasoning_trace will be empty.
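The effect of the flag can be seen in the raw result dict the executor returns. A sketch with plain dicts standing in for the executor’s output (the adapter presumably reads the key defensively, which is why the fields end up empty rather than raising an error):

```python
# With return_intermediate_steps=True the result dict contains an
# "intermediate_steps" key; without the flag, the key is simply absent.
result_with = {
    "input": "Where is order ORD-1?",
    "output": "It shipped Tuesday.",
    "intermediate_steps": [("<AgentAction lookup_order>", "shipped Tuesday")],
}
result_without = {"input": "Where is order ORD-1?", "output": "It shipped Tuesday."}

# A defensive .get() read yields an empty list when the flag is missing,
# so tool_calls and reasoning_trace silently come back empty.
steps = result_without.get("intermediate_steps", [])
print(len(steps))  # 0
```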
Constructor Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `agent_executor` | `AgentExecutor` | Yes | A configured LangChain `AgentExecutor` instance. |
How Tool Calls Are Captured
The adapter reads intermediate_steps from the agent executor’s result. Each step is a tuple of (AgentAction, observation):
- `AgentAction.tool` becomes `ToolCall.name`.
- `AgentAction.tool_input` becomes `ToolCall.arguments`. If the input is a string rather than a dict, it is wrapped as `{"input": value}`.
- The observation (the tool’s return value) becomes `ToolCall.result`.
Each step also generates a reasoning trace entry in the format "Tool: tool_name -> observation".
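The capture rules above can be sketched end to end. Everything here is illustrative: a namedtuple stands in for LangChain’s `AgentAction`, and plain dicts stand in for `ToolCall` objects:

```python
from collections import namedtuple

# Stand-in for langchain_core's AgentAction (illustrative only).
AgentAction = namedtuple("AgentAction", ["tool", "tool_input"])

def capture(intermediate_steps):
    """Sketch of parsing (AgentAction, observation) tuples."""
    tool_calls, trace = [], []
    for action, observation in intermediate_steps:
        args = action.tool_input
        if not isinstance(args, dict):   # string input is normalized
            args = {"input": args}       # to {"input": value}
        tool_calls.append(
            {"name": action.tool, "arguments": args, "result": observation}
        )
        trace.append(f"Tool: {action.tool} -> {observation}")
    return tool_calls, trace

steps = [
    (AgentAction("lookup_order", "ORD-555"), "Order ORD-555: shipped"),
    (AgentAction("search_kb", {"query": "reset password"}), "See article #42"),
]
calls, trace = capture(steps)
print(calls[0]["arguments"])  # {'input': 'ORD-555'}
print(trace[1])               # Tool: search_kb -> See article #42
```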
How Context Is Handled
The context parameter from AgenticAssure is merged into the input dictionary passed to the agent executor’s invoke method. This means context keys become available alongside the input key:
```python
# When AgenticAssure calls:
adapter.run("What is my balance?", context={"user_id": "123"})

# The agent executor receives:
{"input": "What is my balance?", "user_id": "123"}
```

This is useful if your LangChain agent’s prompt template includes variables beyond `{input}`.
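The merge itself is a one-liner; a sketch of how the invoke payload is presumably assembled (note that a context key named `"input"` would shadow the scenario input, so avoid that name):

```python
def build_payload(input_text, context=None):
    # Merge context keys alongside the "input" key, tolerating
    # a missing context. Illustrative helper, not AgenticAssure API.
    return {"input": input_text, **(context or {})}

payload = build_payload("What is my balance?", context={"user_id": "123"})
print(payload)  # {'input': 'What is my balance?', 'user_id': '123'}
```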
Using with the CLI
Since the LangChainAdapter requires a pre-built AgentExecutor in its constructor, you need to create a wrapper class for CLI usage:
```python
# myproject/adapter.py
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_openai import ChatOpenAI
from langchain.tools import Tool
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

from agenticassure.adapters.langchain import LangChainAdapter


class MyCLIAgent(LangChainAdapter):
    def __init__(self):
        tools = [
            Tool(
                name="lookup_order",
                description="Look up an order by ID",
                func=lambda order_id: f"Order {order_id}: shipped",
            ),
        ]
        llm = ChatOpenAI(model="gpt-4", temperature=0)
        prompt = ChatPromptTemplate.from_messages([
            ("system", "You are a helpful assistant."),
            ("human", "{input}"),
            MessagesPlaceholder(variable_name="agent_scratchpad"),
        ])
        agent = create_openai_tools_agent(llm, tools, prompt)
        executor = AgentExecutor(
            agent=agent,
            tools=tools,
            return_intermediate_steps=True,
        )
        super().__init__(agent_executor=executor)
```

Then use it from the CLI:

```shell
agenticassure run scenarios/ --adapter myproject.adapter.MyCLIAgent
```

Or in agenticassure.yaml:

```yaml
adapter: myproject.adapter.MyCLIAgent
```

Example Scenarios for a LangChain Agent
```yaml
suite:
  name: langchain-agent-tests
  description: Tests for a LangChain-based support agent
  scenarios:
    - name: basic_greeting
      input: "Hello!"
      expected_output: "help"
      scorers:
        - passfail
      tags:
        - happy-path
    - name: order_lookup
      input: "What is the status of order #ORD-555?"
      expected_tools:
        - lookup_order
      scorers:
        - passfail
      tags:
        - tools
        - orders
    - name: knowledge_base_search
      input: "How do I reset my password?"
      expected_tools:
        - search_kb
      expected_output: "password"
      scorers:
        - passfail
      tags:
        - tools
        - faq
```

Important Notes
- AgentExecutor only: This adapter is designed for LangChain’s `AgentExecutor` class. It does not support LangGraph, custom `Runnable` chains, or other LangChain patterns directly. For those, write a custom adapter.
- Token usage not tracked: The LangChain adapter does not extract token usage from the executor’s result. If you need token tracking, implement a custom adapter that uses LangChain’s callback system to capture token counts.
- Tool input format: LangChain tools can receive string or dict inputs. The adapter normalizes string inputs to `{"input": value}` for consistency with AgenticAssure’s `expected_tool_args` matching.
- Intermediate steps required: Without `return_intermediate_steps=True`, the adapter cannot see which tools were called. This is the most common setup mistake.
- Customization: If your LangChain setup is more complex (e.g., uses memory, callbacks, or custom output parsers), start with the `LangChainAdapter` source code and adapt it. See the Writing Adapters guide for details.
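As a starting point for such customization, the overall shape of a custom adapter might look like the sketch below. The method signature is an assumption based on this page (`run(input, context)` returning a result with the fields described earlier); check the Writing Adapters guide for the real interface. A stub result class keeps the sketch self-contained:

```python
import time

class StubResult:
    """Illustrative stand-in for AgenticAssure's AgentResult."""
    def __init__(self, output, tool_calls, reasoning_trace, latency_ms):
        self.output = output
        self.tool_calls = tool_calls
        self.reasoning_trace = reasoning_trace
        self.latency_ms = latency_ms

class MyCustomAdapter:
    """Sketch of a custom adapter for a non-AgentExecutor agent."""
    def __init__(self, agent):
        self.agent = agent  # e.g. a compiled LangGraph graph

    def run(self, input_text, context=None):
        start = time.perf_counter()
        # Call your agent however it expects to be called; here the
        # hypothetical agent is just a callable returning a string.
        output = self.agent(input_text, **(context or {}))
        latency_ms = (time.perf_counter() - start) * 1000
        # Populate tool_calls / reasoning_trace from your agent's own
        # trace format; left empty in this sketch.
        return StubResult(output, [], [], latency_ms)

adapter = MyCustomAdapter(lambda text, **ctx: f"echo: {text}")
result = adapter.run("hi", context={"user_id": "123"})
print(result.output)  # echo: hi
```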