
AI-Assisted Setup Prompts

Use these prompts to have an AI coding agent set up AgenticAssure in your project automatically. Copy the prompt, fill in the blank describing your agent, and paste it into your tool of choice.


Claude Code

Paste this into Claude Code from your project’s root directory:

Read the AgenticAssure documentation at https://docs.agenticassure.com to understand how the SDK works, including the adapter protocol, scenario YAML format, available scorers, and CLI commands. Then inspect this codebase to find my AI agent implementation. Understand what it does, what LLM provider it uses, what tools it has access to, and how it's invoked.

My agent: [DESCRIBE YOUR AGENT IN 1-2 SENTENCES — e.g. "A customer support agent built with the Anthropic API that can look up orders and process refunds" or "A coding assistant that runs as a CLI tool and generates Python files"]

Once you understand both AgenticAssure and my agent, do the following:

1. Install AgenticAssure with the appropriate extras for my agent's LLM provider:
   - pip install agenticassure (base)
   - pip install agenticassure[anthropic] (if using Anthropic)
   - pip install agenticassure[similarity] (if I want semantic similarity scoring)
2. Create an adapter class that wraps my agent and implements the AgentAdapter protocol. If my agent is a CLI tool or autonomous agent, use the SubprocessAdapter instead of writing a custom adapter. Put this in a file called agenticassure_adapter.py in my project root.
3. Create a test scenarios YAML file at tests/scenarios/test_suite.yaml with 5-8 realistic test scenarios for my specific agent. Each scenario should:
   - Test a different capability or edge case of my agent
   - Use the appropriate scorers (passfail, exact, regex, or similarity) based on what's being tested
   - Include expected_tools if my agent uses tool calling
   - Include expected_files if my agent creates or modifies files
   - Have descriptive names and relevant tags
4. Create an agenticassure.yaml config file in my project root pointing to my adapter.
5. Run `agenticassure validate tests/scenarios/test_suite.yaml` to verify the scenarios are valid.
6. Run `agenticassure run tests/scenarios/test_suite.yaml --adapter agenticassure_adapter.MyAgentAdapter` (adjust the class name to match what you created) and show me the results.
7. If any tests fail due to adapter issues or scenario misconfiguration (not actual agent failures), fix them and rerun.

After everything passes, give me a summary of what was set up and how to run tests going forward.
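For orientation before you run the prompt, here is a minimal sketch of the kind of adapter the agent is asked to write in step 2. The interface shown (a single `run` method returning a result with an `output` field) is an assumption, not the documented AgentAdapter protocol, and `my_agent_respond` is a hypothetical stand-in for your agent's real entry point — check the AgenticAssure docs for the actual interface:

```python
# Hypothetical sketch of agenticassure_adapter.py. The protocol shape
# (a single `run` method returning a result with an `output` field) is
# an assumption; consult the AgenticAssure docs for the real interface.
from dataclasses import dataclass, field


def my_agent_respond(prompt: str) -> str:
    """Placeholder for your agent's real entry point (LLM call, tools, etc.)."""
    return f"echo: {prompt}"


@dataclass
class AgentResult:
    """Assumed result shape: final text plus any tool calls the agent made."""
    output: str
    tool_calls: list[str] = field(default_factory=list)


class MyAgentAdapter:
    """Wraps the agent so a test runner can invoke it uniformly."""

    def run(self, prompt: str) -> AgentResult:
        # Forward the scenario prompt to the agent and wrap its reply.
        return AgentResult(output=my_agent_respond(prompt))
```

If your agent is a CLI tool, the prompt instead directs the AI to use the built-in SubprocessAdapter rather than writing a class like this by hand.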

Codex / OpenAI Codex CLI

Paste this into Codex from your project’s root directory:

Read the AgenticAssure documentation at https://docs.agenticassure.com to understand how the SDK works, including the adapter protocol, scenario YAML format, available scorers, and CLI commands. Then inspect this codebase to find my AI agent implementation. Understand what it does, what LLM provider it uses, what tools it has access to, and how it's invoked.

My agent: [DESCRIBE YOUR AGENT IN 1-2 SENTENCES — e.g. "A customer support agent built with the OpenAI API that can search a knowledge base and create tickets" or "A data analysis agent using LangChain that queries databases and generates reports"]

Once you understand both AgenticAssure and my agent, do the following:

1. Install AgenticAssure with the appropriate extras for my agent's LLM provider:
   - pip install agenticassure (base)
   - pip install agenticassure[anthropic] (if using Anthropic)
   - pip install agenticassure[similarity] (if I want semantic similarity scoring)
2. Create an adapter class that wraps my agent and implements the AgentAdapter protocol. If my agent is a CLI tool or autonomous agent, use the SubprocessAdapter instead of writing a custom adapter. Put this in a file called agenticassure_adapter.py in my project root.
3. Create a test scenarios YAML file at tests/scenarios/test_suite.yaml with 5-8 realistic test scenarios for my specific agent. Each scenario should:
   - Test a different capability or edge case of my agent
   - Use the appropriate scorers (passfail, exact, regex, or similarity) based on what's being tested
   - Include expected_tools if my agent uses tool calling
   - Include expected_files if my agent creates or modifies files
   - Have descriptive names and relevant tags
4. Create an agenticassure.yaml config file in my project root pointing to my adapter.
5. Run `agenticassure validate tests/scenarios/test_suite.yaml` to verify the scenarios are valid.
6. Run `agenticassure run tests/scenarios/test_suite.yaml --adapter agenticassure_adapter.MyAgentAdapter` (adjust the class name to match what you created) and show me the results.
7. If any tests fail due to adapter issues or scenario misconfiguration (not actual agent failures), fix them and rerun.

After everything passes, give me a summary of what was set up and how to run tests going forward.
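To give a sense of what the generated tests/scenarios/test_suite.yaml might contain, here is a hedged sketch. The field names (name, tags, prompt, scorer, pattern, expected_tools, expected_files) are assumptions pieced together from the concepts the prompts mention, not the documented schema — verify the exact format against the AgenticAssure docs or with `agenticassure validate`:

```yaml
# Hypothetical sketch of tests/scenarios/test_suite.yaml. Field names
# are assumptions based on the scorers and expectations described above;
# check the AgenticAssure docs for the real schema.
scenarios:
  - name: refund_lookup_happy_path
    tags: [refunds, smoke]
    prompt: "Please refund order #1234."
    scorer: passfail
    expected_tools: [lookup_order, process_refund]

  - name: report_file_created
    tags: [files]
    prompt: "Generate a summary report for last week."
    scorer: regex
    pattern: "report (generated|created)"
    expected_files: [reports/summary.md]
```

Each scenario exercises one capability and pairs it with the scorer that fits: passfail for judgment calls, regex when the wording can vary but must match a pattern, exact or similarity for fixed or semantically close answers.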

Tips

  • Be specific in your agent description. The more detail you give about what your agent does, the better the generated scenarios will be. Mention the tools it uses, the types of queries it handles, and any edge cases you care about.
  • Review the generated scenarios. The AI will make reasonable guesses about what to test, but you know your agent best. Add, remove, or adjust scenarios after the initial setup.
  • Run with different output formats. After setup, try `agenticassure run tests/scenarios/ -o html` for a shareable report or `-o json` for programmatic results.
  • Iterate. Start with the generated scenarios, then add more as you discover edge cases in production. The goal is to build a comprehensive test suite over time.