
Commit 895162f

feat: add implementation guides to assist in AI-driven development
1 parent b19a2f9 commit 895162f

File tree: 3 files changed, +411 -0 lines changed


implementation-guides/evals.md

Lines changed: 144 additions & 0 deletions
@@ -0,0 +1,144 @@

# Evaluation Implementation Guide

This guide explains how to create evaluation tests (`.eval.ts` files) for testing AI model interactions with specific tools or systems, such as Cloudflare Worker bindings or container environments.

## What are Evals?

Evals are automated tests designed to verify if an AI model correctly understands instructions and utilizes its available "tools" (functions, API calls, environment interactions) to achieve a desired outcome. They assess the model's ability to follow instructions, select appropriate tools, and provide correct arguments to those tools.

## Core Concepts

Evals are typically built using a testing framework like `vitest` combined with specialized evaluation libraries like `vitest-evals`. The main structure revolves around `describeEval`:
```typescript
import { expect } from 'vitest'
import { describeEval } from 'vitest-evals'

import { checkFactuality } from '@repo/eval-tools/src/scorers'
import { eachModel } from '@repo/eval-tools/src/test-models'

import { initializeClient, runTask } from './utils' // Helper functions

eachModel('$modelName', ({ model }) => {
  // Optional: Run tests for multiple models
  describeEval('A descriptive name for the evaluation suite', {
    data: async () => [
      /* Test cases */
    ],
    task: async (input) => {
      /* Test logic */
    },
    scorers: [
      /* Scoring functions */
    ],
    threshold: 1, // Passing score threshold
    timeout: 60000, // Test timeout
  })
})
```
### Key Parts:

1. **`describeEval(name, options)`**: Defines a suite of evaluation tests.

   - `name`: A string describing the purpose of the eval suite.
   - `options`: An object containing the configuration for the eval:
     - **`data`**: An async function returning an array of test case objects. Each object typically contains:
       - `input`: (string) The instruction given to the AI model.
       - `expected`: (string) A natural language description of the _expected_ sequence of actions or outcome. This is used by scorers.
     - **`task`**: An async function that executes the actual test logic for a given `input`. It orchestrates the interaction with the AI/system and performs assertions.
     - **`scorers`**: An array of scoring functions (e.g., `checkFactuality`) that evaluate the test outcome based on the `promptOutput` from the `task` and the `expected` string from the `data`.
     - **`threshold`**: (number, usually between 0 and 1) The minimum score required from the scorers for the test case to pass. A threshold of `1` means a perfect score is required.
     - **`timeout`**: (number) Maximum time in milliseconds allowed for a single test case.

2. **`task(input)` Function**: The heart of the eval. It typically involves:

   - **Setup**: Initializing a client or test environment (`initializeClient`). This prepares the system for the test, configuring available tools or connections.
   - **Execution**: Running the actual interaction (`runTask`). This function sends the `input` instruction to the AI model via the client and captures the results, which usually include:
     - `promptOutput`: The textual response from the AI model.
     - `toolCalls`: A structured list of the tools the AI invoked, along with the arguments passed to each tool.
   - **Assertions (`expect`)**: Using the testing framework's assertion library (`vitest`'s `expect` in the examples) to verify that the correct tools were called with the correct arguments based on the `toolCalls` data. Sometimes this involves direct interaction with the system state (e.g., reading a file created by a tool) to confirm the outcome.
   - **Return Value**: The `task` function usually returns the `promptOutput` to be evaluated by the `scorers`.

3. **Scoring (`checkFactuality`, etc.)**: Automated functions that compare the actual outcome (represented by the `promptOutput` and implicitly by the assertions passed within the `task`) against the `expected` description.

4. **Helper Utilities (`./utils`)** (a sketch of these helpers follows this list):
   - `initializeClient()`: Sets up the testing environment, connects to the system under test, and configures the available tools for the AI model.
   - `runTask(client, model, input)`: Sends the input prompt to the specified AI model using the configured client, executes the model's reasoning and tool use, and returns the results (`promptOutput`, `toolCalls`).
   - `eachModel()`: (Optional) A utility to run the same evaluation suite against multiple different AI models.
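
For illustration only, the helpers might look roughly like the sketch below. This is a minimal sketch assuming the Vercel AI SDK (`ai` package) and a fake in-memory tool; the real `initializeClient`/`runTask` used by this repo may differ in names, signatures, and setup.

```typescript
// utils.ts — a minimal sketch, not the repo's actual helpers.
// Assumes the Vercel AI SDK ("ai", v4-style API); property names
// (parameters/args, maxSteps, steps) vary between SDK versions.
import { generateText, tool, type LanguageModel } from 'ai'
import { z } from 'zod'

export async function initializeClient() {
  const store = new Map<string, string>() // fake system under test

  return {
    tools: {
      kv_write: tool({
        description: 'Write a value to a KV namespace',
        parameters: z.object({ key: z.string(), value: z.string() }),
        execute: async ({ key, value }) => {
          store.set(key, value)
          return { success: true }
        },
      }),
    },
  }
}

export async function runTask(
  client: Awaited<ReturnType<typeof initializeClient>>,
  model: LanguageModel,
  input: string
) {
  const result = await generateText({
    model,
    prompt: input,
    tools: client.tools,
    maxSteps: 5, // let the model call tools, then produce a final answer
  })

  return {
    promptOutput: result.text,
    toolCalls: result.steps.flatMap((step) => step.toolCalls), // [{ toolName, args }, ...]
  }
}
```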
## Steps to Implement Evals

1. **Identify Tools:** Define the specific actions or functions (the "tools") that the AI should be able to use within the system you're testing (e.g., `kv_write`, `d1_query`, `container_exec`).
2. **Create Helper Functions:** Implement your `initializeClient` and `runTask` (or similarly named) functions.
   - `initializeClient`: Should set up the necessary context, potentially using test environments like `vitest-environment-miniflare` for Workers. It needs to make the defined tools available to the AI model simulation.
   - `runTask`: Needs to simulate the AI processing: take an input prompt, interact with an LLM (or a mock) configured with the tools, capture which tools are called and with what arguments, and capture the final text output.
3. **Create Eval File (`*.eval.ts`):** Create a new file (e.g., `kv-operations.eval.ts`).
4. **Import Dependencies:** Import `describeEval`, scorers, helpers, `expect`, etc.
5. **Structure with `describeEval`:** Define your evaluation suite.
6. **Define Test Cases (`data`):** Write specific test scenarios:
   - Provide clear, unambiguous `input` prompts that target the tools you want to test.
   - Write concise `expected` descriptions detailing the primary tool calls or outcomes anticipated.
7. **Implement the `task` Function:**
   - Call `initializeClient`.
   - Call `runTask` with the `input`.
   - Write `expect` assertions to rigorously check:
     - Were the correct tools called? (`toolName`)
     - Were they called in the expected order (if applicable)?
     - Were the arguments passed to the tools correct? (`args`)
   - (Optional) Interact with the system state if necessary to verify side effects.
   - Return the `promptOutput`.
8. **Configure Scorers and Threshold:** Choose appropriate scorers (often `checkFactuality`) and set a `threshold`.
9. **Run Tests:** Execute the evals using your test runner (e.g., `vitest run`); one possible runner configuration is sketched after this list.
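
A runner configuration along these lines (an assumption for illustration, not taken from this repo) keeps eval files separate from regular unit tests and gives them a longer timeout:

```typescript
// vitest.config.evals.ts — hypothetical configuration for running only evals
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    include: ['**/*.eval.ts'], // pick up eval files only
    testTimeout: 60_000, // evals call real models, so allow generous timeouts
  },
})
```

With a setup like this, the suite could be run via `vitest run --config vitest.config.evals.ts`.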
## Example Structure (Simplified)

```typescript
// my-feature.eval.ts
import { expect } from 'vitest'
import { describeEval } from 'vitest-evals'

import { checkFactuality } from '@repo/eval-tools/src/scorers'

import { initializeClient, runTask } from './utils'

describeEval('Tests My Feature Tool Interactions', {
  data: async () => [
    {
      input: 'Use my_tool to process the data "example"',
      expected: 'The my_tool tool was called with data set to "example"',
    },
    // ... more test cases
  ],
  task: async (input) => {
    const client = await initializeClient() // Sets up environment with my_tool
    const { promptOutput, toolCalls } = await runTask(client, 'your-model', input)

    // Check if my_tool was called
    const myToolCall = toolCalls.find((call) => call.toolName === 'my_tool')
    expect(myToolCall).toBeDefined()

    // Check arguments passed to my_tool
    expect(myToolCall?.args).toEqual(
      expect.objectContaining({
        data: 'example',
        // ... other expected args
      })
    )

    return promptOutput // Return AI output for scoring
  },
  scorers: [checkFactuality],
  threshold: 1,
})
```
## Best Practices

- **Clear Inputs:** Write inputs as clear, actionable instructions.
- **Specific Expected Outcomes:** Make `expected` descriptions precise enough for scorers, but focus on the key actions.
- **Targeted Assertions:** Use `expect` to verify the most critical aspects of tool calls (tool name, key arguments) without over-asserting on trivial details.
- **Isolate Tests:** Ensure each test case in `data` tests a specific interaction or a small sequence of interactions.
- **Helper Functions:** Keep `initializeClient` and `runTask` generic enough to be reused across different eval files for the same system.
- **Use `expect.objectContaining` or `expect.stringContaining`:** Often you only need to verify _parts_ of the arguments, not the entire structure, which makes tests less brittle (see the snippet after this list).
- **Descriptive Names:** Use clear names for `describeEval` blocks and meaningful `input`/`expected` strings.
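
As a small illustration of partial matching inside a `task` function (the tool name and argument names here are hypothetical, not from this repo):

```typescript
// Verify only the parts of the arguments that matter for this test case.
const writeCall = toolCalls.find((call) => call.toolName === 'kv_write')
expect(writeCall?.args).toEqual(
  expect.objectContaining({
    namespaceId: expect.stringContaining('my-namespace'), // partial string match
    value: 'example', // exact match only where it matters
  })
)
```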

implementation-guides/tools.md

Lines changed: 127 additions & 0 deletions
@@ -0,0 +1,127 @@

# MCP Tool Implementation Guide

This guide explains how to implement and register tools within an MCP (Model Context Protocol) server, enabling AI models to interact with external systems, APIs, or specific functionalities like Cloudflare services.

## Purpose of Tools

Tools are the mechanism by which an MCP agent (powered by an LLM) can perform actions beyond generating text. They allow the agent to accomplish many tasks, including:

- Interacting with APIs (e.g., the Cloudflare API, other REST APIs).
- Querying databases or vector stores (like Autorag).
- Accessing environment resources (KV, R2, D1, Service Bindings).
- Performing specific computations or data transformations.

## Registering a Tool

Tools are registered using the `agent.server.tool()` method.
```typescript
// Import your Zod schemas
import { z } from 'zod'

import { getCloudflareClient } from '../cloudflare-api'
import { MISSING_ACCOUNT_ID_RESPONSE } from '../constants'
import { type CloudflareMcpAgent } from '../types/cloudflare-mcp-agent'
import { KvNamespaceIdSchema, KvNamespaceTitleSchema } from '../types/kv_namespace'

export function registerMyServiceTools(agent: CloudflareMcpAgent) {
  agent.server.tool(
    'tool_name', // String: Unique name for the tool
    'Detailed description', // String: Description for the LLM (CRITICAL!)
    {
      // Object: Parameter definitions using Zod schemas
      param1: MyParam1Schema,
      param2: MyParam2Schema.optional(),
      // ... other parameters
    },
    async (params) => {
      // Async Function: The implementation logic
      // params contains the validated parameters { param1, param2, ... }

      // --- Tool Logic Start ---
      try {
        // Access agent context if needed (e.g., account ID, credentials)
        const account_id = await agent.getActiveAccountId()
        if (!account_id) {
          return MISSING_ACCOUNT_ID_RESPONSE // Handle missing context
        }

        // Perform the action (e.g., call SDK, query DB)
        // const client = getCloudflareClient(agent.props.accessToken);
        // const result = await client.someService.someAction(...);

        // Format the successful response
        return {
          content: [
            {
              type: 'text',
              text: JSON.stringify({ success: true /*, result */ }),
            },
            // Or potentially EmbeddedResource for richer data
          ],
        }
      } catch (error) {
        // Format the error response
        return {
          content: [
            {
              type: 'text',
              text: `Error performing action: ${error instanceof Error ? error.message : String(error)}`,
            },
          ],
        }
      }
      // --- Tool Logic End ---
    }
  )

  // ... register other tools ...
}
```
### Key Components:

1. **`toolName` (string):**

   - A unique identifier for the tool.
   - **Convention:** Use `snake_case`. Typically `service_noun_verb` (e.g., `kv_namespace_create`, `hyperdrive_config_list`, `docs_search`).

2. **`description` (string - Max 1024 chars):**

   - **This is the MOST CRITICAL part for LLM interaction.** The LLM uses this description _exclusively_ to decide _when_ to use the tool and _what_ it does.
   - **A good description should include:**
     - **Core Purpose:** What does the tool _do_? (e.g., "List Hyperdrive configurations", "Search Cloudflare documentation").
     - **When to Use:** Provide clear scenarios or user intents that should trigger this tool. Use bullet points or clear instructions. (e.g., "Use this when a user asks to see their Hyperdrive setups", "Use this tool when: a user asks about Cloudflare products; you need info on a feature; you are unsure how to use Cloudflare functionality; you are writing Workers code and need docs").
     - **Inputs:** Briefly mention key inputs if not obvious from parameter names.
     - **Outputs:** Briefly describe what the tool returns (e.g., "Returns a list of namespace objects", "Returns search results as embedded resources").
     - **Example Workflows/Follow-ups (Optional but helpful):** Suggest how this tool fits into a larger task or what tools might be used next (e.g., "After creating a namespace with `kv_namespace_create`, you might bind it to a Worker.", "Use `hyperdrive_config_get` to view details before using `hyperdrive_config_edit`.").
   - **Be specific and unambiguous.** Avoid jargon unless it's essential domain terminology the LLM should understand.
   - **Keep it concise** while conveying necessary information.

3. **`parameters` (object):**

   - An object mapping parameter names (keys) to their corresponding Zod schemas (values).
   - Follow the principles outlined in the `implementation-guides/type-validators.md` guide. A small example of parameter schemas in use follows this list.

4. **`handlerFunction` (async function):**
   - The asynchronous function that executes the tool's logic.
   - It receives a single argument: an object (`params`) containing the validated parameters passed by the LLM, matching the keys defined in the `parameters` object.
   - **Implementation Details:**
     - **Access Context:** Use `agent.getActiveAccountId()`, `agent.props.accessToken`, `agent.env` (for worker bindings like AI, D1, R2) to get necessary credentials, environment variables, or bindings.
     - **Error Handling:** Wrap the core logic in a `try...catch` block to gracefully handle failures (e.g., API errors, network issues, invalid inputs not caught by Zod).
     - **Perform Action:** Interact with the relevant service (Cloudflare SDK, database, vector store, etc.).
     - **Format Response:** Return an object with a `content` property, which is an array of `ContentBlock` objects (usually `type: 'text'` or `type: 'resource'`).
       - For simple success/failure or structured data, `JSON.stringify` the result in a text block.
       - For richer data like search results, use `EmbeddedResource` (`type: 'resource'`) as seen in `docs.ts`.
       - Return clear error messages in the `text` property of a content block upon failure.
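
To make items 2 and 3 concrete, here is a hedged sketch of parameter schemas and a small, focused tool registration built from them, placed inside a registration function like `registerMyServiceTools` above. The schema names, description text, and SDK call are illustrative assumptions following the conventions in this guide, not the exact definitions in this repo.

```typescript
// Illustrative parameter schemas (assumed, not verbatim from types/kv_namespace.ts)
import { z } from 'zod'

export const KvNamespaceTitleSchema = z
  .string()
  .min(1)
  .describe('The human-readable title of the KV namespace')

// A granular tool that uses the schema above; lives inside a register*Tools function.
agent.server.tool(
  'kv_namespace_create',
  'Create a new Workers KV namespace in the active Cloudflare account. ' +
    'Use this when the user asks to create, add, or set up a KV namespace. ' +
    'Input: a title for the namespace. Returns the created namespace object as JSON.',
  { title: KvNamespaceTitleSchema },
  async ({ title }) => {
    try {
      const account_id = await agent.getActiveAccountId()
      if (!account_id) {
        return MISSING_ACCOUNT_ID_RESPONSE
      }
      const client = getCloudflareClient(agent.props.accessToken)
      // Hypothetical SDK call; the exact Cloudflare SDK method may differ.
      const namespace = await client.kv.namespaces.create({ account_id, title })
      return {
        content: [{ type: 'text', text: JSON.stringify(namespace) }],
      }
    } catch (error) {
      return {
        content: [
          {
            type: 'text',
            text: `Error creating KV namespace: ${error instanceof Error ? error.message : String(error)}`,
          },
        ],
      }
    }
  }
)
```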
## Best Practices

- **Clear Descriptions are Paramount:** Invest time in writing excellent tool descriptions. This has the biggest impact on the LLM's ability to use tools effectively.
- **Granular Tools:** Prefer smaller, focused tools over monolithic ones (e.g., separate `_create`, `_list`, `_get`, `_update`, `_delete` tools for a resource).
- **Robust Error Handling:** Anticipate potential failures and return informative error messages to the LLM.
- **Consistent Naming:** Follow naming conventions for tools and parameters.
- **Use Zod Validators:** Leverage Zod for input validation as described in the validator guide.
- **Leverage Agent Context:** Use `agent.props`, `agent.env`, and helper methods like `agent.getActiveAccountId()` appropriately.
- **Statelessness:** Aim for tools to be stateless where possible. Rely on parameters and agent context for necessary information.
- **Security:** Be mindful of the actions tools perform, especially destructive ones (`delete`, `update`). Ensure proper authentication and authorization context is used (e.g., checking the active account ID).
