Tool Reference
All tools require a valid Authorization: Bearer sk_live_... header. Credits are deducted atomically before execution — if balance is insufficient, a 402 error is returned.
The gateway exposes 10 tools — creating a site, listing/analyzing sites, generating and managing A/B test ideas, generating variant code from natural-language prompts, and creating / launching / inspecting experiments.
Typical flows
Before calling any site-scoped tool (everything except list_sites and create_site) make sure the workspace has a registered site. A tool call against a missing site_id returns an actionable error telling the agent what to do next.
One-shot: prompt → running test (site already registered, 15 credits)
When speed matters more than iteration:
create_experiment({
site_id, name,
prompt: "Change hero price from $99 to $499 and update the Stripe link",
target_url: "https://mysite.com/pricing",
auto_launch: true
}) → 5 create + 10 inline generate = 15 credits
Server authors the variant, creates the experiment, and launches it in one call. If the page can't be resolved, returns needs_clarification and nothing is created.
Two-step: prompt → preview → create (site already registered, 17 credits)
When the agent wants to show the variant to the user before committing:
generate_variant({ site_id, prompt, target_url }) → 10 credits (mutations + hypothesis)
create_experiment({ site_id, name,
variants: [{ changes: variant.mutations }], goals, hypothesis })
→ 5 credits (draft)
launch_experiment({ experiment_id }) → 2 credits (running)
New-site path (21 credits, one-shot variant)
list_sites → 1 credit (confirm what's there)
create_site({ url, reuse_if_exists: true }) → 5 credits (idempotent for automation)
create_experiment({ site_id, name, prompt, target_url, auto_launch: true })
→ 15 credits
From a pre-generated idea (legacy path, 8 credits)
list_ideas({ site_id }) → 1 credit
create_experiment({ site_id, idea_id, name, goals }) → 5 credits (variant.mutations pulled from the idea automatically)
launch_experiment({ experiment_id }) → 2 credits
Guardrails the service enforces
- Missing `site_id` on a site-scoped tool → 400 `site_id is required`.
- Unknown `site_id` → 404 with either "No sites registered in this workspace. Call `create_site` first." (zero-sites case) or "Call `list_sites` … or `create_site` to add a new one." (some sites, just not this one).
- Duplicate domain on `create_site` → 409 with `existing_site` included in the error so the agent can reuse it without a second round-trip. Pass `reuse_if_exists: true` to convert that to a 200 with `reused: true`.
- Target page on a different domain than the registered site → the variant still returns, but `context.warnings` includes a mismatch notice so the agent can flag it to the user (most often a typo or wrong `site_id`).
- Launch preconditions — experiment needs ≥2 variants (control + ≥1 test) and ≥1 goal. Otherwise `launch_experiment` returns a 400 telling you exactly what's missing.
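Because each guardrail names the next tool to call, an agent can turn them into a deterministic recovery step. A minimal sketch, assuming a hypothetical `nextAction()` helper (the status codes and messages follow the guardrails above; the helper itself is not part of the API):

```javascript
// Hypothetical agent-side helper: map a SplitKit error response to the
// next tool call. Status codes and message fragments follow the
// documented guardrails; the function name and shape are illustrative.
function nextAction(status, body) {
  if (status === 404 && /No sites registered/.test(body.error || '')) {
    return { tool: 'create_site' };                 // zero-sites case
  }
  if (status === 404) {
    return { tool: 'list_sites' };                  // wrong site_id
  }
  if (status === 409 && body.existing_site) {
    // Reuse the site returned in the conflict — no second round-trip.
    return { tool: null, site_id: body.existing_site.id };
  }
  if (status === 400) {
    return { tool: null, fix: body.error };         // missing param / launch preconditions
  }
  return { tool: null };
}
```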
create_site
Register a new website in your workspace for A/B testing. Returns the site ID along with the SplitKit tracking snippet and install instructions — the snippet must be added to the <head> of the target site before experiments can run.
Subject to the workspace's plan-based site limit (Starter: 1, Pro: 10, Enterprise: 25). Duplicate domains return a conflict response that includes the existing site — the agent can reuse it without a second call. Pass reuse_if_exists: true to treat duplicates as successful no-ops (idempotent behaviour, recommended for automation).
Credits: 5
Parameters:
| Name | Type | Required | Description |
|---|---|---|---|
| url | string | yes | The website URL (e.g. https://example.com). Protocol is added if missing. |
| name | string | no | Display name. Defaults to the domain (e.g. example.com). |
| reuse_if_exists | boolean | no | Default false. When true, a duplicate-domain call succeeds (200) with { reused: true, site } instead of returning a conflict. |
Duplicate response (conflict, default behaviour):
{
"error": "A site for mystore.com is already registered (ID: abc-123). Reuse it via list_sites / its id, or call create_site again with reuse_if_exists: true for idempotent behavior.",
"code": "conflict",
"existing_site": { "id": "abc-123", "name": "My Store", "url": "https://mystore.com/", "public_key": "39cfc..." }
}
Duplicate response (reuse_if_exists: true):
{
"reused": true,
"site": { "id": "abc-123", "name": "My Store", "url": "https://mystore.com/", "public_key": "39cfc..." },
"snippet": "<script src=\"https://splitkit.dev/s/39cfc...js\"></script>",
"instructions": "Site \"My Store\" (https://mystore.com/) is already registered (ID: abc-123). Reusing it."
}
Example (MCP):
{
"jsonrpc": "2.0",
"id": 1,
"method": "tools/call",
"params": {
"name": "create_site",
"arguments": { "url": "https://mystore.com", "name": "My Store" }
}
}
Response:
{
"site": {
"id": "ca7ee226-ebf1-4485-b993-a61c82464b35",
"name": "My Store",
"url": "https://mystore.com/",
"public_key": "39cfc3dd2e26d9452e545073b7fdb29f"
},
"snippet": "<script src=\"https://splitkit.dev/s/39cfc3dd2e26d9452e545073b7fdb29f.js\"></script>",
"instructions": "Site \"My Store\" created successfully (ID: ca7ee226-...).\n\nAdd this snippet to the <head> of https://mystore.com/, before any other scripts:\n\n<script src=\"https://splitkit.dev/s/39cfc3dd2e26d9452e545073b7fdb29f.js\"></script>\n\nThe script is synchronous and <1KB — placing it first in <head> prevents A/B variant flicker.\nOnce installed, call generate_ideas to generate AI-powered test ideas for this site."
}
After creating a site, the typical flow is: call generate_ideas with the returned site.id to produce AI-informed test ideas, then create_experiment to launch a test from one of them.
list_sites
List all websites registered in your workspace.
Credits: 1
Parameters: none
Example (MCP):
{
"jsonrpc": "2.0",
"id": 1,
"method": "tools/call",
"params": {
"name": "list_sites",
"arguments": {}
}
}
Response:
{
"sites": [
{
"id": "uuid",
"name": "My Store",
"url": "https://mystore.com",
"platform": "shopify",
"public_key": "39cfc3dd2e26d9452e545073b7fdb29f",
"crawl_status": "complete",
"created_at": "2025-01-01T00:00:00Z"
}
]
}
get_site_intelligence
Get AI-analyzed intelligence for a site: industry, business model, target audience, value propositions, competitors, and top keywords.
Credits: 2
Parameters:
| Name | Type | Required | Description |
|---|---|---|---|
| site_id | string | yes | From list_sites |
Response:
{
"intelligence": {
"industry": "e-commerce",
"business_model": "DTC",
"target_audience": "...",
"value_propositions": ["..."],
"brand_tone": "friendly",
"pain_points": ["..."],
"updated_at": "..."
},
"keywords": [
{ "keyword": "natural skincare", "tier": "primary", "relevance_score": 0.95 }
],
"competitors": [
{ "domain": "competitor.com", "name": "Competitor", "confidence_score": 0.8 }
]
}
list_ideas
List A/B test ideas for a site.
Credits: 1
Parameters:
| Name | Type | Required | Description |
|---|---|---|---|
| site_id | string | yes | From list_sites |
| status | string | no | One of new, approved, implemented, rejected, archived |
| limit | number | no | Max results, default 20, max 100 |
Response:
{
"ideas": [
{
"id": "uuid",
"title": "Test CTA button color",
"hypothesis": "Changing from grey to green will increase CTR by 15%",
"category": "cta",
"priority": "high",
"status": "new",
"estimated_impact": "high",
"created_at": "..."
}
]
}
generate_ideas
Use AI to analyze a site and generate new CRO-informed A/B test ideas. Makes a live AI call — allow 15–30 seconds.
Credits: 10
Parameters:
| Name | Type | Required | Description |
|---|---|---|---|
| site_id | string | yes | From list_sites |
| num_ideas | number | no | Ideas to generate, default 5, max 10 |
Response:
{
"success": true,
"ideas": [
{
"id": "uuid",
"title": "...",
"hypothesis": "...",
"category": "layout",
"priority": "medium"
}
]
}
generate_variant
Generate A/B test variant code from a natural-language prompt. This is the path for creating experiments without going through the idea-generation flow — useful when you (or the user behind the agent) already know what you want to test.
The service:
- Resolves which page to target (from `target_url`, or extracted from the prompt, or a homepage hint)
- Fetches the live DOM (JS-rendered via our browser worker, cached for 7 days)
- Detects the tech stack (Shopify, Next.js, Webflow, Stripe Checkout, etc.)
- Pulls your site intelligence, top keywords, and competitors
- Calls an LLM in JSON mode with all of that context + your prompt
- Validates the output (selectors are checked against the fetched HTML; unknown selectors emit warnings)
Pass the returned variant.mutations directly into create_experiment as variants[0].changes.
Credits: 10 (same class as generate_ideas — includes HTML fetch + LLM call). Charged whether the response is a variant or a needs_clarification. Enterprise workspaces: unmetered.
Parameters:
| Name | Type | Required | Description |
|---|---|---|---|
| site_id | string | yes | From list_sites or create_site |
| prompt | string | yes | Plain-English description of the test (min 10 chars). Include full URLs for any Stripe products, competitor references, etc. |
| target_url | string | no | Specific page URL the change applies to. If omitted, we try to extract a URL from the prompt; if still ambiguous, the response is needs_clarification instead of a variant. |
Example (MCP):
{
"jsonrpc": "2.0",
"id": 1,
"method": "tools/call",
"params": {
"name": "generate_variant",
"arguments": {
"site_id": "e0e3f741-3314-4e88-925f-327b08ecf9d2",
"target_url": "https://mysite.com/pricing",
"prompt": "Change the hero price from $99 to $499 and update the CTA link to the Stripe Pro-plan checkout at https://buy.stripe.com/pro_abc123"
}
}
}
Response (happy path):
{
"variant": {
"name": "Premium Pricing with Pro Stripe Link",
"mutations": [
{
"selector": ".plan--pro .price",
"action": "set_content",
"payload": "$499",
"description": "Replace the displayed price"
},
{
"selector": ".plan--pro a.cta",
"action": "set_attribute",
"attribute_name": "href",
"payload": "https://buy.stripe.com/pro_abc123",
"description": "Swap to the higher-tier Stripe product"
}
],
"redirect_url": null
},
"suggested_goals": [
{ "name": "Pro checkout click", "type": "click", "selector": ".plan--pro a.cta" }
],
"hypothesis": "Raising the price from $99 to $499 positions the product as premium; if the lift in ACV exceeds the conversion drop, revenue per visitor increases.",
"context": {
"page_analyzed": "https://mysite.com/pricing",
"page_fetched_at": "2026-04-24T...",
"tech_stack": "Next.js, Stripe",
"url_pattern": null,
"assumptions": ["Targeted .plan--pro as the primary pricing card"],
"warnings": []
}
}
context.url_pattern is the LLM's inferred glob scope. It's null when the test targets a single page. When the prompt implies a class of pages ("all report pages", "every /checkout/*", "the pricing page and its variants"), the LLM fills this — pass it through to create_experiment so the snippet only fires on matching pages. You can also override it explicitly when calling create_experiment.
Response (clarification needed — no URL, no homepage hint):
The clarification is a structured object so an agent can render a form field, dropdown, or suggestion chips without parsing English:
{
"needs_clarification": {
"retry_hint": "Re-send the same call with target_url set to one of the suggestions below (or any specific URL on your site where this change should apply).",
"questions": [
{
"field": "target_url",
"prompt": "Which page URL should this test target?",
"reason": "The prompt didn't include a URL and the service couldn't infer one from context (no 'homepage' / 'landing' hint either). We need the exact page so we can fetch its DOM and author selectors that actually exist.",
"input_type": "url",
"required": true,
"suggestions": [
"https://mysite.com/",
"https://mysite.com/pricing",
"https://mysite.com/checkout",
"https://mysite.com/about"
]
}
]
}
}
ClarificationQuestion shape:
| Field | Type | Description |
|---|---|---|
| field | string | The API parameter the answer should be set on when retrying (e.g. target_url). |
| prompt | string | Human-readable question — safe to show as-is in a chat bubble. |
| reason | string | Why the server is asking — render as secondary/muted text. |
| input_type | "url" \| "text" \| "select" \| "boolean" | Hint for rendering the input control. |
| required | boolean | Whether an answer is strictly needed to proceed. |
| suggestions | string[] | Optional pre-computed options the user can pick from. For target_url, these are mined from the site's cached homepage links plus common CRO paths. |
When you get needs_clarification, ask the user (or pick a suggestion yourself) and call generate_variant again with the field set.
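That retry loop can be sketched in a few lines, assuming a hypothetical `callTool()` wrapper around your MCP client (the wrapper is not part of the SplitKit API; the response fields follow the shapes documented above):

```javascript
// Sketch of the needs_clarification retry loop. When no user is available
// to answer, fall back to the first server-mined suggestion.
async function generateVariantWithRetry(callTool, args) {
  const res = await callTool('generate_variant', args);
  if (!res.needs_clarification) return res;

  const retry = { ...args };
  for (const q of res.needs_clarification.questions) {
    // Ask the user here, or pick the first suggestion automatically.
    const answer = (q.suggestions && q.suggestions[0]) || null;
    if (q.required && answer == null) throw new Error(q.prompt);
    retry[q.field] = answer;                 // e.g. retry.target_url = "..."
  }
  return callTool('generate_variant', retry); // charged again (10 credits)
}
```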
Chaining into create_experiment:
{
"name": "create_experiment",
"arguments": {
"site_id": "<uuid>",
"name": "Pro pricing $499",
"hypothesis": "<paste from generate_variant.hypothesis>",
"variants": [
{
"name": "<paste variant.name>",
"changes": [/* paste variant.mutations verbatim */],
"redirect_url": null
}
],
"goals": [/* paste suggested_goals verbatim */]
}
}
list_experiments
List A/B experiments for a site.
Credits: 1
Parameters:
| Name | Type | Required | Description |
|---|---|---|---|
| site_id | string | yes | From list_sites |
| status | string | no | One of draft, active, paused, completed, archived |
| limit | number | no | Max results, default 20 |
get_experiment
Get full details for an experiment including all variants and goals.
Credits: 2
Parameters:
| Name | Type | Required | Description |
|---|---|---|---|
| experiment_id | string | yes | From list_experiments |
create_experiment
Create a new A/B experiment. There are four ways to supply the variant:
- One-shot from a prompt — pass `prompt` (and optional `target_url`). The server runs `generate_variant` internally, authors the variant, and creates the experiment in one call. Extra 10 credits on top of the 5 for create. Returns `needs_clarification` (no experiment created) if the target page can't be resolved.
- From a previous `generate_variant` call — pass `variants[0].changes`. Lets the agent preview / edit the mutations before committing.
- From an existing idea — pass `idea_id` (the server pulls `mockup_data.mutations` from that idea).
- Draft — pass none of the above; create an empty experiment to fill in via the dashboard.
Set auto_launch: true to transition the experiment to running right after creation (requires ≥2 variants and ≥1 goal; the response includes a launch field with the outcome). The control variant is always created automatically; traffic_split controls how much traffic goes to the variant (default 50).
Credits: 5 base (10 more if prompt is used). Enterprise plan: unmetered.
Parameters:
| Name | Type | Required | Description |
|---|---|---|---|
| site_id | string | yes | From list_sites |
| name | string | yes | Experiment name |
| prompt | string | no | Plain-English description. Mutually exclusive with idea_id and variants[].changes. |
| target_url | string | no | Used with prompt to tell the LLM which DOM to analyze, and stored on the experiment to build preview URLs. |
| url_pattern | string | no | Glob scoping the experiment to specific pages. Omit for site-wide. See syntax below. When prompt is used and phrases like "all report pages" / "every product page" appear, the LLM populates this automatically. |
| auto_launch | boolean | no | Default false. If true, launch on create when preconditions pass. |
url_pattern syntax (used here and on goals):
| Pattern | Matches | Doesn't match |
|---|---|---|
| (omit) | every page | — |
| /checkout | /checkout, /checkout/step-2 | /checkouts |
| /report/* | /report/42, /report/abc | /report, /report/42/details |
| /product/*/reviews | /product/x/reviews | /product/reviews |
| /blog/** | anything under /blog/ at any depth | /blogs |
* = one path segment, ** = any depth. Evaluated in the browser against location.pathname — scheme/host stripped. The snippet skips the experiment entirely on pages that don't match, so you can run homepage tests and /report/* tests side-by-side without them interfering.
Parameters (continued):
| Name | Type | Required | Description |
|---|---|---|---|
| idea_id | string | no | ID of an idea to source mutations from |
| hypothesis | string | no | One-line statement of what you expect and why |
| traffic_split | number | no | % of traffic on variant (default 50) |
| variants | array | no | Up to one variant: [{ name, changes?, redirect_url? }] |
| variants[].changes | array | no | VariantMutation[] — pass generate_variant's variant.mutations verbatim |
| variants[].redirect_url | string | no | For split-URL tests — send variant traffic here instead of mutating DOM |
| goals | array | no | [{ name, type, selector?, url_pattern? }] — first is primary |
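For illustration, the url_pattern matching rules above can be approximated in a few lines of JavaScript. This is a sketch of the documented semantics (literal patterns also match subpaths, wildcard patterns match at exact depth), not the actual snippet implementation:

```javascript
// Approximate the documented url_pattern semantics against a pathname.
// '*' = one path segment, '**' = any depth. Per the table above, a
// literal pattern like /checkout also matches /checkout/step-2, while
// a wildcard pattern like /report/* does NOT match /report/42/details.
function matchesPattern(pattern, pathname) {
  if (!pattern) return true;                      // omitted → every page
  const hasWildcard = pattern.includes('*');
  const body = pattern
    .replace(/[.+?^${}()|[\]\\]/g, '\\$&')        // escape regex metachars
    .replace(/\*\*/g, '\u0000')                   // placeholder for **
    .replace(/\*/g, '[^/]+')                      // * = exactly one segment
    .replace(/\u0000/g, '.+');                    // ** = any depth
  const tail = hasWildcard ? '$' : '(/.*)?$';     // literals match subpaths too
  return new RegExp('^' + body + tail).test(pathname);
}
```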
Example — one-shot prompt + auto-launch (Claude Desktop):
{
"jsonrpc": "2.0",
"id": 1,
"method": "tools/call",
"params": {
"name": "create_experiment",
"arguments": {
"site_id": "e0e3f741-...",
"name": "Pro pricing $499",
"prompt": "Change hero price from $99 to $499 and link the CTA to https://buy.stripe.com/pro_abc123",
"target_url": "https://mysite.com/pricing",
"auto_launch": true
}
}
}
One-shot response when the prompt needs clarification:
Same structured shape as generate_variant — each question has field, prompt, reason, input_type, required, and optional suggestions. Example:
{
"needs_clarification": {
"retry_hint": "Re-send the same call with target_url set to one of the suggestions below (or any specific URL on your site where this change should apply).",
"questions": [
{
"field": "target_url",
"prompt": "Which page URL should this test target?",
"reason": "The prompt didn't include a URL and the service couldn't infer one from context.",
"input_type": "url",
"required": true,
"suggestions": ["https://mysite.com/", "https://mysite.com/pricing"]
}
]
}
}
No experiment is created; 10 credits are still consumed for the HTML fetch + preflight. Retry with the field set to one of the suggestions.
Auto-launch response when preconditions aren't met:
{
"experiment": { "id": "...", "status": "draft", ... },
"launch": { "status": "draft", "error": "auto_launch skipped: experiment has 2 variant(s) and 0 goal(s); launch requires ≥2 variants (control + ≥1 test) and ≥1 goal." }
}
Preview URLs + activation
Every create response includes:
- `preview_urls[]` — one entry per non-control variant, shape `{ variant_id, variant_name, url }`. The URL is `<target_url>?sk_preview=<variant_id>`; opening it in a browser applies that variant's mutations to the live page without counting as a tracked visit. Perfect for sharing with a stakeholder before launch.
- `activation_hint` — a short string telling the agent how to move the experiment from draft → running.
{
"experiment": { "id": "...", "status": "draft", "target_url": "https://mysite.com/pricing", "variants": [...], "goals": [...] },
"preview_urls": [
{ "variant_id": "var-b-uuid", "variant_name": "Variant B", "url": "https://mysite.com/pricing?sk_preview=var-b-uuid" }
],
"activation_hint": "To launch: pass `auto_launch: true` on create, OR call launch_experiment({ experiment_id }) ... after reviewing the preview URLs below."
}
get_experiment also returns preview_urls so you can fetch them later.
Three ways to activate (draft → running):
| Path | Tool call | When to use |
|---|---|---|
| One-shot | create_experiment({ ..., auto_launch: true }) | Agent is confident enough to ship directly. If preconditions fail, experiment stays draft and the reason is in launch.error. |
| Review then launch | create_experiment(...) → share preview_urls[0].url with the user → launch_experiment({ experiment_id }) | Standard agent flow — lets the user see the change live before real traffic splits. |
| Dashboard | Open https://abtestbot.com/experiments/{id} | Human review + manual launch. |
VariantMutation shape:
{
"selector": "<CSS selector>",
"action": "set_style" | "set_content" | "set_attribute" | "insert_before" | "insert_after" | "add_class" | "remove_class",
"payload": "<value — text, CSS, class name, or attribute value>",
"attribute_name": "<required when action=set_attribute>",
"description": "<optional human-readable note>"
}
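Before passing hand-edited mutations into create_experiment, an agent may want to sanity-check them locally. A hypothetical validator for the shape above, assuming payload is required for every action (the shape marks only description as optional):

```javascript
// Hypothetical client-side check of the VariantMutation shape — not part
// of the SplitKit API. Returns a list of problems (empty = valid).
const ACTIONS = ['set_style', 'set_content', 'set_attribute',
                 'insert_before', 'insert_after', 'add_class', 'remove_class'];

function validateMutation(m) {
  const errors = [];
  if (!m.selector) errors.push('selector is required');
  if (!ACTIONS.includes(m.action)) errors.push('unknown action: ' + m.action);
  if (m.action === 'set_attribute' && !m.attribute_name) {
    errors.push('attribute_name is required when action=set_attribute');
  }
  if (m.payload == null) errors.push('payload is required'); // assumption: always required
  return errors;
}
```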
Example (chaining from generate_variant):
{
"jsonrpc": "2.0",
"id": 2,
"method": "tools/call",
"params": {
"name": "create_experiment",
"arguments": {
"site_id": "<uuid>",
"name": "Premium pricing $499",
"hypothesis": "Raising the price signals quality; lift in ACV exceeds conversion drop.",
"variants": [
{
"name": "Premium Pricing with Pro Stripe Link",
"changes": [
{ "selector": ".plan--pro .price", "action": "set_content", "payload": "$499" },
{ "selector": ".plan--pro a.cta", "action": "set_attribute", "attribute_name": "href", "payload": "https://buy.stripe.com/pro_abc123" }
]
}
],
"goals": [
{ "name": "Pro checkout click", "type": "click", "selector": ".plan--pro a.cta" }
]
}
}
}
Response: full experiment object with generated variants[] (control + your variant) and goals[]. Experiment is created in draft status — call launch_experiment next to start splitting traffic, or let the user launch from the dashboard.
launch_experiment
Transition a draft experiment to running — pushes the variant config to SplitKit's edge KV so the tracking snippet starts serving split traffic immediately.
Credits: 2
Preconditions:
- Experiment must have at least 2 variants (control + ≥1 test variant)
- Experiment must have at least 1 goal
- Experiment must currently be in `draft` status
Parameters:
| Name | Type | Required | Description |
|---|---|---|---|
| experiment_id | string | yes | ID returned by create_experiment |
Example (MCP):
{
"jsonrpc": "2.0",
"id": 3,
"method": "tools/call",
"params": {
"name": "launch_experiment",
"arguments": { "experiment_id": "86773746-b2ff-486d-8846-3957c45fc625" }
}
}
Response:
{
"experiment_id": "86773746-b2ff-486d-8846-3957c45fc625",
"status": "running",
"message": "Experiment launched successfully"
}
Typical three-call flow for prompt-driven testing:
1. `generate_variant(site_id, prompt, target_url)` — 10 credits
2. `create_experiment(site_id, name, variants: [{ name, changes: variant.mutations }], goals, hypothesis)` — 5 credits
3. `launch_experiment(experiment_id)` — 2 credits
Total: 17 credits from prompt → live test.
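The three calls chain naturally in agent code. A sketch assuming a hypothetical `callTool()` MCP wrapper (field names follow the responses documented above):

```javascript
// Prompt → live test in three tool calls (17 credits total).
// callTool(name, args) is a hypothetical wrapper around your MCP client.
async function promptToLiveTest(callTool, siteId, prompt, targetUrl) {
  const gen = await callTool('generate_variant', {        // 10 credits
    site_id: siteId, prompt, target_url: targetUrl
  });
  if (gen.needs_clarification) return gen;                // nothing created yet

  const created = await callTool('create_experiment', {   // 5 credits
    site_id: siteId,
    name: gen.variant.name,
    hypothesis: gen.hypothesis,
    variants: [{ name: gen.variant.name, changes: gen.variant.mutations }],
    goals: gen.suggested_goals
  });

  return callTool('launch_experiment', {                  // 2 credits
    experiment_id: created.experiment.id
  });
}
```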
Error Responses
| Status | Meaning |
|---|---|
| 401 | Invalid or missing API key |
| 402 | Insufficient credits |
| 403 | API access not enabled for workspace |
| 404 | Tool or resource not found |
| 500 | Upstream error |
For MCP, errors are returned as isError: true in the tool result content. For A2A, status.state is "failed".
SplitKit Runtime Integration
Once a site is created and experiments are running, the SplitKit snippet exposes a synchronous JavaScript API that lets any code on the page read which variant the current visitor was assigned to.
How the snippet works
The snippet (<script src="https://splitkit.dev/s/{public_key}.js"></script>) runs synchronously in <head> before first paint. By the time DOMContentLoaded fires, the window.__sk object is fully populated:
window.__sk = {
v: 'visitor-uuid', // persistent visitor ID (localStorage)
s: 'session-uuid', // session ID (sessionStorage)
dt: 'desktop', // 'desktop' | 'tablet' | 'mobile'
sk: 'site-public-key',
a: [ // one entry per running experiment
{
eid: 'experiment-uuid', // experiment ID
vid: 'variant-uuid', // assigned variant ID
ic: false, // true if this is the control variant
gs: [...], // goals
ch: [...] // DOM changes (mutations)
}
],
preview: false // true when ?sk_preview=<variantId> is in the URL
}
Reading the assigned variant
Use window.__sk.getVariant(experimentId) — it returns { variantId, isControl } or null if the experiment isn't running:
var assignment = window.__sk && window.__sk.getVariant('YOUR_EXPERIMENT_UUID')
// { variantId: 'abc123...', isControl: false } — or null
Find your experiment UUID in the dashboard URL (/experiments/{uuid}) or via the get_experiment tool.
Recipe: dynamic price text + Stripe checkout link
This is the standard pattern for a pricing page where Variant B shows a discounted price with a different Stripe payment link:
<!-- 1. SplitKit snippet — must be first script in <head> -->
<script src="https://splitkit.dev/s/YOUR_PUBLIC_KEY.js"></script>
<!-- 2. Price text and buy button in your markup -->
<span id="sk-price">$29/mo</span>
<a id="sk-buy-btn" href="https://buy.stripe.com/CONTROL_LINK">Get started</a>
<!-- 3. Variant bridge — reads assignment, updates DOM before paint -->
<script>
(function () {
var EXP_ID = 'YOUR_EXPERIMENT_UUID'
var VARIANTS = {
'CONTROL_VARIANT_UUID': {
price: '$29/mo',
href: 'https://buy.stripe.com/CONTROL_LINK'
},
'VARIANT_B_UUID': {
price: '$19/mo',
href: 'https://buy.stripe.com/VARIANT_B_LINK'
}
}
var assignment = window.__sk && window.__sk.getVariant(EXP_ID)
var v = (assignment && VARIANTS[assignment.variantId]) || VARIANTS['CONTROL_VARIANT_UUID']
document.getElementById('sk-price').textContent = v.price
document.getElementById('sk-buy-btn').href = v.href
})()
</script>
Place the bridge <script> immediately after any element it reads — since the SplitKit snippet already ran synchronously, window.__sk is available inline.
Recipe: show/hide entire sections
var assignment = window.__sk && window.__sk.getVariant('YOUR_EXPERIMENT_UUID')
var isVariantB = assignment && !assignment.isControl
document.getElementById('section-a').style.display = isVariantB ? 'none' : ''
document.getElementById('section-b').style.display = isVariantB ? '' : 'none'
Preview mode
Append ?sk_preview=VARIANT_UUID to any URL to force a specific variant without being tracked. Useful when testing your bridge code before the experiment is live.
Finding your IDs
| Value | Where to find it |
|---|---|
| Experiment UUID | Dashboard URL /experiments/{uuid} or get_experiment tool response |
| Variant UUIDs | get_experiment → variants[].id |
| Public key | create_site or list_sites → sites[].public_key |