An Agent Run is the lightweight shape: just an agent and a prompt, no SQL blocks above, no destination blocks below. (If you want the agent to sit inside a larger report with its own SQL and Slack/email blocks, see the Agent Step block instead. Webhooks work there too, but the payload is exposed differently.)

Agent Runs can be triggered from outside PushMetrics by an alerting system, a CI job, or any service that can make an HTTP request. The external call carries a JSON payload; the agent reads it and runs its prompt with those values substituted in.

The most common use is hooking an anomaly alert into an investigation agent: the alert fires, the webhook lands, the agent picks up the dataset/metric/window from the payload and starts looking into the cause.


Enabling the webhook

Open the Agent Run editor, scroll to the Webhook Trigger section, toggle it on, and copy the URL. It looks like:

https://app.pushmetrics.io/api/v1/report/<id>/webhook?token=<token>

The token authenticates the call, so keep it private. You can rotate it from the same panel if it leaks.

Sending a payload

POST a JSON body to the webhook URL. Anything you put in the body becomes available to the agent's prompt.

curl -X POST \
  "https://app.pushmetrics.io/api/v1/report/<id>/webhook?token=<token>" \
  -H "Content-Type: application/json" \
  -d '{
    "dataset": "DataAppsPair",
    "metric": "pairs",
    "magnitude": "-23%",
    "window": "2026-04-26T00:00Z..2026-04-27T00:00Z"
  }'

The webhook returns immediately ({"success": true}) while the agent runs in the background. You can also send query-string parameters; they merge with the JSON body.
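The same call can be made from application code instead of curl. A minimal sketch using only the Python standard library; the <id> and <token> placeholders are whatever your Webhook Trigger panel shows, and the final urlopen line is left commented so nothing fires by accident:

```python
import json
import urllib.request

# Placeholder URL -- substitute the report id and token copied
# from the Webhook Trigger panel.
WEBHOOK_URL = "https://app.pushmetrics.io/api/v1/report/<id>/webhook?token=<token>"

def build_request(payload: dict) -> urllib.request.Request:
    """Build the POST request that fires the Agent Run webhook."""
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        WEBHOOK_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request({
    "dataset": "DataAppsPair",
    "metric": "pairs",
    "magnitude": "-23%",
    "window": "2026-04-26T00:00Z..2026-04-27T00:00Z",
})
# urllib.request.urlopen(req) would fire the run; the endpoint
# answers {"success": true} right away while the agent works.
print(req.get_method(), json.loads(req.data)["metric"])
```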

Using payload values in the prompt

The agent's Prompt field is rendered as a Jinja template before the agent sees it. Payload keys are exposed three ways:

  • as {{ params.<key> }}
  • as {{ webhook.parameters.<key> }}
  • as top-level {{ <key> }}
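The three spellings can be previewed locally with stock Jinja by hand-building a context shaped the same way. A sketch, assuming the jinja2 package; the context layout here is an approximation of what PushMetrics constructs, not its actual implementation:

```python
from jinja2 import Template

payload = {"dataset": "DataAppsPair", "metric": "pairs"}

# Approximate the render context: the payload exposed under params,
# under webhook.parameters, and spread at the top level.
context = {
    "params": payload,
    "webhook": {"parameters": payload},
    **payload,
}

a = Template("{{ params.metric }}").render(context)
b = Template("{{ webhook.parameters.metric }}").render(context)
c = Template("{{ metric }}").render(context)
print(a, b, c)  # all three spellings resolve to the same value
```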

Use whichever reads best. A typical anomaly-investigation prompt:

A {{ params.metric }} anomaly was detected on dataset "{{ params.dataset }}".
Magnitude: {{ params.magnitude }}. Window: {{ params.window }}.

Investigate likely upstream causes, summarise the root cause with confidence,
and reply in the alert's Slack thread.

When the webhook fires with the curl above, the agent receives:

A pairs anomaly was detected on dataset "DataAppsPair". Magnitude: -23%. Window: 2026-04-26T00:00Z..2026-04-27T00:00Z. Investigate likely upstream causes…

Standard Jinja features work: filters, conditionals, defaults.

{% if params.severity == "critical" %}This is a CRITICAL alert.{% endif %}
Investigating {{ params.metric | default("the anomaly") }}.
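Rendering the two lines above with stock jinja2 shows both features at once: a payload that carries severity but no metric makes the conditional fire while the default filter fills the gap (a local preview sketch, not the PushMetrics renderer itself):

```python
from jinja2 import Template

prompt = (
    '{% if params.severity == "critical" %}This is a CRITICAL alert.{% endif %}\n'
    'Investigating {{ params.metric | default("the anomaly") }}.'
)

# severity present, metric absent: the if-block renders and
# default() substitutes for the missing key.
rendered = Template(prompt).render(params={"severity": "critical"})
print(rendered)
```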

Missing keys

If the prompt references a key that the payload didn't include, the placeholder is left visible in the rendered prompt (e.g. {{ params.metric }} stays literal). The agent can see the gap and ask for clarification rather than running on an incomplete brief.
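Stock Jinja would normally collapse a missing key to an empty string; the keep-it-literal behavior can be approximated locally with jinja2's DebugUndefined, which is handy for previewing prompts before wiring up the webhook. A sketch of the approximation, not PushMetrics' actual mechanism:

```python
from jinja2 import Environment, DebugUndefined

# DebugUndefined leaves unresolved placeholders in the output
# instead of rendering them as empty strings.
env = Environment(undefined=DebugUndefined)

out = env.from_string("Investigating {{ metric }} on {{ dataset }}.").render(
    dataset="DataAppsPair"
)
print(out)  # the metric placeholder survives, visibly unfilled
```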

Syntax errors

If the prompt has invalid Jinja (an unclosed {{, an unknown filter), the run fails fast with a clear error message in the chat session. No tokens are spent on a broken prompt. Fix the template and retry.
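The same class of mistake can be caught locally before the webhook ever fires: jinja2 raises TemplateSyntaxError at compile time for an unclosed expression or an unknown filter. A sketch of a pre-flight check (PushMetrics surfaces its own error message; this only mimics the fail-fast behavior):

```python
from jinja2 import Environment, TemplateSyntaxError

env = Environment()
broken = "Magnitude: {{ params.magnitude"  # unclosed {{ expression

try:
    env.from_string(broken)  # parsing happens here, before any render
    failed = False
except TemplateSyntaxError as err:
    failed = True
    print(f"template error: {err.message}")
```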

Replying in the original Slack thread

Anomaly alerts usually arrive from a Slack-aware system that can include thread coordinates in the payload. If the JSON body contains a Slack channel and thread, the agent's Slack tool can post its reply right back into the same thread when called with send_to_current_thread: true.

Supported keys (use either pair):

  • slack_channel + thread_ts
  • channel + slack_thread_ts

Example payload:

{
  "dataset": "DataAppsPair",
  "metric": "pairs",
  "magnitude": "-23%",
  "slack_channel": "C0XXXXXXX",
  "thread_ts": "1700000000.123456"
}

The agent posts the root-cause summary as a thread reply on the original alert message instead of starting a new conversation.
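An alerting hook that already knows where it posted the alert can fold those coordinates into the webhook payload. A sketch with hypothetical alert and Slack-message dicts standing in for whatever your alerting system provides:

```python
import json

def build_payload(alert: dict, slack_msg: dict) -> str:
    """Merge anomaly fields with the Slack coordinates of the
    alert message so the agent can reply in the same thread."""
    return json.dumps({
        "dataset": alert["dataset"],
        "metric": alert["metric"],
        "magnitude": alert["magnitude"],
        "slack_channel": slack_msg["channel"],
        "thread_ts": slack_msg["ts"],
    })

# Hypothetical alert event and the Slack message it produced.
payload = build_payload(
    {"dataset": "DataAppsPair", "metric": "pairs", "magnitude": "-23%"},
    {"channel": "C0XXXXXXX", "ts": "1700000000.123456"},
)
print(payload)
```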

What you'll see in PushMetrics

Each webhook call shows up as a normal Agent Run execution:

  • A row in Execution Log: green for success, red for failure.
  • A chat session with the rendered prompt as the user message and everything the agent did as the conversation that follows.
  • Standard email and Slack notifications, if configured on the report.

A failed run (bad Jinja, missing required tools, agent error) still creates the chat session so you can see what was sent and what went wrong, side by side.

Use cases

  • Anomaly investigation: pipe alerts from Grafana, Datadog, or your own detector into an agent that cross-references upstream metrics and posts a root-cause summary.
  • On-demand BI: let an internal tool ask a question and receive a chart in Slack a minute later.
  • Post-deploy checks: trigger a regression-check agent from CI after a release.
  • Customer events: when a webhook fires for a high-value account, run an agent to enrich and route the event.