The Agents page in PushMetrics, listing every agent in the workspace

This page is for workspace admins. It walks you through setting up an agent, picking what it can do, and putting limits on it.

You'll find everything in Settings → AI Capabilities → Agents.

The AI Capabilities menu also has these sub-pages:

  • Agents (where you are now)
  • Inbound Channels for connecting agents to Slack channels
  • Skills for managing the workspace's skill library
  • Usage for workspace-wide token and cost reporting
  • Traces for inspecting individual agent runs in detail

Creating a new agent

1. Go to Settings → AI Capabilities → Agents.
2. Click New Agent.
3. Fill in the form (each section is explained below).
4. Click Save Changes. Your new agent appears in the list and is ready to chat with.

Configuration

The first section of the form covers the agent's identity, persona, and lifecycle.

The Configuration section on the agent edit page, with name, avatar, role, status, and default agent fields

Name

Give the agent a name people will recognise. It shows up in the agent picker, in the conversation list on the Requests page, and as the bold sender name when the agent posts back to Slack. Up to 256 characters.

Avatar

Upload a profile picture. The avatar appears next to the agent's messages on the Requests page in PushMetrics.

Role

This is the agent's job description. It tells the agent what it's for, what tone to use, and what rules to follow. Up to 10,000 characters.

The role is the single biggest factor in whether the agent feels generic or like a real teammate.

💡
Be specific. Mention which tables to use, which charts to prefer, who to send things to, and any rules ("never email customers without my approval"). The more specific, the better the agent's first answers.

Example:

You are PushMetrics Assistant, an AI agent in a business intelligence
platform. You help users analyze data, create and run reports, manage
workflows, and automate tasks.

Guidelines:
- Use your tools to read, create, edit, and run blocks (SQL queries,
  emails, Slack messages)
- Search the knowledge base for relevant context before answering
  domain-specific questions
- For multi-step tasks, create a plan first using make_plan
- Ask clarifying questions when the user's intent is ambiguous

Status

Active. Anyone with access can chat with it.
Inactive. Hidden from the agent picker. Old conversations stay where they are.

Default

Turn this on to make this the agent everyone in the workspace gets by default when they open the Requests page. Only one agent can be the default at a time. The default agent shows a small Default badge next to its status on the agent page.


Model & Provider

Choose the LLM that powers this agent. You can use different models for different agents: a fast, cheap model for high-volume FAQ work, say, and a stronger model for research-heavy data analysis.

The Model & Provider section on the agent edit page, with the integration dropdown, model field, and a row of suggested model chips

You'll set two things:

  • Integration. Which provider account to use. PushMetrics supports OpenAI, Anthropic, and Google Gemini.
  • Model. The exact model name. PushMetrics shows a row of suggested chips below the field (for example gpt-5, gpt-5-mini, gpt-5-nano, gpt-4.1, o3, o4-mini for OpenAI), but you can type any model name your provider supports.

If you leave the integration set to the workspace default, you'll see a banner explaining which model and integration the workspace is using behind the scenes.

Connecting your own provider account

Before the Integration dropdown shows your own provider, you need to add it as an integration in your workspace. Go to Settings → Data & Integrations, click + Discover New, and pick OpenAI, Anthropic, or Google Gemini.

The Discover New integrations gallery in PushMetrics, showing OpenAI, Anthropic, and Google Gemini cards alongside the other available integrations

Once your integration is saved, it appears in the agent's Integration dropdown alongside the workspace default. Switching to your own integration unlocks custom model names and lets you set your own 30-day cost cap.


Cost controls

Two budgets keep costs predictable. Both are optional.

Max session cost (USD)
A single conversation will stop on its own once it hits this dollar amount. Stops one bad question from running up a big bill. Leave empty for no limit.
Max 30-day agent cost (USD)
Total spend by this agent across all conversations in the last 30 days. Earlier spend ages out of the window over time.
The Cost controls section on the agent edit page, with Max session cost and Max 30-day agent cost fields
💰
When you're using the workspace default Anthropic integration, the 30-day cap is fixed at $100. To set your own, attach a custom integration in Model & Provider.
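The rolling 30-day cap described above can be pictured with a short sketch. This is an illustration only, not PushMetrics internals; the function name and data shapes are hypothetical:

```python
from datetime import datetime, timedelta

def within_budget(run_costs, max_30d_cost, now=None):
    """Illustrative check for a rolling 30-day spend cap.

    run_costs: list of (timestamp, usd_cost) for this agent's past runs.
    max_30d_cost: the cap in USD, or None when no limit is set.
    """
    if max_30d_cost is None:
        return True  # no cap configured: always allowed
    now = now or datetime.utcnow()
    window_start = now - timedelta(days=30)
    # Spend older than 30 days "ages out" of the window automatically,
    # because it simply stops being counted.
    recent = sum(cost for ts, cost in run_costs if ts >= window_start)
    return recent < max_30d_cost
```

The key property is that nothing is ever "reset": a run from 40 days ago falls outside the window on its own, so an agent that was expensive last month regains budget without any manual intervention.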

Agent Permissions

Agent Permissions decide what tools the agent is allowed to invoke during a run. They're grouped into five sections on the form. The header shows a count like "7 / 7 enabled" so you can see at a glance how many are on.

The Agent Permissions section on the agent edit page, showing the Knowledge, Recipients, Interaction, Reports, and Guardrails groups

Knowledge

Use Knowledge Base
Lets the agent search and retrieve articles from your workspace knowledge base.
🧠
Skills aren't a permission. Whether the agent uses skills, and which ones, is configured in the dedicated Skills section further down the page. See Agent Skills.

Recipients

List Recipients
Look people up by name or tag to find their email or Slack handle.
Create Recipients
Lets the agent add a new person to the contact list. Turn off for read-only agents.

Interaction

Ask User Question
Lets the agent pause and ask you for a choice (multiple choice or open answer) instead of guessing.

Reports

Manage Reports
Lets the agent list, read, create, and update reports via YAML.

Guardrails

Guardrails are the safety nets that catch the agent when something goes wrong.

Loop Detection
Stops the agent if it gets stuck doing the same thing over and over. Better to fail fast than to run for 10 minutes burning tokens.
Pre-completion Verification
A second pair of eyes that runs right before the agent says "done". See below for what gets checked and what each option means.
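Loop Detection can be sketched as a check over the run's tool-call history. This is a minimal illustration of the idea, not the actual PushMetrics implementation; the function name and threshold are hypothetical:

```python
def is_looping(tool_calls, max_repeats=3):
    """Illustrative loop check: flag a run whose most recent
    tool calls are all identical.

    tool_calls: list of (tool_name, args) tuples in execution order.
    Returns True once the tail contains max_repeats identical
    calls in a row.
    """
    if len(tool_calls) < max_repeats:
        return False
    tail = tool_calls[-max_repeats:]
    # Same tool with the same arguments, back to back, is a strong
    # signal the agent is stuck rather than making progress.
    return all(call == tail[0] for call in tail)
```

An agent that keeps re-running the identical SQL query with the identical arguments trips the check; alternating between different calls does not.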

What Pre-completion Verification actually does

When this is on, PushMetrics intercepts the agent the moment it tries to finish a run and runs a few quick, deterministic checks. If any check fails, the agent doesn't get to send the reply: it gets the failure message back as feedback and is told to try again. Up to two retries before it's allowed to give up.
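The intercept-and-retry flow above can be sketched roughly like this. Treat it as a mental model, not the real code path; the names (`finish_with_verification`, `agent.retry`) are hypothetical:

```python
def finish_with_verification(agent, reply, checks, max_retries=2):
    """Illustrative finish gate: run deterministic checks before
    the reply is allowed out.

    checks: callables that return None on success or a human-readable
    failure message. Failures go back to the agent as feedback.
    """
    for _ in range(max_retries + 1):
        failures = [msg for check in checks if (msg := check(reply))]
        if not failures:
            return reply  # all checks passed; reply goes out
        # The agent never sees "send anyway": it gets the failure
        # messages as feedback and must produce a revised reply.
        reply = agent.retry("\n".join(failures))
    return reply  # retries exhausted; the agent is allowed to give up
```

The loop runs at most three times in total (the original attempt plus two retries), matching the "up to two retries" behavior described above.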

The checks are not based on AI. They're plain Python rules, so they're fast, free, and predictable. The four things that get checked:

  • Recipient allowlist. If the agent tries to email someone or post to a Slack channel that isn't part of this conversation's allowlist, the run is blocked. This catches the agent inventing an email address or sending to the wrong channel.
  • Metrics view smoke test. If the agent created a new metrics view, it has to prove the view actually works by running a small query_metrics against it before finishing.
  • Analysis save failures. If the agent saved an analysis document but the save returned an error (broken YAML, missing fields), the agent has to fix it instead of pretending it worked.
  • Numeric citations. Numbers in the final reply (like "$2.4M" or "12%") are matched against numbers that came from real tool results. This catches hallucinated figures. By default this check runs only in the Scheduled runs only mode, since it can be noisy in interactive chat.
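The numeric-citations idea is the most interesting of the four, so here is a simplified sketch of how such a rule could work. This is an assumption-laden illustration, not the actual check; the regex and normalisation are deliberately naive:

```python
import re

def check_numeric_citations(reply, tool_numbers):
    """Illustrative citation check: every figure in the reply must
    have appeared in a real tool result.

    tool_numbers: set of bare numeric strings collected from tool
    outputs (e.g. {"2.4", "12"}).
    Returns None on success, or a failure message for the agent.
    """
    cited = re.findall(r"\$?\d[\d,]*(?:\.\d+)?%?", reply)
    # Normalise "$2.4" / "12%" style formatting down to bare digits.
    bare = {c.strip("$%").replace(",", "") for c in cited}
    unsupported = bare - tool_numbers
    if unsupported:
        return f"Unsupported figures: {sorted(unsupported)}"
    return None
```

A reply with no numbers passes trivially, which is why this rule is cheap enough to run on every scheduled job.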

When it runs

The four Pre-completion Verification options: Off, Scheduled runs only, When mutating tools are used, Every run

You pick when these checks should fire:

Off
No checks run. The agent finishes whenever it thinks it's ready. This is the default for new agents.
Scheduled runs only
Checks run only on automated, scheduled jobs. Best of both worlds: interactive chat stays fast, while overnight reports get an extra safety net before they go out. Numeric-citation checks are also turned on automatically for this mode.
When mutating tools are used
Checks run only when the agent did something with real-world side effects in this run: sent an email, posted to Slack, saved an analysis, added a recipient, or wrote a memory note. Read-only conversations stay fast.
Every run
Checks run on every conversation, including read-only ones. Strictest setting. Best for high-stakes agents where you'd rather pay the small overhead than risk a bad answer slipping through.
🛡️
Recommended starting point. If you have agents that send emails or post to Slack on a schedule, set them to Scheduled runs only. You get the safety net where it matters most without slowing down interactive chat.
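The four modes reduce to a small dispatch decision per run. Here is one way to picture it; the mode strings and flags are hypothetical names, not PushMetrics API values:

```python
def should_verify(mode, is_scheduled, used_mutating_tools):
    """Illustrative dispatch: do pre-completion checks fire
    for this run?"""
    if mode == "off":
        return False  # default for new agents
    if mode == "scheduled_only":
        return is_scheduled  # automated jobs only
    if mode == "mutating_tools":
        return used_mutating_tools  # only runs with side effects
    return True  # "every_run": strictest setting
```

Note that the decision depends on what actually happened in the run: under "When mutating tools are used", the same agent verifies one conversation (it sent an email) and skips the next (it only read data).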

Tools

Below the permissions, the Tools section is where you attach concrete tools to the agent. Each tool has an icon, a header with the integration it's bound to, and a one-line summary of its current configuration. The header counter shows how many you have ("Tools 5").

The Tools section on the agent edit page, showing attached SQL, email, chart, Tableau, and Slack tools with their integration headers and configuration summaries

You can attach tools like:

  • SQL block for the agent to run queries against your databases. Works with any SQL source you've connected: PostgreSQL, BigQuery, MySQL, Snowflake, Redshift, and so on.
  • SMTP integrations for sending emails.
  • Chart for building Plotly charts.
  • Tableau Viewer for exporting dashboards or images.
  • Slack for posting to channels.

Each card shows the current defaults. For example, a SQL tool shows the current query and row limit; an email tool shows the recipients, subject, and message; a Tableau tool shows the action, view, and filters.

🧩
How tools work. The integration on a tool is locked once you've added it (so the agent can't accidentally send things through the wrong account). The other values you set on the card are defaults: the agent uses them unless it overrides them in a specific tool call.

To add a tool, click + Add Tool in the Tools section header. You'll pick the integration, then fill in defaults. The three-dots menu on each card lets you duplicate or remove a tool.
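The defaults-plus-override behavior amounts to a simple merge where one key is protected. A hedged sketch (the function name and key names are made up for illustration):

```python
def resolve_tool_args(card_defaults, call_args):
    """Illustrative merge of a specific tool call over the card's
    saved defaults.

    Every value falls back to the card default when the call omits
    it, except the integration, which is locked to the card.
    """
    merged = {**card_defaults, **call_args}
    # The integration can't be overridden per call, so the agent
    # can never route a send through the wrong account.
    merged["integration"] = card_defaults["integration"]
    return merged
```

So an email tool call that supplies only a new subject keeps the card's recipients and message, and any attempt to swap the integration is silently ignored.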


Skills

The Skills section sits just below Tools. Skills are reusable instructions the agent loads into its prompt, for example "AWS Schema Querying" or "Customer Health Scoring". Pick what this agent should know.

The Skills section on the agent edit page, with a radio toggle between using all workspace skills and selecting specific ones, and a checkbox list of every workspace skill

You have two modes:

Use all workspace skills
The agent automatically picks up every skill in the workspace, including any new ones your team adds later.
Use only the skills selected below
A list of every skill in the workspace appears below with a checkbox next to each. Tick the ones this agent should use. Click any skill name to open it in the Skills editor.
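The two modes boil down to a simple selection rule. A sketch, with made-up field names:

```python
def active_skills(all_skills, mode, selected_ids):
    """Illustrative resolution of which skills load into the prompt.

    mode "all": every workspace skill, including ones added later.
    mode "selected": only the skills whose checkboxes are ticked.
    """
    if mode == "all":
        return list(all_skills)
    return [s for s in all_skills if s["id"] in selected_ids]
```

The practical difference is drift: an "all" agent grows as your team adds skills, while a "selected" agent stays frozen until someone edits it.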

The number next to the Skills header shows how many skills are currently active for this agent. Switching between modes saves immediately, and so does ticking a checkbox.

See Agent Skills for more on writing and managing skills.


Memories

The Memories section shows the agent's saved notes as a table with two columns: Name and Last Modified. Each row has edit and delete actions on the right.

The Memories section on the agent edit page, showing a saved memory note with name, last modified timestamp, and edit and delete actions, plus the New Memory and Upload .md buttons in the header

Two buttons in the section header let you add memories:

  • + New Memory opens an editor where you write a note by hand.
  • Upload .md imports a markdown file as a memory.

You don't have to add any memories upfront. The agent can also write its own during a conversation. See Agent Memory & Knowledge for the full story.


Usage

The Usage section at the bottom shows how this agent is being used. Switch between 7d, 30d, and 90d windows in the top-right.

The Usage section on the agent edit page, showing total spend, runs, average cost per run, tokens in and out, turns, and the most recent run's details

Five numbers are surfaced:

  • Total Spend in dollars.
  • Runs (how many conversations the agent handled in the window).
  • Avg Cost / Run.
  • Tokens In / Out.
  • Turns (total agent turns across all runs).

Below those, a Last Run line shows the cost, model used, and tokens of the most recent run, so you can see what's happening right now.

This is the right place to check when an agent feels expensive or when you want to confirm a budget cap is doing its job. For workspace-wide reporting across all agents, use Settings → AI Capabilities → Usage.


Sharing & access

Agents follow the standard PushMetrics sharing rules. Click Share at the top of the agent page to open the sharing panel.

The Share panel on the agent edit page, with controls for sharing with the whole team, with a group, or inviting individual users by email

From the Share panel you can:

  • Decide what access Everyone at your team has (no access, can view, or can edit).
  • Share with a group by searching for one of your workspace groups.
  • Invite individual users by name or email, and set each one to "can view" or "can edit".
  • Remove access from anyone you've already shared with using the small × next to their name.

A few rules worth knowing:

  • Whoever creates an agent gets edit access by default.
  • The default agent is shared with Everyone so the whole workspace can use it.
  • You can't remove the last editor. At least one person always has to keep full control of the agent.