This page is for workspace admins. It walks you through setting up an agent, picking what it can do, and putting limits on it.
You'll find everything in Settings → AI Capabilities → Agents.
The AI Capabilities menu also has these sub-pages:
- Agents (where you are now)
- Inbound Channels for connecting agents to Slack channels
- Skills for managing the workspace's skill library
- Usage for workspace-wide token and cost reporting
- Traces for inspecting individual agent runs in detail
Creating a new agent
Configuration
The first section of the form covers the agent's identity, persona, and lifecycle.
Name
Give the agent a name people will recognize. It shows up in the agent picker, in the conversation list on the Requests page, and as the bold sender name when the agent posts back to Slack. Up to 256 characters.
Avatar
Upload a profile picture. The avatar appears next to the agent's messages on the Requests page in PushMetrics.
Role
This is the agent's job description. It tells the agent what it's for, what tone to use, and what rules to follow. Up to 10,000 characters.
The role is the single biggest thing that decides whether the agent feels generic or like a real teammate.
Example:
You are PushMetrics Assistant, an AI agent in a business intelligence
platform. You help users analyze data, create and run reports, manage
workflows, and automate tasks.
Guidelines:
- Use your tools to read, create, edit, and run blocks (SQL queries,
emails, Slack messages)
- Search the knowledge base for relevant context before answering
domain-specific questions
- For multi-step tasks, create a plan first using make_plan
- Ask clarifying questions when the user's intent is ambiguous
Status
Default
Turn this on to make this the agent everyone in the workspace gets by default when they open the Requests page. Only one agent can be the default at a time. The default agent shows a small Default badge next to its status on the agent page.
Model & Provider
Choose the LLM that powers this agent. You can use different models for different agents: a fast, cheap model for high-volume FAQ work, say, and a stronger model for research-heavy data analysis.
You'll set two things:
- Integration. Which provider account to use. PushMetrics supports OpenAI, Anthropic, and Google Gemini.
- Model. The exact model name. PushMetrics shows a row of suggested chips below the field (for example gpt-5, gpt-5-mini, gpt-5-nano, gpt-4.1, o3, o4-mini for OpenAI), but you can type any model name your provider supports.
If you leave the integration set to the workspace default, you'll see a banner explaining which model and integration the workspace is using behind the scenes.
Connecting your own provider account
Before the Integration dropdown shows your own provider, you need to add it as an integration in your workspace. Go to Settings → Data & Integrations, click + Discover New, and pick OpenAI, Anthropic, or Google Gemini.
Once your integration is saved, it appears in the agent's Integration dropdown alongside the workspace default. Switching to your own integration unlocks custom model names and lets you set your own 30-day cost cap.
Cost controls
Two budgets keep costs predictable. Both are optional.
Agent Permissions
Agent Permissions decide what tools the agent is allowed to invoke during a run. They're grouped into five sections on the form. The header shows a count like "7 / 7 enabled" so you can see at a glance how many are on.
Knowledge
Recipients
Interaction
Reports
Guardrails
Guardrails are the safety nets that catch the agent when something goes wrong.
What Pre-completion Verification actually does
When this is on, PushMetrics intercepts the agent the moment it tries to finish a run and runs a few quick, deterministic checks. If any check fails, the agent doesn't get to send the reply: it gets the failure message back as feedback and is told to try again. The agent gets up to two retries before it's allowed to give up.
The checks are not based on AI. They're plain Python rules, so they're fast, free, and predictable. The four things that get checked:
- Recipient allowlist. If the agent tries to email someone or post to a Slack channel that isn't part of this conversation's allowlist, the run is blocked. This catches the agent inventing an email address or sending to the wrong channel.
- Metrics view smoke test. If the agent created a new metrics view, it has to prove the view actually works by running a small query_metrics against it before finishing.
- Analysis save failures. If the agent saved an analysis document but the save returned an error (broken YAML, missing fields), the agent has to fix it instead of pretending it worked.
- Numeric citations. Numbers in the final reply (like "$2.4M" or "12%") are matched against numbers that came from real tool results. This catches hallucinated figures. By default this check runs on scheduled runs only, since it can be noisy in interactive chat.
When it runs
You pick when these checks should fire:
Tools
Below the permissions, the Tools section is where you attach concrete tools to the agent. Each tool has an icon, a header with the integration it's bound to, and a one-line summary of its current configuration. The header counter shows how many you have ("Tools 5").
You can attach tools like:
- SQL block for the agent to run queries against your databases. Works with any SQL source you've connected: PostgreSQL, BigQuery, MySQL, Snowflake, Redshift, and so on.
- SMTP integrations for sending emails.
- Chart for building Plotly charts.
- Tableau Viewer for exporting dashboards or images.
- Slack for posting to channels.
Each card shows the current defaults. For example a SQL tool shows the current query and row limit; an email tool shows the recipients, subject, and message; a Tableau tool shows the action, view, and filters.
To add a tool, click + Add Tool in the Tools section header. You'll pick the integration, then fill in defaults. The three-dots menu on each card lets you duplicate or remove a tool.
Skills
The Skills section sits just below Tools. Skills are reusable instructions the agent loads into its prompt, for example "AWS Schema Querying" or "Customer Health Scoring". Pick what this agent should know.
You have two modes:
The number next to the Skills header shows how many skills are currently active for this agent. Switching between modes saves immediately, as does ticking a checkbox.
See Agent Skills for more on writing and managing skills.
Memories
The Memories section shows the agent's saved notes as a table with two columns: Name and Last Modified. Each row has edit and delete actions on the right.
You can:
- Click + New Memory to open an editor where you write a note by hand.
- Click Upload .md to import a markdown file as a memory.
You don't have to add any memories upfront. The agent can also write its own during a conversation. See Agent Memory & Knowledge for the full story.
Usage
The Usage section at the bottom shows how this agent is being used. Switch between 7d, 30d, and 90d windows in the top-right.
Five numbers are surfaced:
- Total Spend in dollars.
- Runs (how many conversations the agent handled in the window).
- Avg Cost / Run.
- Tokens In / Out.
- Turns (total agent turns across all runs).
Below those, a Last Run line shows the cost, model used, and tokens of the most recent run, so you can see what's happening right now.
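The derived figures are simple arithmetic over the run records; Avg Cost / Run, for instance, is just total spend divided by the number of runs. A minimal sketch, assuming each run record carries a cost, token counts, and a turn count (the field names are illustrative):

```python
def summarize_usage(runs: list[dict]) -> dict:
    """Aggregate per-run records into the five Usage numbers shown on
    the agent page. Assumed fields per run: cost, tokens_in, tokens_out,
    turns (one run = one conversation handled in the window)."""
    total_spend = sum(r["cost"] for r in runs)
    n = len(runs)
    return {
        "total_spend": round(total_spend, 2),
        "runs": n,
        "avg_cost_per_run": round(total_spend / n, 4) if n else 0.0,
        "tokens_in": sum(r["tokens_in"] for r in runs),
        "tokens_out": sum(r["tokens_out"] for r in runs),
        "turns": sum(r["turns"] for r in runs),
    }
```

Reading the numbers this way makes the relationships obvious: if Total Spend climbs but Runs doesn't, Avg Cost / Run is the number that moves.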
This is the right place to check when an agent feels expensive or when you want to confirm a budget cap is doing its job. For workspace-wide reporting across all agents, use Settings → AI Capabilities → Usage.
Sharing & access
Agents follow the standard PushMetrics sharing rules. Click Share at the top of the agent page to open the sharing panel.
From the Share panel you can:
- Decide what access Everyone at your team has (no access, can view, or can edit).
- Share with a group by searching for one of your workspace groups.
- Invite individual users by name or email, and set each one to "can view" or "can edit".
- Remove access from anyone you've already shared with using the small × next to their name.
A few rules worth knowing:
- Whoever creates an agent gets edit access by default.
- The default agent is shared with Everyone so the whole workspace can use it.
- You can't remove the last editor. At least one person always has to keep full control of the agent.