Amplitude

Local MCP
Search, access, and get insights on your Amplitude data
Tools: 42
Last updated: 12 hours ago
Category: all
Tools (42)
get_charts

Retrieve full chart objects by their IDs using the chart service directly.

WHEN TO USE:
- You want to retrieve a full chart definition.
- Useful if you want to base an ad hoc query_dataset analysis on an existing chart.

INSTRUCTIONS:
- Use the search tool to find the IDs of charts you want to retrieve, then call this tool with the IDs.

save_chart_edits

Save temporary chart edits as permanent charts.

WHEN TO USE:
- You have chart edit IDs from query_dataset and want to save them as permanent charts
- You need to add charts to dashboards or notebooks (which require saved chart IDs)

WORKFLOW:
1. Use query_dataset to create ad-hoc analyses (returns editId)
2. Use save_chart_edits to convert editIds into permanent chartIds
3. Use chartIds in create_dashboard or create_notebook

IMPORTANT:
- All AI-generated charts are saved as unpublished in your personal space
- Charts require human review before publishing to shared spaces
- Use bulk saving to reduce tool calls when creating multiple charts
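The editId-to-chartId workflow can be sketched with a stand-in client. The `call_tool` helper and its canned responses below are assumptions for illustration, not this server's client API; only the tool names and the editId/chartId field names come from the description above.

```python
# Hypothetical sketch of the editId -> chartId workflow.
# `call_tool` stands in for whatever MCP client invocation you use.

def call_tool(name, arguments):
    """Placeholder MCP call: returns canned responses for illustration."""
    if name == "query_dataset":
        return {"editId": "edit_123"}  # temporary ID, not dashboard-safe
    if name == "save_chart_edits":
        # Bulk conversion: one call can save many edits at once.
        return {"charts": [{"editId": e, "chartId": f"chart_{i}"}
                           for i, e in enumerate(arguments["editIds"])]}
    raise ValueError(f"unknown tool: {name}")

# 1. Ad-hoc analysis returns a temporary editId
edit_id = call_tool("query_dataset", {"query": "..."})["editId"]

# 2. Bulk-save edits to get permanent chartIds (one call for many edits)
saved = call_tool("save_chart_edits", {"editIds": [edit_id]})
chart_ids = [c["chartId"] for c in saved["charts"]]

# 3. Only permanent chartIds may go into create_dashboard / create_notebook
print(chart_ids)
```

Bulk saving (step 2 with several editIds at once) is what the description recommends to keep tool-call counts down.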

get_cohorts

Get detailed information about specific cohorts by their IDs.

WHEN TO USE:
- You want to retrieve full cohort definitions after finding them via search.
- You need detailed cohort information including definition, metadata, and audience details.

INSTRUCTIONS:
- Use the search tool to find the IDs of cohorts you want to retrieve, then call this tool with the IDs.
- This returns full cohort objects with all details, unlike the search tool which returns summary information.

create_cohort

Create a new cohort with the provided definition and configuration.

WHEN TO USE:
- You need to create a new audience segment based on user behavior or properties
- You want to save a cohort definition for reuse in charts, experiments, or other analyses
- You need to create a cohort from specific conditions like events, user properties, or funnels

LEARNING FROM EXISTING COHORTS:
- Before creating a cohort, use the "search" MCP tool to find relevant cohorts by name/description, then use get_cohorts with those IDs to analyze existing cohort definitions and structure
- Study the structure and patterns of existing cohort definitions to understand the correct payload format
- Pay attention to how different condition types (event, user_property, other_cohort, etc.) are structured
- Learn from the andClauses/orClauses patterns and how they combine different conditions
- Use existing cohorts as templates for similar use cases to ensure proper schema compliance

EXAMPLES:

Create an event-based cohort (users who performed a specific event >= 1 time in the past 30 days):

```
{
  "app_id": "365742",
  "name": "xuan-simple-event-cohort",
  "definition": {
    "version": 3,
    "andClauses": [{
      "negated": false,
      "orClauses": [{
        "type": "event",                      // Condition type: event-based
        "metric": null,                       // No specific metric aggregation
        "offset": 0,                          // No time offset from the base time range
        "group_by": [],                       // No grouping/segmentation by event properties
        "interval": 1,                        // Time granularity: 1 = DAY (daily buckets)
        "operator": ">=",                     // Event count operator: greater than or equal
        "time_type": "rolling",               // Rolling time window (last N days from now)
        "time_value": 30,                     // Time range: 30 units of the interval (30 days)
        "type_value": "xuan-test-httpapi-event-type", // The specific event name to match
        "operator_value": 1,                  // Minimum event count threshold: >= 1 occurrence
        "exclude_current_interval": false     // Include events from the current day
      }]
    }],
    "cohortType": "UNIQUES",                  // Count unique users (not event occurrences)
    "countGroup": {"name": "User", "is_computed": false}, // Group by User entities
    "referenceFrameTimeParams": {}            // No additional time frame parameters
  },
  "type": "redshift",  // Cohort computation engine type (optional - defaults to "redshift")
  "published": true    // Make cohort discoverable to others
}
```

EXPLANATION: This creates a cohort of users who performed the event "xuan-test-httpapi-event-type" at least once in the last 30 days. The interval=1 means we evaluate this on a daily basis, so the system looks at each day in the past 30 days to see if the user performed the event.

Create a complex cohort with multiple conditions (organizations in another cohort OR new active, AND performed an event):

```
{
  "app_id": "365742",
  "name": "xuan-test",
  "definition": {
    "version": 3,
    "andClauses": [{                          // First AND condition group
      "negated": false,
      "orClauses": [{                         // First OR condition: existing cohort membership
        "type": "other_cohort",               // Condition type: reference to another cohort
        "offset": 0,                          // No time offset
        "interval": 1,                        // Time granularity: 1 = DAY (daily evaluation)
        "time_type": "rolling",               // Rolling time window (last N days)
        "time_value": 365,                    // Time range: 365 days (1 year lookback)
        "cohort_keys": ["rs4d2xg5"],          // Reference to cohort ID "rs4d2xg5"
        "exclude_current_interval": false     // Include current day in evaluation
      }, {                                    // Second OR condition: new/active users
        "type": "new_active",                 // Condition type: new or active user status
        "offset": 0,                          // No time offset
        "interval": 1,                        // Time granularity: 1 = DAY (daily buckets)
        "time_type": "absolute",              // Absolute time range (specific dates)
        "time_value": [1760572800, 1761955199], // Unix timestamps: specific date range
        "type_value": "new",                  // Filter for "new" users (vs "active")
        "exclude_current_interval": false     // Include current day
      }]
    }, {                                      // Second AND condition group
      "negated": false,
      "orClauses": [{                         // Event-based condition
        "type": "event",                      // Condition type: event-based
        "metric": null,                       // No specific metric aggregation
        "offset": 0,                          // No time offset
        "group_by": [],                       // No event property grouping
        "interval": 1,                        // Time granularity: 1 = DAY (daily buckets)
        "operator": ">=",                     // Event count operator: greater than or equal
        "time_type": "rolling",               // Rolling time window (last N days)
        "time_value": 30,                     // Time range: 30 days
        "type_value": "test event",           // Event name to match
        "operator_value": 1,                  // Minimum event count: >= 1 occurrence
        "exclude_current_interval": false     // Include current day
      }]
    }],
    "cohortType": "UNIQUES",                  // Count unique organizations (not occurrences)
    "countGroup": {"name": "org id", "is_computed": false}, // Group by Organization entities
    "referenceFrameTimeParams": {}            // No additional time frame parameters
  },
  "type": "redshift",  // Cohort computation engine type (optional - defaults to "redshift")
  "published": true    // Make cohort discoverable to others
}
```

EXPLANATION: This creates a complex cohort using boolean logic: (organizations in cohort "rs4d2xg5" in the last 365 days OR new users in the specified date range) AND (organizations that performed "test event" >= 1 time in the last 30 days).

The interval=1 in all conditions means daily granularity:
- Cohort membership is checked daily over 365 days
- New user status is evaluated daily within the absolute date range
- Event occurrences are counted daily over the last 30 days

Note: This cohort counts organizations (org id) rather than individual users.
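The annotated payloads above use //-style comments for explanation, which strict JSON does not allow, so they cannot be sent verbatim. A minimal sketch of assembling a clean event-based payload in Python follows; the field names mirror the first example above, while the cohort name and event name are placeholders.

```python
import json

# Minimal event-based cohort payload, mirroring the annotated example above.
# The cohort name and event name here are placeholders, not real values.
payload = {
    "app_id": "365742",
    "name": "example-event-cohort",
    "definition": {
        "version": 3,
        "andClauses": [{
            "negated": False,
            "orClauses": [{
                "type": "event",
                "metric": None,
                "offset": 0,
                "group_by": [],
                "interval": 1,             # 1 = DAY (daily buckets)
                "operator": ">=",
                "time_type": "rolling",
                "time_value": 30,          # last 30 days
                "type_value": "my event",  # placeholder event name
                "operator_value": 1,       # performed at least once
                "exclude_current_interval": False,
            }],
        }],
        "cohortType": "UNIQUES",
        "countGroup": {"name": "User", "is_computed": False},
        "referenceFrameTimeParams": {},
    },
    "published": True,
}

# Serializing produces strict JSON: no comments, lowercase true/false/null.
body = json.dumps(payload)
print(len(body))
```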

get_context

Get information about the current user, organization, and list of accessible projects.

WHEN TO USE:
- "What projects do I have access to?"
- "Show me my organization details"
- "What is my user role?"

RETURNS:
- User details (email, name, role)
- Organization info (name, plan, quota)
- List of accessible projects (just names and IDs)

DO NOT USE FOR:
- Project-specific settings (timezone, currency, sessions) → use 'get_project_context' instead
- Running analytics queries → use 'query_dataset' or 'query_amplitude_data' instead
- Finding charts/dashboards/cohorts → use 'search' instead

get_project_context

Get project-specific settings and configuration for a specific project.

WHEN TO USE:
- "What timezone is this project using?"
- "What are the currency settings?"
- "How are sessions defined?"
- "What is the week start day?"

RETURNS:
- Timezone and date settings (timezone, week start, quarter start)
- Currency settings (locale, target currency)
- Session definition (timeout, custom property)
- Source projects (data lineage)
- Project AI context (business guidelines)

REQUIRES: projectId parameter

DO NOT USE FOR:
- Listing all projects → use 'get_context' instead
- User/org info → use 'get_context' instead

get_dashboard

Get specific dashboards and all their charts.

WHEN TO USE:
- You want to retrieve full dashboard definitions, including chart IDs that you can query and analyze individually.

INSTRUCTIONS:
- Use the search tool to find the IDs of dashboards you want to retrieve, then call this tool with the IDs.
- Very commonly you will want to query the charts after retrieving a dashboard.

create_dashboard

Create a comprehensive dashboard with charts, rich text, and custom layout.

WHEN TO USE:
- After the user has searched existing content or explored some analysis in Amplitude
- The user has explicitly requested to create a dashboard

CRITICAL - CHART IDs MUST BE FROM SAVED CHARTS:
- Only use chartIds from SAVED/PERMANENT charts; these are returned by save_chart_edits (in the chartId field) or create_chart
- DO NOT use editIds from query_dataset; these are temporary IDs that cannot be added to dashboards
- You must first call save_chart_edits to convert an editId into a permanent chartId
- The typical workflow is: query_dataset (returns editId) → save_chart_edits (converts editId to permanent chartId) → create_dashboard (uses chartId)
- If you use an editId instead of a saved chartId, dashboard creation will fail with "NotFoundError: No chart"

INSTRUCTIONS:
- Provide a descriptive name for the dashboard
- Use a rows array where each row contains items in left-to-right order
- Each item specifies width (3-12 columns). If width is omitted, items auto-fill remaining space
- Each row must specify height in pixels. Only heights of 375, 500, 625, and 750 are allowed
- Total width of items in a row must not exceed 12 columns
- Max 4 items per row (ensures a minimum 3-column width per item)
- Use chartMetas to configure chart display options (view type, annotations, etc.)
- Return a link to the new dashboard in the response
- DO NOT include static analysis in dashboard text content. Dashboards are meant to be long-lived, so a point-in-time insight does not help
- DO group similar charts together and include a header and some text describing how to interpret the charts effectively

MARKDOWN FORMAT:
- Rich text content uses standard markdown syntax
- Supported: headers (# ## ###), bold (**text**), italic (*text*), lists (- or 1.), links ([text](url)), code blocks (```), inline code (`code`)
- Example: "# Overview\n\nThis dashboard shows **key metrics** for user engagement."

LAYOUT EXAMPLES:
- Full-width item: { height: 6, items: [{ type: 'chart', chartId: '123', width: 12 }] }
- Two side-by-side: { height: 4, items: [{ type: 'chart', chartId: '1', width: 6 }, { type: 'rich_text', content: '# Notes', width: 6 }] }
- Three columns: { height: 5, items: [{ width: 4 }, { width: 4 }, { width: 4 }] }
- Auto-fill: { height: 4, items: [{ type: 'chart', chartId: '1' }, { type: 'chart', chartId: '2' }] } (each gets 6 columns)
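The layout constraints above are mechanical enough to check before calling the tool. A small sketch follows; the constraint values (allowed heights, 3-12 column widths, 12-column rows, 4 items max) come from the instructions above, but the validator function itself is not part of this server.

```python
# Allowed row heights in pixels, per the create_dashboard instructions.
ALLOWED_HEIGHTS = {375, 500, 625, 750}

def validate_row(row):
    """Check one dashboard row against the documented layout rules."""
    errors = []
    if row.get("height") not in ALLOWED_HEIGHTS:
        errors.append(f"height must be one of {sorted(ALLOWED_HEIGHTS)}")
    items = row.get("items", [])
    if len(items) > 4:
        errors.append("max 4 items per row")
    # Widths are 3-12 columns; omitted widths auto-fill remaining space.
    widths = [it["width"] for it in items if "width" in it]
    if any(not 3 <= w <= 12 for w in widths):
        errors.append("item widths must be 3-12 columns")
    if sum(widths) > 12:
        errors.append("total row width must not exceed 12 columns")
    return errors

# A valid two-item row: 500 px tall, widths sum to exactly 12 columns.
ok_row = {"height": 500,
          "items": [{"type": "chart", "chartId": "1", "width": 6},
                    {"type": "rich_text", "content": "# Notes", "width": 6}]}
print(validate_row(ok_row))

# An invalid row: disallowed height and an out-of-range width.
bad_row = {"height": 6, "items": [{"type": "chart", "chartId": "2", "width": 13}]}
print(validate_row(bad_row))
```

Note that the LAYOUT EXAMPLES above use small height values (4-6) that do not match the documented pixel heights; when in doubt, follow the stated 375/500/625/750 rule.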

edit_dashboard

Edit a dashboard's metadata and layout with optimistic concurrency protection.

WHEN TO USE:
- You already have a dashboard ID and want to update its name/description and/or content rows.

INSTRUCTIONS:
- Always call get_dashboard first to retrieve the dashboard's current lastModified and rows.
- Pass the retrieved lastModified in expectedLastModified.
- Metadata fields are only updated when values are not null/undefined.
- Use one structural edit at a time via edit.

EXAMPLES:
- Metadata only: {"dashboardId":"123","expectedLastModified":1700000000,"metadata":{"name":"Q1 Dashboard"}}
- Replace all rows: {"dashboardId":"123","expectedLastModified":1700000000,"edit":{"type":"set_rows","rows":[{"height":500,"items":[{"type":"chart","chartId":"abc","width":12}]}]}}
- Update a row: {"dashboardId":"123","expectedLastModified":1700000000,"edit":{"type":"update_row","rowIndex":0,"row":{"height":500,"items":[{"type":"rich_text","content":"# Notes","width":12}]}}}
- Insert a row: {"dashboardId":"123","expectedLastModified":1700000000,"edit":{"type":"insert_row","index":1,"row":{"height":375,"items":[{"type":"chart","chartId":"def","width":12}]}}}
- Remove a row: {"dashboardId":"123","expectedLastModified":1700000000,"edit":{"type":"remove_row","rowIndex":2}}

NOTES:
- The request fails with a conflict if expectedLastModified is stale.
- Response is intentionally compact to minimize context usage.
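The optimistic-concurrency contract lends itself to a read-retry loop: fetch the current lastModified, attempt the edit, and refetch on conflict. The sketch below assumes a generic `call_tool` helper with a simulated in-memory dashboard and a conflict surfaced as an exception; none of that is this server's actual client API or error shape.

```python
class ConflictError(Exception):
    """Raised when expectedLastModified is stale (assumed error shape)."""

# Stand-in dashboard record, playing the role of the server's copy.
STORE = {"lastModified": 100, "name": "Old name"}

def call_tool(name, arguments):
    """Placeholder MCP call simulating get_dashboard / edit_dashboard."""
    if name == "get_dashboard":
        return dict(STORE)
    if name == "edit_dashboard":
        # Optimistic concurrency check: reject stale timestamps.
        if arguments["expectedLastModified"] != STORE["lastModified"]:
            raise ConflictError("stale expectedLastModified")
        STORE.update(arguments["metadata"],
                     lastModified=STORE["lastModified"] + 1)
        return {"ok": True}
    raise ValueError(f"unknown tool: {name}")

def rename_dashboard(dashboard_id, new_name, retries=3):
    """Read-modify-write loop: refetch lastModified and retry on conflict."""
    for _ in range(retries):
        current = call_tool("get_dashboard", {"dashboardIds": [dashboard_id]})
        try:
            return call_tool("edit_dashboard", {
                "dashboardId": dashboard_id,
                "expectedLastModified": current["lastModified"],
                "metadata": {"name": new_name},
            })
        except ConflictError:
            continue  # someone else edited in between; refetch and retry
    raise RuntimeError("could not update dashboard after retries")

print(rename_dashboard("123", "Q1 Dashboard"))
```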

create_experiment

Create a new experiment across one or more projects.

INSTRUCTIONS:
- If the user has not specified projects, prompt them to decide which projects to use
- Creates a feature A/B test with control and treatment variants
- Creates the same experiment in each specified project
- Returns the experiment IDs and URLs for viewing in Amplitude

EXAMPLES:
- Basic A/B test: Provide projectIds, key, and name
- Multiple projects: Provide an array of projectIds to create the experiment in each
- With custom variants: Provide projectIds, key, name, and a variants array
- With links: Provide a links array with url and title for each link (e.g., PRs, tickets, docs)
- With deployments: Provide a deploymentIds array to associate specific deployments (API keys)

NOTES:
- Experiment keys must be unique within each project
- Variants default to 'control' and 'treatment' if not specified
- Use get_deployments to retrieve available deployment IDs
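Putting the parameters together, a multi-project call might be assembled as below. The parameter names follow the description above, but every concrete value (project IDs, experiment key, link URL) is a placeholder, and the exact argument schema is an assumption.

```python
# Hypothetical argument payload for create_experiment: the same experiment
# is created in each listed project. All concrete values are placeholders.
args = {
    "projectIds": ["12345", "67890"],     # placeholder project IDs
    "key": "checkout-button-color",       # must be unique within each project
    "name": "Checkout button color test",
    # Variants default to 'control' and 'treatment' when omitted; shown
    # explicitly here to illustrate the custom-variants form.
    "variants": [{"key": "control"}, {"key": "treatment"}],
    "links": [{"url": "https://example.com/ticket/1",
               "title": "Tracking ticket"}],
}

# Key uniqueness is per project, so reusing one key across projects is fine,
# but the same project must not appear twice in the list.
assert len(set(args["projectIds"])) == len(args["projectIds"])
print(args["key"])
```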
