Alerting with Subscriptions
Wire alerts to notifications — get a Slack message, email, or webhook when an alert fires.
Alerts create provenance interactions. Subscriptions trigger notifications when interactions match. Combining the two gives you a complete alerting-to-notification pipeline.
The key insight
Both Subscriber Alerts and Metric Alerts create regular provenance interactions when they fire. These interactions have a resource type and an action — just like any other interaction.
Subscriptions match on resource type + action. So to get notified when an alert fires, you create a subscription that matches the alert's resource type and action.
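The matching rule is simple enough to sketch in a few lines. This is a hypothetical model, not the real schema — the field names (`resourceTypeId`, `actionId`, `id`) are assumptions for illustration:

```python
# A subscription matches an interaction when both its resource type
# and its action match — the same rule described above.
def matching_subscriptions(interaction, subscriptions):
    return [
        s for s in subscriptions
        if s["resourceTypeId"] == interaction["resourceTypeId"]
        and s["actionId"] == interaction["actionId"]
    ]

alert_interaction = {"resourceTypeId": "alert-sec", "actionId": "fired"}
subs = [
    {"id": "slack-security", "resourceTypeId": "alert-sec", "actionId": "fired"},
    {"id": "slack-recovery", "resourceTypeId": "alert-sec", "actionId": "recovered"},
]
print([s["id"] for s in matching_subscriptions(alert_interaction, subs)])
# ['slack-security'] — only the subscription matching both fields fires
```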
End-to-end flow
1. Interactions flow in (logins, orders, emails, etc.)
↓
2. Metric calculates aggregate value (e.g. failed logins in 5m = 150)
↓
3. Alert evaluates: 150 > 100 threshold, state changes OK → FIRING
↓
4. Alert creates interaction:
resourceType = "alert-sec"
action = "fired"
interaction = { alertTitle, metricName, value, threshold, ... }
↓
5. DB trigger finds matching subscription:
resourceType = "alert-sec" + action = "fired"
↓
6. Notification queued → adapter executes (email, Slack, webhook)
↓
7. Audit record created (success/error)
Setup walkthrough
This example sets up an alert that sends a Slack notification when failed logins exceed 100 in 5 minutes.
Step 1: Create the metric
POST /api/interaction-metrics
{
"name": "Failed Logins (5m)",
"metricTypeId": "count-uuid",
"resourceTypeId": "customer-uuid",
"actionId": "login-fail-uuid",
"timeWindow": "5m"
}
# → Returns metricId
Step 2: Create the alert
Use the fired and recovered actions (or whatever actions you've defined for alerting), and scope to a resource type like alert-sec (Security Alert):
POST /api/interaction-alerts
{
"title": "High Failed Logins",
"metricId": "metric-uuid-from-step-1",
"operator": "gt",
"threshold": 100,
"alertActionId": "fired-action-uuid",
"recoveryActionId": "recovered-action-uuid",
"resourceTypeId": "alert-sec-uuid"
}
# → Returns alertId
Step 3: Create a subscriber
POST /api/subscribers
{
"subscriberName": "Slack — Security Channel",
"subscriberArn": "adapter:slack",
"icon": "slack"
}
# → Returns subscriberId
Step 4: Create the subscription
Match on the same resource type + action the alert uses:
POST /api/subscriptions
{
"resourceTypeId": "alert-sec-uuid",
"actionId": "fired-action-uuid",
"subscriberId": "subscriber-uuid-from-step-3",
"config": {
"adapter": "adapter:slack",
"webhookUrl": "{{secrets.SLACK_SECURITY_WEBHOOK}}",
"blocks": [
{
"type": "header",
"text": {
"type": "plain_text",
"text": "🚨 {{interaction.alertTitle}}"
}
},
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": "*Metric:* {{interaction.metricName}}\n*Value:* {{interaction.value}} (threshold: {{interaction.threshold}})\n*Status:* {{interaction.status}}"
}
}
]
}
}
Step 5: (Optional) Subscribe to recovery too
POST /api/subscriptions
{
"resourceTypeId": "alert-sec-uuid",
"actionId": "recovered-action-uuid",
"subscriberId": "subscriber-uuid-from-step-3",
"config": {
"adapter": "adapter:slack",
"webhookUrl": "{{secrets.SLACK_SECURITY_WEBHOOK}}",
"blocks": [
{
"type": "header",
"text": {
"type": "plain_text",
"text": "✅ Recovered: {{interaction.alertTitle}}"
}
},
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": "*Metric:* {{interaction.metricName}}\n*Value:* {{interaction.value}} (threshold: {{interaction.threshold}})"
}
}
]
}
}
Step 6: Evaluate and process
# Evaluate alerts (creates interactions on state change)
POST /api/interaction-alerts/evaluate
# Process notification queue (sends the Slack messages)
POST /api/queue/process
Template variables in alert subscriptions
When an alert fires, the interaction data is available via {{interaction.*}} in your subscription config. Here's what's available:
Metric alert interaction fields
| Variable | Description | Example |
|---|---|---|
| {{interaction.alertId}} | Alert UUID | 550e8400-... |
| {{interaction.alertTitle}} | Alert name | High Failed Logins |
| {{interaction.metricId}} | Metric UUID | 660e8400-... |
| {{interaction.metricName}} | Metric name | Failed Logins (5m) |
| {{interaction.metricType}} | Aggregation type | COUNT |
| {{interaction.value}} | Current metric value | 150 |
| {{interaction.threshold}} | Configured threshold | 100 |
| {{interaction.operator}} | Comparison operator | gt |
| {{interaction.status}} | exceeded or recovered | exceeded |
| {{interaction.dimensions}} | Dimension values (if grouped) | {"country": "ZA"} |
| {{interaction.evaluatedAt}} | Evaluation timestamp | 2026-03-20T14:30:00Z |
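These placeholders are substituted from the interaction data when the notification is rendered. A minimal sketch of that substitution, assuming dot-separated paths resolve into nested dicts (the real adapter's handling of missing keys and formatting may differ):

```python
import re

# Hypothetical renderer for {{interaction.*}} placeholders.
def render(template, interaction):
    def resolve(match):
        value = interaction
        for key in match.group(1).split("."):
            value = value[key]  # walk nested dicts for paths like "alert.name"
        return str(value)
    return re.sub(r"\{\{interaction\.([\w.]+)\}\}", resolve, template)

data = {"alertTitle": "High Failed Logins", "value": 150, "threshold": 100}
print(render("🚨 {{interaction.alertTitle}}: {{interaction.value}} (threshold: {{interaction.threshold}})", data))
# 🚨 High Failed Logins: 150 (threshold: 100)
```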
Subscriber alert interaction fields
| Variable | Description | Example |
|---|---|---|
| {{interaction.alert.name}} | Alert name | High Failure Rate |
| {{interaction.alert.severity}} | Severity level | critical |
| {{interaction.alert.metric_type}} | Metric being tracked | failure_rate |
| {{interaction.alert.condition}} | Operator | > |
| {{interaction.alert.threshold}} | Threshold | 10 |
| {{interaction.alert.time_window}} | Lookback window | 1h |
| {{interaction.alert.triggered_at}} | When it fired | 2026-03-20T14:30:00Z |
| {{interaction.subscriber.subscriberName}} | Subscriber name | SendGrid |
| {{interaction.metrics.current_value}} | Current metric value | 15.5 |
| {{interaction.metrics.total_count}} | Total notifications | 200 |
| {{interaction.metrics.failure_count}} | Failed notifications | 31 |
| {{interaction.metrics.success_count}} | Successful notifications | 169 |
| {{interaction.metrics.failure_rate}} | Failure percentage | 15.5 |
| {{interaction.metrics.success_rate}} | Success percentage | 84.5 |
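The rate fields are derived from the counts. Using the example values from the table above (assuming the straightforward percentage derivation):

```python
# failure_rate / success_rate as percentages of total notifications,
# using the table's example counts: 31 failures out of 200.
total, failures = 200, 31
failure_rate = round(failures / total * 100, 1)
success_rate = round((total - failures) / total * 100, 1)
print(failure_rate, success_rate)
# 15.5 84.5 — matching the table's example values
```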
Patterns
Email on critical subscriber failure
# Subscriber alert: failure rate > 10% in 1h
POST /api/alerts
{
"subscriberId": "sendgrid-uuid",
"name": "SendGrid failure rate critical",
"metric_type": "failure_rate",
"condition": ">",
"threshold": 10,
"time_window": "1h",
"severity": "critical"
}
# Subscription: send email on resource type ALERT + action ALERTTRIG
POST /api/subscriptions
{
"resourceTypeId": "alert-resource-type-uuid",
"actionId": "alerttrig-action-uuid",
"subscriberId": "ops-email-subscriber-uuid",
"config": {
"adapter": "adapter:email",
"personalizations": [{
"to": [{ "email": "ops@company.com" }],
"dynamic_template_data": {
"alert_name": "{{interaction.alert.name}}",
"severity": "{{interaction.alert.severity}}",
"subscriber": "{{interaction.subscriber.subscriberName}}",
"failure_rate": "{{interaction.metrics.failure_rate}}%",
"threshold": "{{interaction.alert.threshold}}%"
}
}],
"from": { "email": "alerts@company.com" },
"template_id": "d-alert-template-id"
}
}
Webhook on cart abandonment spike
# Metric: cart abandonments in 1h
POST /api/interaction-metrics
{
"name": "Cart Abandonments (1h)",
"metricTypeId": "count-uuid",
"resourceTypeId": "cart-uuid",
"actionId": "chk-abnd-uuid",
"timeWindow": "1h"
}
# Alert: > 50 abandonments
POST /api/interaction-alerts
{
"title": "High Cart Abandonment",
"metricId": "metric-uuid",
"operator": "gt",
"threshold": 50,
"alertActionId": "fired-uuid",
"resourceTypeId": "alert-biz-uuid"
}
# Subscription: webhook to analytics service
POST /api/subscriptions
{
"resourceTypeId": "alert-biz-uuid",
"actionId": "fired-uuid",
"subscriberId": "analytics-webhook-uuid",
"config": {
"url": "https://analytics.company.com/hooks/alert",
"method": "POST",
"headers": { "Authorization": "Bearer {{secrets.ANALYTICS_TOKEN}}" },
"body": {
"alert": "{{interaction.alertTitle}}",
"value": "{{interaction.value}}",
"threshold": "{{interaction.threshold}}"
}
}
}
Multi-dimensional alert with per-country notifications
# Metric: failed logins grouped by country
POST /api/interaction-metrics
{
"name": "Failed Logins by Country (5m)",
"metricTypeId": "count-uuid",
"resourceTypeId": "customer-uuid",
"actionId": "login-fail-uuid",
"timeWindow": "5m",
"groupBy": ["interaction.login_location"]
}
# Alert: > 100 per country
POST /api/interaction-alerts
{
"title": "Failed Logins by Country",
"metricId": "metric-uuid",
"operator": "gt",
"threshold": 100,
"alertActionId": "fired-uuid",
"recoveryActionId": "recovered-uuid",
"resourceTypeId": "alert-sec-uuid"
}
Each country is tracked independently. If ZA exceeds 100, only ZA fires. The interaction's dimensions field tells you which country triggered it — use {{interaction.dimensions}} in your subscription template.
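The per-country evaluation described above can be sketched as follows. This is assumed logic, not the real evaluator: each groupBy value keeps its own OK/FIRING state, so only the breaching group creates a fired interaction, and a group that stays FIRING doesn't fire again:

```python
# Each dimension value (here: country) has independent alert state.
def evaluate(values_by_country, threshold, states):
    fired = []
    for country, value in values_by_country.items():
        if value > threshold and states.get(country, "OK") == "OK":
            states[country] = "FIRING"  # OK → FIRING creates an interaction
            fired.append({"dimensions": {"country": country}, "value": value})
        elif value <= threshold and states.get(country) == "FIRING":
            states[country] = "OK"  # a real evaluator would also emit "recovered"
    return fired

states = {}
first = evaluate({"ZA": 150, "US": 20}, 100, states)
second = evaluate({"ZA": 150, "US": 20}, 100, states)
print(first)   # only ZA fires: [{'dimensions': {'country': 'ZA'}, 'value': 150}]
print(second)  # ZA is already FIRING → no new interaction: []
```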
Cron setup
For a complete alerting pipeline, set up three cron jobs:
# 1. Evaluate metric alerts every minute
* * * * * curl -s -X POST https://your-api.com/api/interaction-alerts/evaluate \
-H "x-api-key: your-key"
# 2. Evaluate subscriber alerts every 5 minutes
*/5 * * * * curl -s -X POST https://your-api.com/api/alerts/process \
-H "x-api-key: your-key"
# 3. Process notification queue every minute
* * * * * curl -s -X POST "https://your-api.com/api/queue/process?batchSize=50" \
-H "x-api-key: your-key"
The evaluation endpoints create interactions. The queue processor delivers the notifications. They're decoupled — if the queue processor is down, notifications queue up and get delivered when it comes back.
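That decoupling can be illustrated with a toy queue. This is a sketch of the semantics, not the actual implementation — delivery and auditing are stubbed out:

```python
from collections import deque

# Evaluation enqueues; a separate processor drains in batches. If the
# processor is down, the queue simply grows until it runs again.
queue = deque()

def enqueue(notification):
    queue.append(notification)

def process(batch_size=50):
    delivered = []
    for _ in range(min(batch_size, len(queue))):
        delivered.append(queue.popleft())  # real adapter delivers + writes audit record
    return delivered

enqueue({"adapter": "adapter:slack", "text": "🚨 High Failed Logins"})
enqueue({"adapter": "adapter:email", "to": "ops@company.com"})
sent = process()
print(len(sent), len(queue))
# 2 0 — both queued notifications delivered, queue drained
```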