Grafana → Slack Alert Integration Guide
This document explains how to set up Grafana alerts that send notifications to Slack using the new Slack App-based webhook method.
Prerequisites
- Slack permissions: Ability to create or request approval for a Slack App with Incoming Webhooks enabled.
- Grafana access: Admin or Editor role with permission to manage alerting.
- Network access: Grafana server must be able to reach Slack’s API endpoints.
Setup Slack Webhook (New Method)
Since the classic Incoming Webhooks method is deprecated, follow these steps:
- Go to the Slack API settings page (api.slack.com/apps).
- Create a Slack App for your workspace.
- Under Features → Incoming Webhooks, set Activate Incoming Webhooks to On.
- If your workspace requires it, request admin approval to add the new webhook. Once approved:
- Click Add New Webhook to Workspace.
- Choose the channel where alerts should be posted (e.g., #condition-mgmt-pod-dryads-alerts).
- Click Allow.
- Slack generates a Webhook URL, for example:
- https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX
- Copy this URL — you will paste it into Grafana.
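Before configuring Grafana, you can verify the webhook from the Grafana host itself, which also confirms the network-access prerequisite above. A minimal sketch using only the standard library; the URL below is the placeholder from this guide and must be replaced with the one Slack generated for you:

```python
import json
import urllib.request


def build_payload(text: str) -> bytes:
    """Build the JSON body that Slack Incoming Webhooks expect."""
    return json.dumps({"text": text}).encode("utf-8")


def post_to_slack(webhook_url: str, text: str) -> int:
    """POST a message to the webhook; Slack returns HTTP 200 on delivery."""
    req = urllib.request.Request(
        webhook_url,
        data=build_payload(text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


if __name__ == "__main__":
    # Placeholder -- substitute the webhook URL Slack generated for your channel.
    url = "https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX"
    print(post_to_slack(url, "Test message from the Grafana host"))
```

If the message appears in the chosen channel, the URL is valid and Grafana will be able to reach it.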
Configure Grafana Contact Point
- In Grafana’s left-hand menu, go to Alerting → Contact points.
- Click + New contact point.
- Provide a clear Name (e.g., Slack – Clinical Alerts).
- Select Integration type → Slack.
- Paste the Slack Webhook URL you copied earlier.
- Save the contact point.
- Use the Test button to confirm a message appears in the selected Slack channel.
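If you manage Grafana configuration programmatically, the same contact point can be created through Grafana's Alerting provisioning HTTP API instead of the UI. A sketch, assuming Grafana 9+ with a service-account token in the `GRAFANA_TOKEN` environment variable; the base URL is a placeholder:

```python
import json
import os
import urllib.request

# Assumption: replace with your Grafana instance's base URL.
GRAFANA_URL = "https://grafana.example.com"


def contact_point_body(name: str, webhook_url: str) -> dict:
    """Request body for POST /api/v1/provisioning/contact-points."""
    return {
        "name": name,
        "type": "slack",
        "settings": {"url": webhook_url},
    }


def create_contact_point(name: str, webhook_url: str) -> int:
    """Create a Slack contact point; returns the HTTP status code."""
    req = urllib.request.Request(
        f"{GRAFANA_URL}/api/v1/provisioning/contact-points",
        data=json.dumps(contact_point_body(name, webhook_url)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['GRAFANA_TOKEN']}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

This is useful when contact points must be reproducible across environments rather than clicked together by hand.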
Link Slack Contact Point to Alerts
- Go to Alerting → Notification policies in Grafana.
- Add or edit a policy to determine which alerts go to Slack.
- Under Contact point, select the Slack contact point you created (e.g., Slack – Clinical Alerts).
- Use filters if you want only specific alerts to be routed:
- Folder filter → Apply only to alerts in a specific folder.
- Label filter → Apply only to alerts tagged with certain labels (e.g., service=clinical-insight).
- Save the policy.
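Grafana routes an alert to a policy when all of the policy's label matchers are satisfied by the alert's labels. A simplified sketch of that logic, covering only equality (`=`) matchers (Grafana also supports `!=`, `=~`, and `!~`):

```python
def matches(policy_matchers: dict[str, str], alert_labels: dict[str, str]) -> bool:
    """True when every policy matcher (label = value) is satisfied by the alert's labels."""
    return all(alert_labels.get(key) == value for key, value in policy_matchers.items())


# A policy that routes only Clinical Insight alerts to the Slack contact point:
policy = {"service": "clinical-insight"}

print(matches(policy, {"service": "clinical-insight", "severity": "critical"}))  # True
print(matches(policy, {"service": "billing"}))                                   # False
```

Extra labels on the alert are ignored; only the labels named by the policy are checked.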
➡️ Example: Apply the Slack contact point to the Clinical Insight Engine data processing failed alert.
✅ With this setup:
- Grafana alerts are sent to Slack using the new App-based webhook method.
- Notification policies allow you to control which alerts go to which Slack channels.
(Optional) Avoid Repeated Notifications
To prevent alert spam while an issue is ongoing, Grafana provides these alert rule settings:
Pending period (equivalent to For duration)
- This is how long the condition must remain in a bad state before firing an alert.
- Example: Pending period = 1h
- Grafana evaluates the alert rule every evaluation interval (e.g., 1h).
- If the condition is bad at one evaluation, the alert enters the Pending state and Grafana waits another full interval.
- Only if the condition is still bad at the next evaluation does the alert fire (~2h total).
Keep firing for
- This controls how long Grafana should keep the alert active after recovery.
- Example: Keep firing for = None
- As soon as the condition returns to normal at the next evaluation, Grafana resolves the alert immediately.
Why these values were chosen
- Pending period = 1h: Ensures that alerts are only raised for sustained issues (not for one-time blips).
- Keep firing = None: Once the issue clears, the alert stops right away, preventing unnecessary notifications.
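The interplay of evaluation interval, pending period, and immediate resolution described above can be illustrated with a toy state machine (a simplified sketch, not Grafana's actual scheduler):

```python
def simulate(evaluations: list[bool], pending_evals: int) -> list[str]:
    """
    Walk a series of evaluation results (True = condition bad) and return the
    alert state after each one. The alert fires only after the condition has
    stayed bad for `pending_evals` consecutive evaluations beyond the first bad
    one, and resolves on the first good evaluation (keep firing = None).
    """
    states = []
    bad_streak = 0
    for bad in evaluations:
        bad_streak = bad_streak + 1 if bad else 0
        if bad_streak == 0:
            states.append("normal")        # recovered -> resolved immediately
        elif bad_streak > pending_evals:
            states.append("firing")        # sustained issue -> notify
        else:
            states.append("pending")       # bad, but not long enough yet
    return states


# Hourly evaluations with pending period = 1h (one extra evaluation must stay bad):
print(simulate([True, True, True, False], pending_evals=1))
# ['pending', 'firing', 'firing', 'normal']
```

The output mirrors the timeline above: one bad evaluation puts the alert in Pending, a second fires it (~2h after the first bad check), and a single good evaluation resolves it immediately.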