Send real-time Kubernetes (EKS/GKE/AKS) CPU spike alerts from Prometheus to Slack
Important notice
This workflow is provided as-is. Please review and test before using in production.
Overview
Summary
This workflow monitors Kubernetes pod CPU usage with Prometheus and sends real-time Slack alerts when consumption crosses a threshold (e.g., 0.8 cores). It groups pods by application name to reduce noise and improve clarity, making it ideal for observability across multi-pod deployments such as Argo CD, Loki, and Promtail.
Who's it for
Designed for DevOps, SRE, and platform teams, this workflow is 100% no-code and plug-and-play, and can easily be extended to cover memory, disk, or network spikes. It eliminates the need for Alertmanager by routing critical alerts directly into Slack using native n8n nodes.
What it does
This n8n workflow polls Prometheus every 5 minutes, checks whether any pod's CPU usage crosses a defined threshold (e.g., 0.8 cores), groups the offending pods by app, and sends structured alerts to a Slack channel.
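Conceptually, the poll → threshold → group → alert pipeline can be sketched in plain Python. This is an illustrative sketch only: the function names, the replica-suffix grouping heuristic, and the response shape are assumptions, not the template's actual node code.

```python
# Sketch of the workflow's logic: filter Prometheus results above the CPU
# threshold, group spiking pods by app, and format one Slack-ready message.
from collections import defaultdict

CPU_THRESHOLD = 0.8  # cores; matches the example threshold in the workflow


def filter_spikes(results, threshold=CPU_THRESHOLD):
    """Keep only results whose CPU value exceeds the threshold.

    `results` follows the shape of Prometheus's /api/v1/query response:
    [{"metric": {"pod": ...}, "value": [<timestamp>, "<cores>"]}, ...]
    """
    return [r for r in results if float(r["value"][1]) > threshold]


def group_by_app(results):
    """Group spiking pods by app name. Here the app is taken as the pod name
    minus its replica suffix (e.g. 'loki-0' and 'loki-1' both map to 'loki');
    a real setup might use an 'app' label instead."""
    groups = defaultdict(list)
    for r in results:
        pod = r["metric"]["pod"]
        app = pod.rsplit("-", 1)[0]
        groups[app].append((pod, float(r["value"][1])))
    return dict(groups)


def format_alert(groups):
    """Build one Slack-ready text block listing spiking pods per app."""
    lines = ["CPU spike alert (> %.1f cores):" % CPU_THRESHOLD]
    for app, pods in sorted(groups.items()):
        lines.append(f"*{app}*")
        lines.extend(f"  - {pod}: {cpu:.2f} cores" for pod, cpu in pods)
    return "\n".join(lines)
```

For example, a result set containing loki-0 at 1.2 cores and loki-1 at 0.3 cores yields a single loki group in which only loki-0 is listed.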
How to set up
- Set your Prometheus URL; the instance must expose the required metrics (container_cpu_usage_seconds_total, kube_pod_container_resource_limits)
- Add your Slack bot token with the chat:write scope
- Import the workflow, then customize:
Threshold (e.g., 0.8 cores)
Slack channel
Cron schedule
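For reference, the per-pod CPU query behind the setup above might look like the following. This is a sketch; the template's exact PromQL may differ.

```python
# Illustrative PromQL for per-pod CPU usage (in cores) over a 5-minute window,
# plus the Prometheus HTTP API URL the workflow would hit.
from urllib.parse import urlencode

PROMQL = (
    'sum by (namespace, pod) ('
    'rate(container_cpu_usage_seconds_total{container!=""}[5m])'
    ')'
)


def query_url(prometheus_base):
    """Build the instant-query URL for Prometheus's HTTP API."""
    return f"{prometheus_base}/api/v1/query?" + urlencode({"query": PROMQL})
```

A value above 0.8 in this query's result corresponds to a pod consuming more than 0.8 cores, which is the example threshold used throughout this template.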
Requirements
- A working Prometheus stack with kube-state-metrics
- Slack bot credentials
- n8n instance (self-hosted or cloud)
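For context, the workflow's Slack node ultimately performs a chat.postMessage call with the bot token, as sketched below. The token, channel, and helper name are illustrative; only the endpoint, scope, and payload shape come from Slack's API.

```python
# Sketch of the Slack Web API call behind the workflow's Slack node:
# chat.postMessage with a bot token holding the chat:write scope.
import json
import urllib.request


def build_slack_request(token, channel, text):
    """Build (but do not send) a chat.postMessage request."""
    payload = json.dumps({"channel": channel, "text": text}).encode()
    return urllib.request.Request(
        "https://slack.com/api/chat.postMessage",
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json; charset=utf-8",
        },
        method="POST",
    )
```

Sending the built request is then a single `urllib.request.urlopen(req)` call; in the workflow itself, the native Slack node handles this for you.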
How to customize
- Adjust the threshold values or query interval
- Add memory, disk, or network usage metrics
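As an example of such an extension, swapping the CPU query for a working-set memory query keeps the same workflow shape. The query and threshold below are illustrative assumptions, not part of the template.

```python
# Illustrative memory variant: query per-pod working-set memory instead of
# CPU rate, and alert above a byte threshold rather than a core count.
MEMORY_PROMQL = (
    'sum by (namespace, pod) ('
    'container_memory_working_set_bytes{container!=""}'
    ')'
)
MEMORY_THRESHOLD_BYTES = 512 * 1024 * 1024  # e.g. alert above 512 MiB per pod
```

The threshold comparison and Slack grouping steps stay unchanged; only the Prometheus node's query and the threshold value need editing.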
This is a plug-and-play Kubernetes alerting template for real-time observability.
Tags:
Prometheus, Slack, Kubernetes, Alert, n8n, DevOps, Observability, CPU Spike, Monitoring