See your Celery. Fix it before it breaks.

Real-time monitoring for every task, queue, and worker — with persistent history, search, and management actions Flower never had.

2 lines of Python to connect

30s to first data

<2s event-to-dashboard p95

Free up to 10K tasks/day

Celery monitoring is broken

Flower crashes

Flower loses all data on restart — no persistence, no alerting, no search. The last meaningful update was in 2023.

Grafana shows half the picture

celery-exporter gives you 12 Prometheus metrics — useful for aggregate trends, but no individual task visibility, no management actions, and no alerting without significant additional setup.

No monitoring

Most teams fly blind. PENDING could mean "waiting" or "lost 3 hours ago." You find out when a customer reports it.

Connect in 30 seconds

1. Option A: Python SDK

Best for most teams — captures events from inside your Celery process.

$ pip install sluice

# settings.py
import sluice
sluice.init(api_key="sk_...")
2. Option B: Docker agent (zero Python changes)

Best when you can't modify your Python code — connects directly to your Redis broker.

$ docker run -d --name sluice-agent \
    --restart unless-stopped \
    -e SLUICE_API_KEY="sk_..." \
    -e REDIS_URL="redis://your-broker:6379/0" \
    sluice/agent

# For production, use --env-file or Docker secrets instead of inline env vars.
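For a longer-lived deployment, the same agent can be run under Docker Compose. This is an illustrative sketch based on the flags above; the service name and `.env` layout are assumptions, not a documented Sluice config:

```yaml
# docker-compose.yml — illustrative sketch; mirrors the `docker run` flags above
services:
  sluice-agent:
    image: sluice/agent
    restart: unless-stopped
    env_file: .env            # keeps SLUICE_API_KEY out of version control
    environment:
      REDIS_URL: "redis://your-broker:6379/0"
```

Put `SLUICE_API_KEY=sk_...` in `.env` and start the agent with `docker compose up -d`.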
3. Open your dashboard

# The SDK auto-enables Celery events — no manual config needed.
# Tasks appear at sluice.sh/overview within seconds.
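For reference, "Celery events" are governed by two standard Celery settings, shown below. That the SDK flips exactly these on your behalf is an assumption based on the page's claim; the setting names themselves are standard Celery configuration:

```python
# celeryconfig.py
# Standard Celery settings that produce the event stream a monitor consumes.
worker_send_task_events = True  # workers emit task-started/succeeded/failed events
task_send_sent_event = True     # producers emit task-sent, so queued-but-unstarted tasks are visible
```

Without `task_send_sent_event`, a monitor only learns about a task once a worker picks it up.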

Task metadata only — arguments redacted by default. TLS in transit. Your Redis credentials never leave your infrastructure.

Everything you need. Nothing you don’t.

Real-time dashboard

See every task and its state in real time — search, filter, and take action on what your workers are doing right now.

Queue health bars

Queue depth, throughput, and consumer count at a glance — so you always know which queues need attention.

Task forensics

Dig into full tracebacks, state timelines, and timing breakdowns — then retry or revoke tasks directly from the dashboard without SSH.

Two ways to connect

Install the Python SDK with two lines of code, or skip Python entirely and run the Docker agent — either way, data flows in under a minute.

Persistent history

Every task is stored in Postgres, not in-memory. Search what failed last Tuesday at 3am — even after a worker restart.

Alerting (coming soon)

Stalled tasks, queue backlogs, and worker failures — routed to Slack, PagerDuty, or webhooks. Shipping in V1.

How Sluice compares

| Capability | Flower | Grafana + Exporter | Sluice |
|---|---|---|---|
| Individual task visibility | Lost on restart | ✗ | Persisted |
| Queue depth monitoring | ✗ | Aggregate only | Per-queue, real-time |
| Worker health monitoring | ✓ | Requires config | Auto-discovered |
| Task retry/revoke | No confirmation | ✗ | ✓ |
| Alerting | ✗ | Manual (Alertmanager) | Soon |
| Data persistence | None (in-memory) | Prometheus (limited) | Postgres (full history) |
| Setup time | 5 min | 2–4 hours | 30 seconds |
| Maintenance | You maintain it | You maintain it | We maintain it |
| Beat schedule monitoring | ✗ | ✗ | Soon |
| Silent stall detection | ✗ | ✗ | Soon |

Did you know?

Celery’s PENDING state doesn’t mean “waiting.” It means “we have no information.”

Any unknown task ID returns PENDING — whether it’s queued, lost three hours ago, or was never dispatched at all. Sluice tracks every state transition from the moment a task is sent, so you always know which one it is.
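To see why tracking from the sent event resolves the ambiguity, here is a minimal, stdlib-only sketch (an illustration, not Sluice's actual implementation) of a tracker that can tell a queued task apart from an ID it has never seen:

```python
class TaskTracker:
    """Records every state transition; unknown IDs stay genuinely unknown."""

    def __init__(self):
        self.history = {}  # task_id -> ordered list of observed states

    def record(self, task_id, state):
        self.history.setdefault(task_id, []).append(state)

    def status(self, task_id):
        # Celery's AsyncResult reports PENDING for *any* unknown ID.
        # With sent-event tracking, "never dispatched" becomes distinguishable.
        if task_id not in self.history:
            return "UNKNOWN (never dispatched, or sent before tracking started)"
        return self.history[task_id][-1]

tracker = TaskTracker()
tracker.record("a1b2", "SENT")      # producer emitted task-sent
tracker.record("a1b2", "STARTED")   # a worker picked it up
print(tracker.status("a1b2"))       # STARTED — queued and running, not lost
print(tracker.status("deadbeef"))   # UNKNOWN — where Celery would say PENDING
```

The key difference from polling `AsyncResult.state`: an ID with no recorded history is reported as unknown rather than conflated with a task that is merely waiting.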

Stop flying blind.

Free for up to 10K tasks/day. No credit card required.