DEV Community

Damien Gallagher

Posted on • Originally published at buildrlab.com

Heartbeats in OpenClaw: Cheap Checks First, Models Only When You Need Them


I used to think “heartbeat” meant “run the assistant every X minutes.” It doesn’t.

In OpenClaw, a heartbeat is just a regular pulse where your agent checks a short checklist and decides one of two things:

  • Nothing important changed → reply HEARTBEAT_OK
  • Something needs attention → send a short alert (and maybe do deeper work)

That sounds simple, but there’s a trap: if you throw an LLM at every heartbeat, you end up paying for a whole lot of “nothing happened.”

This post shows the approach I’m using now:

  • do rule-based checks first (fast, deterministic, basically free)
  • only call a model when there’s actual signal (summaries, decisions, or messy human context)

The core idea: a heartbeat is a gate, not a workflow

Think of a heartbeat as a gatekeeper.

A good heartbeat answers questions like:

  • Did anything break? (CI failing, errors, deploy alarms)
  • Did anything change? (new PR, new task queued, new email from a customer)
  • Is anything time-sensitive? (calendar event in <2 hours, expiring cert)

If the answer is “no” to all of those… the best output is literally a single line:

HEARTBEAT_OK

Anything more is noise.


You usually don’t need a model for that

Most heartbeat logic is not “reasoning.” It’s just checking state.

Examples that don’t need an LLM:

  • is the repo dirty?
  • are there open PRs?
  • did the agent queue grow?
  • did a job fail?
  • did Slack/WhatsApp disconnect?

For this, a shell script + a few API calls is perfect.
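As a sketch, the whole gate can be one small shell function. The flag-file layout below is an assumption, not an OpenClaw convention — in practice you'd swap each check for `git status --porcelain`, a CI API call, or whatever state you already track:

```shell
# heartbeat_check: cheap, deterministic heartbeat gate (sketch).
# It looks for flag files that upstream jobs drop into a status
# directory when something changes; the flag names are placeholders.
heartbeat_check() {
  status_dir="$1"
  alerts=""
  for flag in ci_failed queue_grew chat_disconnected; do
    if [ -e "$status_dir/$flag" ]; then
      alerts="${alerts}- ${flag}
"
    fi
  done
  if [ -z "$alerts" ]; then
    echo "HEARTBEAT_OK"        # the happy path: one line, zero tokens
  else
    echo "HEARTBEAT_ALERT"
    printf '%s' "$alerts"      # bullet list of what changed
  fi
}
```

Run against a clean status directory, it prints `HEARTBEAT_OK` and nothing else — exactly the "nothing happened" case you don't want to pay a model for.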

A practical pattern: cheap mode first

Run a lightweight script first. It outputs either:

  • HEARTBEAT_OK
  • or HEARTBEAT_ALERT + bullet list of what changed

Only if it prints an alert do you involve a model.

That gives you the best of both worlds:

  • $0 heartbeats most of the time
  • still get a clean human summary when something real happens
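A minimal wrapper for that two-stage flow, with the check and the model call passed in as commands — both arguments are placeholders for your own heartbeat script and model client, not real OpenClaw commands:

```shell
# escalate: run the cheap check first; only pipe its output to the
# model command when it actually reports an alert.
escalate() {
  check_cmd="$1"
  model_cmd="$2"
  out=$("$check_cmd")
  if [ "$out" = "HEARTBEAT_OK" ]; then
    echo "$out"                           # $0 spent: no model involved
  else
    printf '%s\n' "$out" | "$model_cmd"   # real signal: pay for a summary
  fi
}
```

The point of taking both steps as parameters is that the gate logic never changes when you swap models — only the second command does.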

When should you involve a model?

A model is worth it when the output benefits from language understanding:

  • summarising multiple alerts into one message
  • deciding what to do first when several things changed at once
  • writing a “brief” that a human actually wants to read
  • turning raw logs into a short action plan

In my setup, I keep it simple:

  • run cheap checks
  • if alert → use a small model (e.g. Claude Haiku) to summarise + recommend next actions

Tuning heartbeat frequency: faster isn’t always better

Heartbeat schedules are like monitoring: you want checks frequent enough to catch the important stuff, but not so frequent that they turn into spam (or cost creep).

Short intervals (e.g. every 5 minutes)

Good for:

  • high-velocity shipping (lots of PRs / CI runs)
  • active incident response
  • anything with a “respond now” requirement

Bad for:

  • cost, if an LLM runs every time
  • notification fatigue (“another alert… for nothing”)
  • wasted compute / API calls

Longer intervals (e.g. every 30–60 minutes)

Good for:

  • most solo founder work
  • “keep an eye on it” workflows
  • staying informed without interruptions

Bad for:

  • failures you only learn about later
  • tasks that need to feel “real-time”

A simple rule of thumb

  • If you’re actively shipping: 5–15 minutes
  • If you’re in build mode but stable: 30 minutes
  • If you’re just keeping watch: 60–120 minutes

And if you need something at an exact time (“remind me at 9am sharp”), use a scheduled job (cron) instead of heartbeats.
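For example, a crontab entry like this fires at 09:00 exactly, every day — something heartbeat polling can only approximate (`remind.sh` is a hypothetical script):

```cron
# m h dom mon dow  command
0 9 * * * /home/you/bin/remind.sh "daily 9am reminder"
```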


The takeaway

If you’re using OpenClaw (or any agent system):

  1. Make heartbeats cheap and deterministic
  2. Treat the model as an escalation layer, not the default
  3. Tune frequency so you get signal, not noise

If you want to copy the pattern, start with a tiny script that prints either HEARTBEAT_OK or a short HEARTBEAT_ALERT list — and only summarise the alert with a model when you need to.
