Cron Job Patterns That Actually Work in Production
25+ production crons. Model selection, timeout handling, failure recovery, zero-token architectures, and the patterns that don't break at 3am.
I run 25+ cron jobs in production. Some fire every hour. Others run once a week. A few have been running without failure for 60+ days straight. This is the accumulated wisdom from two months of production cron automation with OpenClaw.
The Cron Catalog
Current production crons (as of Feb 8, 2026):
Daily (run once per day):
├── briefing-daily (6am): Email + calendar + messages summary
├── youtube-scripts-blockbuddies (8am): Generate 12 quiz scripts
├── youtube-scripts-stellartruths (9am): Generate 2 space fact scripts
├── seo-audit (10am): Check all 6 sites for broken links, missing meta
├── contact-intelligence (11am): CRM decay check (last contact > 14 days)
├── analytics-review (12pm): GA4 metrics across all properties
├── reddit-digest (2pm): Aggregate /r/OpenClaw + /r/LocalLLaMA threads
└── learning-extraction (11:59pm): Extract key insights from day's transcripts
Twice daily:
├── email-triage (7am, 7pm): Flag urgent emails, auto-archive newsletters
└── calendar-prep (8am, 4pm): Upcoming meetings, prep materials
Weekly (run Sunday 9am):
├── cost-dashboard: API spend breakdown, anomalies
├── model-report: Model usage analysis, over-spend flags
├── youtube-analytics: Channel performance, top videos, recommendations
├── competitive-intel: Visiting Media news, competitor analysis
└── memory-audit: Review MEMORY.md for stale/wrong info
On-demand (trigger-based):
├── video-render-check: Poll for completed renders, upload when ready
├── github-deploy-monitor: Watch for Vercel deploy failures
└── fathom-transcript-processor: New call recorded → analyze → report

That's 25 crons, running 150+ times per week, at a total cost of ~$32/week ($4.60/day).
Pattern 1: Zero-Token Checks
The most expensive cron pattern is also the most common: wake the agent, load context, check for work, find nothing, exit.
Cost per check:
- Load context (MEMORY.md, AGENTS.md, etc.): 1,800 tokens
- Check for pending tasks: 200 tokens
- Exit message: 100 tokens
Total: 2,100 tokens @ $0.10/1M = $0.00021 per check. Sounds cheap, but if the check runs every hour, that's $0.005/day. Multiply by 10 crons, and you're at $0.05/day for doing nothing.
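To make the arithmetic concrete, here is a quick sanity check of the per-day figure using awk as a calculator (the token counts and price are the ones quoted above):

```shell
#!/bin/bash
# 2,100 tokens per check, $0.10 per 1M tokens, 24 hourly checks/day, 10 crons
DAILY=$(awk 'BEGIN { printf "%.4f", 2100 / 1e6 * 0.10 * 24 * 10 }')
echo "cost/day for 10 always-on idle crons: \$$DAILY"  # prints: cost/day for 10 always-on idle crons: $0.0504
```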
Solution: Shell script pre-checks
#!/bin/bash
# email-triage-check.sh
UNREAD=$(gcloud-gmail count --label INBOX --unread)
if [ "$UNREAD" -gt 0 ]; then
  openclaw send --agent mira --message "EMAIL_TRIAGE: $UNREAD unread"
else
  exit 0  # No work, no agent spawn
fi

Schedule the shell script via launchd/systemd, not OpenClaw cron. Cost: $0/day for checks; you only pay when work actually exists.
Applied this pattern to 8 crons. Savings: ~$10/week.
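If launchd or systemd feels heavy, a plain user crontab entry is the lowest-friction way to schedule the pre-check. A sketch (the script path and log location are assumptions, not fixed conventions):

```
# crontab -e
# min hour dom mon dow  command
0 * * * * $HOME/bin/email-triage-check.sh >> $HOME/.openclaw/logs/email-triage-check.log 2>&1
```

Redirecting stdout and stderr to a log file matters here: cron otherwise mails the output, and silent failures are exactly what this pattern is trying to avoid.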
Pattern 2: Model Selection by Task Type
Not all crons are equal. Some need reasoning (learning extraction), others are pure data processing (analytics review).
# Data extraction → Flash
openclaw cron add seo-audit \
--schedule "0 10 * * *" \
--model "google/gemini-3-flash-preview" \
--task "Check all 6 sites: broken links, missing meta tags, 404s. Report issues."
# Heavy text generation → DeepSeek
openclaw cron add youtube-scripts \
--schedule "0 8 * * *" \
--model "deepseek/deepseek-chat-v3" \
--task "Generate 12 quiz scripts for Block Buddies (questions + answers)."
# Requires judgment → Sonnet
openclaw cron add learning-extraction \
--schedule "59 23 * * *" \
--model "anthropic/claude-sonnet-4-5" \
--task "Review today's transcripts. Extract: decisions, failures, patterns. Update MEMORY.md."

Rule of thumb:
- Flash: Data extraction, formatting, monitoring
- DeepSeek: Bulk text generation (scripts, drafts, summaries)
- Sonnet: Analysis, decision-making, judgment calls
- Opus: Never. If a cron needs Opus-level reasoning, it should be a subagent task triggered by the main agent.
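The rule of thumb above can be encoded as a small dispatch helper so new crons pick a model consistently. This is a hypothetical sketch: the task-class names are mine, not an OpenClaw concept, and the model slugs are the ones used in this article:

```shell
#!/bin/bash
# Map a task class to a model slug, per the rule of thumb above.
model_for() {
    case "$1" in
        extract|monitor|format)    echo "google/gemini-3-flash-preview" ;;  # data work
        generate|draft|summarize)  echo "deepseek/deepseek-chat-v3" ;;      # bulk text
        analyze|decide|judge)      echo "anthropic/claude-sonnet-4-5" ;;    # judgment
        *) echo "no cron model for task class: $1" >&2; return 1 ;;
        # Note: no Opus branch on purpose; Opus-level work belongs in a subagent task.
    esac
}

model_for extract   # prints: google/gemini-3-flash-preview
model_for draft     # prints: deepseek/deepseek-chat-v3
```

Usage would look like `--model "$(model_for extract)"` when creating a cron.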
Pattern 3: Timeout Handling
Crons die. Network fails, APIs timeout, models hallucinate infinite loops. Every cron needs a hard timeout.
openclaw cron add analytics-review \
--schedule "0 12 * * *" \
--model "google/gemini-3-flash-preview" \
--timeout 300 \
--task "Pull GA4 metrics for all 6 sites. Compare to last week. Flag anomalies."  # --timeout is in seconds: 300 = 5 minutes max

What happens on timeout?
- OpenClaw kills the session
- Logs the failure to ~/.openclaw/logs/cron-failures.log
- Sends a notification to the main agent (if configured)
I set timeouts conservatively: 3-5 minutes for data tasks, 10-15 minutes for generation tasks. If a cron hits timeout repeatedly, it's a signal to break it into smaller pieces.
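For defense in depth, a shell-level guard can enforce the wall-clock limit independently of any in-process timeout. A sketch, assuming GNU coreutils timeout(1) is available; sleep stands in for the real openclaw invocation:

```shell
#!/bin/bash
# run_with_limit SECONDS CMD... : hard wall-clock cap at the shell level.
# coreutils timeout(1) exits with status 124 when the limit is hit, and
# --kill-after sends SIGKILL if the command ignores the first signal.
run_with_limit() {
    local limit="$1"; shift
    timeout --kill-after=10 "$limit" "$@"
    local status=$?
    if [ "$status" -eq 124 ]; then
        echo "TIMEOUT after ${limit}s: $*" >&2
    fi
    return "$status"
}

# Demo with a stand-in command; production would wrap the openclaw call.
STATUS=0
run_with_limit 1 sleep 5 2>/dev/null || STATUS=$?
echo "exit status: $STATUS"  # prints: exit status: 124
```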
Real Example: YouTube Script Generation
Original cron: "Generate 12 quiz scripts" (one task, 12 scripts).
Problem: Timeout after 8 scripts (~12 minutes). The model would generate 8, then hit context limits and fail.
Fix: Split into 3 crons, 4 scripts each:
openclaw cron add yt-scripts-batch1 --schedule "0 8 * * *" --timeout 300 --task "Generate 4 scripts (topics 1-4)"
openclaw cron add yt-scripts-batch2 --schedule "5 8 * * *" --timeout 300 --task "Generate 4 scripts (topics 5-8)"
openclaw cron add yt-scripts-batch3 --schedule "10 8 * * *" --timeout 300 --task "Generate 4 scripts (topics 9-12)"

Result: Zero timeouts in 30 days. Each batch completes in ~4 minutes.
Pattern 4: Failure Recovery
Crons fail. APIs go down, rate limits hit, models return garbage. The question is: what happens next?
Option 1: Retry on Next Schedule
For non-critical tasks (SEO audit, analytics review), just wait for the next run. If today's audit fails, tomorrow's will catch it.
No special handling needed. Let it fail, log it, move on.
Option 2: Immediate Retry with Backoff
For critical tasks (daily briefing, email triage), retry immediately:
#!/bin/bash
# briefing-daily-wrapper.sh
MAX_RETRIES=3
RETRY_DELAY=60 # seconds
for i in $(seq 1 $MAX_RETRIES); do
  openclaw send --agent mira --message "BRIEFING_DAILY" && exit 0
  if [ $i -lt $MAX_RETRIES ]; then
    echo "Attempt $i failed, retrying in $RETRY_DELAY seconds"
    sleep $RETRY_DELAY
    RETRY_DELAY=$((RETRY_DELAY * 2))  # exponential backoff
  fi
done
# All retries failed: escalate to the main agent
openclaw send --agent mira --message "CRITICAL: Daily briefing failed after $MAX_RETRIES attempts"
exit 1

Schedule the wrapper script, not the bare OpenClaw command. This gives you control over retry logic.
Option 3: Queue for Manual Review
For high-stakes tasks (financial reports, client emails), don't auto-retry. Queue for human review:
# On failure, write to pending-review queue
echo "FAILED: analytics-review at $(date)" >> ~/.openclaw/pending-review.txt
# Main agent checks this file on next heartbeat
openclaw send --agent mira --message "CHECK_PENDING_REVIEW"

Pattern 5: Output Persistence
Cron output should be written to disk, not just logged to the session transcript.
Bad:
# Output only in transcript (lost after 7 days)
openclaw cron add briefing-daily --task "Generate daily briefing"

Good:
# Output written to memory/briefings/
openclaw cron add briefing-daily \
--task "Generate daily briefing, save to memory/briefings/YYYY-MM-DD.md"

Real task prompt:
"Pull last 24h from Gmail, Calendar, and Telegram.
Generate structured briefing with:
- Top priority items
- Calendar for next 48h
- Pending action items
- Recent decisions
Save output to memory/briefings/$(date +%Y-%m-%d).md"

This makes briefings searchable via grep, referenceable in future sessions, and recoverable if the session transcript is deleted.
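The searchability claim is easy to demonstrate: date-stamped files turn "what did I decide last week?" into a one-line grep. A stand-in demo using a temp directory in place of memory/briefings/:

```shell
#!/bin/bash
# Two fake date-stamped briefings in a scratch dir.
DIR=$(mktemp -d)
echo "- [ ] Deploy Blueprint site (deadline: today)" > "$DIR/2026-02-07.md"
echo "- [ ] Review launch timeline" > "$DIR/2026-02-08.md"

# Which briefings ever mentioned a deadline? grep -l prints matching files.
MATCHES=$(grep -l "deadline" "$DIR"/*.md)
echo "$MATCHES"  # prints only the 2026-02-07 briefing's path
rm -rf "$DIR"
```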
Pattern 6: Cost Monitoring Per Cron
Every cron logs its cost. I review this weekly to catch regressions:
# Weekly cron cost report
openclaw cron add cron-cost-report \
--schedule "0 9 * * 0" \
--model "google/gemini-3-flash-preview" \
--task "Analyze last 7 days of cron runs. Report: total cost, per-cron breakdown, anomalies."
Output: memory/cron-cost-YYYY-MM-DD.md

Real output (week of Feb 1-7):
# Cron Cost Report: Feb 1-7
Total: $32.10 ($4.60/day)
Top spenders:
1. youtube-scripts-blockbuddies: $8.40 (26% of total) — DeepSeek, 7 runs
2. learning-extraction: $6.20 (19%) — Sonnet, 7 runs
3. competitive-intel: $5.20 (16%) — DeepSeek, 1 run (weekly)
4. seo-audit: $4.20 (13%) — Flash, 7 runs
5. reddit-digest: $3.15 (10%) — DeepSeek, 7 runs
Anomalies:
- Feb 4: youtube-scripts ran twice (8am and 8:05am) due to manual trigger. Cost: $2.40 extra.
Action: Added mutex lock to prevent concurrent runs.

This catches:
- Crons running more often than expected (misconfigured schedule)
- Crons using wrong model (should be Flash, using Sonnet)
- Crons with inflating costs (output size growing over time)
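If cost data is also mirrored to a local log, the per-cron breakdown doesn't even need an agent. The CSV format below is an assumption for illustration, not OpenClaw's actual log format:

```shell
#!/bin/bash
# Assumed format: each run appends "cron-name,cost-in-dollars" to a CSV.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
seo-audit,0.60
seo-audit,0.58
learning-extraction,0.90
EOF

# One awk pass: sum cost per cron name, then sort for a stable report.
REPORT=$(awk -F, '{ sum[$1] += $2 } END { for (c in sum) printf "%s %.2f\n", c, sum[c] }' "$LOG" | sort)
echo "$REPORT"
rm -f "$LOG"
```

This runs at zero token cost, leaving the agent-driven weekly report to do the judgment work (spotting anomalies, recommending model swaps).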
Pattern 7: Conditional Execution
Not all crons should run unconditionally. Some should check prerequisites first:
# Video render check — only run if renders are pending
openclaw cron add video-render-check \
--schedule "*/15 * * * *" \
--task "Check ~/yt-automation/renders/ for completed .mp4 files. If found, upload to YouTube. If none, exit silently."  # runs every 15 minutes
# GitHub deploy monitor — only run if deploys happened today
openclaw cron add github-deploy-monitor \
--schedule "0 */3 * * *" \
--task "Query GitHub API for deployments today. If any failed, report details. If none or all succeeded, exit silently."  # runs every 3 hours

Key phrase: "If none, exit silently." This tells the agent not to generate output when there's no work. Saves tokens and reduces noise.
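Pattern 1 and Pattern 7 combine well: the prerequisite check can live in a shell script so the agent never wakes at all when there's nothing pending. A hypothetical zero-token variant of video-render-check (the helper function is mine):

```shell
#!/bin/bash
# Count finished .mp4 files in a render directory; the caller only spawns
# the agent when the count is nonzero.
count_renders() {
    find "${1:?usage: count_renders DIR}" -maxdepth 1 -name '*.mp4' 2>/dev/null | wc -l | tr -d ' '
}

# Demo against a scratch dir standing in for ~/yt-automation/renders/.
DIR=$(mktemp -d)
touch "$DIR/episode1.mp4" "$DIR/episode2.mp4" "$DIR/notes.txt"
COUNT=$(count_renders "$DIR")
echo "renders ready: $COUNT"  # prints: renders ready: 2
if [ "$COUNT" -gt 0 ]; then
    : # production would do: openclaw send --agent mira --message "RENDERS_READY: $COUNT"
fi
rm -rf "$DIR"
```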
Pattern 8: Human-in-the-Loop for High-Stakes Tasks
Some crons generate output that should be reviewed before publishing:
# Generate weekly newsletter draft
openclaw cron add newsletter-draft \
--schedule "0 10 * * 5" \
--task "Generate newsletter draft: top 3 articles from this week, 1 community highlight, 1 tool recommendation. Save to drafts/newsletter-YYYY-MM-DD.md. DO NOT SEND."  # Friday 10am
# Human reviews draft, then manually triggers send:
openclaw send --agent mira --message "SEND_NEWSLETTER: drafts/newsletter-2026-02-14.md"

Never auto-send emails, tweets, or public posts from crons. Always queue for review.
Pattern 9: Dependency Chains
Some crons depend on others completing first:
# Step 1: Fetch raw data
openclaw cron add yt-analytics-fetch \
--schedule "0 8 * * 0" \
--task "Fetch YouTube analytics for last 7 days, save to data/yt-analytics-YYYY-MM-DD.json"
# Step 2: Analyze data (runs 10 minutes after fetch)
openclaw cron add yt-analytics-analyze \
--schedule "10 8 * * 0" \
--task "Read data/yt-analytics-YYYY-MM-DD.json, generate insights, save to memory/yt-analysis-YYYY-MM-DD.md"
# Step 3: Generate recommendations (runs 10 minutes after analysis)
openclaw cron add yt-recommendations \
--schedule "20 8 * * 0" \
--task "Read memory/yt-analysis-YYYY-MM-DD.md, generate 3 actionable recommendations for next week"This works for simple chains (3-4 steps). For complex workflows, use a dedicated workflow engine or orchestration script.
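An alternative to staggered schedules is a single orchestration script, which removes the guesswork of "will step 1 be done in 10 minutes?". A sketch; the step names are illustrative and `true` stands in for the per-step openclaw invocations:

```shell
#!/bin/bash
set -euo pipefail
# set -e aborts the script the moment any step fails, so "analyze" never
# runs against data that "fetch" failed to produce.

step() {
    local name="$1"; shift
    echo "running: $name"
    "$@"   # stand-in for: openclaw send --agent mira --message "..."
}

step fetch     true
step analyze   true
step recommend true
echo "chain complete"
```

With staggered cron schedules, step 2 runs even if step 1 failed; the wrapper makes the dependency explicit. It is still the simple-chain answer, though: for fan-out or conditional branching, a real workflow engine earns its keep.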
Real-World Example: Daily Briefing End-to-End
Here's the full stack for my daily briefing cron:
# 1. Shell script wrapper (scheduled via launchd)
#!/bin/bash
# briefing-daily-wrapper.sh
# Check if there's actually new content
EMAILS=$(gcloud-gmail count --after "$(date -d yesterday +%Y-%m-%d)")  # GNU date; on macOS/BSD use: date -v-1d +%Y-%m-%d
CALENDAR=$(gcal list --after $(date +%Y-%m-%d))
if [ "$EMAILS" -eq 0 ] && [ -z "$CALENDAR" ]; then
  echo "No new content for briefing, skipping"
  exit 0
fi
# Trigger OpenClaw cron
openclaw send --agent mira --message "BRIEFING_DAILY"
# 2. OpenClaw cron definition
openclaw cron add briefing-daily \
--model "google/gemini-3-flash-preview" \
--timeout 300 \
--task "$(cat ~/.openclaw/tasks/briefing-daily.md)"
# 3. Task definition (tasks/briefing-daily.md)
Pull last 24h from:
- Gmail (INBOX, unread only)
- Calendar (today + tomorrow)
- Telegram (messages from users)
Generate structured briefing:
## Top Priority
- [ ] Action items with deadlines today
- [ ] Unread emails marked urgent
## Calendar (Next 48h)
- List meetings with start time, attendees, location
## Recent Decisions
- Extract decisions made yesterday from Telegram
## Pending Items
- Tasks mentioned but not completed
Save to: memory/briefings/$(date +%Y-%m-%d).md
Send to: Telegram (notify the team)
# 4. Output format
# Briefing: Feb 8, 2026
## Top Priority
- [ ] Deploy Blueprint site to Vercel (deadline: today)
- [ ] Review Eleanore launch timeline with Alexandra
## Calendar (Next 48h)
- Feb 8, 2pm: Team sync (Zoom)
- Feb 9, 10am: Call with Justin re: architecture
## Recent Decisions (Feb 7)
- Switched all crons to Flash/DeepSeek (cost optimization)
- Approved Stellar Truths video quality standard
## Pending Items
- Booth Beacon blog post for Chicago
- YouTube analytics review (weekly, due Sunday)
Cost: $0.25 | Model: Flash | Runtime: 18s

Failure Modes I've Encountered
1. Context Limit Exceeded
Symptom: Cron runs for 10+ minutes, then fails with "context length exceeded."
Cause: Task generates too much output (e.g., "Analyze all 150 YouTube videos").
Fix: Batch the work. Instead of one cron analyzing 150 videos, run 3 crons analyzing 50 each.
2. Stale API Tokens
Symptom: Cron worked fine for weeks, then suddenly fails with "unauthorized."
Cause: OAuth token expired (Gmail, YouTube, etc.).
Fix: Refresh tokens before they expire. I run a weekly token-refresh cron:
openclaw cron add token-refresh \
--schedule "0 3 * * 0" \
--task "Refresh OAuth tokens for Gmail, YouTube, Google Calendar"

3. Race Conditions
Symptom: Two crons try to write to the same file simultaneously, one fails.
Cause: No mutex lock on shared resources.
Fix: Use file-based locks:
#!/bin/bash
LOCKDIR="/tmp/youtube-scripts.lock"
# mkdir is atomic: the existence check and the lock acquisition happen in
# a single operation. A test -f / touch pair has its own race window
# between the check and the create.
if ! mkdir "$LOCKDIR" 2>/dev/null; then
  echo "Another instance is running, exiting"
  exit 1
fi
trap 'rmdir "$LOCKDIR"' EXIT
# Run cron work
openclaw send --agent mira --message "YOUTUBE_SCRIPTS"

Key Takeaways
- Zero-token checks save money. Use shell scripts to pre-check for work before spawning agents.
- Match model to task. Flash for data, DeepSeek for text, Sonnet for judgment. Never Opus.
- Timeouts are mandatory. Every cron needs a hard limit. 3-5 minutes for data, 10-15 for generation.
- Write output to disk. Session transcripts are ephemeral. Persistent files are searchable and recoverable.
- Monitor cost per cron. Weekly reviews catch regressions before they compound.
- Fail gracefully. Retry with backoff for critical tasks, queue for review for high-stakes tasks.
- Batch heavy work. If a cron hits timeouts, split it into smaller crons.
Crons are the backbone of autonomous operation. Get them right, and your agent runs itself. Get them wrong, and you'll wake up to $500 in runaway costs or a flooded inbox of failure notifications.