The literal answer to "how do I set a Linux cron job to reboot a server" is one line of config in root's crontab. Whether you should do it that way is a different question: cron is a fine scheduler for trivial jobs, less great for system-level actions like reboots, and there are usually better tools (systemd timers, at, monitoring-driven remediation) for the same outcome. This guide covers the one-line answer for when you really do want a Linux cron job reboot schedule, the alternatives that work better for most cases, and — importantly — how to confirm that a scheduled reboot actually completed instead of leaving the host stuck in a reboot loop or unreachable on the wrong side of an upgrade.
It's written for sysadmins working with systemd-based distros (Ubuntu 16.04+, Debian 8+, RHEL/CentOS/Rocky/Alma 7+) — which means cron and systemd timers coexist, and you have the choice of either.
The literal one-line answer
sudo crontab -e
Then add:
# Reboot every Sunday at 03:00
0 3 * * 0 /sbin/shutdown -r now
Save and exit. That's it. The five fields are minute hour day-of-month month day-of-week. 0 3 * * 0 means "at 03:00 on Sunday, every week". The 0 for day-of-week is Sunday on every Linux cron implementation; some also accept 7 as Sunday.
A few important details that matter even for this trivial case:
- Use the absolute path to shutdown. Cron's PATH is minimal — typically just /usr/bin:/bin. /sbin/shutdown (or /usr/sbin/shutdown on newer Ubuntu) is the safe form. Running just shutdown may or may not find it depending on distro.
- shutdown -r now triggers a graceful reboot through systemd, including service stop, filesystem sync, and final shutdown ordering. Don't use reboot directly in a cron job — shutdown is the correct entry point.
- crontab -e (without sudo) edits your own crontab, which doesn't have permission to reboot. Always sudo crontab -e (or use /etc/cron.d/) for system-level jobs.
If you want a different cadence:
# Every Monday 03:00
0 3 * * 1 /sbin/shutdown -r now
# First day of every month, 04:00
0 4 1 * * /sbin/shutdown -r now
# 05:00 on January 1st (annual)
0 5 1 1 * /sbin/shutdown -r now
# 02:30 every Saturday and Sunday
30 2 * * 6,0 /sbin/shutdown -r now
That's the entire literal answer. The rest of this article is about what tends to go wrong, what works better in practice, and how to verify the reboot actually happened.
When scheduled reboots are actually the right answer
Worth checking before you set this up — most "I need a weekly reboot" instincts are working around a different problem. Real reasons to schedule reboots:
- Compliance / audit requirements that explicitly mandate reboots on a fixed cadence. Some regulated environments do.
- Memory leaks in legacy software that you can't fix and that runs out of memory after roughly N days. A scheduled reboot is the practical workaround.
- Kernel updates that need a reboot to take effect — though kexec and live patching (kpatch, kexec-tools, Canonical Livepatch) are usually better solutions.
- Rotating credentials, ephemeral state, or boot-time provisioning that resets only on reboot. Some immutable-infrastructure setups depend on this.
Things that look like reasons to reboot but aren't:
- "The application gets slow over time." Find the leak; restart the application, not the kernel. systemctl restart <service> does this in seconds without a reboot.
- "Logs grow until disk is full." Configure log rotation. See the Docker logs guide if containers are involved: How to tail Docker container logs.
- "The server has been up too long; that feels risky." Long uptime is not, by itself, a problem. Patched kernels, monitored memory, healthy services on a 200-day uptime are fine. Reboot when there's a reason.
- "We need to apply security patches." Live patching exists for exactly this. Or schedule reboots only after the package manager actually upgrades a kernel — not on a fixed schedule.
If you're scheduling reboots because you have a real problem with the box and aren't sure why, the reboot is treating the symptom. Worth at least one debugging pass first.
Where to put the cron entry
Several files all work. Pick one based on how the schedule is managed:
root's crontab
sudo crontab -e
sudo crontab -l # list
Editable per-host. Lives in /var/spool/cron/crontabs/root (Debian/Ubuntu) or /var/spool/cron/root (RHEL family). Doesn't have a username column because the file is the user.
/etc/cron.d/
A drop-in directory. One file per job. Has a username column. Better fit for config-managed setups (Ansible, Puppet, Chef) because you can drop a file in and reload without touching a user's per-user spool:
sudo tee /etc/cron.d/weekly-reboot <<'EOF'
# Reboot every Sunday at 03:00
0 3 * * 0 root /sbin/shutdown -r now
EOF
The username (root) is the difference between /etc/cron.d/ and crontab -e. Files here are picked up automatically; no service reload needed in most distros.
/etc/cron.weekly/ (the cleanest for "weekly" cadence)
Drop an executable script in /etc/cron.weekly/:
sudo tee /etc/cron.weekly/scheduled-reboot <<'EOF'
#!/bin/sh
/sbin/shutdown -r now "scheduled weekly reboot"
EOF
sudo chmod +x /etc/cron.weekly/scheduled-reboot
The exact time depends on /etc/crontab and (on Ubuntu/Debian) anacron — by default these run sometime in the early morning. If you need a precise time, use cron.d instead.
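One way to confirm the script will actually be picked up: run-parts is what cron uses to execute these directories, and on Debian/Ubuntu it has a --test flag that lists what would run without running anything. A quick sketch:

```shell
# List what cron would execute from the weekly directory, without running it.
# (--test is the Debian/Ubuntu run-parts; the RHEL-family run-parts lacks it.)
run-parts --test /etc/cron.weekly
```

A classic gotcha this catches: run-parts skips filenames containing a dot, so scheduled-reboot runs but scheduled-reboot.sh is silently ignored.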
The cron environment: where reboots silently fail
Cron jobs run with a minimal environment, which is the cause of most "the cron job runs interactively but fails on schedule" mysteries:
- PATH is typically /usr/bin:/bin (not your shell's PATH). Use absolute paths.
- No shell rc files are sourced (no .bashrc, no .profile). Aliases and functions don't exist.
- HOME is set, but most other environment variables are not.
- The current directory is the user's home, not where you wrote the script.
For a one-line reboot using /sbin/shutdown, none of this matters. For a script that does pre-reboot work (drain a service, kick off a backup, log a message) — which is the more typical case — this is where things go wrong. Test by simulating the cron environment:
sudo env -i HOME="$HOME" PATH=/usr/bin:/bin /path/to/your/script.sh
env -i clears the environment; HOME and PATH are then the only variables set, which approximates what cron provides (cron also sets SHELL and LOGNAME, but those rarely matter). If your script works under that, it'll work in cron.
A more useful pre-reboot script
A bare /sbin/shutdown -r now works, but a small wrapper script gives you a paper trail and lets the reboot do something useful on the way down:
#!/bin/sh
# /usr/local/sbin/scheduled-reboot.sh
set -e
LOG=/var/log/scheduled-reboot.log
NOW=$(date -Is)
{
echo "===== Scheduled reboot starting at $NOW ====="
uptime
echo "Memory:"
free -h
echo "Disk:"
df -h /
echo "Top 5 by memory before reboot:"
ps -eo user,pid,%mem,rss,cmd --sort=-%mem | head -6
echo "Reboot reason: scheduled by cron"
echo "============================================"
} >> "$LOG" 2>&1
# Send a wall message so any logged-in users see it
wall "Scheduled reboot in 1 minute. Save your work." || true
# Schedule the reboot 1 minute out so this script and wall both finish cleanly
/sbin/shutdown -r +1 "scheduled reboot"
sudo install -m 755 scheduled-reboot.sh /usr/local/sbin/
Then in cron:
0 3 * * 0 root /usr/local/sbin/scheduled-reboot.sh
This gives you a /var/log/scheduled-reboot.log you can check after each scheduled reboot to confirm it ran and that the host's pre-reboot state was sane. The 1-minute delay (shutdown -r +1) is so the script and wall message get to finish before the system tears down.
The systemd-timer alternative
On every modern Linux distro, systemd timers are usually a better fit than cron for system-level actions. They give you logging in journalctl, dependency handling, and structured OnCalendar syntax. Two files:
/etc/systemd/system/scheduled-reboot.service:
[Unit]
Description=Scheduled weekly reboot
[Service]
Type=oneshot
ExecStart=/usr/local/sbin/scheduled-reboot.sh
/etc/systemd/system/scheduled-reboot.timer:
[Unit]
Description=Run scheduled-reboot weekly
[Timer]
OnCalendar=Sun 03:00
Persistent=true
RandomizedDelaySec=5min
[Install]
WantedBy=timers.target
Enable:
sudo systemctl daemon-reload
sudo systemctl enable --now scheduled-reboot.timer
sudo systemctl list-timers scheduled-reboot.timer
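A related sanity check: systemd-analyze can validate an OnCalendar expression and show its next elapse time before you enable the timer (the calendar verb is available on reasonably recent systemd versions):

```shell
# Print the normalized form and next elapse time for a calendar expression
systemd-analyze calendar "Sun 03:00"
```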
Why this is better than cron for reboots:
- Persistent=true — if the host was off when the timer should have fired, the job runs as soon as the host is back. Cron just skips missed runs.
- RandomizedDelaySec — spread fleet reboots across a small window so 200 hosts don't all reboot at exactly 03:00:00. Reduces thundering-herd issues with shared infrastructure.
- journalctl -u scheduled-reboot.service — structured logs of every run. No more "did it fire?" guessing.
- Dependency ordering — the service unit can declare After= and Wants= relationships with other systemd units (e.g. wait for backups to finish).
For a fleet larger than a handful of hosts, systemd timers are the durable answer. Cron is fine when you already use cron everywhere and don't want to introduce a second scheduler.
Conditional reboots (only if needed)
A common refinement: only reboot if a kernel upgrade actually happened, or if uptime exceeds N days. Both are easy with a small wrapper:
Reboot only if a kernel update is pending:
#!/bin/sh
# Debian/Ubuntu
if [ -f /var/run/reboot-required ]; then
/sbin/shutdown -r +1 "kernel update applied; rebooting"
fi
#!/bin/sh
# RHEL/CentOS/Rocky/Alma
if ! needs-restarting -r >/dev/null 2>&1; then
/sbin/shutdown -r +1 "kernel update applied; rebooting"
fi
(needs-restarting -r from dnf-utils exits 0 if no reboot is needed, 1 otherwise — counter-intuitively.)
Reboot only if uptime > 30 days:
#!/bin/sh
UPTIME_DAYS=$(awk '{print int($1/86400)}' /proc/uptime)
if [ "$UPTIME_DAYS" -gt 30 ]; then
/sbin/shutdown -r +1 "uptime ${UPTIME_DAYS}d > 30d threshold"
fi
Schedule either of these on the same weekly cron line — most weeks they'll do nothing; on the weeks they fire, you get a useful reboot.
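The two conditions combine naturally into one wrapper. A sketch for Debian/Ubuntu — the decision is pulled into a function so it can be exercised without actually rebooting; swap in the needs-restarting check for the RHEL family:

```shell
#!/bin/sh
# should_reboot FLAG_FILE UPTIME_DAYS: exit 0 when a reboot is warranted --
# either the distro's reboot-required flag exists, or uptime crossed 30 days.
should_reboot() {
    [ -f "$1" ] && return 0
    [ "$2" -gt 30 ]
}

UPTIME_DAYS=$(awk '{ print int($1 / 86400) }' /proc/uptime)
if should_reboot /var/run/reboot-required "$UPTIME_DAYS"; then
    /sbin/shutdown -r +1 "reboot-required flag present or uptime ${UPTIME_DAYS}d > 30d"
fi
```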
One-shot scheduled reboots: at
For a single scheduled reboot — a planned maintenance window, not a recurring schedule — at is simpler than cron:
sudo apt install at # if not already installed
sudo systemctl enable --now atd
# Schedule a reboot for tonight at 23:30
sudo sh -c "echo '/sbin/shutdown -r now' | at 23:30"
# Or for a specific date/time
sudo sh -c "echo '/sbin/shutdown -r now' | at 03:00 2026-05-12"
# Check pending at jobs
sudo atq
# Cancel a pending at job (use the ID from atq)
sudo atrm <id>
at runs the command once and removes itself. No accidental recurrence. For a one-time maintenance reboot, this is much harder to get wrong than editing cron and remembering to remove it afterward.
Verifying the reboot happened
A scheduled reboot that didn't actually happen — or that happened but the host didn't come back — is the real risk. The verification:
# Last reboot time
who -b
# system boot 2026-05-09 03:00
# Recent boot history
last -x reboot | head -5
# Same data via systemd
journalctl --list-boots | head -5
# Was the reboot the scheduled one?
journalctl --since "1 hour ago" -u systemd-logind | grep -i shutdown
If who -b shows the expected reboot time matching the scheduled cron / timer fire time, the reboot worked.
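That check can be automated. A sketch — uptime -s (from procps) and date -d (GNU date) are assumed, and the window comparison is factored into a function so the logic is testable on its own:

```shell
#!/bin/sh
# boot_within_window BOOT_EPOCH EXPECTED_EPOCH TOLERANCE_SECS:
# exit 0 when the boot happened within TOLERANCE_SECS of the expected time.
boot_within_window() {
    diff=$(( $1 - $2 ))
    [ "$diff" -lt 0 ] && diff=$(( 0 - diff ))
    [ "$diff" -le "$3" ]
}

# uptime -s prints the boot timestamp; 'last sunday 03:00' is GNU date syntax
if BOOT_EPOCH=$(date -d "$(uptime -s 2>/dev/null)" +%s 2>/dev/null); then
    EXPECTED_EPOCH=$(date -d "last sunday 03:00" +%s)
    if boot_within_window "$BOOT_EPOCH" "$EXPECTED_EPOCH" 600; then
        echo "boot matches the scheduled reboot (within 10 minutes)"
    else
        echo "boot time does not match the schedule -- investigate"
    fi
fi
```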
The more common failure mode is "host rebooted, but didn't come back" — a service didn't start, networking didn't come up, fsck got stuck on a manual prompt, the bootloader is wedged. From outside the host, this looks like the same downtime you'd see for any other outage, except you scheduled it.
The monitoring side: catch failed reboots
A reboot is a deliberate ~1–3 minutes of downtime. A reboot that doesn't come back is an outage. The two look identical from the outside until you check. Continuous monitoring is the only thing that distinguishes them in real time:
- HTTP / TCP / ping monitoring with a tight reboot-aware threshold — alert on > 5 minutes of downtime, not on the first failed check (so the scheduled reboot itself doesn't page you).
- Server monitoring — uptime metric, plus a "reboot detected" signal when uptime resets.
- Service monitoring post-reboot — confirm critical services come back. systemd's Restart=on-failure is part of the answer, but external verification is the safety net.
Xitoring's server monitoring detects reboots automatically (uptime reset is a built-in signal), and the uptime monitoring probes catch the case where the host doesn't come back. Pair with Xitogent on the host so you see CPU / memory / process state right up to the moment of reboot, then again as it returns. That correlation is what tells you whether a scheduled reboot was clean or whether something broke on the way back up.
For monitoring the cron job itself (heartbeat-style: alert if the reboot didn't fire when it was supposed to), see the existing KB on heartbeat uptime monitoring — wire the script to ping a heartbeat URL right before calling shutdown, and the monitor pages you if the heartbeat goes missing.
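A hedged sketch of that wiring — the heartbeat URL is a placeholder for whatever endpoint your monitoring service issues, and the reboot command is parameterised so the script can be dry-run:

```shell
#!/bin/sh
# send_heartbeat URL: best-effort ping. The '|| true' means a monitoring
# outage can never block the reboot itself; --max-time bounds a hung endpoint.
send_heartbeat() {
    curl -fsS --max-time 10 "$1" >/dev/null 2>&1 || true
}

# Placeholder URL -- substitute the endpoint your heartbeat monitor gives you.
HEARTBEAT_URL="${HEARTBEAT_URL:-https://example.invalid/heartbeat/weekly-reboot}"
# Defaults to a dry-run echo; set REBOOT_CMD='/sbin/shutdown -r +1' to go live.
REBOOT_CMD="${REBOOT_CMD:-echo DRY-RUN: shutdown -r +1}"

send_heartbeat "$HEARTBEAT_URL"
$REBOOT_CMD "scheduled weekly reboot"
```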
Operational tips
- Always use absolute paths in cron and at. /sbin/shutdown, not shutdown. The few seconds you save typing aren't worth the day you lose to "why didn't it run?"
- Test pre-reboot scripts under the cron environment with env -i HOME="$HOME" PATH=/usr/bin:/bin. Most "works on my shell, fails in cron" bugs surface immediately.
- Schedule reboots in the host's timezone, not UTC, if your team works locally. Cron uses the system timezone. timedatectl confirms what's set; align the schedule to whichever timezone makes the on-call shift sensible.
- Don't let everything reboot at exactly 03:00. If you have N hosts and they all run 0 3 * * 0, they all knock the same upstream service over at the same time. systemd's RandomizedDelaySec or a per-host random offset (e.g. sleep $((RANDOM % 600)) in a bash wrapper before shutdown) avoids this.
- Log the reason. shutdown -r +1 "reason here" writes the reason to the wall message, the systemd journal, and /var/log/wtmp. It's the difference between "we don't know why this rebooted" and "scheduled weekly reboot, here's the log".
- Plan for the "didn't come back" case. Out-of-band access (BMC, IPMI, KVM, cloud console) is what saves you when the scheduled reboot decided not to come back at 03:14 on a Sunday.
- Coordinate with deployments. A scheduled reboot during a deploy is its own kind of bad day. Either window the deploys away from reboot times, or have the cron script check for a "deploy in progress" lock and skip.
- Don't use crontab -e for system-level cron jobs in config-managed infrastructure. It edits a file Ansible/Puppet/Chef can't see directly. Use /etc/cron.d/ files or systemd timers, both of which are file-managed.
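For the per-host offset, a deterministic variant is often nicer than $RANDOM: derive the delay from the hostname, so each host always lands on the same staggered second. A sketch using POSIX cksum:

```shell
#!/bin/sh
# stagger_seconds NAME: a stable 0-599 second offset derived from NAME.
# cksum's first field is a CRC of the input, so the same hostname always
# yields the same offset -- unlike $RANDOM, which changes on every run.
stagger_seconds() {
    printf '%s' "$1" | cksum | awk '{ print $1 % 600 }'
}

# In the reboot wrapper:
#   sleep "$(stagger_seconds "$(hostname)")"
#   /sbin/shutdown -r now "scheduled weekly reboot (staggered)"
stagger_seconds "$(hostname)"
```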
Troubleshooting
- Cron entry exists but the reboot doesn't happen. Check the user. crontab -e (no sudo) edits your user's crontab; that user needs sudo NOPASSWD or a setuid wrapper to run shutdown. Easier: use sudo crontab -e or /etc/cron.d/.
- /sbin/shutdown: command not found. Either the path is wrong on this distro (/usr/sbin/shutdown on newer Ubuntu; check with which shutdown) or cron's minimal PATH doesn't include the right directory. Use the absolute path from which shutdown.
- Reboot fired but happened "30 minutes late". Cron timing is exact; this is usually anacron rescheduling missed jobs (Ubuntu/Debian cron.daily/weekly use anacron and run at delayed times by default). Use cron.d for precise-time scheduling.
- shutdown -r now works manually but not from cron. Almost always a PATH or sudo issue. Try the wrapper-script + log approach so you can see what cron actually executed.
- Host comes back up but a service didn't restart. Check systemctl status <service> post-reboot. The fix is usually systemctl enable <service> so it autostarts, plus Restart=on-failure in the unit file. journalctl -b 0 -u <service> shows the failure on the current boot.
- shutdown -r +1 fires but the host stays up. Another shutdown -c was issued (perhaps by another user or another cron job). last -x | head shows shutdowns and reboots; the cancel doesn't appear there but the missing reboot does.
- who -b shows a different reboot time than scheduled. Either the reboot was triggered by something else (kernel oops, hardware watchdog, manual reboot from someone else), or the system clock was off. journalctl --list-boots shows boot times; cross-check against dmesg | head of the new boot for the cause.
Summary
To set a Linux cron job to reboot a server:
- One-line answer: 0 3 * * 0 /sbin/shutdown -r now in sudo crontab -e. Reboots Sunday at 03:00 every week.
- Absolute paths only. Cron's PATH is minimal; /sbin/shutdown (or /usr/sbin/shutdown) is the right form.
- Use /etc/cron.d/ for config-managed infra, not crontab -e. Drop a file with a username column; pickup is automatic.
- Prefer systemd timers for system-level actions on modern distros. OnCalendar=Sun 03:00, Persistent=true, RandomizedDelaySec=5min solves problems cron can't (missed runs, fleet stampede).
- Use at for one-shot scheduled reboots. A single maintenance reboot doesn't belong in cron.
- Wrap the reboot in a small script that logs pre-reboot state and uses shutdown -r +1 "reason" so the journal and wtmp record why it happened.
- Conditional reboots are usually better than fixed-schedule reboots. Reboot if /var/run/reboot-required exists, or if uptime > N days — otherwise no-op.
- Verify the reboot happened. who -b, last -x reboot, journalctl --list-boots. Wire monitoring to catch the host that didn't come back.
- Question whether you actually need scheduled reboots. Most "we need weekly reboots" instincts are masking a different problem; fix that instead, and you can drop the schedule.
A scheduled reboot is one of those operational primitives that's trivial to write correctly and surprisingly easy to get wrong in subtle ways (timezone mismatches, PATH issues, hosts that don't come back, fleet-wide stampedes). The 30 minutes spent doing it properly — wrapper script, conditional logic, randomised delay, monitoring on the way back up — is what separates "the reboot is invisible" from "the reboot was the incident".