When Cron Logs Show Success but Your Scripts Do Nothing: Decoding the Missing Environment Variable Problem

· Server Scout

You check /var/log/cron and see your job executed without errors. The script runs fine when you test it manually. But somehow, when cron runs it, nothing happens.

This isn't a permissions issue or a timing problem. It's the environment variable gap that catches even experienced sysadmins off guard.

The Minimal Environment Problem

Cron jobs run with a severely stripped-down environment. While your interactive shell has dozens of variables set automatically, cron provides only the basics: HOME, LOGNAME, PATH (usually just /usr/bin:/bin), and SHELL.

Your script might depend on PYTHONPATH, JAVA_HOME, NODE_PATH, or custom variables that exist in your shell profile but vanish in cron's sterile environment.

The most common culprit is PATH. If your script calls mysql, python3, or node without full paths, and these executables live in /usr/local/bin or /opt/bin, cron won't find them. Your script fails silently because the command simply doesn't exist in cron's limited PATH.
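You can see this in action by simulating cron's stripped-down PATH from your own shell. In this sketch, `backup-tool` is a made-up name standing in for anything installed under /usr/local/bin or /opt/bin:

```shell
#!/bin/bash
# Simulate cron's minimal PATH and check which commands still resolve.
# "backup-tool" is a hypothetical example of a binary living outside
# /usr/bin and /bin; substitute whatever your script actually calls.
CRON_PATH=/usr/bin:/bin

for cmd in ls backup-tool; do
    if PATH=$CRON_PATH command -v "$cmd" >/dev/null 2>&1; then
        echo "$cmd: found at $(PATH=$CRON_PATH command -v "$cmd")"
    else
        echo "$cmd: NOT found under $CRON_PATH"
    fi
done
```

Anything that prints "NOT found" here is a command your cron job cannot call without a full path.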

Debugging the Environment Gap

Start by capturing what cron actually sees (remove this temporary entry once it has run):

* * * * * env > /tmp/cron-env.txt

Compare this with your interactive environment:

env > /tmp/interactive-env.txt
diff /tmp/interactive-env.txt /tmp/cron-env.txt

The differences reveal what your script is missing. Look particularly for PATH differences and application-specific variables.
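One practical wrinkle: `env` prints variables in no guaranteed order, so a raw diff can be noisy. Sorting both snapshots first makes the real gaps stand out:

```shell
#!/bin/bash
# Sort both snapshots before diffing so ordering noise doesn't drown
# out the variables that are genuinely missing or different.
sort /tmp/interactive-env.txt > /tmp/interactive-sorted.txt
sort /tmp/cron-env.txt > /tmp/cron-sorted.txt
diff /tmp/interactive-sorted.txt /tmp/cron-sorted.txt
```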

Next, modify your script to log its environment and any command failures:

#!/bin/bash
echo "PATH: $PATH" >> /var/log/myscript.log
echo "PWD: $PWD" >> /var/log/myscript.log
mysql --version >> /var/log/myscript.log 2>&1

Similar environment gaps between interactive and automated execution contexts come up when detecting systemd service failures that status checks miss.

Three Fixes That Actually Work

Source your profile explicitly. Many scripts work when you add this line at the top:

source ~/.bashrc

This imports your interactive environment, though it can be slow and overkill. Beware that many distributions ship a ~/.bashrc that returns early for non-interactive shells, in which case sourcing ~/.profile is more reliable.
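If only one job needs the extra environment, you can also source a profile inline in the crontab entry itself. A sketch, with the script path as a placeholder:

```shell
# Crontab sketch: source a profile inline before the job runs.
# /path/to/your/script.sh is a placeholder; adjust to your setup.
0 2 * * * . "$HOME/.profile"; /path/to/your/script.sh
```

Cron runs the command through /bin/sh -c, so $HOME expands normally at runtime.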

Set PATH in the crontab itself:

PATH=/usr/local/bin:/usr/bin:/bin
0 2 * * * /path/to/your/script.sh

This approach works well when you know exactly which directories your script needs.

Use absolute paths throughout your script. Instead of calling mysql, call /usr/bin/mysql. Instead of python3, use /usr/bin/python3. This method is bulletproof but requires you to know where everything lives.

For database backup scripts, the third approach prevents the kind of silent failure described in "when backup scripts exit zero but nothing got backed up."

Making Cron Failures Visible

The real problem isn't just the environment; it's that cron hides failures by default. Add proper error handling and logging:

#!/bin/bash
exec >> /var/log/myscript.log 2>&1   # append, so earlier runs aren't overwritten
set -euo pipefail

echo "Starting backup at $(date)"
# Your actual script here
echo "Backup completed successfully"

The set -euo pipefail line makes bash exit immediately on any error, undefined variable, or pipe failure. Combined with output redirection, you'll finally see what's breaking.

For production environments, consider monitoring tools that can track cron job execution and alert on both failures and unexpected silence. Server Scout's lightweight monitoring approach works well for catching these kinds of scheduled task failures without adding overhead to your systems.
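One common pattern such tools support is a "dead man's switch": the job pings a heartbeat URL only on success, and the monitor alerts when pings stop arriving. A crontab sketch, with example.com/ping standing in for whatever endpoint your monitoring service provides:

```shell
# Crontab sketch of a dead man's switch: ping a heartbeat URL only when
# the job succeeds. The monitor alerts on silence as well as on failure.
# https://example.com/ping is a placeholder endpoint.
0 2 * * * /path/to/backup.sh && curl -fsS https://example.com/ping >/dev/null
```

The `-f` flag makes curl treat HTTP errors as failures, so a broken endpoint doesn't masquerade as a successful ping.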

The PostgreSQL documentation provides excellent guidance on environment variables for database tools, which often cause similar issues in cron contexts.

Debugging cron environment issues takes patience, but once you understand the minimal environment principle, you'll spot these problems immediately. The key is making failures visible rather than letting them fail silently in cron's sparse execution context.

Ready to Try Server Scout?

Start monitoring your servers and infrastructure in under 60 seconds. Free for 3 months.

Start Free Trial