Automate S3 Like a Pro: CLI, Bash Scripts, and Lifecycle Rules

Last Updated: June 26, 2025
Before you start, make sure you have:

- AWS CLI installed (`aws --version`)
- An AWS IAM user or role with S3 permissions (`s3:PutObject`, `s3:GetObject`, `s3:ListBucket`, etc.)
- Access to a Linux terminal (local or EC2)
- A test S3 bucket already created

This guide assumes:

- You are familiar with basic AWS services, especially S3
- You are comfortable running CLI commands and writing shell scripts
- You are looking to eliminate manual file uploads, backups, or retention management
With a few commands and scripts, you can:

- Sync directories to or from S3
- Schedule routine backups
- Control storage costs with lifecycle rules

Let's walk through how to set it all up.
First, sync a local directory to your bucket:

```bash
aws s3 sync /var/log s3://my-backup-bucket/logs/ --delete
```
What this does:

- Uploads all contents of `/var/log` to your S3 bucket
- Only updates changed files (based on size and timestamp)
- Deletes remote files that no longer exist locally
Common flags:

| Flag | Description |
|---|---|
| `--delete` | Deletes files from S3 that are no longer in your source folder |
| `--exclude` / `--include` | Filter specific file types |
| `--dryrun` | Preview changes before running them |
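For example, to preview a sync limited to `.log` files (filters are evaluated in order, so exclude everything first, then re-include what you want):

```bash
# Preview a sync of only .log files without transferring anything
aws s3 sync /var/log s3://my-backup-bucket/logs/ \
    --exclude "*" --include "*.log" --dryrun
```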
For one-off transfers, `aws s3 cp` handles single files:

```bash
# Upload
aws s3 cp ./daily-report.txt s3://my-backup-bucket/reports/

# Download
aws s3 cp s3://my-backup-bucket/reports/daily-report.txt .
```
You can also use `--recursive` for directory-level transfers.
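A minimal sketch, reusing the example bucket from above:

```bash
# Copy an entire local directory tree to S3 in one command
aws s3 cp ./reports/ s3://my-backup-bucket/reports/ --recursive
```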
To automate this, wrap the sync in a small script:

```bash
#!/bin/bash
# s3-backup.sh: sync local logs to a dated prefix in S3
set -euo pipefail

SOURCE_DIR="/var/log"
BUCKET="my-backup-bucket"
DEST="logs/$(date +%F)"   # e.g., logs/2025-06-26

aws s3 sync "$SOURCE_DIR" "s3://$BUCKET/$DEST" --delete
```
Make it executable and put it where the service unit below expects it:

```bash
chmod +x s3-backup.sh
sudo mv s3-backup.sh /usr/local/bin/
```
Then create `/etc/systemd/system/my-s3-backup.service`:

```ini
[Unit]
Description=Backup logs to S3

[Service]
Type=oneshot
ExecStart=/usr/local/bin/s3-backup.sh
```
And `/etc/systemd/system/my-s3-backup.timer`:

```ini
[Unit]
Description=Run S3 backup daily at 2am

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
```
Enable and start:

```bash
sudo systemctl daemon-reload
sudo systemctl enable --now my-s3-backup.timer
```
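To confirm everything is wired up, check the timer's schedule and run the service once by hand:

```bash
# Show the next scheduled run
systemctl list-timers my-s3-backup.timer

# Trigger the backup immediately to test the service unit
sudo systemctl start my-s3-backup.service
```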
Next, rein in storage costs. Lifecycle rules let you:

- Transition older files to Glacier or Deep Archive
- Auto-delete files after a set number of days

You can do this from the AWS Console or using a JSON configuration with the CLI.
Example rule (via Console):

- Prefix: `logs/`
- Transition to Glacier after 30 days
- Expire after 180 days
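Here is the same rule as a CLI sketch, assuming the bucket from earlier (the rule ID is an arbitrary label):

```bash
# lifecycle.json: archive logs/ objects after 30 days, delete after 180
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "archive-then-expire-logs",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 180 }
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
    --bucket my-backup-bucket \
    --lifecycle-configuration file://lifecycle.json
```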
The CLI needs credentials from somewhere. Your options:

- `aws configure` to set up credentials locally (stored in `~/.aws/credentials`)
- Environment variables:

  ```bash
  export AWS_ACCESS_KEY_ID=...
  export AWS_SECRET_ACCESS_KEY=...
  ```

- IAM roles if running in EC2 or Lambda
Always apply the principle of least privilege.
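As a sketch, a least-privilege policy for the backup script above might look like this (bucket name is a placeholder; `s3:DeleteObject` is required because the script uses `--delete`):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-backup-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-backup-bucket"
    }
  ]
}
```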
A few alternatives worth knowing:

- rclone – for more flexible syncing and encryption (see the sketch below)
- s3cmd – an older CLI tool that some still prefer
- Lambda + EventBridge – for trigger-based automation (e.g., run a backup on upload)
But for most Linux users and CLI workflows, the native AWS CLI is fast, battle-tested, and widely supported.
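If you do try rclone, the equivalent of the sync above is one line (this assumes a remote named `s3remote` already configured via `rclone config`):

```bash
# Mirror local logs to the bucket through a preconfigured rclone remote
rclone sync /var/log s3remote:my-backup-bucket/logs
```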
You can now:

- Sync entire folders to and from S3
- Automate backups using Bash scripts and `systemd` timers
- Set up lifecycle rules to save costs
- Upload/download single files as needed
- Keep it secure using environment variables or IAM roles
The files from this guide:

- `s3-backup.sh`: the Bash script to sync local logs to S3
- `my-s3-backup.service`: the systemd unit to trigger the backup
- `my-s3-backup.timer`: the timer to run it daily at 2am