# ZFS Snapshot Manager

A distributed ZFS snapshot management system with S3-compatible storage support. This project provides client, server, and restore tools for managing ZFS snapshots across multiple machines.

## Features

- **S3 Storage Support**: Store snapshots in any S3-compatible storage using AWS SDK v2 (AWS S3, MinIO, Backblaze B2, Wasabi, DigitalOcean Spaces)
- **Local ZFS Storage**: Option to use local ZFS datasets for maximum performance
- **Multi-client Architecture**: Support for multiple clients with isolated storage and per-client quotas
- **Automatic Compression**: LZ4 compression for reduced storage costs and faster transfers
- **Snapshot Rotation**: Automatic cleanup of old snapshots based on quota
- **Server-Managed Rotation Policies**: Centralized control of client rotation policies - clients must use server-configured retention settings
- **API Key Authentication**: Secure client-server communication
- **Simple CLI**: Just run `zfs-client snap` to back up - full vs. incremental is handled automatically

## Project Structure

```
zfs/
├── cmd/
│   ├── zfs-server/    # Server executable
│   ├── zfs-client/    # Client executable
│   └── zfs-restore/   # Restore tool executable
├── internal/
│   ├── server/        # Server package (config, storage, HTTP handlers)
│   ├── client/        # Client package (snapshot creation, upload)
│   └── restore/       # Restore package (download, restore operations)
├── go.mod
├── go.sum
├── .env               # Configuration file
└── readme.md
```

## Installation

### Using Go Install

```bash
# Install server
go install git.ma-al.com/goc_marek/zfs/cmd/zfs-server@latest

# Install client
go install git.ma-al.com/goc_marek/zfs/cmd/zfs-client@latest

# Install restore tool
go install git.ma-al.com/goc_marek/zfs/cmd/zfs-restore@latest
```

### Build from Source

```bash
# Clone the repository
git clone https://git.ma-al.com/goc_marek/zfs.git
cd zfs

# Build all binaries
go build -o bin/zfs-server ./cmd/zfs-server
go build -o bin/zfs-client ./cmd/zfs-client
go build -o bin/zfs-restore ./cmd/zfs-restore
```

## Configuration

### Server Configuration

Create a `.env` file in the working directory:

```env
# S3 Configuration
S3_ENABLED=true
S3_ENDPOINT=s3.amazonaws.com
S3_ACCESS_KEY=YOUR_ACCESS_KEY
S3_SECRET_KEY=YOUR_SECRET_KEY
S3_BUCKET=zfs-snapshots
S3_USE_SSL=true

# Local ZFS fallback
ZFS_BASE_DATASET=backup

# Database Configuration (SQLite)
DATABASE_PATH=zfs-backup.db

# Server settings
PORT=8080
```

> **Note**: All client configuration and snapshot metadata are stored in a SQLite database (`zfs-backup.db` by default). The server automatically creates a default client (`client1` with API key `secret123`) if no clients exist.

### Client Configuration

```env
CLIENT_ID=client1
API_KEY=secret123
SERVER_URL=http://backup-server:8080
LOCAL_DATASET=tank/data
COMPRESS=true

# Optional: Direct S3 upload (bypasses server storage)
S3_ENDPOINT=https://s3.amazonaws.com
S3_REGION=us-east-1
S3_BUCKET=zfs-backups
S3_ACCESS_KEY=your_access_key
S3_SECRET_KEY=your_secret_key
```

> **Important**:
> - The `API_KEY` in the client `.env` file must be the **raw (unhashed)** key. The server stores the SHA-256 hash in the database.
> - **Storage type is determined by the server**, not the client. The server decides whether to use S3 or local ZFS storage based on its configuration.
> - The client automatically handles full vs incremental backups based on whether a bookmark exists.

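For example, the stored value for the default key can be reproduced with a SHA-256 digest (hex output shown; the exact encoding the server uses is not documented here, so treat this as illustrative):

```bash
# Hex SHA-256 digest of the raw API key -- the server keeps a hash like this,
# never the raw key itself
echo -n "secret123" | sha256sum
```
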
### Restore Tool Configuration

```env
CLIENT_ID=client1
API_KEY=secret123
SERVER_URL=http://backup-server:8080
```

## Usage

### Server

```bash
# Start the backup server
zfs-server

# The server listens on port 8080 by default
# Endpoints:
#   POST /upload           - Request upload authorization
#   POST /upload-stream/   - Stream snapshot data
#   GET  /status           - Check client status
#   POST /rotate           - Rotate old snapshots
#   GET  /download         - Download a snapshot
#   GET  /rotation-policy  - Get client rotation policy
#   GET  /health           - Health check
```

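A quick smoke test against a running server (the query-parameter authentication on `/status` is an assumption here, mirroring the `/rotation-policy` example shown later in this document):

```bash
# Liveness check
curl http://localhost:8080/health

# Per-client status -- client_id/api_key query parameters assumed
curl "http://localhost:8080/status?client_id=client1&api_key=secret123"
```
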
### Client Commands

The `zfs-client` tool provides simple commands for creating and sending ZFS snapshots:

#### `snap`

Creates a snapshot and sends it to the server. Automatically detects if this is the first backup (full) or subsequent backup (incremental).

```bash
zfs-client snap
```

On first run, it will print: `→ No previous backup found, doing FULL backup...`

On subsequent runs, it automatically does incremental backups from the last bookmark.

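For reference, the full-vs-incremental decision corresponds roughly to the following ZFS operations. This is only a sketch: the dataset, snapshot, and bookmark names are illustrative, and the real client compresses and streams the result to the server instead of writing local files.

```bash
# Sketch of the logic `zfs-client snap` automates (names are assumptions)
DATASET=tank/data
SNAP="${DATASET}@backup-$(date +%F_%H-%M-%S)"
BOOKMARK="${DATASET}#last-backup"

zfs snapshot "$SNAP"

if zfs list -t bookmark "$BOOKMARK" >/dev/null 2>&1; then
    # A bookmark from the previous run exists: send only the changes since then
    zfs send -i "$BOOKMARK" "$SNAP" > incremental.zfs
else
    # No bookmark yet: this is the first backup, send the full snapshot
    zfs send "$SNAP" > full.zfs
fi

# Record the new reference point for the next incremental run
zfs destroy "$BOOKMARK" 2>/dev/null || true
zfs bookmark "$SNAP" "$BOOKMARK"
```
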
#### `status`

Displays the current backup status, including storage usage, quota, and snapshot count from the server.

```bash
zfs-client status
```

#### `help`

Shows the help message with all available commands and options.

```bash
zfs-client help
```

### Restore Tool Commands

The `zfs-restore` tool provides commands for listing and restoring snapshots from the backup server:

#### `list`

Lists all available snapshots for the configured client from the server.

```bash
zfs-restore list
```

Output example:

```
#  Snapshot ID                    Timestamp          Size
1  client1/tank_data_2024-02-13   2024-02-13 14:30   1.2 GB
2  client1/tank_data_2024-02-12   2024-02-12 14:30   1.1 GB
```

#### `restore <number> <dataset>`

Restores a snapshot by its list number to a specified ZFS dataset.

```bash
zfs-restore restore 1 tank/restored
```

Options:

- `--force` or `-f` - Overwrite existing dataset if it exists

```bash
zfs-restore restore 1 tank/restored --force
```

#### `latest <dataset>`

Restores the most recent snapshot to a specified dataset.

```bash
zfs-restore latest tank/restored
```

#### `save <number> <filename>`

Downloads a snapshot and saves it to a local file without restoring.

```bash
zfs-restore save 1 backup.zfs.gz
```

#### `mount <dataset> <mountpoint>`

Mounts a restored ZFS dataset to a specified directory for file access.

```bash
zfs-restore mount tank/restored /mnt/restore
```

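The file produced by `save` is a compressed ZFS send stream, so it can also be restored by hand with standard ZFS tooling if needed (a sketch assuming gzip compression, as the `.zfs.gz` naming suggests; the target dataset name is illustrative):

```bash
# Manually receive a stream previously downloaded with `zfs-restore save`
gunzip -c backup.zfs.gz | sudo zfs receive tank/restored-manual
```
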
## S3 Provider Configuration

### AWS S3

```env
S3_ENDPOINT=s3.amazonaws.com
S3_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE
S3_SECRET_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
S3_BUCKET=my-zfs-backups
S3_USE_SSL=true
```

### MinIO (Self-Hosted)

```env
S3_ENDPOINT=minio.example.com:9000
S3_ACCESS_KEY=minioadmin
S3_SECRET_KEY=minioadmin
S3_BUCKET=zfs-snapshots
S3_USE_SSL=false
```

#### Setting Up MinIO Locally

**Option A: Using Docker (Recommended)**

```bash
# Create a directory for MinIO data
mkdir -p ~/minio-data

# Run MinIO container
docker run -d \
  --name minio \
  -p 9000:9000 \
  -p 9001:9001 \
  -v ~/minio-data:/data \
  -e MINIO_ROOT_USER=minioadmin \
  -e MINIO_ROOT_PASSWORD=minioadmin \
  minio/minio server /data --console-address ":9001"
```

**Option B: Using Binary**

```bash
# Download MinIO
wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio
sudo mv minio /usr/local/bin/

# Create data directory
mkdir -p ~/minio-data

# Start MinIO
MINIO_ROOT_USER=minioadmin MINIO_ROOT_PASSWORD=minioadmin \
  minio server ~/minio-data --console-address ":9001"
```

**Create the Bucket**

After starting MinIO, create the bucket using the MinIO Client (`mc`) or the web console:

```bash
# Install MinIO Client (mc)
wget https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x mc
sudo mv mc /usr/local/bin/

# Configure an alias for the local MinIO instance
mc alias set local http://localhost:9000 minioadmin minioadmin

# Create the bucket (the name must match S3_BUCKET)
mc mb local/zfs-snapshots

# Verify the bucket was created
mc ls local
```

Alternatively, access the MinIO Web Console at http://localhost:9001 and create the bucket through the UI (login: `minioadmin` / `minioadmin`).

### Backblaze B2

```env
S3_ENDPOINT=s3.us-west-000.backblazeb2.com
S3_ACCESS_KEY=your_key_id
S3_SECRET_KEY=your_application_key
S3_BUCKET=zfs-backups
S3_USE_SSL=true
```

### Wasabi

```env
S3_ENDPOINT=s3.wasabisys.com
S3_ACCESS_KEY=your_access_key
S3_SECRET_KEY=your_secret_key
S3_BUCKET=zfs-backups
S3_USE_SSL=true
```

## Database Storage

The server uses SQLite to store all configuration and metadata in a single database file (`zfs-backup.db` by default). This includes:

- **Admin users**: Authentication credentials for the admin panel
- **Client configurations**: Authentication, quotas, storage type, rotation policies
- **Snapshot metadata**: Timestamps, sizes, storage keys, incremental relationships

### Database Schema

The database contains four main tables:

**admins table:**

- `id` - Unique identifier
- `username` - Admin username (unique)
- `password_hash` - SHA-256 hashed password
- `role` - Admin role (default: "admin")
- `created_at`, `updated_at` - Timestamps

**admin_sessions table:**

- `id` - Unique identifier
- `admin_id` - Foreign key to admins table
- `token` - Session token
- `expires_at` - Session expiration time

**clients table:**

- `client_id` - Unique identifier
- `api_key` - SHA-256 hashed API key
- `max_size_bytes` - Storage quota
- `dataset` - Target dataset for local ZFS storage
- `enabled` - Client status
- `storage_type` - "s3" or "local"
- `keep_hourly`, `keep_daily`, `keep_weekly`, `keep_monthly` - Rotation policy

**snapshots table:**

- `client_id` - Owner of the snapshot
- `snapshot_id` - Unique identifier
- `timestamp` - When the snapshot was taken
- `size_bytes` - Snapshot size
- `storage_key` - Location in storage
- `storage_type` - Where it's stored
- `compressed`, `incremental`, `base_snapshot` - Snapshot properties

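Since everything lives in one SQLite file, the live schema and contents can be inspected directly with the `sqlite3` CLI (read-only inspection; adjust the path if you changed `DATABASE_PATH`):

```bash
# List tables and show the clients table schema
sqlite3 zfs-backup.db '.tables'
sqlite3 zfs-backup.db '.schema clients'

# Count snapshots per client
sqlite3 zfs-backup.db 'SELECT client_id, COUNT(*) FROM snapshots GROUP BY client_id;'
```
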
### Server-Managed Rotation Policy

When a rotation policy is configured for a client in the database, the client **must** use this policy and cannot override it. This enables centralized control of snapshot retention policies:

- **Server-Managed**: If a rotation policy is set, the client fetches the policy from the server and applies it
- **Client-Autonomous**: If no rotation policy is set, the client uses its default policy

The rotation policy fields are:

- `keep_hourly`: Number of hourly snapshots to keep (default: 24)
- `keep_daily`: Number of daily snapshots to keep (default: 7)
- `keep_weekly`: Number of weekly snapshots to keep (default: 4)
- `keep_monthly`: Number of monthly snapshots to keep (default: 12)

#### API Endpoint

The server exposes a `/rotation-policy` endpoint for clients to fetch their configured policy:

```bash
# GET /rotation-policy?client_id=client1&api_key=secret123
curl "http://backup-server:8080/rotation-policy?client_id=client1&api_key=secret123"
```

Response:

```json
{
  "success": true,
  "message": "Server-managed rotation policy",
  "rotation_policy": {
    "keep_hourly": 24,
    "keep_daily": 7,
    "keep_weekly": 4,
    "keep_monthly": 12
  },
  "server_managed": true
}
```

## Admin Panel

The server includes a web-based admin panel for managing clients, snapshots, and admin users. Access it at `http://localhost:8080/admin/`.

### Default Admin Credentials

When the server starts for the first time, it creates a default admin user:

- **Username**: `admin`
- **Password**: `admin123`

> **Important**: Change the default password immediately after first login!

### Admin Panel Features

- **Dashboard**: View statistics (client count, total snapshots, storage usage)
- **Client Management**:
  - Create, view, and delete clients
  - Configure storage type (S3 or local ZFS)
  - Set quotas and rotation policies
  - Enable/disable clients
- **Snapshot Management**:
  - View all snapshots across all clients
  - Filter by client
  - Delete individual snapshots
- **Admin User Management**:
  - Create additional admin users
  - Delete admin accounts

### Admin API Endpoints

All admin endpoints require authentication via session cookie.

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/admin/login` | POST | Login with username/password |
| `/admin/logout` | POST | Logout current session |
| `/admin/check` | GET | Check authentication status |
| `/admin/clients` | GET | List all clients with usage stats |
| `/admin/client` | GET | Get specific client details |
| `/admin/client/create` | POST | Create new client |
| `/admin/client/update` | PUT | Update client configuration |
| `/admin/client/delete` | POST | Delete client and all snapshots |
| `/admin/snapshots` | GET | List all snapshots |
| `/admin/snapshot/delete` | POST | Delete specific snapshot |
| `/admin/stats` | GET | Get server statistics |
| `/admin/admins` | GET | List all admin users |
| `/admin/admin/create` | POST | Create new admin user |
| `/admin/admin/delete` | POST | Delete admin user |

### Creating a Client via API

```bash
# Login first (saves session cookie)
curl -c cookies.txt -X POST http://localhost:8080/admin/login \
  -H "Content-Type: application/json" \
  -d '{"username":"admin","password":"admin123"}'

# Create a new client
curl -b cookies.txt -X POST http://localhost:8080/admin/client/create \
  -H "Content-Type: application/json" \
  -d '{
    "client_id": "myclient",
    "api_key": "secretkey123",
    "storage_type": "s3",
    "dataset": "backup/myclient",
    "max_size_bytes": 107374182400,
    "enabled": true,
    "rotation_policy": {
      "keep_hourly": 24,
      "keep_daily": 7,
      "keep_weekly": 4,
      "keep_monthly": 12
    }
  }'
```

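To confirm the client was created, the same session cookie can be reused against the listing endpoint from the table above:

```bash
# List all clients with usage stats using the saved session
curl -b cookies.txt http://localhost:8080/admin/clients
```
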
## Architecture

```
┌─────────────┐    ZFS send      ┌──────────────────┐
│  Client 1   │───────┬─────────▶│  Backup Server   │
│  (S3 mode)  │       │          │                  │
└─────────────┘       │          │  ┌────────────┐  │
                      │          │  │ S3 Backend │  │
┌─────────────┐       │  HTTP    │  └─────┬──────┘  │
│  Client 2   │───────┤  Stream  │        │         │
│  (S3 mode)  │       │          │        ▼         │
└─────────────┘       │          │  ┌────────────┐  │
                      │          │  │   MinIO    │  │
┌─────────────┐       │          │  │     or     │  │
│  Client 3   │───────┘          │  │   AWS S3   │  │
│ (Local ZFS) │─────────────────▶│  └────────────┘  │
└─────────────┘    ZFS recv      │                  │
                                 │  ┌────────────┐  │
                                 │  │ Local ZFS  │  │
                                 │  │  Backend   │  │
                                 │  └────────────┘  │
                                 └──────────────────┘
```

## Storage Format

Snapshots are stored in S3 with the following naming convention:

```
s3://bucket/client1/tank_data_2024-02-13_14:30:00.zfs.gz
            ^       ^         ^
            client  dataset   timestamp
```

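With the MinIO setup from earlier, you can check what is actually stored by listing a client's prefix directly (the `local` alias and `zfs-snapshots` bucket name are taken from the MinIO example above):

```bash
# List stored snapshot objects for client1
mc ls local/zfs-snapshots/client1/
```
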
## Security

- API keys are hashed using SHA-256
- S3 bucket policies can restrict access to backup server only
- Server-side encryption available in S3
- Client-side encryption possible via custom compression pipeline (see the sketch below)

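As a sketch of the last point, an encryption step can be inserted into a send pipeline before the stream leaves the machine. This is illustrative only, not a built-in feature of the client; the snapshot name is an assumption:

```bash
# Encrypt a snapshot stream with a passphrase before storing or shipping it
zfs send tank/data@backup-2024-02-13 | gzip | gpg --symmetric --cipher-algo AES256 -o backup.zfs.gz.gpg
```
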
## Monitoring

### Health Check

```bash
curl http://localhost:8080/health
```

### Server Logs

```bash
# systemd
journalctl -u zfs-server -f

# Docker
docker logs -f zfs-server
```

## Development

### Project Layout

- `cmd/` - Main applications (entry points)
- `internal/` - Private application code
  - `server/` - Server logic, HTTP handlers, storage backends
  - `client/` - Client logic for creating and uploading snapshots
  - `restore/` - Restore logic for downloading and restoring snapshots

### Building

```bash
# Build all
go build ./...

# Run tests
go test ./...

# Lint
go vet ./...
```

## License

MIT License