From ddc70298737469fc46293b0c75b6c32e69b41bd5 Mon Sep 17 00:00:00 2001
From: Fredrik Eriksson
Date: Thu, 27 Dec 2018 15:09:43 +0100
Subject: [PATCH] * bump version to 0.3
 * remove ugly sudo implementation
 * added support to set both source and target zfs command
 * added support for remote sources
 * renamed some configuration options (BREAKING CHANGES)

---
 README.md            |  41 +++++---
 bin/zsnapper         | 237 +++++++++++++++++++++++++------------------
 zsnaplib/__init__.py |  46 ++++-----
 zsnapper.ini-sample  |  66 ++++++++----
 4 files changed, 229 insertions(+), 161 deletions(-)

diff --git a/README.md b/README.md
index b6573fb..9e20182 100644
--- a/README.md
+++ b/README.md
@@ -30,7 +30,12 @@ To configure snapshotting of a file system you need to create a section for it a
 Configuration values can be empty, a string, a number or an interval. Interval is simply a number followed by the letter 'd', 'h' or 'm' - as in 'day', 'hour' and 'minute'.

 ## Running as non-privileged user
-Managing ZFS snapshots require root privileges, but if zsnapper is started as a non-privileged user it will attempt to use sudo when executing zfs commands. If sudo is installed you can add these lines to your sudo configuration to allow the backup user to run zsnapper on all file systems in the zpool "tank":
+Managing ZFS snapshots requires root privileges, but you can configure zsnapper to use sudo to execute the zfs binary with root privileges:
+```
+[tank]
+source_zfs_cmd = /usr/bin/sudo /sbin/zfs
+```
+Since zsnapper should be able to run non-interactively, sudo should not require a password to run the zfs commands:
 ```
 backup ALL=(ALL) NOPASSWD: /sbin/zfs snapshot tank*@*
 backup ALL=(ALL) NOPASSWD: /sbin/zfs list -H
@@ -63,9 +68,9 @@ Snapshots can be synced either locally or by invoking zfs on a remote system (ss
 A minimal configuration for local snapshot syncing can look like so:
 ```
 [tank]
-remote_enable=all
-remote_zfs_cmd=/sbin/zfs
-remote_zfs_target=backup/tank
+send_enable=all
+target_zfs_cmd=/sbin/zfs
+target_fs=backup/tank
 ```

 For an example of remote syncing over ssh, see the zsnapper.ini-sample file.
@@ -87,47 +92,53 @@ keep_15min=4
 Interval of file system snapshots. If unset it will not create any snapshots for the file system.

-### remote_enable
+### send_enable
 *default*: unset
 *valid values*: unset, "all", "latest"

 If unset the file system will not be sent anywhere. If set to "latest" only the latest snapshot will be sent for incrimental zfs sends (-i flag to zfs send), if set to "all" (or really, any value other than latest) all snapshots newer then the snapshot on the remote side will be sent (-I flag to zfs send).
-Note that remote_zfs_cmd and remote_zfs_target must be set as well.
+Note that target_zfs_cmd and target_fs must be set as well.

-### remote_send_flags
+### send_flags
 *default*: unset
 *valid values*: unset, space separated flags to zfs send

 This can be used if you want to enable any (or all) of the optional flags to zfs send,

-### remote_recv_flags
+### recv_flags
 *default*: unset
 *valid values*: unset, space separated flags to zfs receive

 This can be used if you want to enable any (or all) of the optional flags to zfs receive,

-### remote_zfs_cmd
+### target_zfs_cmd
 *default*: unset
 *valid values*: a command to invoke zfs; either local or remote

-This option is required when remote_enable is set. The string configured here will actually be a template that you can fill with any other option defined in the section. See sample configuration file for details.
+This option is required when send_enable is set. The string configured here will actually be a template that you can fill with any other option defined in the section. See the sample configuration file for details.

-### remote_test_cmd
+### target_test_cmd
 *default*: unset
 *valid values*: a command that will exit with returncode 0 if it's possible to send snapshots to remote

 The test command is run before each snapshot is transferred to the sync location. If the command exits with a non-zero status zsnapper will consider the sync target unavailable and will not attempt to sync the snapshot and an informational message will be written to syslog. This can be used for example to test if the network is available, or if an external backup drive is plugged in or not. I'm sure there are more creative uses as well.

-### remote_host
+### source_test_cmd
+*default*: unset
+*valid values*: a command that will exit with returncode 0 if the snapshots should be taken
+
+Like target_test_cmd, but checks that the source filesystems are available.
+
+### target_host
 *default*: unset
 *valid values*: any

-This setting is completely optional - even when doing remote sync. If present zsnapper will cache the output of 'zfs list -H -t snapshot' on the remote side so it only run once on each remote host. It is also useful to be able to use $(remote_host)s in remote_zfs_cmd.
+This setting is completely optional - even when doing remote sync. If present zsnapper will cache the output of 'zfs list -H -t snapshot' on the remote side so it only runs once on each remote host. It is also useful to be able to use ${target_host} in target_zfs_cmd.

-### remote_zfs_target
+### target_fs
 *default*: unset
-*valid values*: Location to this file system on the remote side
+*valid values*: Location of this file system on the receiving side

 The file system will be created on the first sync; it must not be created manually.
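The `*_zfs_cmd` and `*_test_cmd` options described above are expanded with Python's `string.Template`, as the `Template(...).safe_substitute(conf)` calls in bin/zsnapper below show. A minimal sketch of that expansion, using made-up option values:

```python
# Sketch of how a *_zfs_cmd template is expanded from the other options in the
# same section. The values here are illustrative, not a recommended setup.
from string import Template

conf = {
    'target_user': 'backup',
    'target_host': 'my.backup.server.tld',
    'target_zfs_cmd': '/usr/bin/ssh ${target_user}@${target_host} /usr/bin/sudo /sbin/zfs',
}

# safe_substitute() leaves unknown placeholders untouched instead of raising,
# so a typo in the template degrades to a literal string rather than a crash.
cmd = Template(conf['target_zfs_cmd']).safe_substitute(conf).split()
print(cmd)
# ['/usr/bin/ssh', 'backup@my.backup.server.tld', '/usr/bin/sudo', '/sbin/zfs']
```

The expanded string is then split on whitespace, which is why the sample configuration warns that command arguments must not contain whitespace.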
diff --git a/bin/zsnapper b/bin/zsnapper
index 829cd10..a22e895 100644
--- a/bin/zsnapper
+++ b/bin/zsnapper
@@ -39,28 +39,34 @@ DEFAULT_CONFIG = {
     'keep_5min': 0,
     'keep_1min': 0,
     'keep_custom': 0,
-    'remote_enable': False,
-    'remote_send_flags': '',
-    'remote_recv_flags': '',
-    'remote_zfs_cmd': None,
-    'remote_test_cmd': None,
-    'remote_zfs_target': None,
+    'source_zfs_cmd': '/sbin/zfs',
+    'source_test_cmd': None,
+    'target_fs': None,
+    'target_zfs_cmd': '/sbin/zfs',
+    'target_test_cmd': None,
+    'send_flags': '',
+    'recv_flags': '',
+    'send_enable': False,
 }

 timedelta_regex = re.compile('([0-9]+)([dhm])')

-def remote_is_available(conf):
+def fs_is_available(conf):
     log = logging.getLogger(LOGGER)
-    cmdstr = Template(conf['remote_test_cmd']).safe_substitute(conf)
-    cmd = cmdstr.split()
-    proc = subprocess.Popen(
-        cmd,
-        stdout=subprocess.PIPE,
-        stderr=subprocess.PIPE)
-    (out, err) = proc.communicate()
-
-    log.info('Healthcheck "{}" returned {}'.format(cmdstr, proc.returncode))
-    return proc.returncode == 0
+    for test in ('source_test_cmd', 'target_test_cmd'):
+        if not conf[test]:
+            continue
+        cmdstr = Template(conf[test]).safe_substitute(conf)
+        cmd = cmdstr.split()
+        proc = subprocess.Popen(
+            cmd,
+            stdout=subprocess.PIPE,
+            stderr=subprocess.PIPE)
+        (out, err) = proc.communicate()
+        log.info('Healthcheck "{}" returned {}'.format(cmdstr, proc.returncode))
+        if proc.returncode != 0:
+            return False
+    return True


 def str_to_timedelta(deltastr):
@@ -75,63 +81,71 @@ def str_to_timedelta(deltastr):
         delta += datetime.timedelta(minutes=int(match.group(1)))
     return delta

-def get_config_for_fs(fs, config, remote=''):
+def get_config_for_fs(fs, config):
+    if '@' in fs:
+        fs, remote = fs.split('@', 1)
+    else:
+        remote = None
     fs_config = DEFAULT_CONFIG.copy()
     fs_build = ''
     for fs_part in fs.split('/'):
         fs_build += fs_part
-        section = "{}@{}".format(fs_build, remote)
+        if remote:
+            section = "{}@{}".format(fs_build, remote)
+        else:
+            section = fs_build
         if section in config:
             fs_config.update(config[section])
         if fs_build == fs:
             break
         fs_build += '/'
+    fs_config['source_fs'] = fs
     return fs_config


-def do_snapshots(fslist, snapshots, config, sudo, remote=None, zfs_cmd=None):
+def do_snapshots(fslist, snapshots, config):
     failed_snapshots = set()
     now = datetime.datetime.now()
     log = logging.getLogger(LOGGER)
-    if not remote:
-        remote = ''
     for fs in fslist:
-        conf = get_config_for_fs(fs, config, remote=remote)
+        conf = get_config_for_fs(fs, config)
+        source_fs = conf['source_fs']
         if not conf['snapshot_interval']:
             continue
+        zfs_cmd = Template(conf['source_zfs_cmd']).safe_substitute(conf)
+        zfs_cmd = zfs_cmd.split()
         interval = str_to_timedelta(conf['snapshot_interval'])
-        if fs in snapshots and snapshots[fs] and snapshots[fs][0]:
-            last_snap = snapshots[fs][0]
+        if source_fs in snapshots and snapshots[source_fs] and snapshots[source_fs][0]:
+            last_snap = snapshots[source_fs][0]
         else:
            last_snap = datetime.datetime.min
         if interval > datetime.timedelta() and last_snap+interval < now:
             try:
-                if zfs_cmd:
-                    zsnaplib.create_snapshot(fs, sudo, zfs_cmd=zfs_cmd)
-                    log.info('{} snapshot created on {}'.format(fs, remote))
-                else:
-                    zsnaplib.create_snapshot(fs, sudo)
-                    log.info('{} snapshot created'.format(fs))
+                zsnaplib.create_snapshot(source_fs, zfs_cmd)
+                log.info('{} snapshot created using {}'.format(fs, zfs_cmd))
             except zsnaplib.ZFSSnapshotError as e:
                 log.warning(e)
                 failed_snapshots.add(fs)
     return failed_snapshots


-def get_remote_hosts(config):
+def get_remote_sources(config):
     ret = {}
     for section in config.sections():
-        if '@' in section and 'remote_zfs_cmd' in config[section]:
+        if '@' in section and 'source_zfs_cmd' in config[section]:
             fs, remote = section.split('@', 1)
-            remote_zfs_cmd = Template(config[section]['remote_zfs_cmd']).safe_substitute(config[section])
-            remote_zfs_cmd = remote_zfs_cmd.split()
-            ret[remote] = remote_zfs_cmd
+            conf = get_config_for_fs(section, config)
+            if not fs_is_available(conf):
+                continue
+            source_zfs_cmd = Template(config[section]['source_zfs_cmd']).safe_substitute(config[section])
+            source_zfs_cmd = source_zfs_cmd.split()
+            ret[remote] = source_zfs_cmd
     return ret


-def send_snapshots(fslist, snapshots, config, sudo):
+def send_snapshots(fslist, snapshots, config):
     failed_snapshots = set()
     remote_hosts = {}
     remote_targets = {}
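The get_config_for_fs change above is the heart of the new `fs@remote` naming: options are resolved by walking from the pool down to the dataset, so children inherit and override their parents, and for remote sources the same walk is done against `@remote` sections. A trimmed-down, standalone restatement of that lookup (DEFAULTS stands in for the real DEFAULT_CONFIG, and `config` may be a configparser object or a plain dict of dicts):

```python
# Standalone sketch (not the shipped code) of the per-filesystem config lookup.
DEFAULTS = {'snapshot_interval': None, 'send_enable': False}  # trimmed-down defaults

def config_for(fs, config):
    if '@' in fs:                      # e.g. 'tank/var/log@remote_system1'
        fs, remote = fs.split('@', 1)
    else:
        remote = None
    resolved = DEFAULTS.copy()
    built = ''
    for part in fs.split('/'):
        built += part
        section = '{}@{}'.format(built, remote) if remote else built
        if section in config:
            resolved.update(config[section])   # child sections override parents
        if built == fs:
            break
        built += '/'
    return resolved

# config_for('tank/var/log', cfg) merges [tank], [tank/var] and [tank/var/log]
# in that order; config_for('zroot/backup@remote_system1', cfg) merges the
# corresponding @remote_system1 sections instead.
```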
@@ -139,56 +153,59 @@
     for fs in fslist:
         conf = get_config_for_fs(fs, config)
         remote_snapshots = None
-        if not conf['remote_enable']:
+        if not conf['send_enable']:
             continue
-        if conf['remote_test_cmd'] and not remote_is_available(conf):
+        if not fs_is_available(conf):
             failed_snapshots.add(fs)
             continue
-        repl_mode = conf['remote_enable']
-        remote_fs = conf['remote_zfs_target']
+        repl_mode = conf['send_enable']
+        target_fs = conf['target_fs']
+        source_fs = conf['source_fs']
         send_opts = []
         recv_opts = []
-        if conf['remote_send_flags']:
-            send_opts = conf['remote_send_flags'].split()
-        if conf['remote_recv_flags']:
-            recv_opts = conf['remote_recv_flags'].split()
+        if conf['send_flags']:
+            send_opts = conf['send_flags'].split()
+        if conf['recv_flags']:
+            recv_opts = conf['recv_flags'].split()

-        rel_local = [k for k, v in remote_targets.items() if v == remote_fs]
+        rel_local = [k for k, v in remote_targets.items() if v == target_fs]
         if rel_local:
             rel_local = rel_local[0]
-            rel_fs = fs[len(rel_local):]
-            remote_fs = '{}{}'.format(remote_fs, rel_fs)
-        remote_targets[fs] = remote_fs
+            rel_fs = source_fs[len(rel_local):]
+            target_fs = '{}{}'.format(target_fs, rel_fs)
+        remote_targets[source_fs] = target_fs

         # Figure out the state of remote zfs
-        remote_zfs_cmd = Template(conf['remote_zfs_cmd']).safe_substitute(conf)
-        remote_zfs_cmd = remote_zfs_cmd.split()
+        target_zfs_cmd = Template(conf['target_zfs_cmd']).safe_substitute(conf)
+        target_zfs_cmd = target_zfs_cmd.split()
+        source_zfs_cmd = Template(conf['source_zfs_cmd']).safe_substitute(conf)
+        source_zfs_cmd = source_zfs_cmd.split()

         # to avoid running too many commands on remote host, save result if we
         # know which host we're working with.
-        if 'remote_host' in conf:
-            if conf['remote_host'] in remote_hosts:
-                remote_snapshots = remote_hosts[conf['remote_host']]
+        if 'target_host' in conf:
+            if conf['target_host'] in remote_hosts:
+                remote_snapshots = remote_hosts[conf['target_host']]
             else:
-                remote_snapshots = zsnaplib.get_snapshots(zfs_cmd=remote_zfs_cmd)
-                remote_hosts[conf['remote_host']] = remote_snapshots
+                remote_snapshots = zsnaplib.get_snapshots(target_zfs_cmd)
+                remote_hosts[conf['target_host']] = remote_snapshots
         if not remote_snapshots:
-            remote_snapshots = zsnaplib.get_snapshots(zfs_cmd=remote_zfs_cmd)
+            remote_snapshots = zsnaplib.get_snapshots(target_zfs_cmd)

-        if remote_fs not in remote_snapshots:
+        if target_fs not in remote_snapshots:
             # Remote FS doesn't exist, send a new copy
-            log.info('{} sending base copy to {}'.format(fs, ' '.join(remote_zfs_cmd)))
+            log.info('{} sending base copy to {}'.format(fs, ' '.join(target_zfs_cmd)))
             # oldest snapshot is base_snap if repl_mode != latest
-            base_snap = snapshots[fs][-1]
+            base_snap = snapshots[source_fs][-1]
             if repl_mode == 'latest':
-                base_snap = snapshots[fs][0]
+                base_snap = snapshots[source_fs][0]
             try:
                 zsnaplib.send_snapshot(
-                    fs,
+                    source_fs,
                     base_snap,
-                    remote_zfs_cmd,
-                    remote_fs,
-                    sudo=sudo,
+                    target_zfs_cmd,
+                    target_fs,
+                    source_zfs_cmd,
                     send_opts=send_opts,
                     recv_opts=recv_opts)
                 log.info('{} base copy sent'.format(fs))
@@ -196,31 +213,31 @@
                 failed_snapshots.add(fs)
                 log.warning(e)
                 continue
-            remote_snapshots[remote_fs] = [base_snap]
+            remote_snapshots[target_fs] = [base_snap]

         # Remote FS now exists, one way or another find last common snapshot
         last_remote = None
-        for remote_snap in remote_snapshots[remote_fs]:
-            if remote_snap in snapshots[fs]:
+        for remote_snap in remote_snapshots[target_fs]:
+            if remote_snap in snapshots[source_fs]:
                 last_remote = remote_snap
                 break
         if not last_remote:
             failed_snapshots.add(fs)
             log.warning('{}: No common snapshot local and remote, you need to create a new base copy!'.format(fs))
             continue
-        last_local = snapshots[fs][0]
+        last_local = snapshots[source_fs][0]
         if last_remote == last_local:
-            log.info("{} snapshot from {} is already present on remote".format(fs, last_local))
+            log.info("{} snapshot from {} is already present at target".format(fs, last_local))
             continue
-        log.info('{} incremental {} -> {}, remote is {}'.format(fs, last_remote, snapshots[fs][0], ' '.join(remote_zfs_cmd)))
+        log.info('{} incremental {} -> {}, remote is {}'.format(fs, last_remote, snapshots[source_fs][0], ' '.join(target_zfs_cmd)))
         try:
             zsnaplib.send_snapshot(
-                fs,
-                snapshots[fs][0],
-                remote_zfs_cmd,
-                remote_fs,
-                sudo=sudo,
+                source_fs,
+                snapshots[source_fs][0],
+                target_zfs_cmd,
+                target_fs,
+                source_zfs_cmd,
                 send_opts=send_opts,
                 recv_opts=recv_opts,
                 repl_from=last_remote,
@@ -231,14 +248,15 @@
             failed_snapshots.add(fs)
     return failed_snapshots


-def weed_snapshots(fslist, snapshots, config, sudo, failed_snapshots):
+def weed_snapshots(fslist, snapshots, config, failed_snapshots):
     log = logging.getLogger(LOGGER)
     for fs in fslist:
         conf = get_config_for_fs(fs, config)
+        source_fs = conf['source_fs']
         if fs in failed_snapshots:
             log.info("Not weeding {} because of snapshot creation/send failure".format(fs))
             continue
-        if fs not in snapshots:
+        if source_fs not in snapshots:
             continue
         if not conf['weed_enable']:
             continue
@@ -256,29 +274,37 @@
                 'keep_1min']}
         if conf['custom_keep_interval']:
             kwargs['custom_keep_interval'] = str_to_timedelta(conf['custom_keep_interval'])
-        kwargs['sudo'] = sudo
+
+        zfs_cmd = Template(conf['source_zfs_cmd']).safe_substitute(conf)
+        zfs_cmd = zfs_cmd.split()

         zsnaplib.weed_snapshots(
             fs,
             # never remove the latest snapshot
-            snapshots[fs][1:],
+            snapshots[source_fs][1:],
+            zfs_cmd,
             **kwargs)


 def main():
     config = configparser.SafeConfigParser()
     config.read('/etc/zsnapper.ini')
-    sudo = False
     ret = RET_CODES['SUCCESS']
     log = logging.getLogger(LOGGER)

-    if os.getuid() != 0:
-        sudo = True
+    # guess the local zfs command, this is pretty ugly...
+    zfs_cmd_conf = DEFAULT_CONFIG
+    for section in config.sections():
+        if '@' not in section:
+            if 'source_zfs_cmd' in config[section]:
+                zfs_cmd_conf = get_config_for_fs(section, config)
+    local_zfs_cmd = Template(zfs_cmd_conf['source_zfs_cmd']).safe_substitute(zfs_cmd_conf)
+    local_zfs_cmd = local_zfs_cmd.split()

-    fslist = sorted(zsnaplib.get_filesystems(sudo))
-    snapshots = zsnaplib.get_snapshots(sudo)
+    fslist = sorted(zsnaplib.get_filesystems(local_zfs_cmd))
+    snapshots = zsnaplib.get_snapshots(local_zfs_cmd)

-    failed_snapshots = do_snapshots(fslist, snapshots, config, sudo)
+    failed_snapshots = do_snapshots(fslist, snapshots, config)
     if failed_snapshots:
         ret = RET_CODES['ERROR']
@@ -310,21 +336,18 @@
         return RET_CODES['FAILED']

     # create any remote snapshots
-    remotes = get_remote_hosts(config)
+    remotes = get_remote_sources(config)
     remote_fs = {}
     remote_snapshots = {}
     failed_remote_snapshots = {}
     for remote, zfs_cmd in remotes.items():
         try:
-            remote_fs[remote] = sorted(zsnaplib.get_filesystems(zfs_cmd=zfs_cmd))
-            remote_snapshots[remote] = zsnaplib.get_snapshots(zfs_cmd=zfs_cmd)
+            remote_fs[remote] = sorted(zsnaplib.get_filesystems(zfs_cmd))
+            remote_snapshots[remote] = zsnaplib.get_snapshots(zfs_cmd)
             failed_remote_snapshots[remote] = do_snapshots(
-                remote_fs[remote],
+                ["{}@{}".format(x, remote) for x in remote_fs[remote]],
                 remote_snapshots[remote],
-                config,
-                False, # sudo should be configured in zfs_cmd already
-                remote=remote,
-                zfs_cmd=zfs_cmd)
+                config)
         except zsnaplib.ZFSSnapshotError:
             if remote in remote_fs:
                 del remote_fs[remote]
@@ -341,18 +364,34 @@
     for remote, zfs_cmd in remotes.items():
         try:
             if remote in remote_snapshots:
-                remote_snapshots[remote] = zsnaplib.get_snapshots(zfs_cmd=zfs_cmd)
+                remote_snapshots[remote] = zsnaplib.get_snapshots(zfs_cmd)
         except zsnaplib.ZFSSnapshotError:
             del remote_snapshots[remote]
             log.warning("Could not refresh snapshots on {}".format(remote))
+    snapshots = zsnaplib.get_snapshots(local_zfs_cmd)

-    snapshots = zsnaplib.get_snapshots(sudo)
-    failed_send = send_snapshots(fslist, snapshots, config, sudo)
+    failed_send = send_snapshots(fslist, snapshots, config)
     if failed_send:
         ret = RET_CODES['ERROR']
-        failed_snapshots.update(failed_send)
-    weed_snapshots(fslist, snapshots, config, sudo, failed_snapshots)
+    for remote in remotes.keys():
+        failed_send = send_snapshots(
+            ["{}@{}".format(x, remote) for x in remote_fs[remote]],
+            remote_snapshots[remote],
+            config)
+        if failed_send:
+            ret = RET_CODES['ERROR']
+            failed_snapshots.update(failed_send)
+
+    weed_snapshots(fslist, snapshots, config, failed_snapshots)
+
+    for remote in remotes.keys():
+        weed_snapshots(
+            ["{}@{}".format(x, remote) for x in remote_fs[remote]],
+            remote_snapshots[remote],
+            config,
+            failed_snapshots)
+
     os.remove(lockfile)


 if __name__ == '__main__':
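Whether do_snapshots creates a snapshot hinges on snapshot_interval strings such as '1d' or '1h30m' being turned into a timedelta by str_to_timedelta and compared against the newest existing snapshot. The helper below restates that check; it follows the timedelta_regex pattern and the visible tail of str_to_timedelta in the hunk above, so the exact original body may differ slightly:

```python
# Restated sketch of the "is a snapshot due?" decision used by do_snapshots.
import datetime
import re

timedelta_regex = re.compile('([0-9]+)([dhm])')  # same pattern as bin/zsnapper

def str_to_timedelta(deltastr):
    delta = datetime.timedelta()
    for match in timedelta_regex.finditer(deltastr):
        value, unit = int(match.group(1)), match.group(2)
        if unit == 'd':
            delta += datetime.timedelta(days=value)
        elif unit == 'h':
            delta += datetime.timedelta(hours=value)
        else:
            delta += datetime.timedelta(minutes=value)
    return delta

def snapshot_due(last_snap, interval_str, now=None):
    # A snapshot is due when an interval is configured and the newest
    # snapshot is older than that interval.
    now = now or datetime.datetime.now()
    interval = str_to_timedelta(interval_str)
    return interval > datetime.timedelta() and last_snap + interval < now

print(snapshot_due(datetime.datetime.min, '1h'))  # True: no snapshot exists yet
```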
diff --git a/zsnaplib/__init__.py b/zsnaplib/__init__.py
index 7e8ad95..f0e3676 100644
--- a/zsnaplib/__init__.py
+++ b/zsnaplib/__init__.py
@@ -5,26 +5,14 @@ import subprocess
 import sys

 time_format='%Y-%m-%d_%H%M'
-zfs_bin='/sbin/zfs'
-sudo_bin='/usr/bin/sudo'
 re_snapshot = re.compile(r'^(.*)@([0-9]{4}-[0-9]{2}-[0-9]{2}_[0-9]{4})$')
 logger = 'zsnapper'

 class ZFSSnapshotError(Exception):
     pass

-def do_zfs_command(args, sudo, zfs_cmd, pipecmd=None):
+def do_zfs_command(args, zfs_cmd, pipecmd=None):
     cmd = []
-    sudopw = None
-    if sudo:
-        cmd.append(sudo_bin)
-        if sys.version_info[0] == 3:
-            if isinstance(sudo, str):
-                cmd.append('--stdin')
-                sudopw = '{}\n'.format(sudo)
-        elif isinstance(sudo, basestring):
-            cmd.append('--stdin')
-            sudopw = '{}\n'.format(sudo)
     cmd.extend(zfs_cmd)
     cmd.extend(args)

@@ -54,8 +42,7 @@ def send_snapshot(
         snap,
         remote_zfs_cmd,
         remote_target,
-        zfs_cmd=[zfs_bin],
-        sudo=False,
+        zfs_cmd,
         send_opts=[],
         recv_opts=[],
         repl_mode='all',
@@ -74,18 +61,18 @@
     pipecmd = remote_zfs_cmd + [ 'receive' ] + recv_opts + [ remote_target ]

-    do_zfs_command(args, sudo, zfs_cmd, pipecmd=pipecmd)
+    do_zfs_command(args, zfs_cmd, pipecmd=pipecmd)


-def create_snapshot(fs, sudo=False, zfs_cmd=[zfs_bin]):
+def create_snapshot(fs, zfs_cmd):
     d = datetime.datetime.now().strftime(time_format)
     args = ['snapshot', '{}@{}'.format(fs, d)]
-    do_zfs_command(args, sudo, zfs_cmd)
+    do_zfs_command(args, zfs_cmd)


-def get_filesystems(sudo=False, zfs_cmd=[zfs_bin]):
+def get_filesystems(zfs_cmd):
     args = ['list', '-H']
-    out = do_zfs_command(args, sudo, zfs_cmd)
+    out = do_zfs_command(args, zfs_cmd)
     ret = set()

     for row in out.splitlines():
@@ -94,9 +81,9 @@
     return ret


-def get_snapshots(sudo=False, zfs_cmd=[zfs_bin]):
+def get_snapshots(zfs_cmd):
     args = [ 'list', '-H', '-t', 'snapshot' ]
-    out = do_zfs_command(args, sudo, zfs_cmd)
+    out = do_zfs_command(args, zfs_cmd)
     snapshots = {}

     for row in out.splitlines():
@@ -115,15 +102,17 @@
     return snapshots


-def remove_snapshot(fs, date, sudo=False, zfs_cmd=[zfs_bin]):
+def remove_snapshot(fs, date, zfs_cmd):
     date = date.strftime(time_format)
     args = [ 'destroy', '{}@{}'.format(fs, date) ]
-    do_zfs_command(args, sudo, zfs_cmd)
+    do_zfs_command(args, zfs_cmd)


 def weed_snapshots(
         fs,
         dates,
+        zfs_cmd,
+        remote = None,
         custom_keep_interval = None,
         keep_custom = 0,
         keep_yearly = 0,
@@ -134,10 +123,13 @@
         keep_30min = 0,
         keep_15min = 0,
         keep_5min = 0,
-        keep_1min = 0,
-        sudo = False):
+        keep_1min = 0):

     log = logging.getLogger(logger)
+    if '@' in fs:
+        source_fs, remote = fs.split('@', 1)
+    else:
+        source_fs = fs

     keep = {
         'custom': [],
@@ -278,7 +270,7 @@
     for date in to_remove:
         try:
             log.info('{}: removing snapshot from {}'.format(fs, date))
-            remove_snapshot(fs, date, sudo=sudo)
+            remove_snapshot(source_fs, date, zfs_cmd)
         except ZFSSnapshotError as e:
             log.error(str(e))
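send_snapshot above builds a `zfs send` argument list and hands do_zfs_command a pipecmd of `<target_zfs_cmd> receive <target_fs>`. The piping itself is not shown in this hunk, so the following is only a sketch of the idea, a local send stream fed into a possibly remote receive, and not the library's exact implementation:

```python
# Illustrative sketch of a zfs send piped into a (possibly remote) zfs receive.
import subprocess

def pipe_send(source_zfs_cmd, target_zfs_cmd, snapshot, target_fs,
              send_opts=(), recv_opts=()):
    send_cmd = list(source_zfs_cmd) + ['send'] + list(send_opts) + [snapshot]
    recv_cmd = list(target_zfs_cmd) + ['receive'] + list(recv_opts) + [target_fs]
    sender = subprocess.Popen(send_cmd, stdout=subprocess.PIPE)
    receiver = subprocess.Popen(recv_cmd, stdin=sender.stdout,
                                stderr=subprocess.PIPE)
    sender.stdout.close()          # let the sender see SIGPIPE if receive dies
    _, err = receiver.communicate()
    sender.wait()
    if sender.returncode != 0 or receiver.returncode != 0:
        raise RuntimeError('zfs send/receive failed: {}'.format(err))

# e.g. pipe_send(['/sbin/zfs'],
#                ['/usr/bin/ssh', 'backup@host', '/usr/bin/sudo', '/sbin/zfs'],
#                'tank/ROOT@2018-12-27_1500', 'tank/backup/client/ROOT')
```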
diff --git a/zsnapper.ini-sample b/zsnapper.ini-sample
index 68858e5..eb40d78 100644
--- a/zsnapper.ini-sample
+++ b/zsnapper.ini-sample
@@ -15,38 +15,40 @@ snapshot_interval=1h
 # Remote replication
 # possible other value is 'latest' to only sync the latest snapshot
 # Set to empty value to not send the snapshots to remote
-remote_enable=all
+send_enable=all

-# The remote_zfs_cmd option is the command to use to execute zfs on target machine.
-# remote_test_cmd, if set, is executed before trying to send any snapshot to remote.
-# If remote_test_cmd returns a non-zero status the remote is considered to be unavailable
+# source_zfs_cmd is the command to execute zfs locally.
+# The target_zfs_cmd option is the command to use to execute zfs on the target machine.
+# target_test_cmd, if set, is executed before trying to send any snapshot to remote.
+# If target_test_cmd returns a non-zero status the remote is considered to be unavailable
 # and no snapshots are sent. (A warning is written in the log though)
 #
 # NOTE:
 # The command arguments must not contain whitespace characters, due to implementation details.
 #
-# Variables can be used in remote_zfs_cmd and remote_test_cmd. Any setting
+# Variables can be used in target_zfs_cmd and target_test_cmd. Any setting
 # available in the section can be used as a variable
-remote_zfs_cmd=/usr/bin/ssh ${remote_user}@${remote_host} /usr/bin/sudo /sbin/zfs
-remote_test_cmd=/usr/bin/ssh ${remote_user}@${remote_host} echo "success"
-# The remote_host option is optional but recommended if you send snapshots to a remote host.
-remote_host=my.backup.server.tld
-# remote_user is not a actually a zsnapper option; but it's used as a variable in the remote commands.
-remote_user=backup
+source_zfs_cmd=/usr/bin/sudo /sbin/zfs
+target_zfs_cmd=/usr/bin/ssh ${target_user}@${target_host} /usr/bin/sudo /sbin/zfs
+target_test_cmd=/usr/bin/ssh ${target_user}@${target_host} echo "success"
+# The target_host option is optional but recommended if you send snapshots to a remote host.
+target_host=my.backup.server.tld
+# target_user is not actually a zsnapper option, but it's used as a variable in the remote commands.
+target_user=backup

-# remote_zfs_target is the file system on the remote client that should receive zfs sends
+# target_fs is the file system on the receiving side that should receive zfs sends
 # for this file system.
 # NOTE:
 # Just like any other option this is inherited by file system descendants,
-# but if a child has the same remote_zfs_target as the parent, the child
+# but if a child has the same target_fs as the parent, the child
 # will instead use this to figure out where the parent is and be sent to
 # it position relative to the parent.
 # For example: The local file system tank/ROOT will be sent to tank/backup/client/ROOT.
-remote_zfs_target=tank/backup/client
+target_fs=tank/backup/client

 # These can be set to use custom arguments to zfs send and zfs receive
-remote_send_flags=-D -p
-remote_recv_flags=
+send_flags=-D -p
+recv_flags=

 # snapshot weeding
 # set weed_enable to an empty value to disable snapshot weeding.
@@ -63,15 +65,15 @@ keep_monthly=4

 [tank/SWAP]
 snapshot_interval=
-remote_enable=
+send_enable=

 [tank/media]
 snapshot_interval=
-remote_enable=
+send_enable=

 [tank/tmp]
 snapshot_interval=
-remote_enable=
+send_enable=

 [tank/var/log]
 snapshot_interval=1m
@@ -80,4 +82,28 @@ keep_15min=4

 [tank/var/tmp]
 snapshot_interval=
-remote_enable=
+send_enable=
+
+
+
+# '@' in the section title indicates that this file system is not local;
+# note the *_zfs_cmd settings.
+#
+# The remote snapshots are only created *after* the local ones, after zsnapper
+# has acquired the execution lock, so if zsnapper takes a long time to execute,
+# some snapshotting may be delayed.
+#
+# the '@' is required since zsnapper otherwise has no way to know which
+# filesystems are on the same server...
+[zroot/backup@remote_system1]
+source_zfs_cmd=/usr/bin/ssh user@remote_system1 /sbin/zfs
+target_zfs_cmd=/sbin/zfs
+send_enable=all
+snapshot_interval=1h
+target_fs=tank/backup/remote_system1
+recv_flags=-u
+weed_enable=1
+keep_hourly=24
+keep_daily=7
+keep_weekly=4
+keep_monthly=4
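Since send_enable, target_zfs_cmd and target_fs were all renamed in this release, a small sanity check can catch sections that were only partially migrated. The helper below is hypothetical (not part of zsnapper) and does not model zsnapper's parent-section inheritance, so treat its output as candidates to review rather than definite errors:

```python
# Hypothetical helper applying the README rule that a section which enables
# sending must also define target_zfs_cmd and target_fs.
import configparser

def check_config(path='/etc/zsnapper.ini'):
    config = configparser.ConfigParser()
    config.read(path)
    problems = []
    for section in config.sections():
        if config[section].get('send_enable'):
            for required in ('target_zfs_cmd', 'target_fs'):
                if not config[section].get(required):
                    # may be inherited from a parent section in real zsnapper
                    problems.append('[{}] sets send_enable but not {}'.format(section, required))
    return problems

if __name__ == '__main__':
    for problem in check_config():
        print(problem)
```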