This project is deprecated. Instead, new versions of nxs-backup are
available here.
Nxs-backup is an open source backup software for the most popular GNU/Linux distributions. Features of Nxs-backup include, among others:
- Support of the most popular storages: local, s3, ssh(sftp), ftp, cifs(smb), nfs, webdav
- Database backups, such as MySQL(logical/physical), PostgreSQL(logical/physical), MongoDB, Redis
- Possibility to specify extra options for collecting database dumps to fine-tune backup process and minimize load on the server
- Incremental files backups
- Easy to read and maintain configuration files with clear transparent structure
- Built-in generator of the configuration files to expedite initial setup
- Support of user-defined custom scripts to extend functionality
- Possibility to restore backups with standard tools (no extra software including Nxs-backup is required)
- Email and webhooks notifications about status and errors during backup process
The source code of Nxs-backup is available at https://github.com/nixys/go-nxs-backup under the license. Prebuilt binaries for Linux distributions are available at https://github.com/nixys/go-nxs-backup/releases.
To make nxs-backup as flexible as possible, the directions given to it are specified in several pieces. The main instruction is the job resource, which defines a backup job. A job generally consists of a Type, Sources, and Storages. The Type defines what kind of backup runs (e.g. MySQL "physical" backups); the Sources define the targets and exceptions (at least one target must be specified per job); the Storages define where backups are stored and how many are kept (at least one storage must be specified per job). Remote storages are accessed by locally mounting the file system with special tools.
Nxs-backup configuration files are usually located in the /etc/nxs-backup/ directory. The default configuration has only one configuration file, nxs-backup.conf, and the conf.d subdirectory, which stores files with descriptions of jobs (one file per job). Config files are in YAML format. For details, see Settings.
You can generate a configuration file for a job by running the script with the command generate and the -S/--storages (map of storages), -T/--type (type of backup), and -O/--out-path (path to generated file) options. The script will generate a configuration file for the job and print the result:
# nxs-backup generate -S store=scp s3_store=s3 -T mysql -P /etc/nxs-backup/conf.d/mysql.conf
nxs-backup: Successfully generated '/etc/nxs-backup/conf.d/mysql.conf' configuration file!
You can test whether the configuration is correct by running the script with the -t option and the optional -c/--config option (path to the main config file). The script will process the config files, print any error messages, and then terminate:
# nxs-backup -t
nxs-backup: The configuration is correct.
You can start your jobs by running the script with the command start and the optional -c/--config option (path to the main config file). The script will execute the job passed as an argument. Note that there are several reserved job names:
- all: simulates the sequential execution of external, databases, files jobs (default value)
- files: random execution of all jobs of types desc_files, inc_files
- databases: random execution of all jobs of types mysql, mysql_xtrabackup, postgresql, postgresql_basebackup, mongodb, redis
- external: random execution of all jobs of type external
# nxs-backup start all
Nxs-backup main settings block description.
Name | Description | Value |
---|---|---|
server_name | The name of the server on which nxs-backup is started | "" |
project_name | The name of the project, used for notifications (optional) | "" |
notifications.webhooks | Contains a list of webhook notification channel parameters | [] |
notifications.mail | Contains email notification channel parameters | {} |
storage_connects | Contains a list of remote storage connections | [] |
jobs | Contains a list of backup jobs | [] |
include_jobs_configs | Contains a list of file paths or glob patterns to job config files | ["conf.d/*.conf"] |
waiting_timeout | Time in minutes to wait for another running nxs-backup to complete (optional) | 0 |
logfile | Path to the log file | /var/log/nxs-backup/nxs-backup.log |
loglevel | Level of messages to be logged. Supported levels are described below | info |
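As a sketch only (the values below are illustrative placeholders, not defaults), a minimal nxs-backup.conf combining these settings might look like:

```yaml
# Hypothetical /etc/nxs-backup/nxs-backup.conf sketch; adjust values for your setup
server_name: backup-host-01
project_name: my-project
storage_connects: []
jobs: []
include_jobs_configs: ["conf.d/*.conf"]
waiting_timeout: 30
logfile: /var/log/nxs-backup/nxs-backup.log
loglevel: info
```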
Webhook notification channel settings block description.
Name | Description | Value |
---|---|---|
enabled | Enables the notification channel | true |
webhook_url | URL of the webhook service | "" |
payload_message_key | Request payload key that will contain the notification message | "" |
extra_payload | Struct with extra request payload keys | {} |
extra_headers | Map of strings with request headers | {} |
insecure_tls | Allows skipping invalid certificates on the webhook service side | false |
message_level | Level of messages to be notified about. Supported levels are described below | "warning" |
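For illustration, a webhooks entry using these fields might be sketched as follows (the URL, payload keys, and header values are placeholders):

```yaml
notifications:
  webhooks:
    - enabled: true
      webhook_url: https://hooks.example.org/notify
      payload_message_key: text
      extra_payload:
        channel: backups
      extra_headers:
        X-Token: "change-me"
      insecure_tls: false
      message_level: warning
```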
Email notification channel settings block description.
Name | Description | Value |
---|---|---|
enabled | Enables the notification channel | true |
mail_from | Mailbox on behalf of which mails will be sent | "" |
smtp_server | SMTP host. If not specified, email will be sent using /usr/sbin/sendmail | "" |
smtp_port | SMTP port | 465 |
smtp_user | SMTP user login | "" |
smtp_password | SMTP user password | "" |
recipients | List of notification recipient emails | [] |
message_level | Level of messages to be notified about. Supported levels are described below | "warning" |
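A hedged sketch of an email channel using these fields (addresses, host, and password are placeholders):

```yaml
notifications:
  mail:
    enabled: true
    mail_from: nxs-backup@example.org
    smtp_server: smtp.example.org
    smtp_port: 465
    smtp_user: nxs-backup@example.org
    smtp_password: "change-me"
    recipients:
      - admin@example.org
    message_level: warning
```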
Supported message levels description.
Name | Description |
---|---|
debug | The most detailed information about the backup process |
info | General information about the backup process |
warning | Information about the backup process that requires special attention |
error | Only critical information about failures in the backup process |
Nxs-backup storage connect settings block description.
Name | Description | Value |
---|---|---|
name | Unique storage name | "" |
s3_params | Connection parameters for the S3 storage type (optional) | {} |
scp_params | Connection parameters for the scp/sftp storage type (optional) | {} |
ftp_params | Connection parameters for the ftp storage type (optional) | {} |
nfs_params | Connection parameters for the nfs storage type (optional) | {} |
smb_params | Connection parameters for the smb/cifs storage type (optional) | {} |
webdav_params | Connection parameters for the webdav storage type (optional) | {} |
S3 connection parameters description.
Name | Description | Value |
---|---|---|
bucket_name | S3 bucket name | "" |
endpoint | S3 endpoint | "" |
region | S3 region | "" |
access_key_id | S3 access key | "" |
secret_access_key | S3 secret key | "" |
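Combining the name field with s3_params, a storage_connects entry might be sketched like this (bucket, endpoint, and credentials are placeholders):

```yaml
storage_connects:
  - name: s3_store
    s3_params:
      bucket_name: my-backups
      endpoint: https://s3.example.org
      region: us-east-1
      access_key_id: "change-me"
      secret_access_key: "change-me"
```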
scp/sftp connection parameters description.
Name | Description | Value |
---|---|---|
host | SSH host | "" |
port | SSH port (optional) | 22 |
user | SSH user | "" |
password | SSH password | "" |
key_file | Path to the SSH private key used instead of a password | "" |
connection_timeout | SSH connection timeout in seconds (optional) | 10 |
ftp connection parameters description.
Name | Description | Value |
---|---|---|
host | FTP host | "" |
port | FTP port (optional) | 21 |
user | FTP user | "" |
password | FTP password | "" |
connect_count | Number of FTP connections opened to the server (optional) | 5 |
connection_timeout | FTP connection timeout in seconds (optional) | 10 |
nfs connection parameters description.
Name | Description | Value |
---|---|---|
host | NFS host | "" |
target | Path on the NFS server where backups will be stored | "" |
UID | UID of the NFS server user (optional) | 0 |
GID | GID of the NFS server user (optional) | 0 |
smb/cifs connection parameters description.
Name | Description | Value |
---|---|---|
host | SMB host | "" |
port | SMB port (optional) | 445 |
user | SMB user (optional) | "Guest" |
password | SMB password (optional) | "" |
share | SMB share name | "" |
domain | SMB domain (optional) | "" |
connection_timeout | SMB connection timeout in seconds (optional) | 10 |
webdav connection parameters description.
Name | Description | Value |
---|---|---|
url | WebDav URL | "" |
username | WebDav user | "" |
password | WebDav password | "" |
oauth_token | WebDav OAuth token (optional) | "" |
connection_timeout | WebDav connection timeout in seconds (optional) | 10 |
Nxs-backup job settings block description.
Name | Description | Value |
---|---|---|
job_name | Job name. This value is used to run the specific job | "" |
type | Backup type. Supported backup types are described below | "" |
tmp_dir | Local path to the directory for temporary backup files | "" |
safety_backup | Delete outdated backups only after the new one is created. IMPORTANT: using this option requires more disk space. Make sure there is enough free space on the device where temporary backups are stored | false |
deferred_copying | Copy backups to remote storages only after all temporary backups defined in the job have been created. IMPORTANT: using this option requires more disk space. Make sure there is enough free space on the device where temporary backups are stored | false |
sources | List of source objects for backup | [] |
storages_options | List of storages to store backups | [] |
dump_cmd | Full command to run an external script. Only for the external backup type | "" |
skip_backup_rotate | Skip backup rotation on storages. Only for the external backup type | false |
The skip_backup_rotate option may be used when creating a local copy is not required. For example, when the script itself copies data to a remote server, backup rotation may be skipped with this option.
Name | Description | Value |
---|---|---|
name | Used to differentiate backups in the target directory | "" |
connect | Set of parameters for connecting to the database. Only for database types | {} |
targets | List of directories/files to be backed up. Glob patterns are supported | [] |
target_dbs | List of databases to be backed up. Use the keyword all to back up all databases. Only for database types | [] |
target_collections | List of collections to be backed up. Use the keyword all to back up all collections in all dbs. Only for the mongodb type | [] |
excludes | List of databases/schemas/tables or directories/files to be excluded from the backup. Glob patterns are supported for file types | [] |
exclude_dbs | List of databases to be excluded from the backup. Only for the mongodb type | [] |
exclude_collections | List of collections to be excluded from the backup. Only for the mongodb type | [] |
db_extra_keys | Special parameters for collecting database backups. Only for database types | "" |
gzip | Whether to compress the backup file | false |
save_abs_path | Whether to save absolute paths in tar archives. Only for file types | true |
prepare_xtrabackup | Whether to run xtrabackup prepare. Only for the mysql_xtrabackup type | true |
Name | Description | Value |
---|---|---|
db_host | DB host | "" |
db_port | DB port | "" |
socket | Path to the DB socket | "" |
db_user | DB user | "" |
db_password | DB password | "" |
mysql_auth_file | Path to the MySQL auth file | "" |
psql_ssl_mode | PostgreSQL SSL mode option | "require" |
psql_ssl_root_cert | Path to the file containing SSL certificate authority (CA) certificate(s) for PostgreSQL | "" |
psql_ssl_crl | Path to the file containing the SSL server certificate revocation list (CRL) for PostgreSQL | "" |
mongo_replica_set_name | MongoDB replica set name | "" |
mongo_replica_set_address | Comma-separated list of MongoDB replica set hosts | "" |
You may use either the auth_file, db_host, or socket option. The options are prioritized as follows: auth_file → db_host → socket.
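For illustration, a connect block for a MySQL job source might look like this (host, port, and credentials are placeholders):

```yaml
connect:
  db_host: 127.0.0.1
  db_port: "3306"
  db_user: backup
  db_password: "change-me"
```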
Name | Description | Value |
---|---|---|
storage_name | The name of a storage defined in the main config. The local storage is available by default | "" |
backup_path | Path to the directory for storing backups | "" |
retention | Defines retention for backups on the current storage | {} |
Name | Description | Value |
---|---|---|
days | Days to store backups | 7 |
weeks | Weeks to store backups | 5 |
months | Months to store backups. For the inc_files backup type, determines how many months of incremental copies will be stored relative to the current month. Can take values from 0 to 12 | 12 |
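Putting the job, sources, storages_options, and retention blocks together, a hypothetical desc_files job file (the name, paths, and retention values are placeholders) might be sketched as:

```yaml
# Hypothetical conf.d/site-files.conf sketch
job_name: site-files
type: desc_files
tmp_dir: /var/nxs-backup/tmp
sources:
  - name: site
    targets:
      - /var/www/site/*
    excludes:
      - /var/www/site/cache
    gzip: true
storages_options:
  - storage_name: local
    backup_path: /var/nxs-backup/files
    retention:
      days: 7
      weeks: 5
      months: 12
```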
Name | Description |
---|---|
mysql | MySQL logical backup |
mysql_xtrabackup | MySQL physical backup |
postgresql | PostgreSQL logical backup |
postgresql_basebackup | PostgreSQL physical backup |
mongodb | MongoDB backup |
redis | Redis backup |
Name | Description |
---|---|
desc_files | Discrete file backup |
inc_files | Incremental file backup |
Name | Description |
---|---|
external | External backup script |
Both file backup types are identical to creating a backup using tar.
Incremental copies of files are made according to the following scheme:
At the beginning of the year, or on the first start of nxs-backup, a full initial backup is created. Then, at the beginning of each month, an incremental monthly copy is created relative to the yearly copy. Inside each month there are incremental ten-day copies, and within each ten-day copy daily incremental copies are created.
Since the tar file is in PAX format, you do not need to specify the path to the incremental metadata files when restoring an incremental backup: all the necessary information is stored in the PAX header of the GNU.dumpdir directory inside the archive.
Therefore, the commands to restore a backup for a specific date are the following:
- First, unpack the full year copy with the following command:
tar xGf /path/to/full/year/backup
- Then unpack the monthly, ten-day, and daily incremental backups one after another, specifying the special -G key, for example:
tar xGf /path/to/monthly/backup
tar xGf /path/to/ten-day/backup
tar xGf /path/to/day/backup
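The same restore order can be reproduced end-to-end with plain GNU tar (this is a standalone sketch, not nxs-backup output; it requires GNU tar and uses temporary paths). It creates a full and an incremental archive with --listed-incremental, then unpacks them in sequence:

```shell
set -eu
work=$(mktemp -d)
mkdir "$work/data"
echo v1 > "$work/data/a.txt"
# full (level-0) backup; the snapshot file records the file state
tar -C "$work" --create --listed-incremental="$work/snap" -f "$work/full.tar" data
echo v2 > "$work/data/b.txt"
cp "$work/snap" "$work/snap.inc"
# incremental backup relative to the full one (only b.txt is dumped)
tar -C "$work" --create --listed-incremental="$work/snap.inc" -f "$work/inc.tar" data
# restore: unpack the full archive first, then the incremental one, in order
restore=$(mktemp -d)
tar -C "$restore" --extract --listed-incremental=/dev/null -f "$work/full.tar"
tar -C "$restore" --extract --listed-incremental=/dev/null -f "$work/inc.tar"
ls "$restore/data"
```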
Works on top of mysqldump, so a compatible mysql-client must be installed for the module to work correctly.
Works on top of xtrabackup, so a compatible percona-xtrabackup must be installed for the module to work correctly. Only backups of a local instance are supported.
Works on top of pg_dump, so a compatible postgresql-client must be installed for the module to work correctly.
If the user has no database with the same name, you must specify the name of the database that will be used to connect to the PSQL instance after the @ symbol as part of the username. Example: backup@postgres.
Works on top of pg_basebackup, so a compatible postgresql-client must be installed for the module to work correctly.
If the user has no database with the same name, you must specify the name of the database that will be used to connect to the PSQL instance after the @ symbol as part of the username. Example: backup@postgres.
Works on top of mongodump, so compatible mongodb-clients must be installed for the module to work correctly.
Works on top of redis-cli with the --rdb option, so compatible redis-tools must be installed for the module to work correctly.
This module executes an external script passed to the program via the dump_cmd key.
By default, on completion of this command, it is expected that:
- A complete backup file with data has been collected
- The stdout contains data in JSON format, like:
{
  "full_path": "/abs/path/to/backup.file"
}
IMPORTANT:
- make sure there is no unnecessary information in stdout
- a successfully completed program should finish with exit code 0
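As a sketch of such a script (the paths and file names are illustrative, not part of nxs-backup), the contract can be satisfied like this:

```shell
#!/bin/sh
# Hypothetical external script for dump_cmd: build an archive, print only the JSON contract
set -eu
src=$(mktemp -d)
echo demo > "$src/data.txt"
out="$(mktemp -d)/backup.tar.gz"
tar -czf "$out" -C "$src" .
# stdout must contain nothing but the JSON with the absolute backup path
printf '{"full_path": "%s"}\n' "$out"
```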
If the module is used with the skip_backup_rotate parameter, standard output is expected as the result of running the command. For example, when executing the command "rsync -Pavz /local/source /remote/destination", the result is expected to be standard output to stdout.
To run nxs-backup in Kubernetes, you can use the ready-made docker image with client apps, registry.nixys.ru/public/nxs-backup:latest-alpine, or build your own image containing only the client applications you need.
Here is an example of building an alpine image with client apps:
FROM registry.nixys.ru/public/nxs-backup:latest AS bin
FROM alpine
RUN apk update --no-cache && apk add --no-cache tar gzip mysql-client postgresql-client mongodb-tools redis
COPY --from=bin /nxs-backup /usr/local/bin/nxs-backup
CMD nxs-backup start
If you are using Helm to deploy your apps to Kubernetes, you can use a universal chart with values examples that uses CronJobs to make backups.
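For illustration only (the name, schedule, mount, and ConfigMap are placeholders, and the actual chart values differ), a bare Kubernetes CronJob running the image might look like:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nxs-backup
spec:
  schedule: "0 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: nxs-backup
              image: registry.nixys.ru/public/nxs-backup:latest-alpine
              command: ["nxs-backup", "start", "all"]
              volumeMounts:
                - name: config
                  mountPath: /etc/nxs-backup
          volumes:
            - name: config
              configMap:
                name: nxs-backup-config
```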