
[Bug-0.2.5-renew] Initial backup not thin #57

Open

MRLOLKOPF opened this issue Apr 6, 2020 · 7 comments

@MRLOLKOPF

Hi,

I just tested the new version (0.2.5-renew) on my test Proxmox environment and unfortunately found some bugs. I might try to fix them myself and open a pull request, but due to the corona situation I'm quite busy right now. I just want to report these bugs so that others are aware of them. The improvements with fsfreeze, the checksum check, etc. are really nice, though!

Unfortunately I noticed that the initial backup in the new version (0.2.5-renew) took far more time and storage space than in the previous version (0.2.1). It looks like the initial backup is now thick provisioned rather than thin, as it was in 0.2.1.

Some numbers (test Proxmox cluster with test VMs, no changes between the two runs):

0.2.5-renew:

Elapsed time: 8h and 23m
Transferred data: 1560GB (pretty much exactly the "provisioned" space)

0.2.1:

Elapsed time: 2h and 28m
Transferred data: 449GB (pretty much exactly the "used" space)

For offsite backups in particular this is a real problem, due to limited bandwidth and expensive traffic.

Is this a bug, or is the behavior required for some technical reason?

Stay healthy!
Greetings from Germany!
Andy

@franklupo
Member

@lephisto do you have any idea?

@lephisto
Contributor

lephisto commented Apr 10, 2020

There is no such thing as a thin backup, only the option to compress. For compression to be effective, you need to trim your filesystem once old data gets deleted.

Nothing has changed in that regard. Have you enabled compression? Do you run fstrim on a regular basis? Is discard enabled in the VM settings?

Your complete command line would be helpful.
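
(A quick way to verify both, assuming a Linux guest with VMID 101 — the VMID is illustrative:)

# on the Proxmox host: check whether the disk has discard enabled
qm config 101 | grep discard
# inside the guest: trim all mounted filesystems that support it
fstrim -av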

@MRLOLKOPF
Author

MRLOLKOPF commented Apr 10, 2020

Hello,

both runs (with the old and the new version) were made one after the other on the same day. All VMs on that test cluster have discard enabled and are trimmed regularly. No compression in eve4pve-barc was used. While I was testing the new version, nothing on the test VMs or the test system was changed. The VMs were idling; there was no new data.

The command I ran for the old and the new version was exactly the same:

eve4pve-barc backup --vmid=101,102,103,104,105,106,107,108,109,110,111,112,113,115,117 --label='daily' --path='/mnt/pve/bak2hetznerCeph' --keep=10 --unprotect-snap --mail='[email protected]'

@lephisto
Contributor

If you don't compress, you will always get the raw size, i.e. what's provisioned.

@lephisto
Contributor

Yeah, I double-checked this now, because I wanted to make sure that teeing the output of rbd export through several programs doesn't mess things up. It doesn't. rbd export simply can't write sparse images; it always writes the complete size, even if you discarded data. Only compression will save you here.
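
(A minimal sketch of the difference, assuming a pool named rbd and an image named vm-101-disk-0 — both names are illustrative:)

# uncompressed: the output file is always the full provisioned size
rbd export rbd/vm-101-disk-0 backup.img
# compressed: exporting to stdout and piping through gzip shrinks the zero-filled (discarded) regions to almost nothing
rbd export rbd/vm-101-disk-0 - | gzip > backup.img.gz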

@MRLOLKOPF
Author

Hi, thank you for checking this. So in the new version I can choose between none, gzip, bzip2 and pigz. Do you know what type of compression was used in the 0.2.1 version? It worked really well.
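
(Assuming the compression choice in 0.2.5-renew is exposed as a --compress option — the flag name here is an assumption, so verify it against the script's usage output — the call from above might look like this, with a shortened VMID list for illustration:)

# --compress=pigz is assumed syntax, not confirmed against the 0.2.5-renew script
eve4pve-barc backup --vmid=101,102 --label='daily' --path='/mnt/pve/bak2hetznerCeph' --keep=10 --unprotect-snap --compress=pigz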

@lephisto
Contributor

Performance-wise, go for pigz. It's the only compression option that multithreads.
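
(For example, assuming the same illustrative pool and image names as above; pigz is a parallel, gzip-compatible compressor, and -p sets the thread count — 8 is illustrative:)

# compress the export stream on 8 threads; output stays gzip-compatible
rbd export rbd/vm-101-disk-0 - | pigz -p 8 > vm-101-disk-0.img.gz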
