
Workers: report real disk size (and CPU / RAM) #976

Open
benoit74 opened this issue May 27, 2024 · 7 comments
@benoit74
Collaborator

Currently, the disk size assigned to each worker is not used much, AFAIK.

However, the disk size configured in zimfarm.config is displayed in the UI.

It would be great to at least check whether the disk size available to Docker containers is in accordance with what is configured in zimfarm.config, so that Zimfarm editors / devs are not misled by a wrong configuration.

By default, a df /var/lib/docker/overlay2 should do the trick, but this path can be customized; I don't know yet how to check it correctly.

This is probably true for RAM / CPU as well; it would help diagnose problems, as we have more and more workers that are not under our control.
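
As a concrete illustration (a minimal sketch in Python, not Zimfarm's actual worker code; the data-root path and the declared size are placeholder assumptions), such a check could compare the space actually available under Docker's data root with the value declared in zimfarm.config:

```python
# Sketch only: warn when the space available under Docker's data root is
# smaller than the disk size declared for the worker in zimfarm.config.
import shutil

DOCKER_DATA_ROOT = "/var/lib/docker/overlay2"  # default location; can be customized
DECLARED_DISK_BYTES = 1 * 1024**4              # e.g. 1 TiB declared in zimfarm.config

usage = shutil.disk_usage(DOCKER_DATA_ROOT)
if usage.total < DECLARED_DISK_BYTES:
    print(
        f"Warning: only {usage.total / 1024**3:.0f} GiB available at {DOCKER_DATA_ROOT}, "
        f"but {DECLARED_DISK_BYTES / 1024**3:.0f} GiB are declared"
    )
```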

@rgaudin
Member

rgaudin commented May 27, 2024

I like that worker admins can set resources for Zimfarm and don't have to donate all host resources to the worker. This was done on purpose, but it can indeed be reassessed.

> This is probably true for RAM / CPU as well; it would help diagnose problems

I don't see how that would help TBH.

Also, we should keep in mind that there's a major difference between CPU / RAM, which are enforced by Docker, and disk, which is not enforced at all.
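
To make that distinction concrete, here is a hedged sketch using the Docker SDK for Python (illustrative image and values, not Zimfarm's code): memory and CPU limits are passed at container creation and enforced by the kernel, while nothing comparable caps the container's writable layer on the default overlay2 driver.

```python
# Sketch only: CPU / RAM limits are enforced by Docker (via cgroups),
# but there is no equivalent per-container disk quota on overlay2.
import docker

client = docker.from_env()
container = client.containers.run(
    "alpine",                   # illustrative image
    command="sleep 60",
    mem_limit="2g",             # enforced: the container cannot exceed 2 GiB of RAM
    nano_cpus=2_000_000_000,    # enforced: at most 2 CPUs worth of time
    detach=True,
)
# Note the absence of any disk-limit parameter: writable-layer usage is unbounded.
```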

@benoit74
Collaborator Author

> I like that worker admins can set resources for Zimfarm and don't have to donate all host resources to the worker.

I like it too (though I would recommend doing it with VMs so that isolation is more realistic; it is not a requirement, we must support both scenarios).

However, the issue here was the opposite: the disk declared by the owner was 1TB (the whole disk size) while in fact only 30G was available in /var due to an advanced partitioning scheme (the worker owner expected /var to contain only logs). It made it hard to realize that the disk-full errors were real and not a bug.

@rgaudin
Member

rgaudin commented May 27, 2024

Is the doc misleading or was it overlooked?

@benoit74
Collaborator Author

When I read the doc, I understand that Zimfarm data will be stored in ZIMFARM_ROOT, but this is wrong. Or at least the zimit scraper / zimit configuration does not respect a Zimfarm constraint (which I had never heard of before, or at least had forgotten) that everything big must be pushed to /output. It looks like only /output (inside the container) is mounted to {ZIMFARM_ROOT}/data/{uuid}. Everything else is left in /var, not only logs. And zimit is pushing its WARCs to the /crawls subfolder.
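
For reference, that mount could be expressed with the Docker SDK for Python roughly as below (a sketch with hypothetical paths and task id, not Zimfarm's actual code): only the host folder bound to /output lives under ZIMFARM_ROOT, so anything a scraper writes elsewhere ends up in the container's writable layer under Docker's data root.

```python
# Sketch only: bind-mount {ZIMFARM_ROOT}/data/{uuid} onto /output in the scraper container.
import docker

client = docker.from_env()
zimfarm_root = "/srv/zimfarm"  # hypothetical ZIMFARM_ROOT
task_uuid = "1234-abcd"        # hypothetical task id

container = client.containers.run(
    "openzim/zimit",           # scraper image, illustrative
    command=["--url", "https://example.org", "--name", "test"],  # illustrative arguments
    volumes={f"{zimfarm_root}/data/{task_uuid}": {"bind": "/output", "mode": "rw"}},
    detach=True,
)
# Files written outside /output (e.g. under /var inside the container)
# stay in the writable layer, not under ZIMFARM_ROOT.
```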

@rgaudin
Member

rgaudin commented May 27, 2024

> Everything else is left in /var, not only logs.

Yes, everything else is not managed and is thus handled by Docker. That container data, as well as image data, is the reason we recommend changing the default folder. We should not mention logs there as it's misleading IMO.

> And zimit is pushing its WARCs to the /crawls subfolder.

That's a zimit bug that must be opened.

@benoit74
Collaborator Author

> We should not mention logs there as it's misleading IMO.

Yes, this is what misled me, at least.

> That's a zimit bug that must be opened.

After double-checking, it is not, in fact: by default zimit creates its WARCs in a temporary subfolder of /output, see --cwd below.

[zimit::2024-05-27 08:09:51,768] INFO:Running browsertrix-crawler crawl: crawl --failOnFailedSeed --waitUntil load --title Zimit test website --description A test website for Zimit --depth 0 --timeout 90 --lang eng --behaviors autoplay,autofetch,siteSpecific --behaviorTimeout 90 --diskUtilization 90 --url https://tmp.kiwix.org/ci/test-website/ --userAgentSuffix +Zimit [email protected] --mobileDevice Pixel 2 --cwd /output/.tmpsdqb2yyl --statsFilename /output/crawl.json

@kevinmcmurtrie

Docker has multiple storage drivers.

I'm using ZFS because it works really well for Docker and hybrid disk+NVMe storage. Docker uses ZFS snapshots for layers. Compression+dedup is on for Docker layers and $ZIMFARM_ROOT/data has compression. It's not an uncommon setup. df and du won't work like you think they would.
