Cleanup directive does not work when `bucket-dir` is specified
#5373
Duplicate of #2683
The current cleanup doesn't work with remote storage. Check out nf-boost for a more robust cleanup (also something we plan to merge into Nextflow eventually).
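For readers landing here: enabling the nf-boost cleanup is a config change. A minimal sketch of a `nextflow.config`, based on the `boost.cleanup` option mentioned later in this thread (check the nf-boost documentation for the current syntax of your plugin version):

```groovy
// nextflow.config (sketch; verify against the nf-boost docs)
plugins {
    id 'nf-boost'       // pulls the nf-boost plugin at run time
}

boost {
    cleanup = true      // delete large intermediate files as soon as
                        // no downstream task needs them
}
```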
Thank you. I did not notice the duplicate bug report. I tried your suggestion by adding this to

It does not seem to have worked fully. Staged files and

Also, when I changed my larger pipeline to use boost.cleanup and reran it, I received this crash:

As noted in the summary, this is not important; I just thought I would share my experience.
Good catch, the null pointer bug is easy to fix. I'm still deciding how to handle the logs in the cleanup, but in general we recommend using a cleanup policy on the underlying filesystem or object storage to clean up things like logs and helper files. The nf-boost cleanup is mainly intended to delete large intermediate files during the run, in order to prevent cost and storage overruns. But many users like to keep the logs, and deleting all of those little files is better handled by retention policies rather than by the pipeline run.
You also mentioned staged inputs. The problem is that it's difficult to know when a staged input is no longer used. You might be able to do some DAG analysis to figure it out, but sometimes people use the same input file (e.g. a reference genome or AI model) across many tasks and that complicates things. So this use case is also better covered by a retention policy for now.
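The DAG analysis alluded to above can be sketched as simple reference counting over the task list: a staged input is safe to delete only once its last consumer has run. This is a hypothetical illustration, not nf-boost's actual algorithm, and the task/file names are made up:

```python
# Sketch: when can a staged input be deleted?
# Count how many tasks consume each file, then decrement as tasks
# complete; a file is deletable when its count reaches zero.
from collections import Counter

def plan_deletions(tasks):
    """tasks: list of (task_name, input_files), in execution order.
    Returns {task_name: files safe to delete after that task runs}."""
    remaining = Counter()
    for _, inputs in tasks:
        remaining.update(inputs)

    deletions = {}
    for name, inputs in tasks:
        freed = []
        for f in inputs:
            remaining[f] -= 1
            if remaining[f] == 0:
                freed.append(f)   # no later task needs this file
        deletions[name] = freed
    return deletions

# A shared reference (genome.fa) stays alive until its last consumer.
tasks = [
    ("align_a", ["genome.fa", "a.fq"]),
    ("align_b", ["genome.fa", "b.fq"]),
    ("merge",   ["a.bam", "b.bam"]),
]
print(plan_deletions(tasks))
# → {'align_a': ['a.fq'], 'align_b': ['genome.fa', 'b.fq'], 'merge': ['a.bam', 'b.bam']}
```

This also shows why the shared-input case complicates things: the reference genome only becomes deletable after the *last* aligner finishes, which requires global knowledge of the DAG rather than per-task bookkeeping.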
Thank you. That actually makes sense: you may want to leave behind the command scripts and log files, and even the staged files. This makes it ideal to reuse the same workdir across multiple workflows without having to worry about bloat over time, or configuring a TTL-like mechanism for cleanup, which is easy in the cloud but not so easy in local HPC environments. Maybe whenever the feature makes it in, this can be an explicit point in the documentation. I will leave it to you to close the issue or keep it open to track the null pointer bug. Thanks!
Bug report
Expected behavior and actual behavior
When `cleanup = true` is set for a workflow that has specified a `bucket-dir` and has files that need to be cleaned up, the cleanup operation fails.

Steps to reproduce the problem
main.nf:
nextflow.config:
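(The reporter's actual files did not survive the page extraction. As a hypothetical reconstruction matching the description above, with a made-up process name and bucket path, a minimal reproducer might look like the following; the only essential ingredients are `cleanup = true` and a remote `-bucket-dir`.)

```groovy
// main.nf (hypothetical minimal pipeline)
process sayHello {
    output:
    path 'hello.txt'

    script:
    """
    echo hello > hello.txt
    """
}

workflow {
    sayHello()
}
```

```groovy
// nextflow.config (hypothetical)
cleanup = true
```

Run with a remote work directory, e.g. `nextflow run main.nf -bucket-dir s3://my-bucket/work` (bucket path is illustrative).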
Program output
Relevant part of the log file:
Environment
Additional context
This is not really important, as users can always specify separate paths for each workflow and clean up manually after a successful run.