
MultiQC fails with latest dev release #83

Open
alexblaessle opened this issue Sep 6, 2024 · 0 comments
Labels
bug Something isn't working

Description of the bug

multiqc \
    --force \
    --config multiqc_config.yml \
    .

cat <<-END_VERSIONS > versions.yml
"NFCORE_SCDOWNSTREAM:SCDOWNSTREAM:MULTIQC":
multiqc: $( multiqc --version | sed -e "s/multiqc, version //g" )
END_VERSIONS
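For reference, the `sed` call in the heredoc above only strips the version prefix from the `multiqc --version` output; a minimal sketch with a hard-coded sample string (an assumed output format, used here because `multiqc` itself was the failing command):

```shell
# Hypothetical sample of `multiqc --version` output; we feed sed a fixed
# string instead of running the real command.
ver_line="multiqc, version 1.22.2"

# Same sed expression as in the versions.yml heredoc above.
echo "$ver_line" | sed -e "s/multiqc, version //g"
```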

Command exit status:
1

Command output:
/// MultiQC 🔍 v1.22.2

          config | Loading config settings from: multiqc_config.yml
          config | Loading config settings from: multiqc_config.yml
   version_check | MultiQC Version v1.24.1 now available!
     file_search | Search path: /scratch/nextflow/work/blaessle/90/09f340333a89b9517a5a92c3d44e27

  custom_content | sizes: Found 1 samples (table)
  custom_content | 1254_0002_preprocessed: Found 1 sample (image)
  custom_content | 1254_0001_raw: Found 1 sample (image)
  custom_content | 1254_0001_preprocessed: Found 1 sample (image)
  custom_content | nf-core-scdownstream-summary: Found 1 sample (html)
  custom_content | 1254_0002_raw: Found 1 sample (image)
  custom_content | nf-core-scdownstream-methods-description: Found 1 sample (html)
  custom_content | merged: Found 1 sample (image)

-[nf-core/scdownstream] Pipeline completed with errors-
[e6/2a3c52] NOTE: Process NFCORE_SCDOWNSTREAM:SCDOWNSTREAM:MULTIQC terminated with an error exit status (1) -- Execution is retried (2)
ERROR ~ Error executing process > 'NFCORE_SCDOWNSTREAM:SCDOWNSTREAM:MULTIQC'
Caused by: Process NFCORE_SCDOWNSTREAM:SCDOWNSTREAM:MULTIQC terminated with an error exit status (1)
Command error:
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/kaleido/scopes/base.py", line 293, in _perform_transform
self._ensure_kaleido()
File "/usr/local/lib/python3.11/site-packages/kaleido/scopes/base.py", line 198, in _ensure_kaleido
raise ValueError(message)
ValueError: Failed to start Kaleido subprocess. Error stream:

[0906/012724.393580:WARNING:resource_bundle.cc(431)] locale_file_path.empty() for locale
[0906/012724.426076:WARNING:resource_bundle.cc(431)] locale_file_path.empty() for locale
[0906/012724.428809:WARNING:resource_bundle.cc(431)] locale_file_path.empty() for locale
[0906/012724.456874:WARNING:discardable_shared_memory_manager.cc(194)] Less than 64MB of free space in temporary directory for shared memory files: 0
[0906/012724.598061:ERROR:platform_shared_memory_region_posix.cc(250)] Creating shared memory in /tmp/61220341/.org.chromium.Chromium.ZYrLX4 failed: No such file or directory (2)
[0906/012724.598107:ERROR:platform_shared_memory_region_posix.cc(253)] Unable to access(W_OK|X_OK) /tmp/61220341: No such file or directory (2)
Received signal 6
#0 0x55d311962d79 base::debug::CollectStackTrace()
#1 0x55d3118e0633 base::debug::StackTrace::StackTrace()
#2 0x55d31196295b base::debug::(anonymous namespace)::StackDumpSignalHandler()
#3 0x14af88308fd0 (/usr/lib/x86_64-linux-gnu/libc.so.6+0x3bfcf)
#4 0x14af88357d3c (/usr/lib/x86_64-linux-gnu/libc.so.6+0x8ad3b)
#5 0x14af88308f32 gsignal
#6 0x14af882f3472 abort
#7 0x55d31190e30a base::internal::OnNoMemoryInternal()
#8 0x55d31190e329 base::(anonymous namespace)::OnNoMemory()
#9 0x55d31190e319 base::TerminateBecauseOutOfMemory()
#10 0x55d3118f80ab base::FieldTrialList::InstantiateFieldTrialAllocatorIfNeeded()
#11 0x55d3118f8239 base::FieldTrialList::CopyFieldTrialStateToFlags()
#12 0x55d310453f82 content::GpuProcessHost::LaunchGpuProcess()
#13 0x55d310452910 content::GpuProcessHost::Init()
#14 0x55d3104526c2 content::GpuProcessHost::Get()
#15 0x55d31086fb6e base::internal::Invoker<>::RunOnce()
#16 0x55d311926306 base::TaskAnnotator::RunTask()
#17 0x55d311937cf6 base::sequence_manager::internal::ThreadControllerWithMessagePumpImpl::DoWorkImpl()
#18 0x55d3119379ea base::sequence_manager::internal::ThreadControllerWithMessagePumpImpl::DoWork()
#19 0x55d311984899 base::MessagePumpLibevent::Run()
#20 0x55d31193859b base::sequence_manager::internal::ThreadControllerWithMessagePumpImpl::Run()
#21 0x55d3119119bd base::RunLoop::Run()
#22 0x55d3102f9f18 content::BrowserProcessSubThread::IOThreadRun()
#23 0x55d311950874 base::Thread::ThreadMain()
#24 0x55d311972caa base::(anonymous namespace)::ThreadFunc()
#25 0x14af88356044 (/usr/lib/x86_64-linux-gnu/libc.so.6+0x89043)
#26 0x14af883d661c (/usr/lib/x86_64-linux-gnu/libc.so.6+0x10961b)
r8: 0000000000000000 r9: 000036ec1dbb720f r10: 0000000000000008 r11: 0000000000000246
r12: 0000000000000006 r13: 000055d30e286eb0 r14: 000036ec1db11cc0 r15: 000036ec1dbbe240
di: 00000000000000b4 si: 00000000000000bc bp: 000014af879666c0 bx: 00000000000000bc
dx: 0000000000000006 ax: 0000000000000000 cx: 000014af88357d3c sp: 000014af87965070
ip: 000014af88357d3c efl: 0000000000000246 cgf: 002b000000000033 erf: 0000000000000000
trp: 0000000000000000 msk: 0000000000000000 cr2: 0000000000000000
[end of stack trace]
Calling _exit(1). Core file will not be generated.
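The Chromium messages above ("Unable to access(W_OK|X_OK) /tmp/61220341: No such file or directory") suggest the Kaleido subprocess aborted because its temp directory did not exist inside the container. A minimal check for the same condition (not part of the pipeline; just a way to verify the environment a process sees):

```python
import os
import tempfile

# tempfile.gettempdir() honours $TMPDIR, which is what Chromium's
# shared-memory code was trying to use in the trace above.
tmp = tempfile.gettempdir()

# Reproduce the failing access(W_OK|X_OK) check from the error log.
ok = os.path.isdir(tmp) and os.access(tmp, os.W_OK | os.X_OK)
print(f"{tmp}: {'writable' if ok else 'NOT writable'}")
```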

Work dir:
/scratch/nextflow/work/blaessle/90/09f340333a89b9517a5a92c3d44e27

Tip: when you have fixed the problem you can continue the execution adding the option -resume to the run command line

-- Check '.nextflow.log' file for details
ERROR ~ Pipeline failed. Please refer to troubleshooting docs: https://nf-co.re/docs/usage/troubleshooting


Command used and terminal output

nextflow run ~/scdownstream/ --input ../input_small.csv -c gpu_config.config  --ambient_removal cellbender -profile cluster --outdir out/

Relevant files

Config file used:

process {
    executor = 'slurm'

    withName: '.*:.*:CELLBENDER_REMOVEBACKGROUND' {
        cpus           = 1
        debug          = true
        container      = 'docker://us.gcr.io/broad-dsde-methods/cellbender:0.3.2'
        queue          = "gpu"
        clusterOptions = '--gres=gpu:1'
        ext.args       = { "--epochs ${params.cellbender_epochs} --cuda" }
    }
}

singularity {
    enabled    = true
    runOptions = '--no-mount tmp --writable-tmpfs --nv --env CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES --env ROCR_VISIBLE_DEVICES=$ROCR_VISIBLE_DEVICES --env ZE_AFFINITY_MASK=$ZE_AFFINITY_MASK --env NVIDIA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES'
}
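The `--no-mount tmp` in the runOptions above removes the container's /tmp, which matches the shared-memory failure in the MultiQC log. A possible workaround (an assumption on my part, not something verified in this report) is to bind a writable host directory as /tmp instead of unmounting it:

```groovy
// Hypothetical alternative singularity scope; the bind source path is an
// example and would need to exist on the cluster's compute nodes.
singularity {
    enabled    = true
    runOptions = '--writable-tmpfs --nv --bind /scratch/$USER/tmp:/tmp ' +
                 '--env CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES'
}
```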

System information

Nextflow version: 23.10.0
Hardware: HPC cluster
Container engine: singularity
OS: Linux
Version of nf-core/scdownstream: Latest dev

@alexblaessle alexblaessle added the bug Something isn't working label Sep 6, 2024