Issues Adding Chemical Mechanisms #2494
Thanks @Anna-Gerosolina for writing. Would you be able to post the following files to this issue?
Also, from your screenshot it looks like something is very wrong, as the times are out of sequence. I wonder if you have multiple GEOS-Chem instances running simultaneously. We also have some documentation about adding species to the chemical mechanism. See:
Hi, thanks so much for the help, I really appreciate it! I have attached the files. I think I was testing two different methods at once when this was running, but they were in completely different run directories. We tried to parallelize it for a few months, but kept running into segmentation faults and out-of-bounds memory errors, so it is currently running on one node and one core. I have looked at the documentation extensively but keep getting error messages, so I feel like I am missing something, and at this point I am trying to go step by step.
Thanks for your patience @Anna-Gerosolina. Can you also attach the run script that you are using? I'm curious why you are getting multiple echoes if you are running on 1 core. That might be something specific to your machine.
Of course! Thanks again for helping, here is the run script.
Thanks @Anna-Gerosolina. I think we can get rid of the duplicate outputs to the log file if we use:

```bash
#!/bin/bash

#SBATCH -c 12
#SBATCH -N 1
#SBATCH -t 1-0:00
#SBATCH --mem=250GB

###############################################################################
### Sample GEOS-Chem run script for SLURM
### You can increase the number of cores with -c and memory with --mem,
### particularly if you are running at very fine resolution (e.g. nested-grid)
###############################################################################

# Set the proper # of threads for OpenMP
# SLURM_CPUS_PER_TASK ensures this matches the number you set with -c above
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# Run GEOS-Chem. The "time" command will return CPU and wall times.
# Stdout and stderr will be directed to the "GC.log" log file
# (you can change the log file name below if you wish)
srun --hint=nomultithread -c $OMP_NUM_THREADS time -p ./gcclassic >> GC.log

# Exit normally
exit 0
#EOC
```
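As a side note on the script above: the `export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK` line is the key to matching the OpenMP thread count to the cores SLURM allocated. A minimal sketch (hypothetical, separate from the actual run script) for confirming what OpenMP will see, with a fallback of 1 when run outside a SLURM job:

```shell
# Hypothetical sanity check: print the thread count OpenMP will use.
# Falls back to 1 when SLURM_CPUS_PER_TASK is unset (i.e. outside a SLURM job).
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
echo "OpenMP will use $OMP_NUM_THREADS thread(s)"
```

Running this inside an `sbatch -c 12` job should print 12; on a login node it prints 1.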
Thanks so much! I tried the new run script, but unfortunately that resulted in a lot of segmentation faults. I have attached the log and slurm output files. I have never been able to use multiple cores. I think it may be due to the amount of private variables resulting in memory issues, but I have not figured out the reason with certainty.
Thanks @Anna-Gerosolina. I am wondering if this is something specific to your cluster. It might be worth reporting the issue to your research computing folks and asking for suggestions.
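One thing worth trying for OpenMP segmentation faults: GEOS-Chem keeps many arrays in thread-private storage, so an undersized per-thread stack can segfault even when the node has plenty of memory. The GEOS-Chem documentation suggests settings along these lines (the 500m value is the commonly quoted starting point, not a requirement) before the `srun` line in the run script:

```shell
# Remove the shell's stack-size limit. On some clusters a hard limit
# prevents this, hence the fallback message instead of a hard failure.
ulimit -s unlimited 2>/dev/null || echo "could not raise the stack limit"

# Give each OpenMP thread a larger private stack (500 MB is the value
# commonly suggested in the GEOS-Chem docs; adjust as needed).
export OMP_STACKSIZE=500m
echo "OMP_STACKSIZE is $OMP_STACKSIZE"
```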
Of course, thanks for taking a look at it! I have already tried contacting our Research Services in order to implement MPI, but we have been unsuccessful. My goal is just to get the jobs to run, even if that requires using only one core. I am able to run them using only one core so long as I am not trying to add new molecules/mechanisms to the source code. I will contact Research Services again and see if there are any more suggestions that they have. I will probably have more questions after meeting with them :) Thanks again!
@Anna-Gerosolina you shouldn't need to use MPI if you are running GEOS-Chem Classic only. What type of architecture are you running on? Is it an x86_64 chipset? You can tell by using this command:

```console
$ uname -a
Linux holy8a24302.rc.fas.harvard.edu 4.18.0-513.18.1.el8_9.x86_64 #1 SMP Wed Feb 21 21:34:36 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
```

which gives you the OS, node name, Linux kernel ID, etc.
Hi, thanks for reaching out! I am using:

```console
Linux l001 4.18.0-477.27.1.el8_8.x86_64 #1 SMP Thu Aug 31 10:29:22 EDT 2023 x86_64 x86_64 x86_64 GNU/Linux
```

And sorry, I am not trying to use MPI across nodes, I am just trying to use shared-memory (OpenMP) parallelization across cores :). At this point, however, I am totally fine trying to use just one core; it is more so that I am trying to get a normal termination message when putting a dummy variable in as a species.
Your name
Anna Gerosolina
Your affiliation
Boston College
Please provide a clear and concise description of your question or discussion topic.
My goal is to add new molecules, mechanisms, and rate laws to GEOS-Chem, but I am having lots of trouble figuring out how to do it. I tried a few things that resulted in errors, so I am backtracking and trying everything one step at a time. I am currently trying to just add a dummy molecule to species_database.yml. My GC.log is not showing any errors, nor is the job hitting a wall time, but the simulation is not completing (below is the end of the file). Any guidance at all is appreciated!
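For reference, a hedged sketch of what a minimal dummy-species entry in species_database.yml might look like. The key names here (FullName, Formula, MW_g, Is_Advected, Is_Gas) and the molecular-weight value are illustrative; copy the exact key set from a similar existing species in your version of the file, since the required fields vary by species type:

```yaml
# Hypothetical dummy species entry; cross-check the keys against an
# existing entry in species_database.yml rather than trusting this sketch.
DUMMY:
  FullName: Dummy tracer for testing
  Formula: DUMMY
  MW_g: 28.0
  Is_Advected: true
  Is_Gas: true
```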