Sensitivity of Redi to critical slope parameter #70

Open
vanroekel opened this issue Sep 21, 2023 · 11 comments

@vanroekel
Collaborator

This new issue builds on a comment by @dengwirda in #65: the MPAS-Ocean implementation of Redi is sensitive to the critical slope parameter in the parabolic bowl test case even when the isopycnal slope is well below the maximum value. For example, the test case has a slope of 0.005, yet the solution changes when the critical slope is varied from 0.1 to 0.5, which should have no effect. Here is the relevant comment from @dengwirda:

As a follow-up on the slope calculation aspects here: there currently appears to be a (seemingly unreasonable) dependence on the slope limit (config_Redi_maximum_slope) in the parabolic bowl test case. This problem should set up simple, linearly sloping isopycnals via an uncontroversial stratification, and the slope limit should not be needed to maintain stability. It should therefore be possible to set config_Redi_maximum_slope larger than the actual slope defined in the test case and obtain consistent output (e.g. if the defined isopycnal slope is 0.005, setting config_Redi_maximum_slope = 0.01, 0.05, 0.10, 0.20, etc. should give the same answers).

Currently, though, the slope limit appears to become active immediately, and both it and the monotone limiter in https://github.com/E3SM-Ocean-Discussion/E3SM/pull/53 are needed to keep the simulation on the rails. To me this suggests revisiting the slope triad formulations with respect to instabilities.

There are two things that I've tried:

1. The safe_slope function included in this PR, which bounds the slopes to [-1, +1] and prevents divide-by-zero errors while removing the need for a small floating-point tolerance, which itself seems to be responsible for weird behaviour around 0 (see the sketch below).
2. Evaluating the slope triads at the top and bottom layer interfaces rather than the layer midplanes, as per: https://github.com/dengwirda/E3SM/blob/6c9aa01fdd11981f699b0d48c54665389cef9d66/components/mpas-ocean/src/shared/mpas_ocn_gm.F#L332. This averaging does appear to significantly improve the instability in the parabolic bowl case (allowing it to run stably with a config_Redi_maximum_slope larger than the isopycnal slope itself). It's unclear whether this change is necessarily compatible with the slope triad definitions themselves, though.

Overall, this behaviour leaves me less than confident in the slope formulation in general, and suggests a further look.
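
For reference, here is a minimal sketch of what a bounded-slope helper along these lines could look like. This is not the actual safe_slope from the PR; the argument names and the sign convention (S = -drho/dx / drho/dz) are assumptions, and the point is only to show how the slope can be confined to [-1, +1] with no division by a near-zero denominator and no floating-point tolerance.

```fortran
module slope_bounds
   use, intrinsic :: iso_fortran_env, only: dp => real64
   implicit none
contains

   ! Hypothetical sketch, not the PR's safe_slope: return the isopycnal
   ! slope S = -d(rho)/dx / d(rho)/dz (the usual convention), clamped to
   ! [-1, +1], dividing only when the denominator has the larger magnitude.
   elemental function safe_slope(drhodx, drhodz) result(slope)
      real(dp), intent(in) :: drhodx   ! horizontal density gradient
      real(dp), intent(in) :: drhodz   ! vertical density gradient
      real(dp) :: slope

      if (abs(drhodx) < abs(drhodz)) then
         ! |S| < 1 and drhodz /= 0, so the division is well conditioned
         slope = -drhodx / drhodz
      else if (drhodx == 0.0_dp) then
         ! both gradients vanish: treat the surface as flat
         slope = 0.0_dp
      else
         ! |S| >= 1 (including drhodz == 0): clamp to +/- 1 with the
         ! sign the unclamped ratio would have had
         slope = -sign(1.0_dp, drhodx) * sign(1.0_dp, drhodz)
      end if
   end function safe_slope

end module slope_bounds
```

The clamp at +/- 1 is just a hard bound; whatever tapering config_Redi_maximum_slope applies would still sit on top of it.
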
@vanroekel vanroekel self-assigned this Sep 21, 2023
@vanroekel
Collaborator Author

@dengwirda I took a hard look through the slope calculation and the tracer_hmix_redi routine, including working through the details with paper and pencil. Everything checked out.

I then ran a simulation with no slope taper and saw that salinity and temperature stay fixed through the 6 months of the run (but I did have your quasi-monotone limiter turned on).

I next flipped on the slope limiter and the results stayed identical. These tests included my new limiting approach, so perhaps we are okay now? I forget whether you ever tried your slope sensitivity tests with my limiter PR.

@dengwirda
Collaborator

@vanroekel, sounds good! As you say, the sensitivity to the slope limit that I saw was with the previous approach to tapering/limiting, so it's encouraging to know those changes of yours have improved things on this front.

@vanroekel
Collaborator Author

Well, shoot: my initial assessment was wrong. There is still sensitivity, @dengwirda; my plots from earlier today were incorrect. I reran with analysis fixes and there are definite differences, and they emerge at the bottom. What I see is that the bottom slope starts to grow (I don't understand why), and this causes the sensitivity. I'm looking into this a bit further now.

@vanroekel
Collaborator Author

It seems like this may be a stability issue. I reran the tests with a timestep of 5 minutes instead of 30 minutes and the sensitivity to slope disappeared. I'm using rediKappa = 3600, so perhaps that is not surprising, but dc is 40,000 m, so I didn't expect a stability issue.
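
Not from the thread, but a back-of-envelope check of explicit-diffusion stability numbers for those values may be useful here. kappa, dc, the isopycnal slope, and the two timesteps are the ones quoted above; the layer thickness dz = 10 m is an assumed, illustrative value. Under that assumption the horizontal Redi flux is nowhere near a 1D explicit-diffusion limit of roughly 0.5, but the effective vertical diffusivity K33 ~ kappa * slope^2 is over that limit at dt = 30 min and under it at dt = 5 min if it is applied explicitly or with a lagged state.

```fortran
program redi_stability_sketch
   ! Back-of-envelope stability numbers for the parabolic bowl values quoted
   ! above: kappa = 3600 m^2/s, dc = 40,000 m, isopycnal slope = 0.005.
   ! The layer thickness dz = 10 m is an ASSUMED illustrative value.
   use, intrinsic :: iso_fortran_env, only: dp => real64
   implicit none
   real(dp), parameter :: kappa = 3600.0_dp    ! Redi diffusivity [m^2/s]
   real(dp), parameter :: dc    = 40000.0_dp   ! cell spacing [m]
   real(dp), parameter :: slope = 0.005_dp     ! isopycnal slope [-]
   real(dp), parameter :: dz    = 10.0_dp      ! layer thickness [m], assumed
   real(dp), parameter :: dt(2) = [1800.0_dp, 300.0_dp]   ! 30 min and 5 min [s]
   real(dp) :: k33
   integer  :: i

   ! effective vertical diffusivity from the (3,3) entry of the Redi tensor
   k33 = kappa * slope**2                      ! ~0.09 m^2/s

   do i = 1, size(dt)
      print '(a,f7.1,a,es10.3,a,f6.2)', 'dt = ', dt(i), &
         ' s,  horizontal kappa*dt/dc^2 = ', kappa*dt(i)/dc**2, &
         ',  vertical k33*dt/dz^2 = ', k33*dt(i)/dz**2
   end do
   ! dt = 30 min: horizontal ~4e-3, vertical ~1.6  (above ~0.5)
   ! dt =  5 min: horizontal ~7e-4, vertical ~0.27 (below ~0.5)
end program redi_stability_sketch
```

If something like that is what's happening, it would point at the vertical (K33) part of the operator rather than the horizontal triads.
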

@vanroekel
Collaborator Author

For comparison, here is the parabolic bowl salinity after 2 months for dt = 30 min:
[image: parabolic bowl salinity after 2 months, dt = 30 min]

And here is the same configuration except dt = 5 min (again at 2 months):

[image: parabolic bowl salinity after 2 months, dt = 5 min]

@dengwirda
Collaborator

Okay, interesting. I suppose it's possible that kappa = 3600 is exceeding a CFL-type limit, though there would only be a factor of 2 involved beyond the kappa = 1800 at 30 km that's been used in real runs, so I hope this isn't the case...

Is this using a linear or nonlinear EoS? In the nonlinear case there will be a small feedback which, I believe, may cause a drift away from the linear isopycnals.

There do appear to be some small kinks in the contours near the bathymetry with dt = 5 min too, so I wonder if there's still an issue to pursue regarding the masking?

@vanroekel
Collaborator Author

Yes, I definitely agree there is still something funny around the bathymetry. I've had a hard look at the masking and it seems to be behaving as expected. I've stepped through slopeTriadUp and slopeTriadDown one timestep at a time, with and without limiting, and it seems to be right, but one thing that surprised me (though maybe it shouldn't?) is that the masking of the bathymetry is always at the same index (index 1).

One place I'm not super certain of yet is the masking interplay with k33. I haven't yet convinced myself that it is doing the right thing near the bottom. I'll have a look at that next.
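
For what it's worth, here is the kind of bottom-interface condition one might check there, written as a standalone sketch. The names (k33, maxLevelCell, nCells, nVertLevels) are placeholders in the spirit of the MPAS variables, not the actual routine; the point is only that K33 lives on layer interfaces and should vanish at the surface and seafloor interfaces regardless of what the interior triads produce.

```fortran
! Hypothetical standalone sketch, not MPAS-Ocean source: the no-flux
! condition being checked for the vertical (K33) part of the Redi tensor.
! K33 is taken to live on layer interfaces k = 1 .. kmax+1 for a column
! with kmax active layers.
subroutine zero_k33_boundaries(nCells, nVertLevels, maxLevelCell, k33)
   use, intrinsic :: iso_fortran_env, only: dp => real64
   implicit none
   integer,  intent(in)    :: nCells, nVertLevels
   integer,  intent(in)    :: maxLevelCell(nCells)        ! deepest active layer per column
   real(dp), intent(inout) :: k33(nVertLevels+1, nCells)  ! interface-centered diffusivity
   integer :: iCell, kmax

   do iCell = 1, nCells
      kmax = maxLevelCell(iCell)
      k33(1, iCell)       = 0.0_dp   ! sea-surface interface: no flux
      k33(kmax+1:, iCell) = 0.0_dp   ! seafloor interface and below: no flux
   end do
end subroutine zero_k33_boundaries
```
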

@vanroekel
Collaborator Author

@dengwirda and @mark-petersen, I've been poking at this bottom sensitivity issue and think I've made some progress. Here is a run with a couple of changes (notably vmix inside the supercycling): a 3-month simulation with a 30-minute timestep.

[image: 3-month run, 30-minute timestep, with vmix inside the supercycling]

Here is the original as a reminder (same length, same timestep):

[image: original configuration, same length and timestep]

This is certainly significantly better, but it requires a lot more calls to vmix. I was thinking a possible next step is to try this test with Hyun's AB2 option, as it gets rid of the supercycle loop.
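
As a schematic only (not the actual MPAS-Ocean time-stepping code, and with simplified stage names), the ordering change being tested looks roughly like this: the implicit vertical mixing solve, which carries the K33 contribution, runs inside the supercycle loop instead of once per long step on a lagged state.

```fortran
program vmix_ordering_sketch
   ! Schematic only, not MPAS-Ocean code: contrasts the original ordering
   ! (implicit vertical mixing applied once per long step, with a lagged K33)
   ! against the modified ordering tested above (vmix inside the supercycle
   ! loop). The stage names are simplified placeholders.
   implicit none
   integer, parameter :: nSuperCycle = 4
   integer :: n

   print '(a)', 'original ordering:'
   do n = 1, nSuperCycle
      call stage('  tracer update (Redi horizontal fluxes + advection), subcycle', n)
   end do
   call stage('  implicit vertical mixing (includes K33), once, lagged state', 0)

   print '(a)', 'modified ordering:'
   do n = 1, nSuperCycle
      call stage('  tracer update (Redi horizontal fluxes + advection), subcycle', n)
      call stage('  implicit vertical mixing (includes K33), subcycle', n)
   end do

contains

   subroutine stage(what, n)
      ! placeholder "stage" that only records what would run and when
      character(*), intent(in) :: what
      integer,      intent(in) :: n
      if (n > 0) then
         print '(a,1x,i0)', what, n
      else
         print '(a)', what
      end if
   end subroutine stage

end program vmix_ordering_sketch
```
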

@vanroekel
Collaborator Author

Also, just to note: it's still not perfect. There is some noise along the bottom-right boundary, but this seems like a big step.

@vanroekel
Collaborator Author

Sorry, one further note: the runs in the comment just above, where the T&S look much better, do not include the quasi-monotone limiter or slope limiting, so the model is running with no limiting at all. Without vmix in the supercycle loop, the run crashes pretty quickly.

@dengwirda
Collaborator

Interesting, @vanroekel! It makes sense to me that lagging updates to the K33 part of the tensor may introduce some asymmetry, and it looks like these results confirm that instability can come from that. Hopefully this explains the dt = 5 min vs dt = 30 min differences above, which had definitely seemed strange.

I believe that even the RK4 implementation hoists vmix outside of the substage updates, so I suspect this time-splitting error will be common across all of the MPAS configurations, except the AB2 solver, as you say.

If I remember correctly, we didn't see a big change in global runs when moving vmix inside the split-explicit cycle, but perhaps with so many other changes it's worth revisiting that?
