Introduce prognostic updraft velocity to saSAS and C3 convection schemes #2567
base: develop
Conversation
@jkbk2004 I keep seeing failures in these four tests, even though you can successfully run them with this PR. I don't know how to upload a test_changes.list and move forward with this PR; if you have time, could you help me with the regression tests? Thank you.
@lisa-bengtsson I can try to do this on Hera if you'd like.
@grantfirl thank you, that would be much appreciated!
OK, RTs running on Hera right now. FYI, I did need to bring in the latest develop branches to your fv3atm and ufs-weather-model branches. Perhaps that was the issue? I'll let you know when it's finished.
@grantfirl thanks, I did update them now, and will try again. I've had some trouble with the same four cases for a couple of weeks; @jkbk2004 ran them earlier and confirmed that they passed, so I don't know why I'm having trouble with them on my end. I'd be interested to see if your RT suite passes.
@lisa-bengtsson I'm seeing the same failures. The runs completed, but the results were not identical.
@lisa-bengtsson If you really don't think that those tests should produce different results, it's possible that the baselines on Hera for those tests are wrong. As soon as the entire rt.conf finishes up, I'll check out the top-of-develop and run only one of the tests that is failing to see if the baselines are OK.
@lisa-bengtsson So, the develop branch doesn't fail at least for the
Anyway, these are my best two hypotheses at the moment. One or both could be wrong!
The most common cause is numbers that are close to, but not exactly, zero. That can cause problems when optimization is changed to round subnormal (AKA denormal) numbers to zero. It also causes problems when switching to single precision in physics. A good way to detect problems like this is to look for such near-zero values in the fields that differ.
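As an illustration of that check, here is a minimal Python sketch (the helper name is my own, not from this PR; NumPy assumed) that flags nonzero values small enough to be flushed to zero by flush-to-zero optimization or to underflow in a single-precision build:

```python
import numpy as np

# Hypothetical helper: flag nonzero values small enough to be flushed to zero
# by flush-to-zero/denormals-are-zero optimization, or to underflow when the
# physics is compiled in single precision.
TINY_NORMAL_F32 = np.finfo(np.float32).tiny   # smallest normal single, ~1.18e-38

def suspicious_near_zero(field):
    """Boolean mask of entries that are nonzero but below the single-precision
    normal range (this also catches all double-precision subnormals)."""
    a = np.abs(np.asarray(field, dtype=np.float64))
    return (a > 0.0) & (a < TINY_NORMAL_F32)

field = np.array([0.0, 1e-3, 5e-45, 1e-310, 2.0])
print(suspicious_near_zero(field))   # flags the two tiny nonzero entries
```

Exact zeros are deliberately not flagged: only values that exist in double precision but vanish under the changed arithmetic are suspects.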
@grantfirl @SamuelTrahanNOAA Thanks, I will take a look. Grant, did your test of this PR also fail in the control_p8_faster intel test? The reason I'm asking is that I had this problem before, but then @jkbk2004 ran the RTs using this PR and the tests passed.
Yes, my test also had failures for:
Ok, thanks - let me see if I can understand it better following Sam's clues.
@SamuelTrahanNOAA Do you think it could be possible that the RTs do not reproduce due to code inside the condition "if progomega", even if progomega is false in all the tests that fail? There aren't many lines of code outside of this statement that were updated.
An update is that the options -DDEBUG=YES -DCCPP_32BIT=YES don't work for the coupled tests, as compilation of MOM6 fails. Instead, I tried running coupled_debug_p8 as an atmosphere-only test, adding in the physics tested here for gfsv17 and compiling with -DDEBUG=YES -DCCPP_32BIT=YES. It compiles and runs without problems. So unfortunately this option didn't provide any additional information as to why the comparison with the baseline fails when using the "DFASTER" option.
Perhaps I'm misunderstanding. @grantfirl - You said the test "failed." What do you mean by that? Is it crashing? Or does it run to completion and produce different results? If it's running to completion and producing different results, that's expected when there are changes to source code files used by the test. Changing the code can change the optimizations used in that code. That's true even if the changes are hidden behind an "if" block. For example, something that was vectorized may no longer be vectorized due to a new branch. The code shouldn't crash, though. All the tests should run to completion.
@SamuelTrahanNOAA no tests are crashing. But these tests fail in comparing the runs with the baselines: cpld_control_p8_faster intel. For comparison, the tests cpld_control_p8 and cpld_debug_p8 are OK; it is just the jobs with the DFASTER flag that give different results compared to the baselines.
Is there some reason why those tests wouldn't change results? If they run subroutines that you've changed, then results are expected to change. The only true way to confine the tests to your changes is to disable optimization entirely. Otherwise, you have to accept that changing code in a subroutine may change the output of any test that uses the subroutine.
@SamuelTrahanNOAA @lisa-bengtsson I think that the issue is that there have been MANY PRs that have come through that change physics source files in similar ways without triggering differences in these tests, which makes it harder to understand why this PR does.
@grantfirl @SamuelTrahanNOAA thanks for explaining. Since the other tests pertaining to GFSv17 physics (or RTs using the samf* code) are bit-wise reproducible (including the debug tests), I think it supports Sam's argument that the differences pertain to optimization. I do change some intents, and add arguments as input/output.
Commit Queue Requirements:
Description:
This development incorporates a new prognostic updraft velocity equation into the saSAS deep and shallow convection schemes. It is introduced as a module and will be easily added to the C3 scheme as well, once the modularized C3 code is in place.
The equation describes the time evolution of convective updraft velocity, with buoyancy as a source term, quadratic damping due to aerodynamic drag and mixing/diffusion, and linear damping from wind shear.
The equation is discretized using an implicit approach and solved as a quadratic equation. For implementation details, please see: https://docs.google.com/document/d/13VH7DV4erJcuuF_-dUGpplk9uYHF6dalfx8l1fAIxe8/edit?tab=t.0
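The implicit step described above can be sketched as follows. This is a minimal illustration, assuming a model equation dw/dt = B - a*w^2 - b*w; the coefficient and function names are hypothetical, and the scheme's actual formulation is in the linked design document.

```python
import math

def updraft_implicit_step(w_n, buoy, a_quad, b_lin, dt):
    """One backward-Euler step of dw/dt = B - a*w**2 - b*w, i.e. solve
    a_quad*dt*w**2 + (1 + b_lin*dt)*w - (w_n + buoy*dt) = 0
    for w = w^{n+1}, taking the non-negative root.

    Illustrative names: buoy ~ buoyancy source B, a_quad ~ quadratic
    damping (drag/mixing), b_lin ~ linear damping (shear)."""
    if a_quad == 0.0:                 # no quadratic damping: linear update
        return (w_n + buoy * dt) / (1.0 + b_lin * dt)
    A = a_quad * dt
    B = 1.0 + b_lin * dt
    C = -(w_n + buoy * dt)
    disc = B * B - 4.0 * A * C        # >= B**2 when w_n and buoy are >= 0
    return (-B + math.sqrt(disc)) / (2.0 * A)
```

Substituting the result back into the discretized equation gives a zero residual by construction, which is a convenient sanity check; the implicit form also keeps w non-negative for non-negative w_n and buoyancy.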
The prognostic updraft velocity is used in the prognostic closure if progsigma = true and progomega = true. It also replaces the diagnostic updraft velocity in the adjustment time-scale computation if progomega = true (regardless of progsigma). Here I implement the scheme using the default setting of progomega = false for further testing and evaluation, so no regression tests should fail and no new baselines are required.
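The flag combinations above can be summarized in a small sketch (function and variable names here are hypothetical; the real implementation is Fortran inside the saSAS scheme):

```python
# Sketch of the described flag logic: which updraft velocity feeds the
# prognostic closure and the adjustment time-scale computation.
def updraft_velocity_source(progsigma, progomega, w_prog, w_diag):
    # the prognostic closure uses the prognostic w only if both flags are true
    closure_w = w_prog if (progsigma and progomega) else None
    # the adjustment time scale uses the prognostic w whenever progomega is true
    timescale_w = w_prog if progomega else w_diag
    return closure_w, timescale_w
```

With the default progomega = false, both consumers fall back to existing behavior, which is why baselines are expected to be unchanged.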
Commit Message:
Priority:
Git Tracking
UFSWM:
Sub component Pull Requests:
UFSWM Blocking Dependencies:
Changes
Regression Test Changes (Please commit test_changes.list):
New baselines needed for the following tests:
cpld_control_p8_faster intel
control_p8_faster intel
hafs_regional_storm_following_1nest_atm_ocn_wav intel
hafs_regional_storm_following_1nest_atm_ocn_wav_inline intel
Input data Changes:
Library Changes/Upgrades:
Testing Log: