More coriolis options #438
Conversation
Looks great! Just some pending questions about docstrings, terminology (`β` versus `Beta`, and `lat` versus `latitude`), and some `@inbounds` annotations.
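For readers unfamiliar with the annotation being discussed, here is a minimal, purely illustrative sketch of where `@inbounds` typically goes in a kernel-style operator; the function and array names are hypothetical and not Oceananigans internals.

```julia
# Purely illustrative, not Oceananigans code: a kernel-style operator where
# @inbounds tells Julia to skip array bounds checks, assuming the caller
# only passes valid (i, j, k) indices.
@inline function x_average(i, j, k, u)
    @inbounds (u[i, j, k] + u[i+1, j, k]) / 2
end
```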
Agree with Greg's comments. Otherwise looks good!
Can we also have a `BetaPlane(T=Float64; Ω, latitude)`?
Yeah exactly, it's there so we don't mix precisions when running at, e.g., lower precision. Could do.
Codecov Report

```
@@            Coverage Diff             @@
##           master     #438      +/-   ##
==========================================
+ Coverage   73.46%   73.83%   +0.37%
==========================================
  Files          27       27
  Lines        1515     1525      +10
==========================================
+ Hits         1113     1126      +13
+ Misses        402      399       -3
```

Continue to review the full report at Codecov.
Just to expand on what @ali-ramadhan said: in writing numbers, we adopt a mixed approach. In some cases we use integers and rely on promotion to obtain the correct floating point precision; in other cases, we explicitly impose the precision of a number. For fractions, we typically impose precision. For multiplication by integers, we just use the integer and promotion.

Also, I don't think it's actually the floating point conversion that is costly here. Rather, I think the added cost is doing the arithmetic at higher precision. Due to promotion rules, if a single kernel operator outputs the wrong floating point type, we could end up performing much of an entire kernel's computation at the wrong precision. This appears to have a negligible effect on a Tesla V100, but it may have more important effects on other machines, especially if we try to run at half precision on machines specifically designed for half precision calculations.
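To make the convention above concrete, here is a small hedged sketch (the helper names are made up for illustration, not Oceananigans code) showing how a `Float64` literal can silently promote a `Float32` computation, while `T(0.5)` and integer factors preserve the working precision:

```julia
# T stands for the simulation's float type; names here are illustrative only.
avg_with_literal(a, b) = 0.5 * (a + b)      # 0.5 is Float64 and promotes the result
avg_with_T(T, a, b)    = T(0.5) * (a + b)   # fraction with imposed precision
doubled(a)             = 2 * a              # integer factor: promotion keeps eltype

a, b = Float32(1), Float32(2)

typeof(avg_with_literal(a, b))      # Float64 — the whole expression was promoted
typeof(avg_with_T(Float32, a, b))   # Float32 — precision preserved
typeof(doubled(a))                  # Float32 — Int * Float32 stays Float32
```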
@ali-ramadhan Added this, but we also have to specify the radius for this one to work.
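As a rough illustration of the constructor being discussed (a sketch only: the field names, keyword names, and the fact that the radius enters through β = 2Ω cos φ / R are assumptions on my part, not the PR's actual implementation):

```julia
# Hedged sketch: a β-plane Coriolis parameterization with an explicit float
# type T, so f₀ and β are stored at the simulation's precision.
struct BetaPlane{T}
    f₀ :: T
    β  :: T
end

# f₀ = 2Ω sin(φ) and β = 2Ω cos(φ) / R, which is why the radius must be given.
function BetaPlane(T=Float64; Ω, latitude, radius)
    φ = deg2rad(latitude)
    return BetaPlane{T}(2Ω * sin(φ), 2Ω * cos(φ) / radius)
end

BetaPlane(Float32; Ω=7.292115e-5, latitude=45, radius=6.371e6)  # fields stored as Float32
```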
This PR might be of interest to @masonrogers14.
@suyashbire1 is this ready to be merged? We need this for our work with @masonrogers14. I noticed, however, that some commits for the new NetCDF output writer made their way into this PR. Does that mean we should merge the two PRs together? This is fine with me.
Yes, this one's good to go. I'm sorry I let it marinate for so long. Feel free to make your changes and merge, @glwagner. About the NetCDF commits: yeah, I must have created this branch off of the NetCDF branch. Yeah, let's merge the two together.
Awesome! Just one minor comment but otherwise looks good to merge.
Ah sorry, merging PR #496 introduced a tiny merge conflict; just fixed it.
This pull request adds `FPlane(omega, lat)` and `BetaPlane(f₀, β)` functionality.

What does the `T(0.5)` term do in the following snippet? Does it translate to `Float64(0.5)`? In that case, I'm not sure it should be there. I have kept it for now.
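To make the question concrete, here is a tiny hedged sketch of what `T(0.5)` evaluates to for different float types (nothing Oceananigans-specific is assumed):

```julia
# T(0.5) constructs the literal 1/2 at the precision of T, so it only
# "translates to Float64(0.5)" when T itself is Float64.
for T in (Float64, Float32, Float16)
    @show T(0.5)
end
# T(0.5) = 0.5
# T(0.5) = 0.5f0
# T(0.5) = Float16(0.5)
```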