test: fix DelayParentScope #1214

Conversation
@AayushSabharwal are the failures here because the update to MTK hasn't been released yet?

Yes, and also because it needs a couple more tweaks to account for the new
Force-pushed from 97e35aa to ddff0a3
@AayushSabharwal the test that is failing is one where the initial condition is Float32 but it is now getting promoted to Float64 in the solution object? (That one was passing just yesterday.)
There is an error now where the type specification of solution objects gets dropped:

```julia
# Create model. Checks that when the input type is `Float32`, the produced values are also `Float32`.
rn = @reaction_network begin
    (k1,k2), X1 <--> X2
end
# Values are `Float32` (a valid type that should be preserved).
u0 = [:X1 => 1.0f0, :X2 => 3.0f0]
ps = [:k1 => 2.0f0, :k2 => 3.0f0]
oprob = ODEProblem(rn, u0, 1.0f0, ps)
osol = solve(oprob, Tsit5())
@test eltype(osol[:X1]) == eltype(osol[:X2]) == typeof(oprob[:X1]) == typeof(oprob[:X2]) == Float32 # These are all `Float64`.
@test eltype(osol.t) == typeof(oprob.tspan[1]) == typeof(oprob.tspan[2]) == Float32
```
Can we set this Float32 test to `@test_broken` for now? I'm fine making it a blocker for release, but it would be nice to get CI working for other PRs.
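A minimal sketch of how the failing assertion could be marked, using the standard `Test` stdlib macro (the `x` here is a hypothetical stand-in for the actual solution values, not code from the test suite):

```julia
using Test

# Stand-in that reproduces the symptom: mixed arithmetic promotes Float32 to Float64.
x = 1.0f0 + 1.0

# `@test_broken` records the failure as "broken" instead of failing CI,
# and flips to an error once the assertion unexpectedly starts passing,
# reminding us to restore the real `@test`.
@test_broken typeof(x) == Float32
```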
There is something in the Jacobian tests again (again relating to sparse Jacobians, SciML/ModelingToolkit.jl#3554). I have disabled them for now so that we can continue checking whether there are any more problems.
Ok, seems we are not too far off. I think the only stuff causing failures not fixed here is
test/runtests.jl
Outdated
```diff
@@ -47,7 +47,7 @@ end

 # Tests ODE, SDE, jump simulations, nonlinear solving, and steady state simulations.
 @time @safetestset "ODE System Simulations" begin include("simulation_and_solving/simulate_ODEs.jl") end
-@time @safetestset "Automatic Jacobian Construction" begin include("simulation_and_solving/jacobian_construction.jl") end
+#@time @safetestset "Automatic Jacobian Construction" begin include("simulation_and_solving/jacobian_construction.jl") end
```
Is the CI failure just from testing that floating point results are equal instead of agreeing to some tolerance? If so, just update the test to check agreement to some reasonable tolerance. We generally shouldn't test that floating point results are exactly equal anyway.
So I think using `≈` is a sensible solution. There is a discussion over at SciML/ModelingToolkit.jl#3554 (comment), and there might be some reason why the MTK people would want to have a look. For our purposes, using `≈` should be more than fine.
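For context, Julia's `≈` (`isapprox`) defaults to a relative tolerance of `sqrt(eps)` for the element type, which is far looser than the ~1e-16 differences discussed here. A minimal sketch (values chosen for illustration):

```julia
a = 1.0
b = 1.0 + 1e-15               # differs only by a reordering-scale error

a == b                        # false: exact equality is too strict
a ≈ b                         # true: default rtol is sqrt(eps(Float64)), roughly 1.5e-8
isapprox(a, b; rtol = 1e-16)  # false: an explicit tighter rtol can still catch real drift
```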
Yeah, with how SciML/ModelingToolkit.jl#3554 was narrowed down, I thought at first it could be something a bit more fundamental, but it was narrowed down to CSE, which is an expression-reordering-to-save-computations thing. For reference, common subexpression elimination means that instead of

```julia
x = (x + 1) * y + (x + 1) * z
```

you instead do

```julia
tmp = x + 1
x = tmp*y + tmp*z
```

Symbolics.jl's `build_function` now supports doing this, and MTK turns it on by default. You can see it changes the BCR sparse Jacobian build from 200 seconds to 20 seconds:
```
──────────────────────────────────────────────────────────────────────────────────────
                                                        Time            Allocations
                                                  ───────────────   ──────────────────
                 Tot / % measured:                  500s / 93.2%      34.4GiB / 90.1%

Section                                   ncalls    time    %tot      alloc      %tot
──────────────────────────────────────────────────────────────────────────────────────
Compile jacobian - no CSE                      1    213s   45.7%    3.05GiB      9.8%
Compile jacobian - CSE                         1   77.8s   16.7%    0.96GiB      3.1%
Calculate jacobian - without hashconsing       1   73.3s   15.7%    10.5GiB     34.0%
Calculate jacobian - hashconsing,
  without caching                              1   72.1s   15.5%    10.5GiB     33.8%
Calculate jacobian - hashconsing and caching   1   23.7s    5.1%    5.19GiB     16.8%
Build jacobian - no CSE                        1   4.42s    0.9%     615MiB      1.9%
Build jacobian - CSE                           1   1.65s    0.4%     162MiB      0.5%
Compute jacobian - no CSE                      1  91.4μs    0.0%       272B      0.0%
Compute jacobian - CSE                         1  46.6μs    0.0%       272B      0.0%
──────────────────────────────────────────────────────────────────────────────────────
```
(From the unreleased benchmarks in SciML/SciMLBenchmarks.jl#1178.)
But yes, it is a floating-point-level difference, and I don't think Catalyst tests should care about a 1e-16 discrepancy from a reordering of arithmetic. At the same time, I do want to follow up on this in MTK, just to find out why the in-place and out-of-place versions are not giving exactly the same ordering post-LLVM optimizations.
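The reordering sensitivity is just floating-point non-associativity; a minimal illustration (values chosen for demonstration, not taken from the Jacobian code):

```julia
a, b, c = 0.1, 0.2, 0.3

# The two summation orders round differently, so exact equality fails.
(a + b) + c == a + (b + c)          # false

# The discrepancy is on the order of eps(Float64) ≈ 2.2e-16,
# the same scale as the CSE-induced differences in the Jacobian tests.
abs(((a + b) + c) - (a + (b + c)))
```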
I had seen the recent CSE stuff and figured that was the cause, but yes, we shouldn't generally expect floating-point equality except for results from the exact same function. I figured it could be that something was slightly different in the codegen for in-place vs. out-of-place.
test/test_functions.jl
Outdated
```diff
@@ -51,7 +51,7 @@ function jac_eval(rs::ReactionSystem, u, p, t; combinatoric_ratelaws = true, spa
     prob = ODEProblem(rs, u, 0.0, p; jac = true, combinatoric_ratelaws, sparse)
     J = sparse ? deepcopy(prob.f.jac_prototype) : zeros(length(u), length(u))
     prob.f.jac(J, prob.u0, prob.p, t)
-    @test J == prob.f.jac(prob.u0, prob.p, t)
+    @test J ≈ prob.f.jac(prob.u0, prob.p, t)
```
Use a smaller tolerance; it seems like they should agree to basically floating point precision (so maybe check agreement one or two orders of magnitude above machine epsilon)?
Will do; I think there might be some other cases where this happens as well. When I do, I will add something like `1e-14` to them and we should be good.
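A sketch of what the tightened check could look like (the `1e-14` value and the variable names are illustrative, not from the actual test file):

```julia
using Test

J_inplace = [1.0 2.0; 3.0 4.0]
J_oop     = J_inplace .+ 1e-16   # tiny reordering-level discrepancy

# An explicit rtol is much stricter than `≈`'s default sqrt(eps) tolerance,
# while still tolerating arithmetic-reordering noise.
@test isapprox(J_inplace, J_oop; rtol = 1e-14)
```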
Does this one need a tolerance update to be a bit more strict (like the others)?
Agreed, this one should have a tighter tolerance as well.
Co-authored-by: Sam Isaacson <isaacsas@users.noreply.github.com>
Ok, test passes now.
@TorkelE one comment on a test where you missed using a tighter tolerance, but fine to then merge when you are happy.
Updated. When tests pass, are we happy to merge?
Done, thanks for the help @AayushSabharwal !