
AdvancedPS v0.7 (and thus Libtask v0.9) support #2585


Open — wants to merge 6 commits into main

Conversation

@mhauru (Member) commented Jun 6, 2025

The companion PR to TuringLang/AdvancedPS.jl#114, which adds support for the newly rewritten Libtask.

Work in progress, currently blocked by TuringLang/Libtask.jl#186

@mhauru (Member, Author) commented Jun 19, 2025

The tests that I had the patience to run locally now pass. Waiting for the AdvancedPS release to be able to run the full test suite on CI.

Some indicators of speed:

julia> module MWE

       using Turing

       @model function gdemo(x, y)
           s ~ InverseGamma(2, 3)
           m ~ Normal(0, sqrt(s))
           x ~ Normal(m, sqrt(s))
           y ~ Normal(m, sqrt(s))
           return s, m
       end

       @time chn = sample(gdemo(2.5, 1.0), PG(10), 10_000)
       describe(chn)

       end

On main:

104.715858 seconds (58.48 M allocations: 13.259 GiB, 1.10% gc time, 1.20% compilation time)

On this branch:

 16.612050 seconds (116.52 M allocations: 16.296 GiB, 8.27% gc time, 5.91% compilation time)

julia> module MWE

       using Turing

       @model function f(dim=20, ::Type{T}=Float64) where T
           s = Vector{Bool}(undef, dim)
           x = Vector{T}(undef, dim)
           for i in 1:dim
               s[i] ~ Bernoulli()
               if s[i]
                   x[i] ~ Normal()
               else
                   x[i] ~ Beta()
               end
               0.0 ~ Normal(x[i])
           end
           return nothing
       end

       alg = Gibbs(
           @varname(s)=>PG(10),
           @varname(x)=>HMC(0.1, 5),
       )
       @time chn = sample(f(), alg, 1_000)

       end

On main:

 49.682945 seconds (65.43 M allocations: 9.463 GiB, 1.79% gc time, 8.23% compilation time)

On this branch:

  9.180071 seconds (61.58 M allocations: 4.028 GiB, 4.04% gc time, 55.49% compilation time)

Obviously the speed gains are all due to @willtebbutt's fantastic work on Libtask, everything else is just wrapping that work.

Turing.jl documentation for PR #2585 is available at:
https://TuringLang.github.io/Turing.jl/previews/PR2585/

@yebai yebai marked this pull request as ready for review June 23, 2025 21:23
@yebai yebai requested a review from penelopeysm June 23, 2025 21:23
@@ -85,7 +85,7 @@ Statistics = "1.6"
 StatsAPI = "1.6"
 StatsBase = "0.32, 0.33, 0.34"
 StatsFuns = "0.8, 0.9, 1"
-julia = "1.10.2"
+julia = "1.10.8"

Libtask requires 1.10.8 at a minimum.

@@ -402,11 +391,11 @@ end

 function trace_local_varinfo_maybe(varinfo)
     try
-        trace = AdvancedPS.current_trace()
-        return trace.model.f.varinfo
+        trace = Libtask.get_taped_globals(Any).other

If we change Libtask.get_taped_globals to return nothing when not inside a running TapedTask, the following `try ... catch ... end` can be removed.
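A minimal sketch of what the accessor could look like under that proposal. Note this is hypothetical: it assumes `get_taped_globals` is changed to return `nothing` outside a running `TapedTask`, which is not current Libtask behaviour.

```julia
# Hypothetical sketch: assumes Libtask.get_taped_globals is changed to
# return `nothing` when called outside a running TapedTask, so the
# try/catch fallback becomes a simple branch.
function trace_local_varinfo_maybe(varinfo)
    globals = Libtask.get_taped_globals(Any)
    return globals === nothing ? varinfo : globals.other
end
```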

@@ -416,11 +405,10 @@ end

 function trace_local_rng_maybe(rng::Random.AbstractRNG)
     try
-        trace = AdvancedPS.current_trace()
-        return trace.rng
+        return Libtask.get_taped_globals(Any).rng

Same as above.


codecov bot commented Jun 23, 2025

Codecov Report

Attention: Patch coverage is 41.17647% with 10 lines in your changes missing coverage. Please review.

Project coverage is 50.44%. Comparing base (7ebde76) to head (7cf8ee0).

Files with missing lines Patch % Lines
src/mcmc/particle_mcmc.jl 41.17% 10 Missing ⚠️

❗ There is a different number of reports uploaded between BASE (7ebde76) and HEAD (7cf8ee0): HEAD has 15 fewer uploads than BASE (13 vs. 28).
Additional details and impacted files
@@             Coverage Diff             @@
##             main    #2585       +/-   ##
===========================================
- Coverage   85.57%   50.44%   -35.13%     
===========================================
  Files          22       22               
  Lines        1456     1447        -9     
===========================================
- Hits         1246      730      -516     
- Misses        210      717      +507     

☔ View full report in Codecov by Sentry.

@coveralls

Pull Request Test Coverage Report for Build 15835391573

Details

  • 7 of 17 (41.18%) changed or added relevant lines in 1 file are covered.
  • 498 unchanged lines in 14 files lost coverage.
  • Overall coverage decreased (-35.1%) to 50.519%

File with missing coverage               Covered  Changed/Added  %
src/mcmc/particle_mcmc.jl                7        17             41.18%

Files with coverage reduction            New missed lines  %
src/mcmc/external_sampler.jl             2                 90.24%
src/mcmc/prior.jl                        3                 0.0%
src/mcmc/hmc.jl                          13                85.5%
src/mcmc/is.jl                           15                5.88%
ext/TuringDynamicHMCExt.jl               27                0.0%
src/mcmc/mh.jl                           32                59.13%
src/mcmc/Inference.jl                    33                48.62%
src/mcmc/particle_mcmc.jl                43                58.28%
src/mcmc/emcee.jl                        44                9.62%
src/variational/VariationalInference.jl  48                0.0%
Totals Coverage Status
Change from base Build 15765001949: -35.1%
Covered Lines: 730
Relevant Lines: 1445

💛 - Coveralls

@penelopeysm (Member) commented Jun 24, 2025

Is this reviewable? The tests are failing: there's a method ambiguity that Aqua complains about, there's a Gibbs failure on 1.12 which should be disabled with @test_broken, and the sampling in mcmc/Inference is returning numerically inaccurate values:

beta binomial: Test Failed at /home/runner/work/Turing.jl/Turing.jl/test/test_utils/numerical_tests.jl:55
  Expression: ≈(E, val, atol = atol, rtol = rtol)
   Evaluated: 0.5430087089005213 ≈ 0.7142857142857143 (atol=0.05, rtol=0.0)

beta binomial: Test Failed at /home/runner/work/Turing.jl/Turing.jl/test/test_utils/numerical_tests.jl:55
  Expression: ≈(E, val, atol = atol, rtol = rtol)
   Evaluated: 0.4 ≈ 0.7142857142857143 (atol=0.1, rtol=0.0)

I don't want to speak for @mhauru in his absence but last time we spoke about this PR, it was clear that there were still a few gaps to bridge. If I were to review it at this stage, my sole comment would be to fix the tests.
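(For reference, a minimal sketch of the `@test_broken` pattern from Julia's `Test` stdlib mentioned above: it records a known failure as `Broken` rather than `Fail`, and flags the test as an unexpected pass once the expression starts succeeding.)

```julia
using Test

# @test_broken documents a known failure: the test suite stays green
# while the bug persists, and errors once the test unexpectedly passes.
@test_broken 1 + 1 == 3
```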
