
Sampling-Based Nonparametric Sensitivity Analysis #169

Closed
max-de-rooij opened this issue Jul 12, 2024 · 1 comment

Comments

@max-de-rooij
Contributor

What kind of problems is it mostly used for? Please describe.

This is a global multiparametric sensitivity analysis method often used in our systems biology models to find sensitive parameters.

Describe the algorithm you’d like

The algorithm is known within our group simply as MPSA (Multi-Parametric Sensitivity Analysis), and explained in Hornberger & Spear (1981). The algorithm works as follows.

Given a function $f(x)$ of your parameters $x$, we draw many samples $x_i$ using Latin hypercube sampling from the set of allowable parameter values. Additionally, we include a set of dummy parameters that do not influence the function $f(x)$. For each sample, we compute a sensitivity value $f(x_i)$ and classify the sample as 'accept' or 'reject', often based on the mean of all sensitivity values: in many cases, all samples for which $f(x_i) > \mu(f(x))$ are marked 'accept' and all others 'reject'. This results in a binary vector $\phi$.
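As a small self-contained sketch of the sampling and classification step (plain uniform random sampling stands in for Latin hypercube sampling here, and `f` is a placeholder model, not anything from the issue):

```julia
using Statistics, Random

Random.seed!(1)                           # reproducible sketch
f(x) = sum(x)                             # placeholder model output
lb, ub = [0.0, 0.0], [1.0, 1.0]           # bounds for two parameters
n = 1_000

# Plain uniform sampling stands in for Latin hypercube sampling here
samples = lb .+ (ub .- lb) .* rand(2, n)  # columns are samples
y = [f(col) for col in eachcol(samples)]
phi = Int.(y .> mean(y))                  # 1 = accept, 0 = reject
```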

Following this, we generate cumulative distributions for the accept and reject samples based on the parameter values in each sample. For each parameter, we obtain the CDFs by ordering $\phi$ according to that parameter's value in each sample and computing

cdf_accept_i = cumsum(x == accept for x in phi) / n_accept
cdf_reject_i = cumsum(1 - (x == accept) for x in phi) / n_reject

where n_accept and n_reject are the accept and reject counts, so that each CDF ends at 1.

The sensitivity for each parameter is then calculated as
k_s = maximum(abs.(cdf_accept_i .- cdf_reject_i))

This is done for all parameters and dummies.
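For a single parameter, the two CDFs and the resulting score can be worked through on made-up data (all numbers below are illustrative):

```julia
# Hypothetical data: acceptance flags and one parameter's value per sample
phi = [1, 0, 1, 0, 1]             # 1 = accept, 0 = reject
p   = [0.2, 0.9, 0.5, 0.1, 0.7]   # this parameter's value in each sample

order = sortperm(p)               # order samples by the parameter's value
cdf_accept = cumsum(phi[order]) ./ sum(phi)
cdf_reject = cumsum(1 .- phi[order]) ./ sum(1 .- phi)

k_s = maximum(abs.(cdf_accept .- cdf_reject))
```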

The dummies can be used to check whether enough Monte Carlo samples have been drawn (the dummy sensitivity should have converged) and to decide whether a parameter is sensitive at all (k_s_param > k_s_dummy).
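With KS scores in hand, that comparison against the dummies can be sketched as follows (all scores below are made-up numbers, not results from any model):

```julia
using Statistics

# Illustrative numbers only
ks_params  = [0.42, 0.03, 0.31]        # KS scores for three model parameters
ks_dummies = [0.05, 0.07, 0.06, 0.04]  # KS scores for the dummy parameters

# A parameter counts as sensitive when its score exceeds the dummy level
sensitive = ks_params .> mean(ks_dummies)
```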

Other implementations to know about

We have a custom implementation in MATLAB, and I have written a very simple implementation in Julia:

using Statistics, QuasiMonteCarlo, Random

function MPSA(f, lb, ub; n_dummy_parameters = 200, n_mpsa_samples = 100_000,
              rng = Random.default_rng())
  n_params = length(lb)

  # Append uniform [0, 1] bounds for the dummy parameters
  lb = [lb; zeros(n_dummy_parameters)]
  ub = [ub; ones(n_dummy_parameters)]
  value_samples = QuasiMonteCarlo.sample(n_mpsa_samples, lb, ub, LatinHypercubeSample(rng))

  # Evaluate the model on the true parameters only (dummies do not enter f)
  sensitivity_indices = Float64[]
  for sample in eachcol(value_samples)
    push!(sensitivity_indices, f(Vector(sample[1:n_params])))
  end

  # Classify samples: 1 = accept (above the mean), 0 = reject
  acceptance_threshold = mean(sensitivity_indices)
  flag = Int.(sensitivity_indices .> acceptance_threshold)

  KS_scores = Float64[]
  # Cumulative distributions (for model parameters and dummies)
  @inbounds for i in 1:(n_params + n_dummy_parameters)
    temp_sort_idx = sortperm(value_samples[i, :])  # order samples by this parameter's value
    acceptance_dist = cumsum(flag[temp_sort_idx])
    rejection_dist = cumsum(1 .- flag[temp_sort_idx])

    # normalize distributions so each CDF ends at 1
    acceptance_dist = acceptance_dist / maximum(acceptance_dist)
    rejection_dist = rejection_dist / maximum(rejection_dist)

    # calculate KS score
    push!(KS_scores, maximum(abs.(acceptance_dist .- rejection_dist)))
  end

  # Per-parameter scores, plus mean and std of the dummy scores
  KS_scores[1:n_params], mean(KS_scores[n_params+1:end]), std(KS_scores[n_params+1:end])
end

References

See Hornberger & Spear (1981)

@max-de-rooij
Contributor Author

Fixed with #170
