
randomized svd draft #3008

Merged
merged 3 commits into from
Mar 13, 2025
Conversation

hanbin973
Contributor

Description

A draft of randomized principal component analysis (PCA) using TreeSequence.genetic_relatedness_vector. The implementation uses scipy.sparse, which should eventually be removed.
This part of the code is only used when collapsing a (#samples x #samples) GRM into a (#individuals x #individuals) matrix, so it will not be difficult to replace with pure numpy.

The API was partially taken from scikit-learn.

To add some details, iterated_power is the number of power iterations in the range finder of the randomized algorithm. The error of the SVD decreases exponentially as a function of this number.
The effect of power iteration is most pronounced when the eigenspectrum of the matrix decays slowly, which seems to be the case for tree sequence GRMs in my experience.

indices specifies the individuals to be included in the PCA, although decreasing the number of individuals does not meaningfully reduce the amount of computation.
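To make the algorithm concrete, here is a hedged NumPy sketch of randomized SVD with power iteration on a dense symmetric matrix; the actual implementation applies the GRM through TreeSequence.genetic_relatedness_vector rather than ever forming a dense matrix, so the function below is illustrative only:

```python
import numpy as np

def randomized_svd(A, n_components, iterated_power=3, seed=0):
    # Minimal Halko-et-al-style randomized SVD sketch on a dense symmetric matrix.
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    # Random test matrix with a little oversampling for accuracy.
    Q, _ = np.linalg.qr(rng.normal(size=(n, n_components + 10)))
    # Power iterations: each multiply by A sharpens Q towards the dominant
    # eigenspace, which matters most when the spectrum decays slowly.
    for _ in range(iterated_power):
        Q, _ = np.linalg.qr(A @ Q)
    # Project onto the low-dimensional subspace and decompose there.
    B = Q.T @ A
    U_small, S, _ = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_small
    return U[:, :n_components], S[:n_components]
```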

@hanbin973
Contributor Author

@petrelharp Here's the code.


codecov bot commented Oct 3, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 89.95%. Comparing base (a405996) to head (774ff5f).
Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3008      +/-   ##
==========================================
+ Coverage   89.92%   89.95%   +0.03%     
==========================================
  Files          29       29              
  Lines       32362    32477     +115     
  Branches     5802     5822      +20     
==========================================
+ Hits        29101    29216     +115     
  Misses       1859     1859              
  Partials     1402     1402              
Flag Coverage Δ
c-tests 86.71% <ø> (ø)
lwt-tests 80.78% <ø> (ø)
python-c-tests 89.23% <ø> (ø)
python-tests 98.99% <100.00%> (+0.01%) ⬆️


Files with missing lines Coverage Δ
python/tskit/trees.py 98.86% <100.00%> (+0.05%) ⬆️

@petrelharp
Contributor

This looks great! Very elegant. I think probably we ought to include a samples argument, though? For consistency, but also since the tree sequence represents phased data, and so it's actually informative to look at the PCs of maternally- and paternally-inherited chromosomes separately.

So, how about the signature is like

def pca(samples=None, individuals=None, ...)

and:

  • the default is equivalent to samples=ts.samples(), individuals=None
  • you can't have both samples and individuals specified
  • if individuals is a list of individual IDs then it does as in the code currently
  • otherwise, it just skips the "sum over individuals" step

Note that we could be getting PCs for non-sample nodes (since an individual's nodes need not be samples); I haven't thought through whether the values you get are correct or informative. My guess is that maybe they are? But we need a "user beware" note for this?

@petrelharp petrelharp marked this pull request as draft October 4, 2024 01:46
@petrelharp
Contributor

Ah, sorry - one more thing - does this work with windows? (It looks like not?)

I think the way to do the windows would be something like

drop_windows = windows is None
if drop_windows:
    windows = [0, self.sequence_length]

# ... do the computation; with these windows, genetic_relatedness will always
# return an array whose first dimension is "window", so each slice can be
# operated on separately ...

if drop_windows:
    # get rid of the first (window) dimension in the output

Basically - get it to work in the case where windows are specified (ie not None) and then we can get it to have the right behavior.

@hanbin973
Contributor Author

A simple test case for the windows feature.

import msprime
import numpy as np

demography = msprime.Demography()
demography.add_population(name="A", initial_size=5_000)
demography.add_population(name="B", initial_size=5_000)
demography.add_population(name="C", initial_size=1_000)
demography.add_population_split(time=1000, derived=["A", "B"], ancestral="C")
ts = msprime.sim_ancestry(
    samples={"A": 500, "B": 500},
    sequence_length=1e6,
    recombination_rate=3e-8,
    demography=demography,
    random_seed=12)
seq_length = ts.sequence_length

U, _ = ts.pca(individuals=np.asarray([i.id for i in ts.individuals()]), iterated_power=5, random_seed=1, windows=[0, seq_length/2, seq_length])
U0, _ = ts.pca(individuals=np.asarray([i.id for i in ts.individuals()]), iterated_power=5, random_seed=1, windows=[0, seq_length/2])
U1, _ = ts.pca(individuals=np.asarray([i.id for i in ts.individuals()]), iterated_power=5, random_seed=1, windows=[seq_length/2, seq_length])

idx = 0 # idx is the idx-th principal component
# correlation instead of allclose because PCA is rotation symmetric
np.corrcoef(U[0][:,idx], U0[:,idx]), np.corrcoef(U[1][:,idx], U1[:,idx])

Because of the randomness of the algorithm, the correlation is not exactly 1, although it's nearly 1, like 0.99995623-ish.

@hanbin973
Contributor Author

I just noticed that centre doesn't work with the nodes option. The new commit fixes this problem.

@hanbin973
Contributor Author

Check results for two windows.

import msprime
import numpy as np

demography = msprime.Demography()
demography.add_population(name="A", initial_size=5_000)
demography.add_population(name="B", initial_size=5_000)
demography.add_population(name="C", initial_size=1_000)
demography.add_population_split(time=1000, derived=["A", "B"], ancestral="C")
seq_length = 1e6
ts = msprime.sim_ancestry(
    samples={"A": 500, "B": 500},
    sequence_length=seq_length,
    recombination_rate=3e-8,
    demography=demography,
    random_seed=12)

# for individuals
U, _ = ts.pca(individuals=np.asarray([i.id for i in ts.individuals()]), iterated_power=5, random_seed=1, windows=[0, seq_length/2, seq_length])
U0, _ = ts.pca(individuals=np.asarray([i.id for i in ts.individuals()]), iterated_power=5, random_seed=1, windows=[0, seq_length/2])
U1, _ = ts.pca(individuals=np.asarray([i.id for i in ts.individuals()]), iterated_power=5, random_seed=1, windows=[seq_length/2, seq_length])

idx = 0 # idx is the idx-th principal component
# correlation instead of allclose because PCA is rotation symmetric
np.corrcoef(U[0][:,idx], U0[0][:,idx]), np.corrcoef(U[1][:,idx], U1[0][:,idx])

# for nodes
U, _ = ts.pca(iterated_power=5, random_seed=1, windows=[0, seq_length/2, seq_length])
U0, _ = ts.pca(iterated_power=5, random_seed=1, windows=[0, seq_length/2])
U1, _ = ts.pca(iterated_power=5, random_seed=1, windows=[seq_length/2, seq_length])

idx = 0 # idx is the idx-th principal component
# correlation instead of allclose because PCA is rotation symmetric
np.corrcoef(U[0][:,idx], U0[0][:,idx]), np.corrcoef(U[1][:,idx], U1[0][:,idx])

@petrelharp
Contributor

Okay; here I've made a good start at the tests. I think everything is working fine; the tests are not passing because (I think) of numerical tolerance. I could just set the numerical tolerance to like 1e-4 and they'd pass, but I think this is flagging a bigger issue: how do we tell we're getting good answers?

I ask because currently the tests pass (at default tolerances) for small numbers of samples but not for 30 samples; if I increase iterated_power then they do pass. So the current defaults seem okay for 30 samples, but I worry that at 10,000 samples they might be very bad. If u, s is one of the output vector/value pairs, can we compare s * u to W @ u and translate the result into some measure of tolerance? Not very good options here that I can think of are

  1. add a tolerance argument and we iterate until this tolerance is reached
  2. return an estimate of error; perhaps we should return some kind of structured output that contains information about convergence as well?

We'd like this to be not a can of worms; I think our goal is to have something that is good enough, and forwards-compatible with an improved method in the future.
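The s * u versus W @ u comparison above can be phrased as a relative residual (a plain-NumPy sketch; W here stands for a dense GRM, which the real code never materialises):

```python
import numpy as np

def eigpair_residual(W, u, s):
    # Relative residual ||W u - s u|| / (s ||u||): a direct measure of how
    # well (s, u) approximates an eigenpair of W, usable as a tolerance.
    return np.linalg.norm(W @ u - s * u) / (s * np.linalg.norm(u))
```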

Notes:

  • For testing, I use np.linalg.svd, and (of course) had to mess around so that we'd correctly deal with indeterminacy of sign and also indeterminacy of ordering when singular values are the same.
  • Perhaps we should call it num_components rather than n_components to match elsewhere?
  • I did some rearranging of how the operator in _rand_svd was defined to avoid lint complaining (but was unsuccessful, so inserted the NOQA)
  • Some of the changes were due to automatic linting; apologies for the noise there.

TODO:

  • add tests that include samples and individuals arguments
  • test on some weirder tree sequences (that's easy)

@jeromekelleher
Member

Just a quick note that I'd be very much in favour of returning a dataclass here rather than a tuple so that the option of returning more information about convergence etc is open.

@hanbin973
Contributor Author

There's an adaptive rangefinder algorithm described in Halko et al. (https://arxiv.org/pdf/0909.4061, Algo 4.2). I don't see it implemented in scikit-learn (https://scikit-learn.org/dev/modules/generated/sklearn.decomposition.TruncatedSVD.html#sklearn.decomposition.TruncatedSVD).
Halko et al. also has an error estimation formula with an overhead similar to the main routine.

I like Jerome's idea to return a class instead of the result. There's an intermediate matrix Q (approximate range of the original matrix) that takes up most of the computation, and storing it is useful in many cases. For example, one can add additional power iterations to Q without doing it from scratch to improve precision.
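The a posteriori error estimation mentioned above can be sketched like this (hedged: the probe count and constant follow the spirit of Halko et al.'s estimator, and this is not tskit code):

```python
import numpy as np

def residual_error_estimate(A, Q, n_probes=10, seed=0):
    # A posteriori probe of ||(I - Q Q^T) A||, in the spirit of the error
    # estimator in Halko et al. (2011): hit the residual with random
    # Gaussian vectors and take the largest norm, up to a constant factor.
    rng = np.random.default_rng(seed)
    omega = rng.normal(size=(A.shape[1], n_probes))
    Y = A @ omega
    resid = Y - Q @ (Q.T @ Y)
    return 10 * np.sqrt(2 / np.pi) * np.linalg.norm(resid, axis=0).max()
```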

@hanbin973
Contributor Author

Not a class object yet, but I added range_sketch to the input/output.

@jeromekelleher
Member

Re the result object, I'd imagined something like

@dataclasses.dataclass
class PcaResult:
    descriptive_name1: np.ndarray  # or whatever type hints we can get to work
    descriptive_name2: np.ndarray
    ...

@hanbin973
Contributor Author

Now, pca() returns a dataclass with the following fields:

@dataclass
class PCAResult:
    U: np.ndarray
    D: np.ndarray
    Q: np.ndarray
    E: np.ndarray

U and D are as before. Q is the range sketch matrix used as the approximate orthonormal basis of the GRM; computing it is the only expensive part of the algorithm, involving GRM-times-matrix operations. E is the error bound for the singular values. Both Q and E will have different values for each window if windows are present.

A user can continuously improve their estimate through Q. pca now has a range_sketch: np.ndarray = None option that accepts Q from a previous round of pca. This can be done like

pca_result = ts.pca( ... )
pca_result_round_2 = ts.pca( ..., range_sketch = pca_result.Q, ...)

If the first round did q power iterations and the second round did p additional power iterations, the result of the second round has total q+p iterations. By adding additional power iterations in successive rounds, one can improve the accuracy without running the whole process from scratch.
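In plain NumPy the warm-start property looks like this (a sketch only; power_range_finder is an illustrative stand-in, not the tskit API):

```python
import numpy as np

def power_range_finder(A, Q=None, n_vectors=5, iters=2, seed=0):
    # One "round" of the power-iteration range finder; pass the returned Q
    # back in to keep refining without restarting from scratch.
    if Q is None:
        rng = np.random.default_rng(seed)
        Q, _ = np.linalg.qr(rng.normal(size=(A.shape[0], n_vectors)))
    for _ in range(iters):
        Q, _ = np.linalg.qr(A @ Q)
    return Q
```

Running q iterations, then resuming with p more from the returned Q, gives the same sketch as q+p iterations in one go.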

Member

@jeromekelleher jeromekelleher left a comment


This looks great, but I would suggest we break the nested functions out to the module level rather than embedding them in the TreeSequence class. The function is currently too long, and it's not clear what needs to be embedded within the function because it's using the namespace, vs what's in there just because. It would be nice to be able to test the bits of this individually, and putting them at the module level will make that possible.

Certainly the return class should be defined at the module level and added to the Sphinx documentation so that it can be linked to.

Member

@jeromekelleher jeromekelleher left a comment


Minor nitpick about code organisation!

@hanbin973
Contributor Author

It now has a time-resolved feature. You can select branches within the lower and the upper time limits. It is based on decapitate.

@petrelharp
Contributor

NICE!

@petrelharp
Contributor

Also - I wonder if a better name for iterated_power would be power_iterations?

Oh, I see that scikit-learn says iterated_power (and also n_components instead of num_components). Hm. I think that being compatible with them is good, but copying their bad API choices is not required? (Well, n_components is a fine API, but it isn't consistent with the rest of our API, which is num_X for things?)

@jeromekelleher
Member

I'd vote for compatibility with our other APIs first, and with scikit-learn second.

@hanbin973
Contributor Author

hanbin973 commented Mar 10, 2025

Okay - I've put in a bunch of the testing code for correct arguments, etcetera. Still TODO:

  • testing for the error bounds
  • figure out why individuals is not passing tests (see below)

One issue I've turned up along the way is that the individuals mode, I think, was not agreeing with our definition of genetic_relatedness: as described in the relatedness paper, relatedness between two sample sets is the average of the relatedness between pairwise combinations of samples. This is just a factor of 4 for diploids - but that's a discrepancy that will confuse people - and it actually makes a difference for mixed ploidy (e.g. sex chromosomes). I think I've sorted this out correctly.

Have we convinced ourselves that the default iterated_power is a good value? I have a bit of a hard time figuring out what "good" means - like, to what tolerance do we need these eigenvectors?

I also refactored the code to randomly generate range_sketch outside of the loop over windows, because this makes (a) the testing-for-correct-inputs code up top cleaner, and (b) the _rand_pow_range_finder and _rand_svd functions simpler. This has a memory downside of having to have range_sketch for all windows, not just one -- but, this is something we're returning anyhow, so is not a big deal.

I added an individuals argument to the testing implementation, but this isn't matching; I've stared at it a bunch and don't know why. Oddly, ts.pca( samples=x) and ts.pca( individuals=x) match for haploid individuals, but these don't match to the testing code.

What is the exact failure about? Is it the eigenvalues not matching or the factor scores not matching? In terms of the factor scores, I think it's better to compare U.T @ U to each other so that we avoid rotation and sign problems. Shall I change the code?
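For reference, here is one sign- and rotation-invariant way to compare estimated subspaces (a sketch; note that for a U with orthonormal columns, U.T @ U is the identity by construction, so the informative invariant is the projector U @ U.T onto the column span):

```python
import numpy as np

def same_subspace(U1, U2, atol=1e-8):
    # Compare column spans via their projection matrices, which are
    # invariant to sign flips and to rotations within the subspace.
    return np.allclose(U1 @ U1.T, U2 @ U2.T, atol=atol)
```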

@petrelharp
Contributor

The failure is that the two answers are Definitely Not the Same. Clicking on the "Tests / ..." link above, or running locally, we get

FAILED tests/test_relatedness_vector.py::TestPCA::test_individuals[1-0-True] - AssertionError: 
Not equal to tolerance rtol=1e-07, atol=1e-08

Mismatched elements: 5 / 5 (100%)
Max absolute difference among violations: 5733.50077407
Max relative difference among violations: 0.60127089
 ACTUAL: array([3802.135878, 3157.318008, 2779.806231, 2273.337602, 1637.086583])
 DESIRED: array([9535.636652, 5611.718265, 4424.8184  , 4081.358172, 2913.365737])

To run this locally, do

python -m pytest tests/test_relatedness_vector.py::TestPCA::test_individuals[1-0-True] 

I hear what you're saying about comparing U.T @ U, but I think we do want to actually test the individual eigenvectors, not just that the low dimensional approx is right. (Or, maybe I don't understand your suggestion?) The testing code is robust to ordering of eigen-things and sign switches; however, if two eigenvalues are very close, then you're right, we could have problems with nonidentifiability that way. Please do have a look at the testing code, though, to see if you have a better suggestion?

But - I don't think that's the problem - inserting a print into the testing code shows that these are the eigenvalues:

[3837.56623916, 3364.17632569, 2843.38691979, 2419.60371452, 655.31546225]

@petrelharp
Contributor

Good catch!

-        U, D = pca(ts=ts, windows=windows, samples=samples, centre=centre)
+        U, D = pca(ts=ts, windows=windows, centre=centre, samples=samples, individuals=individuals)

And - hah - I guess I missed the ... in the output saying that they're of different lengths.

@hanbin973
Contributor Author

Now all the tests are passing in my local machine. The error was due to a minor omission of the individuals option in the verify_pca() function.

@petrelharp
Contributor

Hey, that's great! I need to scrutinize codecov still, but there's one not totally trivial linting issue:

python/tskit/trees.py:8886:24: B023 Function definition does not bind loop variable '_f_high'.
python/tskit/trees.py:8888:29: B023 Function definition does not bind loop variable 'indices'.
python/tskit/trees.py:8891:29: B023 Function definition does not bind loop variable 'this_window'.
python/tskit/trees.py:8896:27: B023 Function definition does not bind loop variable '_f_low'.
python/tskit/trees.py:8898:33: B023 Function definition does not bind loop variable 'indices'.
python/tskit/trees.py:8901:33: B023 Function definition does not bind loop variable 'this_window'.

(I get this by running the pre-commit hooks; see the developer docs for how to set this up; in particular by

pre-commit run --files tskit/trees.py tests/test_relatedness_vector.py 

Would you like to have a look at how to deal with that? (I haven't figured out what it means yet.)

@hanbin973
Contributor Author

The linting error appears when a function defined inside a loop refers to the loop variable as a free variable. I can't immediately think of a case where this causes a problem here, but the point is that the use of the loop variable inside the function should be explicit. An easy fix is to add the loop variable as a function argument with itself as the default value. This is adapted from: https://stackoverflow.com/questions/54947434/function-definition-in-a-for-loop-using-local-variable
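A minimal illustration of what B023 flags, and of the default-argument fix:

```python
def make_funcs_buggy(values):
    funcs = []
    for v in values:
        # B023: `v` is looked up when the function is *called*, not when it
        # is defined, so every function sees the loop variable's final value.
        funcs.append(lambda: v)  # noqa: B023
    return funcs

def make_funcs_fixed(values):
    funcs = []
    for v in values:
        # Binding the loop variable as a default argument freezes its
        # current value at definition time.
        funcs.append(lambda v=v: v)
    return funcs
```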

@hanbin973
Contributor Author

I don't know why docs build is failing to detect the new PCAResult class.

@jeromekelleher
Member

@benjeffery any thoughts on why the docs build is failing? If we squash the commits down to a reasonable number (5, say, or just 1 may be easier) then I think we can log an issue to track beefing up the test coverage a bit, and merge.

@benjeffery
Member

Docs build is failing as there is a link to the PCAResult class, but that class is not included in the docs.

@hanbin973
Contributor Author

hanbin973 commented Mar 13, 2025

I can't tell how great it is to write code under the guidance of expert maintainers. I added the .rst block for PCAResult and now the error is gone. I'm not sure if I've put the docstring in the right section though.

@petrelharp
Contributor

Something funny is going on with codecov: I think the missing line numbers are out of sync with the code. I'd like to at least look at the coverage before merging - let me just squash down the commits, that might help.

@petrelharp
Contributor

I can't tell how great it is to write code under the guidance of expert maintainers.

I know, right? I have learned so much from Jerome & Ben.

@jeromekelleher
Member

I know, right? I have learned so much from Jerome & Ben.

Likewise Peter! ❤️

add windows feature
return range sketch matrix Q
support for subset of samples and individuals in a tree
lint requires a function inside a loop to be explicit about the loop variable; fixed it by setting a default variable
fix docs build error; add PCAResult to python-api.md
@hanbin973
Contributor Author

Great. Tests cover all parts of the code and are passing.

@jeromekelleher jeromekelleher marked this pull request as ready for review March 13, 2025 20:59
@jeromekelleher
Member

Happy to merge when you are @petrelharp

@petrelharp
Contributor

Let's merge, but there's been no testing so far of the "error bound"; I'd like to get at least a little scrutiny on that before releasing.

@petrelharp petrelharp merged commit aa31e06 into tskit-dev:main Mar 13, 2025
19 checks passed
@hanbin973
Contributor Author

I'll look at it overnight.
