
Revisit permutation test methodology #102

Open
tsalo opened this issue Jun 2, 2022 · 1 comment
Labels: help wanted, question

Comments


tsalo commented Jun 2, 2022

In working on #101, I've come across a few things in the permutation test methods that confuse me.

First, the permutation tests loop over datasets and parallelize across permutations. This makes sense in a non-imaging context, where you won't have many parallel datasets, if any. In neuroimaging meta-analyses, however, you'll typically have far more parallel datasets (e.g., voxels) than permutations. Would it make sense to flip the approach in PyMARE, or would that cause too many problems for non-imaging meta-analyses?
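
To illustrate what I mean by "flipping", here's a rough sketch. This is not PyMARE's actual API; the function name, the array shapes, and the simple row-permutation null are all assumptions for illustration. The point is the loop structure: one Python-level loop over permutations, with each permutation handled as a single vectorized operation across all parallel datasets.

```python
import numpy as np

def permute_across_datasets(y, X, n_perm=1000, seed=0):
    """Hypothetical sketch, not PyMARE's API: loop over permutations,
    vectorize across parallel datasets (e.g., voxels).

    y : (n_studies, n_datasets) array of study-level estimates
    X : (n_studies, n_predictors) design matrix shared by all datasets
    """
    rng = np.random.default_rng(seed)
    n_studies = y.shape[0]
    pinv = np.linalg.pinv(X)            # (n_predictors, n_studies)
    obs = pinv @ y                      # observed coefficients for every dataset at once
    exceed = np.zeros(obs.shape)
    for _ in range(n_perm):             # the only Python-level loop
        idx = rng.permutation(n_studies)
        null = pinv @ y[idx]            # one matrix product covers all datasets
        exceed += np.abs(null) >= np.abs(obs)
    return (exceed + 1) / (n_perm + 1)  # two-sided permutation p-values per dataset
```

For a non-imaging meta-analysis with a single dataset, this structure degenerates gracefully (the "vectorized" step is just one column), so the cost would mostly be losing the existing parallelization across permutations.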

Second, I'm comparing PyMARE's approach to Nilearn's permuted_ols function. I've noticed a few steps in Nilearn's procedure that aren't in PyMARE's, including some preprocessing of the target_vars (y), tested_vars (X), and confounding_vars (also X). Should we (1) adopt that preprocessing and/or (2) treat confounding variables differently from tested variables?
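
For reference, my understanding of the kind of preprocessing I mean is roughly the following. This is a hand-rolled sketch, not Nilearn's code, and whether it matches what permuted_ols actually does is part of what needs checking: the confounds get regressed out of the tested variables before permuting, so that shuffling only disturbs the effect of interest, and columns are standardized afterward.

```python
import numpy as np

def residualize_tested_vars(tested_vars, confounding_vars):
    """Rough sketch of the preprocessing in question, not Nilearn's code:
    regress the confounding variables out of the tested variables, then
    standardize the residualized columns.

    tested_vars      : (n_studies, n_tested) array
    confounding_vars : (n_studies, n_confounds) array
    """
    beta, *_ = np.linalg.lstsq(confounding_vars, tested_vars, rcond=None)
    resid = tested_vars - confounding_vars @ beta  # orthogonal to confounds
    resid -= resid.mean(axis=0)                    # center each column
    return resid / resid.std(axis=0)               # unit-variance columns
```

If we adopted something like this, it would also answer question (2), since confounds would then be handled by residualization rather than being permuted alongside the tested variables.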

tsalo added the help wanted and question labels on Jun 2, 2022

tsalo commented Jun 9, 2022
