Make Magma optional for cuda builds? #275
If you build with magma, you need it at runtime even if you don't use magma.
Ok, I have some good news. The use of Magma is entirely limited to [...]. I think we can split it into a separate subpackage and build Magma and non-Magma variants to choose from. On the minus side, we probably can't avoid building most of libtorch twice, though it could be possible to use ccache to minimize the cost of doing that.
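The ccache idea could look roughly like this in a build script. This is only a sketch: the variables are CMake's standard compiler-launcher hooks, not something the recipe already sets, and it assumes ccache is available in the build environment.

```shell
# Sketch: route both libtorch builds through ccache so the second (Magma /
# non-Magma) build reuses compiler output from the first one.
export CMAKE_C_COMPILER_LAUNCHER=ccache
export CMAKE_CXX_COMPILER_LAUNCHER=ccache
export CMAKE_CUDA_COMPILER_LAUNCHER=ccache
# Make cache hits independent of the per-variant build directory path.
export CCACHE_BASEDIR="$PWD"
```

Whether the cache actually hits across the two variants depends on how much of the compile command line (defines, include paths) differs between them.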
Isn't it used by [...]?
No, it's dynamically loaded: [...]
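The distinction matters because a dynamically loaded library is only needed at the moment the load is attempted, not at install time. A minimal sketch of that pattern, using ctypes and libm as a stand-in for libmagma (which we assume is not installed here):

```python
import ctypes
import ctypes.util

def try_load(name):
    """Return a CDLL handle for library `name`, or None if it is absent.

    Nothing fails at import time; the cost of a missing library is only
    paid if and when we actually try to load it.
    """
    path = ctypes.util.find_library(name)
    if path is None:
        return None
    return ctypes.CDLL(path)

libm = try_load("m")          # present on virtually every Unix system
libmagma = try_load("magma")  # typically None unless magma is installed

if libm is not None:
    libm.sqrt.restype = ctypes.c_double
    libm.sqrt.argtypes = [ctypes.c_double]
    print(libm.sqrt(9.0))  # 3.0
```

PyTorch's own mechanism differs in detail, but the idea is the same: if the magma code path is never taken, the shared library never has to exist.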
With the libmagma fix, the size of libmagma is significantly reduced now. From @rgommers:
With 385 MB for libmagma, does it still make sense to make magma optional with an opt-out? If that size is tolerable, I'd rather we not make magma optional.
How do we reconcile the 385 MB with the 600 MB you stated before?
Re the size difference of 385 MB vs 600 MB: do you have CUDA 11 locally perhaps, @isuruf? I posted the exact package hash I measured at #298 (comment), so this is hopefully not too hard to clarify, and it's probably worth doing after all the effort spent on this issue.
Assuming 385 MB is correct, I tend to agree that making the recipe more complex, accepting some build-time increases (I assume), and adding an extra user-facing variable is likely not worth it anymore. And not merging it is perfectly fine imho; this whole exercise has already been valuable either way.
It should be 600 MB. We missed a few SMs in the last build. See conda-forge/libmagma-feedstock#23
Ah, that is unfortunate. It probably doesn't change the conclusion though. My gut feeling is that the 2 GB was clearly worth it, the 385 MB was not, and somewhere in the middle is a gray zone, with 600 MB on the lower end of that gray zone. It can always be revisited later anyway, while adding now and removing later is harder (it's a user-facing knob, so bc-breaking to remove). For now I suggest being happy with the achieved gains, and not merging.
I agree. It's better not to expose the option to users than to have it temporarily and then remove it. If we change this in the future, the work has been done; it will remain in the commit history and we can reuse it.
isuruf@209b1cd should also give a small reduction in binary sizes. (I haven't measured it yet.)
We should also ignore the [...]
I'll prepare that locally but I don't want to restart the CI again. |
Pointed out by @isuruf in conda-forge#275 (comment)
Comment:
A conda-forge environment with nothing but pytorch for CUDA in it currently weighs in at 7.2 GB, which is perceived to be rather on the heavy side of things.
Looking at potential for slimming things down, libmagma at ~2GB looks like a good candidate.
The pytorch docs seem to suggest that libmagma is used as an alternative to cusolver, which is included anyway at a much more modest 150 MB.
In the past, magma was significantly faster than cusolver, as demonstrated by [1]. However, a recent 2024 paper by the magma authors [2] shows that cusolver has made progress and is now faster for some of the most important problems.
Magma still offers significant performance benefits for certain workloads, but given that pytorch has the ability to switch between the available libraries, we could make magma an optional dependency, i.e. merely include it in `run_constrained` and leave it up to the user to choose space or performance optimization based on their use-case.
Is this feasible, or am I missing something about the use of magma in pytorch?
Do you think this is desirable?
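For concreteness, the `run_constrained` idea could be sketched in the recipe's meta.yaml roughly as follows. This is a hypothetical fragment, not taken from the actual feedstock; the pin is illustrative only.

```yaml
requirements:
  run:
    # libmagma deliberately absent from run requirements,
    # so it is not pulled in by default
  run_constrained:
    # if a user installs libmagma themselves, constrain it
    # to a version pytorch was built against (hypothetical pin)
    - libmagma >=2.6
```

With `run_constrained`, the solver enforces the version bound only when the package is present in the environment, which is exactly the opt-in behavior described above.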
[1] S. Abdelfattah, A. Haidar, S. Tomov and J. Dongarra, "Analysis and Design Techniques towards High-Performance and Energy-Efficient Dense Linear Solvers on GPUs," IEEE Transactions on Parallel and Distributed Systems, vol. 29, no. 12, pp. 2700-2712, Dec. 2018. doi:10.1109/TPDS.2018.2842785
[2] A. Abdelfattah, N. Beams, R. Carson, et al., "MAGMA: Enabling exascale performance with accelerated BLAS and LAPACK for diverse GPU architectures," The International Journal of High Performance Computing Applications, vol. 38, no. 5, pp. 468-490, 2024. doi:10.1177/10943420241261960