
Compile PyTorch Bindings with float32 precision. #51

Closed
srxdev0619 opened this issue Feb 27, 2022 · 2 comments


@srxdev0619

Hi,

First up, thanks, this is great work! I was just wondering if there's a way to compile tiny-cuda-nn and its PyTorch bindings to use float32 by default? Thanks!

@srxdev0619 srxdev0619 changed the title Compile PyTorch with float32 precision. Compile PyTorch Bindings with float32 precision. Feb 27, 2022
@Tom94
Collaborator

Tom94 commented Feb 28, 2022

Hi there, you can go into include/tiny-cuda-nn/common.h and change

#define TCNN_HALF_PRECISION (!(TCNN_MIN_GPU_ARCH == 61 || TCNN_MIN_GPU_ARCH <= 52))

to

#define TCNN_HALF_PRECISION 0

Cheers!
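As a quick sanity check, the default condition above can be mirrored in Python to see which compute capabilities enable half precision by default (the function name and architecture comments are just for illustration, not part of the library):

```python
def tcnn_half_precision(min_gpu_arch: int) -> bool:
    # Mirrors the default TCNN_HALF_PRECISION macro: half precision is
    # enabled unless the minimum GPU architecture is exactly 61
    # (e.g. Pascal GTX 10xx) or 52 and below (Maxwell and older).
    return not (min_gpu_arch == 61 or min_gpu_arch <= 52)

print(tcnn_half_precision(86))  # True  - Ampere defaults to half precision
print(tcnn_half_precision(61))  # False - Pascal GTX-class falls back to float32
```

Hard-coding the macro to 0, as suggested, simply forces this to be false for every architecture, so all networks are built with float32 parameters.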

@Tom94 Tom94 closed this as completed Feb 28, 2022
@srxdev0619
Author

Worked perfectly, thanks!
