

Change workspace dir #566

Merged
Merged 1 commit into flashinfer-ai:main on Oct 29, 2024

Conversation

abcdabcd987 (Member)

In my development setup, I have different machines with different GPUs. They share the home directory on a network filesystem. When I switch between machines, since the JIT compilation flags change, I'll have to recompile kernels every time.

One solution is to specify the same TORCH_CUDA_ARCH_LIST every time. However, I keep forgetting that.

Another solution, as proposed in this PR, is to put each arch list in its own cache directory, so that kernels compiled on one machine are not overwritten by another.
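
A minimal sketch of the idea, not the PR's actual implementation: derive the arch list from `TORCH_CUDA_ARCH_LIST` (or from the GPUs visible on the current machine) and key the JIT cache directory on it. The function names (`current_arch_list`, `arch_keyed_cache_dir`) and the `~/.cache/flashinfer` path are illustrative assumptions.

```python
# Hypothetical sketch of an arch-keyed JIT cache directory.
# Function names and the cache path are illustrative, not FlashInfer's actual API.
import os
import pathlib
import torch


def current_arch_list() -> str:
    """Return the arch list, preferring TORCH_CUDA_ARCH_LIST if it is set."""
    env = os.environ.get("TORCH_CUDA_ARCH_LIST")
    if env:
        return env
    # Otherwise derive it from the GPUs visible on this machine.
    caps = sorted(
        {torch.cuda.get_device_capability(i) for i in range(torch.cuda.device_count())}
    )
    return ";".join(f"{major}.{minor}" for major, minor in caps)


def arch_keyed_cache_dir() -> pathlib.Path:
    """Place compiled kernels in a per-arch-list subdirectory so machines with
    different GPUs sharing the same $HOME do not trigger recompilation."""
    key = current_arch_list().replace(";", "_").replace(".", "")
    cache_dir = pathlib.Path.home() / ".cache" / "flashinfer" / key
    cache_dir.mkdir(parents=True, exist_ok=True)
    return cache_dir


if __name__ == "__main__":
    print(arch_keyed_cache_dir())
```

With a layout like this, each machine writes to (and reads from) its own subdirectory, so switching machines no longer invalidates the shared cache.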

@yzh119 yzh119 merged commit cdc12c3 into flashinfer-ai:main Oct 29, 2024