
Force use_cache=True in config only #497

Merged 1 commit into main on Sep 2, 2023
Conversation

@borzunov (Collaborator) commented on Sep 2, 2023

This reverts a part of #496 and instead overrides `use_cache` in `LlamaConfig`s only, so the correct value is visible to HF `.generate()` as well.
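For context, here is a minimal sketch of what pinning the flag at the config level can look like. The `CachedLlamaConfig` subclass below is illustrative only, not Petals' actual implementation:

```python
from transformers import LlamaConfig


class CachedLlamaConfig(LlamaConfig):
    """Illustrative subclass that forces use_cache=True at construction time."""

    def __init__(self, *args, **kwargs):
        # Override whatever the caller (or a saved config.json) passed, so
        # downstream consumers that read the config, such as HF .generate(),
        # see use_cache=True.
        kwargs["use_cache"] = True
        super().__init__(*args, **kwargs)
```

Setting the flag on the config, rather than only inside the forward pass, matters because `.generate()` consults the model config to decide whether to carry past key/value states between decoding steps.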

@borzunov merged commit b4d822a into main on Sep 2, 2023
@borzunov deleted the use-cache-2 branch on September 2, 2023, 21:16
d-popov pushed a commit to d-popov/petals-ai that referenced this pull request on Sep 6, 2023:
This reverts a part of bigscience-workshop#496 and instead overrides `use_cache` in `LlamaConfig`s only (so the correct value is visible by HF `.generate()` as well).