This repository has been archived by the owner on Sep 30, 2023. It is now read-only.
The model can sometimes take several minutes to complete for long files. This was observed by @dabdine on macOS but seems to be a general issue. I suspect it is caused by the model having to process every token in a very long prompt before it can start generating new predictions.
A possible trade-off would be to artificially limit the prompt to the last N words (with N configurable). I imagine the model would still make useful recommendations without the whole file as context.
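A minimal sketch of that truncation idea, assuming a hypothetical `truncate_prompt` helper with a configurable `max_words` parameter (neither exists in this project):

```python
def truncate_prompt(prompt: str, max_words: int = 1024) -> str:
    """Keep only the last `max_words` whitespace-separated words of the prompt.

    Hypothetical sketch: `max_words` would be the configurable N from the
    proposal above. Note that splitting on whitespace and rejoining with
    single spaces discards the original newlines and indentation, which may
    matter for code files; a production version might truncate by lines or
    tokens instead.
    """
    words = prompt.split()
    if len(words) <= max_words:
        return prompt
    return " ".join(words[-max_words:])
```

For example, `truncate_prompt("a b c d", max_words=2)` returns `"c d"`, while a prompt already under the limit is passed through unchanged.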