multi-line code completions for the inline assistant seem to write tokens & lines of code in reverse order #25788
Here's a screen share of it happening live: https://us06web.zoom.us/clips/share/A2F3MRZiMm00YTdmOVNUdVg0d1ZpVWl3Qk13AQ If I had to guess, it's caused by the disappearing just before the edit (1 second into the clip). I think the file is getting formatted on focus change, so the line number position index gets rewritten by the formatting, but the AI insertion still uses the old index.
Hmm, seems I may have prematurely closed this one a few weeks ago: #25309. I'm going to leave this issue open rather than mark it as a duplicate and reopen the previous one, as this one has more detailed reproduction steps. Thanks for reporting!
Also, copying over from the other issue:
Originally posted by @failable in #25309. Would you mind confirming that you are also seeing it insert line by line in reverse @kurtbuilds?
WORKAROUND: When this issue occurs, select the generated output lines and run
In my experience it's not line by line, but reversed in what could be "LLM token by LLM token"; either way, yes, they are written in reverse order. The screenshots + recording I shared have an example. My workaround was turning off auto_save on focus change.
Oh, very interesting. Are you saying that once you disabled auto save on focus change the issue stopped happening? @failable can you confirm whether you had this setting enabled when the issue was happening for you as well? |
I have never used auto save. 🙂
Yeah, turning it off made this issue disappear for me. |
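For anyone else hitting this, the reported workaround maps to the autosave setting in Zed's settings.json. The exact key and accepted values below are my best recollection of Zed's configuration and should be checked against the current docs; switching from focus-change autosave to off is what reportedly made the bug disappear:

```json
{
  "autosave": "off"
}
```

The problematic configuration would be `"autosave": "on_focus_change"`, which triggers the format-on-save that appears to invalidate the assistant's insertion index.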
I mean, even though I have never used auto save, I still have this issue.
Summary
This is easiest to show with a screenshot:
I'm working on an application that has a notion of "dispute" objects, and "messages" that comprise back and forth on the dispute.
I asked the inline assistant this prompt:
and the code it spat out is reasonable, except that it emitted the tokens and lines in reverse order. This is what happens visually as well: as LLM output is received, each token is prepended before the previous tokens instead of appended after them.
This doesn't always happen, so something about the index or data structure tracking positions in the file is somehow getting corrupted, causing the insertion to happen as described. As a comparison, opening a random other file in the codebase and using the same prompt gives more sane results:
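The symptom described above (each token landing before the previous ones) is exactly what you get if a streaming insertion keeps reusing a stale position instead of a position recomputed after each edit. This is not Zed's actual code, just a minimal Rust sketch of the hypothesized failure mode, where `stream_insert` is a made-up helper name:

```rust
// Simulate streaming LLM chunks into a buffer.
// `advance = true`: the insertion point moves past each inserted chunk.
// `advance = false`: the insertion point is stale (never updated), so
// every later chunk lands *before* the earlier ones: reverse order.
fn stream_insert(tokens: &[&str], advance: bool) -> String {
    let mut buf = String::new();
    let mut pos = 0;
    for t in tokens {
        buf.insert_str(pos, t);
        if advance {
            pos += t.len(); // recompute position after each edit
        }
    }
    buf
}

fn main() {
    let tokens = ["fn ", "add(", "a: i32", ") {}"];
    println!("{}", stream_insert(&tokens, true));  // fn add(a: i32) {}
    println!("{}", stream_insert(&tokens, false)); // ) {}a: i32add(fn
}
```

If format-on-save rewrites the buffer mid-stream and the assistant's anchor is not remapped to the formatted text, it degrades into something like the `advance = false` case.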
Zed Version and System Specs
Zed: v0.175.6 (Zed)
OS: macOS 15.3.1
Memory: 128 GiB
Architecture: aarch64