
multi-line code completions for the inline assistant seem to write tokens & lines of code in reverse order #25788

Open
kurtbuilds opened this issue Feb 27, 2025 · 9 comments

Comments

@kurtbuilds
Contributor

Summary

This is easiest to show with a screenshot:

[Screenshot: inline assistant output with the generated tokens in reverse order]

I'm working on an application that has a notion of "dispute" objects, and "messages" that comprise back and forth on the dispute.

I asked the inline assistant this prompt:

create a serde struct that contains a dispute and a message, and then have this function return that struct

and the code it spat out was reasonable, except that the tokens and lines appeared in reverse order. This is what happens visually as well: as LLM output is received, each new token is inserted before the previously received tokens instead of after them.


This doesn't always happen, so something about the index or data structure tracking file positions is somehow getting corrupted, causing insertions to land as described. For comparison, opening a random other file in the codebase and using the same prompt gives saner results:

[Screenshot: the same prompt in a different file produces correctly ordered output]

Zed Version and System Specs

Zed: v0.175.6 (Zed)
OS: macOS 15.3.1
Memory: 128 GiB
Architecture: aarch64

@kurtbuilds
Contributor Author

Here's a screen share of it happening live:

https://us06web.zoom.us/clips/share/A2F3MRZiMm00YTdmOVNUdVg0d1ZpVWl3Qk13AQ

If I had to guess, it's caused by the disappearing just before the edit (1 second into the clip). I think the file is getting formatted on focus change, so the line-number position index gets rewritten by the formatting, but the AI insertion still uses the old index.
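To illustrate the stale-index hypothesis, here is a minimal sketch (not Zed's actual code; all names are hypothetical) of how inserting each streamed chunk at a fixed, stale anchor, rather than at a cursor that advances past previous chunks, produces exactly this reversed output:

```python
def insert_at_fixed_anchor(buffer: str, anchor: int, chunks) -> str:
    # Buggy behavior: every chunk is inserted at the same stale offset,
    # so each new chunk lands *before* the chunks already written.
    for chunk in chunks:
        buffer = buffer[:anchor] + chunk + buffer[anchor:]
    return buffer

def insert_with_advancing_cursor(buffer: str, anchor: int, chunks) -> str:
    # Correct behavior: advance the insertion point past each chunk,
    # so output accumulates in the order it was streamed.
    cursor = anchor
    for chunk in chunks:
        buffer = buffer[:cursor] + chunk + buffer[cursor:]
        cursor += len(chunk)
    return buffer

chunks = ["line 1\n", "line 2\n", "line 3\n"]
print(insert_at_fixed_anchor("", 0, chunks))        # lines come out 3, 2, 1
print(insert_with_advancing_cursor("", 0, chunks))  # lines come out 1, 2, 3
```

If formatting-on-focus-change rewrites the buffer after the anchor is captured, the anchor would effectively behave like the stale one above.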

@probably-neb
Contributor

Hmm, it seems I may have prematurely closed this one a few weeks ago in #25309. I'm going to leave this issue open rather than mark it as a duplicate and reopen the previous one, since this one has more detailed reproduction steps. Thanks for reporting!

@probably-neb
Contributor

Also, copying over from the other issue:

I am almost 100% certain this issue is not related to the LLM, because I watched Zed insert the output into the buffer line by line from bottom to top. The existing text is pushed down line by line during generation.

Write CRUD for this module
("read/{item_id}")@router.get
 > Write CRUD for this module
):_id: strd_item(itemasync def rea
("read/{item_id}")@router.get

...

Originally posted by @failable in #25309

Would you mind confirming that you are also seeing it insert line by line in reverse @kurtbuilds?

@probably-neb
Contributor

WORKAROUND

When this issue occurs, select the generated output lines and run editor: reverse lines to produce the correct line ordering.
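For reference, the effect of that command can be sketched as follows (a plain Python equivalent, assuming the generation was reversed strictly line by line; it will not fix the within-line token reversal kurtbuilds describes below):

```python
def reverse_lines(text: str) -> str:
    # Equivalent of running "editor: reverse lines" on a selection:
    # reverse the order of the lines, keeping each line's content intact.
    return "\n".join(reversed(text.split("\n")))

garbled = "}\n    return item\ndef read_item(item_id):"
print(reverse_lines(garbled))
# def read_item(item_id):
#     return item
# }
```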

@kurtbuilds
Contributor Author

In my experience it's not reversed line by line, but rather in what could be "LLM token by LLM token" order; either way, the tokens are written in reverse. The screenshots and recording I shared have an example.

My workaround was turning off auto_save on focus change.
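For anyone wanting to try the same workaround: the setting involved is presumably Zed's autosave option (the setting name and values below are taken from Zed's settings documentation; treat the exact spelling as an assumption). Disabling it in settings.json looks like:

```json
// settings.json — replaces "autosave": "on_focus_change" with:
{
  "autosave": "off"
}
```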

@probably-neb
Contributor

Oh, very interesting. Are you saying that once you disabled auto save on focus change the issue stopped happening? @failable can you confirm whether you had this setting enabled when the issue was happening for you as well?

@failable

I have never used auto save. 🙂

@kurtbuilds
Contributor Author

Yeah, turning it off made this issue disappear for me.

@failable

I mean that even though I have never used auto save, I still have this issue.
