Terminal scrolling regression in application with alternative screen buffer #204769
Clarification: the issue occurs only when the application using the alt screen buffer does not use mouse events for scrolling. When an application uses mouse events, everything seems fine. So, while running this has the issue:

[…]

This does not:

[…]
(on […]) I can confirm both the original issue, and that […]. I suppose the difference is that with […]. Whereas by default (mouse scroll events turning into down/up key presses), the inputs still get collected while the slow-scrolling terminal is taking its time, and can easily pile up.

First, less reliable, benchmarking attempt: using

```shell
$ date -Ins; seq -w 1000 | less --mouse -e; date -Ins
2024-03-05T00:08:36,110989866+02:00
2024-03-05T00:08:43,803949975+02:00
```

That's almost 8 seconds, during which I had to keep flinging my scroll wheel. By comparison, Konsole (my main terminal outside of VSCode), with the same setup, can get as fast as […].

That's ~160ms, or 50x faster than VSCode's Terminal. In part that's because I had already gotten the scroll wheel up to speed before pressing Enter, but that was also the case for the VSCode run. Initially I thought reducing the VSCode Terminal height was making it faster, but I got ~8 seconds again after shrinking the height to the minimum allowed (6 lines), so it likely only feels faster. Sadly, at some point I stopped being able to reproduce those numbers, so I'm less sure of them now.

However, the mouse/human factor can be automated away if we assume that the problem has to do with scrolling performance, and we start benchmarking that directly:

```shell
$ date -Ins; tput smcup; seq -w 100000 | sed 's/$/\x1b[10S/'; tput rmcup; date -Ins
2024-03-05T00:22:50,639432483+02:00
2024-03-05T00:23:00,216516418+02:00
```

So a bit over 10 seconds for VSCode's Terminal. My Konsole seems to handle that in about […]. Shrinking the VSCode Terminal's width and/or height doesn't appear to have an effect on timings. However, looking at an old version of VSCode I could easily get (1.78.2), it's not any faster according to the automated benchmark, it just seems to somehow combine […].

So this regression may have been caused by trying to make the terminal look better, without having the throughput to actually handle all the necessary updates as fast as other implementations.

EDIT: just realized all my weird […]
This is confusing: I tried coming up with a way to benchmark scrollwheel speed (it starts timing after the first 3-byte sequence […]):

```shell
$ bash -c 'tput smcup; saved="$(stty -g)"; stty cbreak -echo; dd bs=3 count=1 of=/dev/null; report="$(dd bs=3 count=1000 of=/dev/null 2>&1)"; stty "$saved"; tput rmcup; echo "$report" >&2'
1000+0 records in
1000+0 records out
3000 bytes (3.0 kB, 2.9 KiB) copied, 1.06856 s, 2.8 kB/s
$ bash -c 'tput smcup; saved="$(stty -g)"; stty cbreak -echo; dd bs=3 count=1 of=/dev/null; report="$(dd bs=3 count=1000 of=/dev/null 2>&1)"; stty "$saved"; tput rmcup; echo "$report" >&2'
dd: warning: partial read (2 bytes); suggest iflag=fullblock
941+59 records in
941+59 records out
2922 bytes (2.9 kB, 2.9 KiB) copied, 0.50047 s, 5.8 kB/s
$ bash -c 'tput smcup; saved="$(stty -g)"; stty cbreak -echo; dd bs=3 count=1 of=/dev/null; report="$(dd bs=3 count=1000 of=/dev/null 2>&1)"; stty "$saved"; tput rmcup; echo "$report" >&2'
1000+0 records in
1000+0 records out
3000 bytes (3.0 kB, 2.9 KiB) copied, 5.16949 s, 0.6 kB/s
```

What happened? Was VSCode dropping input events before? (The partial read itself is a bit suspicious.) This change seems independent of scrolling performance, because nothing should be written during the test, and in fact the "alternative screen buffer" feature is only used to enable inputting arrow keys with the mouse scrollwheel.

EDIT: by setting the log level to Debug, it seems […]
Oh no, this is a fix for a cursed bug: d92c1a8. By setting the log level to Trace, I could see this:

[…]
(and a lot more single-escape-sequence trace logs)

This explains everything I've been seeing (and why it's unusably bad for me): for most smooth scrolling inputs, the 5ms delay can easily slow everything down by 5x, and e.g. scrolling for 10 seconds could take 50 seconds to finish writing.

Write queue logic: vscode/src/vs/platform/terminal/node/terminalProcess.ts, lines 484 to 514 in 8aca9a5
Chunking logic: vscode/src/vs/platform/terminal/common/terminalProcess.ts Lines 81 to 107 in 8aca9a5
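To put a rough number on that delay, here's a back-of-envelope sketch (my own illustration, not code from the vscode repository; the event rate is an assumption):

```typescript
// Rough drain-time estimate for a write path that sleeps `delayMs`
// between chunks (5 ms per the trace logs above). All names here are
// illustrative, not taken from the vscode codebase.
function estimatedDrainMs(totalBytes: number, bytesPerChunk: number, delayMs: number = 5): number {
	const chunks = Math.ceil(totalBytes / bytesPerChunk);
	return chunks * delayMs;
}

// If smooth scrolling emits ~1000 three-byte sequences per second and
// chunking degrades to one sequence per chunk, 10 s of scrolling queues
// 10000 chunks:
console.log(estimatedDrainMs(10000 * 3, 3)); // 50000 ms, i.e. the 50 s above
```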
@Tyriar I'm amazed it took this long to become a problem; one escape sequence every 5ms is really bad throughput. I guess it's just hard to hit with a keyboard, only with a scroll wheel in pager-managed altbufs (or a TUI mouse cursor?). I would suggest decreasing the delay to 1ms, but there is another more immediate problem/solution: the chunking stops just before the 2nd […].

EDIT: honestly the simplest fix might be to always return one chunk if […]
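For a concrete feel of the failure mode, here's a simplified stand-alone model of split-before-the-next-ESC chunking (my own reconstruction, with an assumed 50-char chunk size and 5ms delay; not the actual vscode implementation):

```typescript
// Simplified model of the chunk-then-delay write path. Assumptions:
// 50-char max chunk size, chunks end before any non-initial ESC,
// 5 ms pause between chunks.
const MAX_CHUNK = 50;
const DELAY_MS = 5;

function chunkAtFirstEsc(data: string): string[] {
	const chunks: string[] = [];
	let i = 0;
	while (i < data.length) {
		let chunk = data.substring(i, i + MAX_CHUNK);
		// Keep a leading ESC, but split before any later one:
		const esc = chunk.indexOf('\x1b', 1);
		if (esc >= 1) {
			chunk = chunk.substring(0, esc);
		}
		chunks.push(chunk);
		i += chunk.length;
	}
	return chunks;
}

// 1000 scroll-wheel "down arrow" sequences (3 bytes each):
const scrollInput = '\x1b[B'.repeat(1000);
const scrollChunks = chunkAtFirstEsc(scrollInput);
console.log(scrollChunks.length);            // 1000 -- one sequence per chunk
console.log(scrollChunks.length * DELAY_MS); // 5000 -- ~5 s of pure delay
```

Escape-sequence-dense input degenerates to one 3-byte chunk per write, so throughput is capped by the delay, not by the data size.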
I was able to test changing the timeout to `0`:

```shell
env -C $(mktemp -d) $(nix-build --no-out-link -E "with import <nixpkgs> {}; vscode.overrideAttrs (orig: { postFixup = orig.postFixup + ''
  sed -E -i 's/(setTimeout)\((\(\)=>\{[^{}]+\}),5\)/\1(\2,0)/' \$out/lib/vscode/resources/app/out/vs/platform/terminal/node/ptyHostMain.js
''; })")/bin/code --user-data-dir ./vscode-data --disable-extensions
```

(i.e. patching the single use of […]) And medium-speed scrolling (i.e. the kind of […])

```shell
env -C $(mktemp -d) $(nix-build --no-out-link -E "with import <nixpkgs> {}; vscode.overrideAttrs (orig: { postFixup = orig.postFixup + ''
  sed -E -i 's/(\+1\]===.).x1B(.)/\1\2/' \$out/lib/vscode/resources/app/out/vs/platform/terminal/node/ptyHostMain.js
''; })")/bin/code --user-data-dir ./vscode-data --disable-extensions
```

This second attempt disables the special […] handling. And testing this version, I can't make it fall behind noticeably, even when trying really hard, so it always feels real-time (even if the hacky chunking workaround means it could theoretically fall behind 5-10ms etc.).
@eddyb thanks a bunch for looking into this 👏. Do you want to make a PR with your condition so you get credit?
Looks like this specifically was introduced relatively recently: #201157

I think we can also disable the chunking logic outright unless we're on Windows here: vscode/src/vs/platform/terminal/node/terminalProcess.ts, lines 443 to 445 in 465e402

I'm pretty sure it was only to work around a weird Windows issue where text was truncated randomly. The chunk logic improvement in #201157 was a general improvement to that workaround, which was only needed for Windows to avoid cutting sequences in half.
Actually, looking into it more, I apparently remembered wrong: the chunking was required on macOS 🙁 #38137. Any optimizations to the chunking algorithm are definitely welcome.
This seems to be fixed in the latest release, version 1.92.0! I'll leave it up to a maintainer to close, since it would be good to identify the code change that contributed to the fix, but regardless, the issue with scrolling after input has stopped is no longer present.
@jaminthorns brand new scroll bar implementation in v1.92 so it makes sense this behavior would have changed 👍 https://code.visualstudio.com/updates/v1_92#_new-scroll-bar |
Sorry, I've been swamped and only originally looked into this as it was increasingly impacting my work at the time; I didn't get to spend time on it after I found the workaround.
Not 100% sure I understand the connection, but I guess one way they could interact is that the scroll wheel could be ignoring hardware events while previous events' chunks have not been sent yet, instead of accumulating them at the same time? The new behavior in e.g. […].

The benchmark I described in #204769 (comment) is still disappointing in VSCode (in fact it takes 16.6s for me now, instead of 5s, whereas for Konsole I can take it down to 2ms depending on the scrollwheel speed). And this is with my chunking patch applied still, which is a bit surprising.

I had to go back to confirm, but that's 1.91.1 doing 0.4s somewhat consistently (and you can tell it has my patch because the escapes get truncated/corrupted, which is why the chunking is needed AFAIK, but ideally it should allow keeping multiple escape sequences in the same chunk). Meanwhile 1.92.0, both with and without chunking patches, does 16.6s, and never seems to show any truncation/corruption of escapes, even with the chunking patch.

Anyway, the summary is that the jarring "scroll event buffering" is gone, but whatever changes were made result in a "lockstep" slowdown instead - it's not as bad, and e.g. […]
@Tyriar wait, I made the mistake again of not tracking down the relevant changes first; it looks like you just removed the scroll amount? xtermjs/xterm.js@721d483

This explains what we're seeing, the […]. Was this only done because the correct value was hard to compute?
@eddyb I pulled that out since the old viewport is gone and with it the ability to translate events into lines scrolled. Created xtermjs/xterm.js#5123 to bring that back |
Makes sense, I was hoping it was something like that.
Thanks! As a reference, if that does happen, here's a quick chunking improvement:

```typescript
// equivalent to today's logic, but iterating whole chunks not chars:
// (could even use data = data.substring(chunk.length); and remove chunkStartIndex)
export function chunkInput(data: string): string[] {
	const chunks: string[] = [];
	let chunkStartIndex = 0;
	while (chunkStartIndex < data.length) {
		let chunk = data.substring(chunkStartIndex, chunkStartIndex + Constants.WriteMaxChunkSize);
		// End the chunk before the first non-initial ESC, to avoid splitting its escape sequence
		const firstEscInChunk = chunk.indexOf('\x1b', 1);
		if (firstEscInChunk >= 1) {
			chunk = chunk.substring(0, firstEscInChunk);
		}
		chunks.push(chunk);
		chunkStartIndex += chunk.length;
	}
	return chunks;
}
```

And the same as above, but "last" instead of "first":

```typescript
export function chunkInput(data: string): string[] {
	const chunks: string[] = [];
	let chunkStartIndex = 0;
	while (chunkStartIndex < data.length) {
		let chunk = data.substring(chunkStartIndex, chunkStartIndex + Constants.WriteMaxChunkSize);
		// End the chunk before the last non-initial ESC, to avoid splitting its escape sequence
		const lastEscInChunk = chunk.lastIndexOf('\x1b');
		if (lastEscInChunk >= 1) {
			chunk = chunk.substring(0, lastEscInChunk);
		}
		chunks.push(chunk);
		chunkStartIndex += chunk.length;
	}
	return chunks;
}
```

There's a good chance not iterating characters might also improve performance in edge cases (but most likely it will just slightly reduce CPU usage; there are other things linear over the data with larger constant factors AFAIK).
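To illustrate, here's the "last ESC" variant as a self-contained sketch with a synthetic scroll-input check (the real `Constants.WriteMaxChunkSize` is replaced by an assumed local constant of 50; not the exact vscode code):

```typescript
// Self-contained version of the "last non-initial ESC" chunker above.
// WriteMaxChunkSize = 50 is an assumption for this demo.
const WriteMaxChunkSize = 50;

function chunkInput(data: string): string[] {
	const chunks: string[] = [];
	let chunkStartIndex = 0;
	while (chunkStartIndex < data.length) {
		let chunk = data.substring(chunkStartIndex, chunkStartIndex + WriteMaxChunkSize);
		// End the chunk before the last non-initial ESC, to avoid splitting its escape sequence
		const lastEscInChunk = chunk.lastIndexOf('\x1b');
		if (lastEscInChunk >= 1) {
			chunk = chunk.substring(0, lastEscInChunk);
		}
		chunks.push(chunk);
		chunkStartIndex += chunk.length;
	}
	return chunks;
}

// 100 scroll "down" sequences = 300 chars. Instead of 100 one-sequence
// chunks, we get mostly 48-char chunks (16 whole sequences each):
const demoChunks = chunkInput('\x1b[B'.repeat(100));
console.log(demoChunks.length);                          // 8
console.log(demoChunks.every(c => c.length % 3 === 0));  // true -- no sequence split
```

Grouping up to the last ESC keeps whole sequences together, so the per-chunk delay is paid once per ~16 sequences instead of once per sequence.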
@eddyb did you want to submit a PR with the chunk improvements? |
Does this issue occur when all extensions are disabled?: Yes
Version: 1.86.1 (Universal)
Commit: 31c37ee
Date: 2024-02-07T09:09:01.236Z
Electron: 27.2.3
ElectronBuildId: 26495564
Chromium: 118.0.5993.159
Node.js: 18.17.1
V8: 11.8.172.18-electron.0
OS: Darwin arm64 23.2.0
Steps to Reproduce: […] (e.g. `less`).

I noticed this regression in the January update (1.86). It's very noticeable when flicking a MacBook's trackpad or flinging a scroll wheel that supports fast scroll (like my Logitech mouse). Here's a video that demonstrates the issue:
IMG_0684_converted.mp4
VS Code is on the left, and WezTerm (which doesn't have the issue and behaves similarly to VS Code pre-1.86) is on the right.