Retry querying string completions from the inferred type if the original completions request doesn't return anything #52875


Merged

Conversation

Commit from Andarist (Contributor): …nal completions request doesn't return anything
@typescript-bot added the For Uncommitted Bug label (PR for untriaged, rejected, closed or missing bug) on Feb 20, 2023
@typescript-bot (Collaborator):

This PR doesn't have any linked issues. Please open an issue that references this PR. From there we can discuss and prioritise.

```diff
@@ -392,7 +392,7 @@ function getStringLiteralCompletionEntries(sourceFile: SourceFile, node: StringL
         // });
         return stringLiteralCompletionsForObjectLiteral(typeChecker, parent.parent);
     }
-    return fromContextualType();
+    return fromContextualType() || fromContextualType(ContextFlags.None);
```
Andarist (Contributor, Author):

It's unfortunate that this involves querying the contextual type twice (with different flags). However, it's really hard to distinguish the cases in which the completions should come from the constraint (which the first request can return, since it blocks inference) from the ones in which they should come from the inferred type (the second request, without the inference block).

I assume here that this second request is somewhat cheap, since it should often just look through the existing contextual type. Perhaps there is a way to limit the situations in which this second request is even made, but I wasn't sure what the best condition for that would be. Essentially, I only want to make this request when the node is within a potential inference context (nested in a call-like expression).
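To make the two cases concrete, here's a minimal sketch (the `f`/`g` declarations are hypothetical, not from this PR's tests): for `f`, the inference-blocked request already yields completions from the constraint, while for `g` the constraint is plain `string`, so only the fallback request, which runs real inference, can surface the literal keys.

```ts
// Constraint-based case: blocking inference keeps T un-fixed, so the
// contextual type at the completion position is the constraint "a" | "b".
declare function f<T extends "a" | "b">(value: T): void;
f(""); // first request completes: "a" | "b"

// Inference-based case: the constraint is just `string`, so the blocked
// request yields nothing; the ContextFlags.None fallback infers T from
// `states` and offers its keys.
declare function g<T extends string>(config: {
  states: Record<T, unknown>;
  initial: T;
}): void;
g({ states: { up: 0, down: 0 }, initial: "" }); // fallback completes: "up" | "down"
```

(The type errors on the empty strings are expected; they stand in for in-progress edits at the completion position.)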

```diff
@@ -16,5 +16,5 @@
 verify.completions({ marker: "1", includes: ["a", "b", "c"] });
 verify.completions({ marker: "2", includes: ["0", "1", "2"], isNewIdentifierLocation: true });
 verify.completions({ marker: "3", includes: ["a", "b", "c"] });
-verify.completions({ marker: "4", excludes: ["a", "b", "c"] });
+verify.completions({ marker: "4", exact: [] });
```
Andarist (Contributor, Author):

This case wasn't actually returning any meaningful completions (see the playground here). It's just that an empty actualCompletions was received here, whereas with the change in this PR no completions are returned at all. I strongly believe that both are OK; it's just that excludes in the verify.completions call implies that there should be some completions result (even an empty one). It's purely test setup/helper logic and not something that affects consumers.

```ts
//// },
//// },
//// on: {
////   EV: "/*ts*/",
```
Member:

In this test, which node gets marked as an “inference-blocked source”?

Andarist (Contributor, Author):

IIRC it's everything from the marker up to the call-like expression.

A way smaller repro is this:
working TS@5.0.0-dev.20221220 playground
broken TS@5.0.0-dev.20221221 playground

The problem here is that these completions depend on the argument in which they are contained.
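A hypothetical reduction of that shape (`createMachine` and its config type are invented for illustration, not the linked playground): because the whole ancestor chain of the marker, including the outer object literal, is treated as an inference-blocked source, `states` no longer contributes to inference and the blocked request only sees the `string` constraint.

```ts
declare function createMachine<TState extends string>(config: {
  states: Record<TState, unknown>;
  on: Record<string, TState>;
}): void;

createMachine({
  states: { green: {}, red: {} },
  on: {
    // Completions wanted here: "green" | "red". They only show up in the
    // ContextFlags.None fallback pass, which infers TState from `states`.
    EV: "",
  },
});
```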

Member:

This is giving me déjà vu so apologies if we’ve discussed this before, but I always feel in these cases that the real issue is that the source we don’t want to infer from is too big. It seems like you want to run inference inferring from as much as possible, excluding only the thing whose shape you’re actively changing—in the case of a string literal, just that string literal. In the case of an object literal key, maybe the whole object literal. I think it’s currently decided per call argument, but you can always restructure multiple arguments into a single options bag and you’d hope that inference and completions behave the same. I wonder if we could refine the inference-blocking logic to say “infer from this node, but when you get to this particular sub-node, its type is the anti-any (assignable to and from nothing—we may already have a marker type like this, not sure)”?
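A hypothetical illustration of the difference (the `route` function is invented): if only the string literal under the cursor were excluded from inference, sibling properties could still fix `T`, whereas blocking the whole argument degrades the contextual type to the constraint.

```ts
declare function route<T extends string>(config: {
  pages: Record<T, unknown>;
  start: T;
}): void;

route({
  pages: { home: {}, about: {} },
  // Blocking only the "" literal: T is still inferred as "home" | "about"
  // from `pages`, so completions here would be "home" | "about".
  // Blocking the whole argument object: nothing infers T and the
  // contextual type falls back to the `string` constraint (no completions).
  start: "",
});
```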

Andarist (Contributor, Author):

> I think it’s currently decided per call argument, but you can always restructure multiple arguments into a single options bag and you’d hope that inference and completions behave the same.

Totally agree 👍

> I wonder if we could refine the inference-blocking logic to say “infer from this node, but when you get to this particular sub-node, its type is the anti-any (assignable to and from nothing—we may already have a marker type like this, not sure)”?

In principle, I agree that the algorithm could be smarter and that we should try to block as little as possible. However, I also feel that the use case from the PR that regressed this is quite important (see the test case here). There, we want to allow strings from the constraint. If we only blocked the leaf string node, then another place in the argument(s) could "fix" the inference for this type parameter and we likely wouldn't be able to provide autocomplete there.

I'm open to brainstorming the improved blocking algorithm for the purpose of autocompletes - but perhaps that should be moved to a discussion thread? We might not be able to follow this rabbit right now.

Member:

Well, the fix here of “just take whichever mode returns results” makes me worried that there are many other cases that are broken and won’t be fixed by this. I was most concerned that this could introduce new completions that are actually not valid, but thinking about it more, that seems unlikely—results coming from ContextFlags.None should be accurate, and usually too accurate. I agree this is probably an improvement.

Andarist (Contributor, Author):

> Well, the fix here of “just take whichever mode returns results” makes me worried that there are many other cases that are broken and won’t be fixed by this.

I agree with that too. It is quite challenging to figure out those other use cases, though; it's likely best to drive this based on user needs. OTOH, I think completions are something that people often overlook, so many might not request improvements in this area.

> I was most concerned that this could introduce new completions that are actually not valid, but thinking about it more, that seems unlikely

This "fallback" request matches the previous behavior - so at the very least, this shouldn't make the experience worse. The overall behavior (since 4.9) has still changed though (!) because this is now only the "fallback" request and not the primary one.

> results coming from ContextFlags.None should be accurate, and usually too accurate.

What do you mean by "too accurate" here?

Member:

By “too accurate” I mean you only get completions that are valid for the actual types inferred from doing a real inference pass using all available information, and this often has a tendency to “lock in” what you’ve already typed, as #48410 and #36556 were intended to fix.
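A hypothetical example of that lock-in: under a real inference pass, what's already typed narrows the type parameter, so later completion positions only echo it back.

```ts
declare function pickTags<T extends "a" | "b" | "c">(...tags: T[]): void;

pickTags("a", "");
// Real inference pass (ContextFlags.None): T has already been narrowed by
// the first argument, so the second argument only completes "a".
// Inference-blocked pass: T stays un-fixed and completions come from the
// constraint, "a" | "b" | "c", which is what #48410 / #36556 aimed for.
```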

Andarist (Contributor, Author):

> and this often has a tendency to “lock in” what you’ve already typed

Yeah. I will probably just reiterate what we both already have in mind here, but it might be helpful for others.

  1. It's all heuristics-based. Overall, the inference-related cases reuse the standard inference algorithm, but it's influenced by flags to behave slightly differently, to avoid "fixing"/"locking in" inferences at times.
  2. It's rather hard (impossible?) to know what the actual intended usage of a given autocomplete is, and it's hard to collect multiple classes of candidates for this purpose within the inference algorithm.
  3. I think we can distinguish at least two types of completions:
  • one that impacts the inference itself (one that might act as a direct source for that particular inference). For those we don't want to lock the inference, because it would be nice for people to be able to pick values other than the one already typed. And if the currently typed value is already a valid one, then without blocking we wouldn't be able to easily access the type from the constraint.
  • the other type is when the completions are completely derived from other types. They don't act as inference sources at all.
  4. Note that the relation to the constraint might be very complex, so it's not that easy to distinguish between those two cases upfront.
  5. We could potentially achieve better results with smarter blocking, but I would still be worried that at times it could lead to unwanted results in cases like function t<T extends 'foo' | 'bar'>(arg: { a: T; b: T }): void (see the sketch below). Even if we only blocked b, without blocking the parent object it's likely that T would get fixed with a's value and ultimately we wouldn't receive 'foo' | 'bar' as valid completions.
  6. I think it's definitely worth experimenting further with this algorithm, but for the time being this double-request approach is "good enough" and something we can iterate on. It's also nice that this PR brings some new tests into the codebase; they will easily inform future experiments with this algorithm about the use cases/requirements.
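A sketch of the concern from point 5 (the `t` declaration is hypothetical): blocking only the property under the cursor is not enough, because a sibling property can fix the type parameter first.

```ts
declare function t<T extends "foo" | "bar">(arg: { a: T; b: T }): void;

t({ a: "foo", b: "" });
// If only the "" literal under `b` were blocked, `a` would still fix T to
// "foo" and completions at `b` would shrink to just "foo".
// Blocking the whole argument keeps T open, so the constraint supplies the
// desired completions: "foo" | "bar".
```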

@andrewbranch andrewbranch merged commit 1a76569 into microsoft:main Feb 23, 2023
@Andarist Andarist deleted the fix/completions-from-inferred-type branch February 23, 2023 22:15
@itsUndefined:

Is this fix relevant to this issue: #49680? It doesn't appear to fix it.
