fix: faster cache key factory for range #536
Conversation
Force-pushed from 3a87cd9 to d33af71
Are you sure these are the only three options that matter? this.options is passed to parseComparator and replaceGTE0, as well as the Comparator constructor. I'm not even sure if rtl is used in the code path currently, and it seems very dangerous for future maintenance to hardcode some options here. What happens if another option is added later?

update: i see after writing this that parseOptions filters down the options to those three. Is the creation of that new options object actually needed anymore after this PR, now that the cache key isn't made via reflecting on the options object? Would keeping the reflection and avoiding the parseOptions clone have a better perf impact here, or a worse one?
Force-pushed from d33af71 to 2bc0f82
@ljharb You have an interesting point, if the […]. About the perf, I think it will not affect us, since we already look up properties on the object that might not exist, and if we just perform […]
This refactor is great because our coverage tests are exposing the fact that we are evaluating a parameter that doesn't matter to this class. The flag doesn't affect the data being memoized, so it should not need to be accounted for in the memo key.
parseOptions is doing two things: it is allowing for the […]. I searched through the code to see if […].
Nothing I found would indicate that the ordering of parseOptions matters except in this one case, where we are using it to memoize a cache key. I think that memo logic should be isolated to where it's used, and made to only account for flags that affect the results.
Force-pushed from 2bc0f82 to 9b1364a
If those options affect what […]
So far so good imho. I am going to ping @kurtextrem and @jakebailey since they were pretty well engaged with the other performance PR. If either of you doesn't want me to ping you on this and the next performance PR(s), please let me know.
Always happy to be pinged 😃 I'll test these PRs out on my super stressful DefinitelyTyped monorepo case once I'm at my desk.
I will push a fix to this.
I think the logic behind the testing is when we send the […]
inc(version: string, release: string, options: any, identifier: string);
inc(version: string, release: string, identifier: string);
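Those two call shapes mean the third argument can be either an options object or an identifier string. A rough sketch of how that overload is typically disambiguated at runtime; an illustration, not necessarily semver's exact source:

```js
// Sketch: disambiguate inc's overloaded third argument. If a string is
// passed where the options object would go, treat it as the identifier.
const { SemVer } = require('semver')

const inc = (version, release, options, identifier) => {
  if (typeof options === 'string') {
    identifier = options
    options = undefined
  }
  try {
    return new SemVer(version, options).inc(release, identifier).version
  } catch (er) {
    return null
  }
}
```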
I already fixed this case in this PR by calling […]
In this case, should I remove all of the Object.freeze calls from the other PR and change the tests to accept any object?
So currently we don't need any additional test, right?
I haven't looked at the other PR yet. Let's talk about that over there though.
Correct, imho. We would need new tests in the event that we added new options, at which point if we didn't also update the memoization they would fail.
I don't think this is fixed though.
Force-pushed from 9b1364a to 6a0f846
Force-pushed from 6a0f846 to 58bb3f9
this is much simpler and thus much less risky, well done
Agreed. Over the past two weeks @H4ad has been extremely patient and diligent in responding to all of the feedback on the PRs they've made. The end results have been worth it and I am thankful for the effort being made here. I've given this a 👍 and will let it sit now through the weekend to give others time to catch up.
Here's my DT stress test using pnpm. Before: […]
This PR ([…]): […]
So, seemingly no impact. This is probably not showing much improvement because I went and added a Range cache to pnpm (which yarn berry also has). I can try removing that cache and show before/after this PR, though, if people are interested.
Eh, I just ran it anyway. Here's pnpm without the Range cache, instead using […]:
Already a lot slower than my baseline. With this PR, it gets a little better: […]
Which is still a slight improvement, but not game changing like the options PR, for pnpm anyway. Not to say this isn't a good change; absolutely take it 😄
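For readers following along: the Range cache being referenced lives in the package manager, not in semver itself. A hypothetical sketch of that kind of cache (names and key layout are illustrative, not pnpm's or Yarn's actual code):

```js
// Hypothetical Range cache layered on top of semver: repeated range strings
// skip re-parsing entirely. Keyed by the raw range plus the options that
// change parsing behavior.
const semver = require('semver')

const rangeCache = new Map()

function getCachedRange (raw, options = {}) {
  const key = `${raw}|${options.loose ? 1 : 0}|${options.includePrerelease ? 1 : 0}`
  let range = rangeCache.get(key)
  if (!range) {
    range = new semver.Range(raw, options)
    rangeCache.set(key, range)
  }
  return range
}
```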
Thanks @jakebailey, it's good to have benchmarks that use a whole program. Sometimes an optimization helps a use case that isn't very common. The primary benefit of this PR is that it sets up the parse options refactor, since the memoization is now decoupled from that function.
From my side, I would also say lgtm, even if it might not increase performance drastically, as it gives a good baseline in case more options are added in the future, or for using more bit masks inside this module.
@wraithgar Fixed!
The old way of creating a cache key for range.js is too slow (node-semver/classes/range.js, lines 82 to 85 in f4fa069):
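The embedded snippet did not survive extraction; roughly, the old code reflected over the options object on every parse, along these lines (a paraphrase wrapped in a hypothetical helper, not an exact quote of the referenced lines):

```js
// Paraphrase of the old approach: build the memo key by reflecting over the
// options object. Object.keys + join allocate an array and a string on every
// call, which is costly on a hot path.
function buildMemoKey (range, options) {
  const memoOpts = Object.keys(options).join(',')
  return `parseRange:${memoOpts}:${range}`
}

// Example: buildMemoKey('>=1.2.3 <2.0.0', { loose: true })
//   -> 'parseRange:loose:>=1.2.3 <2.0.0'
```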
The performance was like this: […]
Now, I just changed it to a simple function with a bunch of comparisons:
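That snippet is also missing from the extracted page; a minimal sketch of a comparison-based key builder, assuming the parsed options discussed above (the name and exact strings are illustrative):

```js
// Sketch: build the memo-key fragment with direct property checks instead of
// reflecting over the options object. rtl is deliberately left out, since the
// discussion above notes it does not affect the parsed result.
function buildMemoKeyFromOptions (options) {
  if (options.includePrerelease) {
    return options.loose ? 'includePrerelease,loose' : 'includePrerelease'
  }
  return options.loose ? 'loose' : ''
}
```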
To reduce the code, isaacs suggested using flags, so I changed the implementation from the old PR to a version using flags.
The first version I wrote using flags was slower because I implemented it using .toString, but just changing to 0 + '' was actually 10x faster, so I kept it as flags.
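A minimal sketch of the flag-based version, assuming the same two relevant options; the bit values and helper name are illustrative rather than the shipped code:

```js
// Sketch: map each relevant option to a bit and use the resulting small
// integer as the memo-key fragment.
const FLAG_INCLUDE_PRERELEASE = 0b01
const FLAG_LOOSE = 0b10

function buildMemoKeyFromOptions (options) {
  let flags = 0
  if (options.includePrerelease) {
    flags |= FLAG_INCLUDE_PRERELEASE
  }
  if (options.loose) {
    flags |= FLAG_LOOSE
  }
  // Concatenating with '' benchmarked faster than flags.toString() per the
  // description above.
  return flags + ''
}

// Example: buildMemoKeyFromOptions({ loose: true }) -> '2'
```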
Conclusion
In this PR, we get the benefit of using flags to reduce the code; maybe this PR could open the possibility of rewriting all the options as flags.
References
Related to #528
benchmark.js