fix: change JSON patch implementation for memory improvement #1019
I've done some digging into #985 and here's the outcome.
**Actual Issue:**

The issue seems to be in the algorithm the `rfc6902` library uses to diff arrays. Because that algorithm is recursive, it runs out of memory when it encounters an array that is too big.
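For reference, here is a minimal sketch of the kind of input that triggers the failure (the `pixels` shape and the array size are illustrative assumptions, not taken from the original report):

```js
// rfc6902's recursive array-diffing algorithm blows up on huge arrays.
const { createPatch } = require('rfc6902');

const before = { pixels: new Array(1e6).fill(0) };
const after = { pixels: [...before.pixels, 1] };

// With a large enough array this crashes (out of memory / maximum call
// stack size exceeded) instead of returning a patch.
const patch = createPatch(before, after);
```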
**Possible Solution:**

In chbrown/rfc6902#39 I found an alternative library (`fast-json-patch`) that could be used instead. It works for my use case, and according to these benchmarks it should be significantly more performant.
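For context, a rough sketch of how the replacement library would be called (not the exact code from this PR; the data is made up):

```js
const { compare } = require('fast-json-patch');

const before = { pixels: new Array(1e6).fill(0) };
const after = { pixels: [...before.pixels, 1] };

// compare() walks arrays index-by-index instead of running a recursive
// edit-distance search, so the same input diffs without exhausting memory.
const patch = compare(before, after);
// => [{ op: 'add', path: '/pixels/1000000', value: 1 }]
```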
**Breaking Change:**

As you can see in the updated tests, there are two changes to the output format compared to the previous implementation:
- The `path` for insertions into arrays ends with the new index of the item instead of `-`, e.g. `/pixels/99` instead of `/pixels/-`. I checked the standard, and this behavior is permitted by it: https://tools.ietf.org/html/rfc6902#appendix-A.2
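To make the difference concrete, here is a small made-up example (the `pixels` document and values are hypothetical):

```js
const { applyPatch } = require('fast-json-patch');

const doc = { pixels: Array.from({ length: 99 }, (_, i) => i) };

// New format: the path carries the explicit index of the inserted item...
applyPatch(doc, [{ op: 'add', path: '/pixels/99', value: 255 }]);
// ...which appends exactly like the '-' form rfc6902 used to emit:
//   [{ op: 'add', path: '/pixels/-', value: 255 }]
console.log(doc.pixels.length); // 100
```

Let me know what you think.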