Rebatch aggressively while dumping the store to a stream of LogMsg #1894

Closed
Tracked by #1619 ...
teh-cmc opened this issue Apr 18, 2023 · 1 comment · Fixed by #6570
Fixed by #6570
Labels: 📉 performance (Optimization, memory use, etc) · ⛃ re_datastore (affects the datastore itself)

Comments

teh-cmc commented Apr 18, 2023

The save feature walks the store and converts each bucket back into a DataTable, which is then serialized and dumped as a LogMsg.

This is a great opportunity to batch things even further.
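The idea above can be sketched as a greedy coalescing pass: instead of emitting one LogMsg per bucket, accumulate consecutive per-bucket tables until a size threshold is reached, then flush them as one larger batch. This is a minimal illustration, not Rerun's actual code; the `Table` type and `min_rows` threshold are hypothetical stand-ins.

```rust
// Hypothetical sketch of rebatching during a store dump: merge many small
// per-bucket tables into fewer, larger batches before serialization.
#[derive(Debug, Clone)]
struct Table {
    num_rows: usize,
}

/// Greedily merge consecutive tables so that each emitted batch holds at
/// least `min_rows` rows (except possibly the final, partial one).
fn rebatch(tables: Vec<Table>, min_rows: usize) -> Vec<Table> {
    let mut out: Vec<Table> = Vec::new();
    let mut pending = Table { num_rows: 0 };
    for t in tables {
        pending.num_rows += t.num_rows;
        if pending.num_rows >= min_rows {
            // Flush the accumulated batch and start a fresh one.
            out.push(std::mem::replace(&mut pending, Table { num_rows: 0 }));
        }
    }
    if pending.num_rows > 0 {
        out.push(pending); // leftover partial batch
    }
    out
}

fn main() {
    // Ten tiny 100-row buckets collapse into four batches (300+300+300+100).
    let tables: Vec<Table> = (0..10).map(|_| Table { num_rows: 100 }).collect();
    let batched = rebatch(tables, 300);
    assert_eq!(batched.len(), 4);
    assert_eq!(batched[0].num_rows, 300);
    assert_eq!(batched[3].num_rows, 100);
    println!("{} batches", batched.len());
}
```

In the real store the threshold would likely be byte-based rather than row-based, but the flush-on-threshold shape is the same.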

@teh-cmc teh-cmc changed the title Rebatch aggressively while dumping to disk Rebatch aggressively while dumping the store to a stream of LogMsg Apr 18, 2023
@teh-cmc teh-cmc added ⛃ re_datastore affects the datastore itself 📉 performance Optimization, memory use, etc labels Apr 18, 2023
teh-cmc commented Nov 7, 2023

More importantly, we need the save process to be RowId-driven so that we can forbid re-use of RowIds once and for all. That in turn removes the need to count unique RowIds for downstream subscribers of store events, which is currently a blocker for the StoreView/StoreEvent work.
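A RowId-driven dump as described above could look like the following sketch: iterate rows in ascending RowId order and reject any duplicate, so downstream consumers can assume every RowId appears exactly once. This is an illustrative assumption, not Rerun's implementation; `RowId` is modeled here as a plain `u64`.

```rust
// Hypothetical sketch of a RowId-driven dump: sort rows by RowId and fail
// fast on the first re-used RowId, guaranteeing uniqueness downstream.
type RowId = u64;

/// Returns the rows in ascending RowId order, or the first duplicated
/// RowId as an error (re-use is forbidden).
fn dump_row_id_driven(
    mut rows: Vec<(RowId, String)>,
) -> Result<Vec<(RowId, String)>, RowId> {
    rows.sort_by_key(|(id, _)| *id);
    for pair in rows.windows(2) {
        if pair[0].0 == pair[1].0 {
            return Err(pair[0].0); // duplicate RowId detected
        }
    }
    Ok(rows)
}

fn main() {
    let ok = dump_row_id_driven(vec![
        (3, "c".to_owned()),
        (1, "a".to_owned()),
        (2, "b".to_owned()),
    ]);
    assert!(ok.is_ok());

    let dup = dump_row_id_driven(vec![(1, "a".to_owned()), (1, "b".to_owned())]);
    assert_eq!(dup, Err(1));
    println!("uniqueness enforced");
}
```

With uniqueness enforced at dump time, subscribers of store events no longer need their own bookkeeping to count distinct RowIds.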
