Configure defaults in captive core library for bucketlist cache configuration #5600

Open
tamirms opened this issue Feb 12, 2025 · 1 comment · May be fixed by #5632
Comments

tamirms (Contributor) commented Feb 12, 2025

The upcoming minor release of stellar-core introduces a cache for the bucketlist, which affects stellar-core's memory footprint. With the default configuration, captive core's memory usage grows from ~2.5 GB to ~5-6 GB. These caches mostly benefit validators rather than watcher nodes, so in the captive core library we want to ensure the default cache configuration does not increase the memory footprint.

see https://stellarfoundation.slack.com/archives/C02B04RMK/p1739299208980229 for more context

@tamirms tamirms added this to the platform sprint 56 milestone Feb 12, 2025
@urvisavla urvisavla self-assigned this Feb 18, 2025
@urvisavla urvisavla moved this from To Do to In Progress in Platform Scrum Feb 18, 2025
@urvisavla urvisavla moved this from In Progress to To Do in Platform Scrum Feb 18, 2025
SirTyson (Contributor) commented:
Specifically, the option BUCKETLIST_DB_MEMORY_FOR_CACHING should be set to 0 starting with captive-core 22.2.

@tamirms tamirms assigned tamirms and unassigned urvisavla Mar 11, 2025
@tamirms tamirms moved this from To Do to In Progress in Platform Scrum Mar 13, 2025
Projects
Status: In Progress
3 participants