Disk volume for BSC node #1218
Hello, based on our testing, the disk storage size for optimal node performance is ~1.5 TB. Once your node reaches that size, it is advised to perform some pruning. Please refer to the following page for storage optimization suggestions: https://docs.bnbchain.org/docs/validator/node-maintenance/#storage
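For reference, a minimal sketch of the pruning commands that page describes. The paths and the reserved-block count here are assumptions based on a default data-directory layout; adjust them to your setup:

```bash
# Stop the running node first; both pruning commands must run offline.

# Prune old block data (headers, bodies, receipts) from the ancient store,
# keeping the most recent 1024 blocks of ancient data.
# Paths below are placeholders for a default layout.
./geth snapshot prune-block \
  --datadir ./node \
  --datadir.ancient ./node/geth/chaindata/ancient \
  --block-amount-reserved 1024

# Prune stale state trie data. This requires the node to be fully synced
# (a recent snapshot layer must exist) before it can start.
./geth snapshot prune-state --datadir ./node
```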
Hello @kris-ss, I tried your instructions to prune the data, but I get the following error after about a minute of packaging.
Hello @hejing. Your node hasn't caught up to the latest block, right?
Download a snapshot and sync up first: https://github.com/bnb-chain/bsc-snapshots
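A rough sketch of what a snapshot restore typically looks like, assuming an lz4-compressed tar archive as the bsc-snapshots repo provides. The download URL below is a placeholder; copy the current link from the repo, since it changes per release:

```bash
# Stop the node, then move the old chain data out of the way.
mv ./node/geth/chaindata ./node/geth/chaindata.bak

# Download the latest snapshot (placeholder URL; get the real link from the repo).
wget -O geth.tar.lz4 "https://example.com/geth-snapshot.tar.lz4"

# Decompress and extract into the data directory.
lz4 -d geth.tar.lz4 geth.tar
tar -xvf geth.tar -C ./node/geth/

# Restart the node; it will sync forward from the snapshot height.
```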
I think it stopped days ago since there was no disk space left. So I need to remove the current data directory and sync from the snapshot first?
I see. You can try prune-block to clear the ancient data, then prune-state to trim stale state; if neither frees enough space, expand the disk and resync from a snapshot.
prune-block (block-amount-reserved=128) succeeded, but it didn't free up much space.
prune-state failed as before.
So I have to expand the disk and sync to the latest block now?
Correct, I think the best option for you is to expand the disk and sync again.
Hello, are you syncing a full node?
I have tried prune-block before, but what it did was clear out the ancient folder... is that what prune-block is supposed to do? The ancient folder had more than 1 TB, and after pruning it was just a few MB.
@daica75 Yes, that is expected: prune-block prunes old ancient block data. It discards block bodies, receipts, and headers in the ancient database to save space. The value you specify in block-amount-reserved is the number of ancient blocks that will be kept after pruning.
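To see the effect concretely, you can check the size of the ancient store before and after pruning. The path below is an assumption based on the default data-directory layout:

```bash
# Size of the ancient (freezer) store before pruning; often 1 TB+ on an old node.
du -sh ./node/geth/chaindata/ancient

# After prune-block with --block-amount-reserved 128, only the most recent
# 128 blocks' worth of ancient data remains, so the same command reports
# just a few MB.
```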
May I know the suitable disk volume for a BSC node? The node data size has increased sharply in the last 2 months and now exceeds 5 TB.