Performance of modify large files #515

Closed · AzenkChina opened this issue Jan 8, 2021 · 1 comment
@AzenkChina

When I create a large file (about 500 kB) in LFS and then modify some bytes in this file, LFS takes a long time (about 4 seconds) to complete the operation. That is really a long time for most bare-metal systems, I think. Is there any way to improve it?
Here is my config:
static const struct lfs_config lfs_cfg = {
    /* block device operations */
    .read  = lfs_low_read,
    .prog  = lfs_low_prog,
    .erase = lfs_low_erase,
    .sync  = lfs_low_sync,

    /* block device geometry and filesystem tuning */
    .block_size     = 512,   /* erase block size in bytes */
    .block_count    = 8192,  /* 8192 * 512 B = 4 MB total */
    .read_size      = 128,   /* minimum read size in bytes */
    .prog_size      = 256,   /* minimum program size in bytes */
    .cache_size     = 512,   /* read/prog/file cache size in bytes */
    .lookahead_size = 16,    /* lookahead buffer size in bytes */
    .block_cycles   = 300,   /* erase cycles before wear-leveling eviction */
};
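
For illustration, the kind of in-place modification described above might look like the following. This is a hypothetical sketch, not the reporter's actual code: the file name, offset, and patch bytes are placeholders, error checks are omitted, and an already mounted lfs_t named lfs is assumed.

    lfs_file_t file;
    uint8_t patch[4] = {0xDE, 0xAD, 0xBE, 0xEF};

    /* open an existing ~500 kB file and overwrite a few bytes in the middle */
    lfs_file_open(&lfs, &file, "blob", LFS_O_RDWR);
    lfs_file_seek(&lfs, &file, 1024, LFS_SEEK_SET);
    lfs_file_write(&lfs, &file, patch, sizeof(patch));
    lfs_file_close(&lfs, &file);  /* this whole sequence is what takes ~4 s */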

@geky (Member) commented Jan 8, 2021

Hi @AzenkChina, thanks for raising an issue.

Right now LittleFS behaves very poorly with random writes. A random write effectively ends up rewriting the rest of the file after the modified data, so with a 512-byte block size, a small change near the start of a ~500 kB file reprograms on the order of a thousand blocks; the cost scales with the file size, not with the few bytes changed.

More info here:
#27

Unfortunately there's not really a workaround other than restructuring your data to avoid random writes, either by appending data like a log, or by splitting the data up into multiple files.
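
As an illustration of the second suggestion, the ~500 kB blob could be stored as fixed-size chunk files so that a small modification only rewrites one chunk instead of the whole file. The chunk size, file-name scheme, and blob_write helper below are illustrative assumptions, not part of the littlefs API, and writes that cross a chunk boundary are not handled:

    /* Hypothetical sketch: store the blob as CHUNK_SIZE-byte files named
     * "blob.000", "blob.001", ... so a random write touches only one chunk.
     * Assumes a mounted littlefs instance; error handling is abbreviated. */
    #include <stdio.h>
    #include "lfs.h"

    #define CHUNK_SIZE 4096

    static int blob_write(lfs_t *lfs, lfs_off_t off, const void *data, lfs_size_t size)
    {
        char path[16];
        lfs_file_t file;

        /* locate the chunk file and the offset within it */
        snprintf(path, sizeof(path), "blob.%03u", (unsigned)(off / CHUNK_SIZE));
        lfs_off_t chunk_off = off % CHUNK_SIZE;

        int err = lfs_file_open(lfs, &file, path, LFS_O_RDWR | LFS_O_CREAT);
        if (err) {
            return err;
        }

        /* only this one chunk (at most CHUNK_SIZE bytes) gets rewritten */
        lfs_file_seek(lfs, &file, (lfs_soff_t)chunk_off, LFS_SEEK_SET);
        lfs_file_write(lfs, &file, data, size);

        return lfs_file_close(lfs, &file);
    }

The log approach is even simpler: keep a single file opened with LFS_O_APPEND and only ever add records at the end, which avoids the random-write penalty entirely.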
