
Performance issue when overwriting #528

Open

wtywtykk opened this issue Feb 21, 2021 · 2 comments
wtywtykk commented Feb 21, 2021

Hi, I'm using littlefs to record some logs. The log is a ring buffer, so it overwrites the oldest entries when the file reaches its maximum size. However, I found that performance drops significantly when overwriting. Below is my code to reproduce the problem.

	lfs_file_t f;
	int err = 0;

	// Seed the file with 100 KiB of random data.
	err = lfs_file_open(&lfs, &f, "tt", LFS_O_CREAT | LFS_O_RDWR);
	assert(err == LFS_ERR_OK);
	uint8_t pb[100 * 1024];
	for (uint32_t j = 0; j < sizeof(pb); j++)
	{
		pb[j] = rand();
	}
	err = lfs_file_seek(&lfs, &f, 0, LFS_SEEK_SET);
	assert(err >= 0);
	err = lfs_file_write(&lfs, &f, pb, sizeof(pb));
	assert(err >= 0);
	err = lfs_file_close(&lfs, &f);
	assert(err == LFS_ERR_OK);

	// Repeatedly overwrite 80-byte records, wrapping around like a ring buffer.
	for (uint32_t wc = 0; wc < 64000; wc++)
	{
		err = lfs_file_open(&lfs, &f, "tt", LFS_O_CREAT | LFS_O_RDWR);
		assert(err == LFS_ERR_OK);
		uint8_t b[16 * 5];
		for (uint32_t j = 0; j < sizeof(b); j++)
		{
			b[j] = rand();
		}
		err = lfs_file_seek(&lfs, &f, sizeof(b) * (wc % (32000 / 5)), LFS_SEEK_SET);
		assert(err >= 0);
		err = lfs_file_write(&lfs, &f, b, sizeof(b));
		assert(err >= 0);
		err = lfs_file_close(&lfs, &f);
		assert(err == LFS_ERR_OK);

		ShowCost();
	}

And the configuration is:

	.read_size = 512,
	.prog_size = 512,
	.block_size = 4096,
	.block_count = 256,
	.block_cycles = 500,
	.cache_size = 512,
	.lookahead_size = 512,

This is the performance data that I collected: https://pastebin.com/2EsJ9Ey9

And the chart drawn from the data:
[rwchart: per-write cost over iteration count, showing two large peaks]

The first peak is caused by overwriting the 100 KiB of data written first, and the second peak is caused by wrapping around and overwriting the data from the beginning again.

It seems that a write operation that changes existing data causes littlefs to rewrite all of the data that follows. Is this intended behavior? And is it possible to avoid it?


geky commented Mar 23, 2021

Ah yes, this is one of the main areas of LittleFS that needs work. Random writes are implemented by effectively rewriting all data that follows. As you can imagine, this leads to terrible performance when you modify a small part of the beginning of a file. (tracked here: #27)
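As a rough illustration with the configuration above (a back-of-the-envelope estimate, not measured): the file is 100 KiB, or about 25 of the 4096-byte blocks. An 80-byte write at offset 0 therefore forces roughly 25 block rewrites, while the same write near the end of the file touches only a block or two, so the cost of a random write grows with the amount of file data after the write offset.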

There are plans to improve this, but unfortunately this is just how it is at the moment.

For your use case, one option is to have 2 files that you switch between so you are never rewriting.

Something like:

if (wc >= 32000 / 5) {
    // note that lfs_rename implicitly deletes tt-old if it exists
    err = lfs_rename(&lfs, "tt", "tt-old");
    assert(err >= 0);
}
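A fuller sketch of that pattern, for reference. This is a minimal sketch only: log_append, RECORD_SIZE, and MAX_RECORDS are illustrative names, not littlefs API, and the sizes are taken from the reproduction above.

#include "lfs.h"
#include <assert.h>
#include <stdint.h>

#define RECORD_SIZE (16 * 5)
#define MAX_RECORDS (32000 / 5)

static uint32_t record_count = 0;

void log_append(lfs_t *lfs, const uint8_t record[RECORD_SIZE]) {
    if (record_count >= MAX_RECORDS) {
        // Rotate: the full log becomes "tt-old" and a fresh "tt" is
        // started, so every write is an append, never an overwrite.
        // lfs_rename implicitly deletes tt-old if it exists.
        int err = lfs_rename(lfs, "tt", "tt-old");
        assert(err >= 0);
        record_count = 0;
    }

    lfs_file_t f;
    int err = lfs_file_open(lfs, &f, "tt",
            LFS_O_CREAT | LFS_O_WRONLY | LFS_O_APPEND);
    assert(err >= 0);
    lfs_ssize_t written = lfs_file_write(lfs, &f, record, RECORD_SIZE);
    assert(written == RECORD_SIZE);
    err = lfs_file_close(lfs, &f);
    assert(err >= 0);
    record_count++;
}

A reader that wants the full history would then read "tt-old" followed by "tt". The trade-off is that the log's total footprint is up to twice the ring-buffer size, in exchange for append-only writes.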


lcse66 commented Oct 7, 2021

The problem also affects MicroPython, where littlefs is the default filesystem. micropython/micropython#7880
