[Question] What is the maximum value for lookahead_size? #454
Hi @gofortime, just to clarify: lookahead_size > block_count has no benefit. littlefs isn't designed to keep track of free blocks in RAM, so it will still scan the disk to find new blocks. lookahead_size == block_count should give the best performance. It sounds like lookahead_size == block_count is causing problems? What is the block_count for your device? Also, what is the behavior after a ctz-list block is allocated again? Does the filesystem end up corrupted? littlefs tries to reuse blocks in a file whose data is unchanged, so it may be intentional.
Another quick note: the lookahead scanning is a particularly bad performance problem for littlefs, especially on larger devices. This is something we intend to improve in the future. Here's a related issue: #75
Hi @geky, thanks a lot for the reply. The block count on my device is around 1312. After a ctz-list block is allocated again, I get an invalid-block error (the block index exceeds block_count) when I try to read the first file (the one that used the ctz-list block first). The content of the ctz-list block now belongs to the second file. Is that intentional? By the way, shouldn't the lookahead size be block_count/8? It looks like the lookahead buffer is used as a bitmap, so one bit represents one block. Thanks,
Ah yes, you are correct: lookahead_size is in units of bytes, but each byte describes 8 blocks.
Out of curiosity, if you set lookahead_size = 1312/8, do you still see the problem? It looks like littlefs may actually break when lookahead_size > block_count.
This sounds like it might be a bug fixed in v2.2.0, which version are you using?
Hi @geky, I am using v2.2.0. I tried decreasing lookahead_size to a smaller number (1312/16), and I still saw the broken-ctz issue, so it might not be caused by a large lookahead value. I am still investigating and will get back to you when I have a conclusion. Thanks for your responses so far. Thanks,
Also, if it's easy to test, do you see a similar issue with a significantly smaller lookahead? Like say just 32? I wonder if the issue is elsewhere, maybe something is changing the CTZ pointer or it is getting corrupted before being written to disk? |
Hi @geky, this was all my fault. We added a layer similar to NFTL (since we are using NAND, and the physical erasable block is fairly large), and that NFTL layer unexpectedly erased some blocks, including the ctz block. Then, when the system rebooted and scanned to rebuild the lookahead buffer, these ctz blocks were treated as broken and discarded, and could later be allocated to another file or directory. This is why I saw a ctz block used by file A and later allocated to file B. Really appreciate your time and kind guidance. Thanks!
Issue closed. Thanks for the help.
Hi experts,
Quick question:
What is the maximum value for lookahead_size?
Detailed description:
I notice that traversing littlefs to find a new lookahead window can take a long time when the partition is very big and almost fully occupied by files (I am using NAND, so the partition is large in my case). I understand this is because reading flash is slow. I tried increasing lookahead_size so the lookahead window would not need to be updated as frequently. In my project I increased it to the same value as block_count, expecting this to improve performance. But sometimes I run into a problem where an in-use block (in a ctz list, from my observation) is allocated again even though it is still in use (the file whose ctz list contains it has not been removed). I am not sure what the problem is. Is it because I set too large a lookahead_size?
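For reference, a minimal sketch (not a complete, buildable configuration) of how the lookahead fields of `struct lfs_config` might look for a device like the one described. The driver callbacks and geometry fields are omitted, and the 168-byte buffer size is an assumption: it is the smallest multiple of 8 whose bitmap (168 × 8 = 1344 bits) covers 1312 blocks.

```c
#include "lfs.h"  // littlefs public header

// 168 bytes of bitmap = 1344 bits >= 1312 blocks
static uint8_t lookahead_buf[168];

const struct lfs_config cfg = {
    // ... read/prog/erase/sync callbacks and block geometry omitted ...
    .block_count      = 1312,
    .lookahead_size   = sizeof(lookahead_buf),  // in bytes; 8 blocks per byte
    .lookahead_buffer = lookahead_buf,
};
```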
Thanks,
Wenjun