Writing stops after hitting ValueLogFileSize #293
I tried running against the latest version at 2dc0fdc, but am seeing the same results. I also tried raising the maximum value log file size (the option set at lines 211 to 213 in 5240a8f). I'm guessing this only works until the data exceeds the new max file limit, so it's not really fixed, but at least I can work with this right now.
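The workaround described above amounts to raising Badger's value log file size limit. A minimal sketch of what that looks like with Badger's public Options API (the builder-style accessor shown here is from more recent Badger releases; older versions set the `ValueLogFileSize` field on an options struct directly):

```go
// Hypothetical sketch, not the reporter's actual code: raise the value
// log file size so a single value log file holds more data before
// Badger needs to roll over to a new one.
opts := badger.DefaultOptions("/tmp/badger").
	WithValueLogFileSize(1 << 30) // 1 GiB per value log file
db, err := badger.Open(opts)
if err != nil {
	log.Fatal(err)
}
defer db.Close()
```

As the commenter notes, this only postpones the problem: once the data outgrows the new limit, Badger again needs to create a fresh value log file, and the same blocking can occur.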
I'm not 100% sure yet, but preliminary results show that switching to manually managing the transactions, instead of using the helper functions, fixes the problem. I don't have any real metrics to report other than seeing that I no longer have the issue after rewriting some parts of the logic (and doing away with those functions). If I find anything else, I'll let you know.
^ Part of this rewrite was also to split up the transactions.
Can you paste your code here? Are you using prefetch in your iteration? Without it, the read locks over the value log are not released until the txn ends. For a long-running iteration that is simultaneously updating, those read locks block any value log update, which is why your writes stop once the value log size is reached: Badger cannot create a new value log file.
Actually, in my original post I wasn't using any of the iterators; I was doing my own for loop, and I made sure the transactions were small enough not to hit the "transaction too big" error. Still, I did start a lot of goroutines to handle concurrent updates, so perhaps I was hitting a limit there. But the fact that it stopped exactly at the value log file size limit makes me doubt that. Either way, I'll see if I can get some code from the setup I had before to share with you.
Okay. If your code is working around this limitation, great. Otherwise, we could add an API to explicitly release the item.
Not sure if anything more needs to be done here. If so, please re-open.
I'm storing about 100 million records in Badger. I can see the initial log file increasing in size, and using debug output, I can see all the key/value pairs being written. But once the first log file reaches its maximum size, the loop blocks and nothing is written anymore.
I'm still debugging, so I hope to find the solution myself, but I wanted to report it anyway.
Version
5240a8f
Data directory
Options