Deadlock in scan / rescan #918
Comments
Thanks for the report. This is something I have never seen, and no one else has reported it for the Music app before. But there is a report with similar error messages on the Nextcloud server: nextcloud/server#6899. That issue was closed after some fixes were made on the Nextcloud server a long time ago, but there are also comments saying that problems like this have resurfaced in NC22. Edit: However, as this happens while Music is updating its own DB tables, it most likely does not share the root cause of the Nextcloud server issues with the same error message; rather, it is now caused by the Music app itself. One guess could be that this is interference between the automatic cleanup task, which runs periodically in the background, and the scanning process.
Is there any way I could help find the root cause, e.g. by changing the log level?
Enabling this option and checking the MariaDB error.log might give us some hints: https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_print_all_deadlocks Also, if you can systematically reproduce the error, then testing the scanning with Nextcloud cron disabled could prove my previous theory right or wrong.
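As a minimal sketch of how that logging could be enabled, assuming a root MariaDB account and a Debian-style error log location (both are assumptions; adjust to your setup):

    # Enable deadlock diagnostics at runtime (takes effect immediately, not persisted across restarts)
    mysql -u root -p -e "SET GLOBAL innodb_print_all_deadlocks = ON;"

    # After reproducing the deadlock, inspect the server error log (path may vary by distribution)
    sudo tail -n 100 /var/log/mysql/error.log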
I could now reproduce the deadlock: it happens at the very moment the scheduled Nextcloud cron job (cron.php) starts running. Disabling the Nextcloud cron job while doing the rescan helped, and the deadlock disappears. So this works as a valid workaround for me.
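For reference, the workaround could look roughly like this, assuming Nextcloud's background jobs are triggered from the www-data user's crontab (the user name and crontab entry are assumptions; setups vary, e.g. systemd timers):

    # Temporarily disable Nextcloud background jobs by commenting out the cron.php
    # entry in the web server user's crontab
    sudo crontab -u www-data -e

    # Run the scan while the background job is disabled
    sudo -u www-data php occ music:scan --all

    # Afterwards, re-enable the cron.php entry in the same crontab
    sudo crontab -u www-data -e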
Okay, thanks, that's valuable information. One way or another, I'll need to prevent the cleanup task from accessing the same DB rows that are being modified by the scan task. However, as I can't reproduce the issue myself, I won't be able to tell whether any changes actually solve the problem; that will only be found out afterwards.
I had a deadlock as well. I ran
It kept scanning without any errors, but when it reached this song
I received this error:
I thought it was caused by the quotes in both the directory name and the file name. So I removed both quotes and ran the command again. This solved it, which is a bit surprising because this is not the only song with quotes in the directory name and in a file name.
@vasilisvg Did you try it several times, so that you always got the problem on this same file? Because if this happened only once, then chances are that the file or its name was not important, but rather this was a random timing issue. The likely explanation would then be that the automatic background cleanup task just happened to start at that time and caused the issue, like with the original reporter.
Having the same issue today after adding a big bunch of songs and "reshaping" the library with beets. I got a serialisation failure when running this: EDIT: this happened multiple times. Now I am testing after running an occ files:scan, because I did not do that before, and this is clearly a mistake because Music reads the Nextcloud file cache, not the filesystem files. EDIT 2: after
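For anyone following the same path, the sequence described above would look roughly like this (assuming the usual www-data web server user; adjust to your installation):

    # Refresh Nextcloud's file index first, since Music scans the Nextcloud file cache
    # rather than the filesystem directly
    sudo -u www-data php occ files:scan --all

    # Then (re)scan the music library
    sudo -u www-data php occ music:scan --all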
Same issue here: running occ music:scan --all on a ~30,000-track collection:

    In ExceptionConverter.php line 39:
      An exception occurred while executing a query: SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction

    In Exception.php line 28:
      SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction

    In Statement.php line 130:
      SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction

The cron job runs on a fixed schedule in the Docker container (not actually cron but a while-sleep loop), so it's not really feasible to stop it...
No fixes for this have been made yet, so it's no surprise that the problem is still there. This issue has been somewhat buried under the load of other reported issues and feature requests, but thanks for the reminder.
With MariaDB, it sometimes happened that executing the background cleanup job while scanning the music library caused the scan job to fail with the message "SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction". To prevent this, the cleanup job now skips touching the library DB tables if there are any ongoing scan jobs. The first attempt was to avoid touching only the DB rows of the specific user doing the scan, but this was not enough to solve the issue; presumably, some table-level locking is now causing the problem. The fix is not completely thread-safe: if multiple scan jobs are run simultaneously for the same user, the cleanup job may still get to access the tables. But this is quite a pathological case, so probably not a real-life problem. And even if it happens, one of the scan jobs just fails and the other one probably runs normally. refs #918
This issue should now be finally fixed in Music v2.1.0.
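For anyone still on an older version, a rough way to verify the fix, assuming your Nextcloud supports updating apps via occ (an assumption; the app can also be updated from the web UI):

    # Update the Music app to the latest released version (v2.1.0 or later)
    sudo -u www-data php occ app:update music

    # Rerun the scan with the Nextcloud cron job left enabled
    sudo -u www-data php occ music:scan --all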
When scanning / rescanning my larger music collection (approx. 25,000 files), I frequently get the following messages:
The command line I use is:
sudo -u www-data php occ music:scan --clean-obsolete --all
    In ExceptionConverter.php line 37:
      An exception occurred while executing a query: SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction

    In Exception.php line 26:
      SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction

    In Statement.php line 92:
      SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction
Using Nextcloud 22, PHP 8.0, MariaDB, and the latest Music app 1.4.1.