replace hook processes with tokio::process #1047
Conversation
Previously, when too many librespot events arrived in a row, the spawned hook might not have finished yet. In that case, the events would stack up, since the event channel would no longer be polled. By using the tokio::process module, we gain support for waiting for processes asynchronously, and even if an event is not handled immediately, it will be handled after the previous hook process has finished.
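For reference, a minimal sketch of the pattern this PR describes (the names `PlayerEvent` and `run_hooks` are illustrative, not the actual spotifyd code): the child process is awaited on the tokio runtime, so the task yields while the hook runs instead of blocking a thread.

```rust
use tokio::process::Command;
use tokio::sync::mpsc;

// Hypothetical event type standing in for librespot's player events.
struct PlayerEvent {
    name: String,
}

// Drain the event channel and run the configured hook for each event.
// Awaiting the child yields back to the runtime instead of blocking.
async fn run_hooks(mut events: mpsc::UnboundedReceiver<PlayerEvent>, hook: String) {
    while let Some(event) = events.recv().await {
        match Command::new(&hook).env("PLAYER_EVENT", &event.name).spawn() {
            Ok(mut child) => {
                // If more events arrive while the hook runs, they simply wait
                // in the channel and are handled once this process exits.
                if let Err(e) = child.wait().await {
                    eprintln!("hook failed: {e}");
                }
            }
            Err(e) => eprintln!("failed to spawn hook '{hook}': {e}"),
        }
    }
}
```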
Thanks very much, I needed the endoftrack PLAYER_EVENT for a script. This fixes the player event hook for me.
Nice! Looks good, but I'll have to test it later.
Regarding this PR, I was thinking whether a queue-based model wouldn't make more sense. Currently, if the hook takes incredibly long, the librespot events will still stack up in the channel and there will still be an increasing delay/offset, since we only run a single process at a time. (Of course, if the script takes that long, it should definitely be fixed.) However, especially in regard to #1025, where the DBus implementation depends on the hook programs running fast enough (since it uses the same logic as the hooks), it might be preferable to maintain a separate (bounded) queue for the hook programs, onto which new events are added and which would then be "consumed" by the hook programs one after the other. This would ensure that the DBus events are sent as fast as possible and that hook programs can take their time to process the events properly. All this wouldn't change the fact that we'd still need

If you think that what I wrote above sounds reasonable, I can try to implement something like this. (I would probably wait for #1025 being merged though, to avoid too many conflicts.)

(Thanks for reviewing the PRs by the way, @robinvd)
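A rough sketch of the queue-based model described above (the `send_dbus_signal` helper, the hook name handling, and the bound of 16 are made up for illustration; this is not code from the PR): DBus gets every event immediately, while hook processes drain a bounded queue one at a time.

```rust
use tokio::process::Command;
use tokio::sync::mpsc;

// Forward each incoming event to DBus right away, while hook executions go
// through a bounded queue that a separate task consumes sequentially.
async fn dispatch(mut events: mpsc::UnboundedReceiver<String>, hook: String) {
    // Bounded queue: if hooks fall too far behind, try_send fails and the
    // event is dropped instead of accumulating an unbounded delay.
    let (hook_tx, mut hook_rx) = mpsc::channel::<String>(16);

    tokio::spawn(async move {
        while let Some(event) = hook_rx.recv().await {
            if let Ok(mut child) = Command::new(&hook).env("PLAYER_EVENT", &event).spawn() {
                let _ = child.wait().await;
            }
        }
    });

    while let Some(event) = events.recv().await {
        send_dbus_signal(&event); // assumed helper: emit the DBus/MPRIS signal immediately
        if hook_tx.try_send(event).is_err() {
            eprintln!("hook queue full, dropping event");
        }
    }
}

// Placeholder for the DBus side; not part of the actual implementation.
fn send_dbus_signal(event: &str) {
    println!("dbus: {event}");
}
```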
If I read it correctly, you mean running the hook asynchronously, so we don't block on the hook? That does sound good, but does make a more complicated
Sorry, I could have phrased my proposal more clearly. What I actually meant to say is:
This way, we would ensure that:
I hope that this was somewhat understandable. 😅
Tested, works well!
Thanks for this PR! It sounds like this PR would address this issue. Unfortunately it seems that Spotifyd is getting neglected a bit, with the best PRs taking ages to get merged. Since this PR and #1025 are so related, could you please consider implementing this PR in your fork and then making a new unified PR? That might help get these two PRs merged more quickly. Thanks!
Yeah, this also happened to me once, but since I'm currently not using that branch daily, I haven't had time to investigate yet and did not assume that this was related to the PR. If it happens to me again, I might investigate a bit.
I don't think so, since this PR only helps if you have

Edit: I don't know much about the internals of async Rust / tokio, so I imagine that something weird could indeed be happening if the event channel isn't polled properly. Having the two of them enabled at the same time might indeed have an impact (?) and in some cases cause the issue you were describing.
I actually was considering merging some of the recent PRs into a branch to be able to use them all on my machine. If you are interested in using this, I could publish it in my fork. However, I don't think that creating one big PR would really help with getting them merged more quickly, since it would only make reviewing more difficult for the maintainers.
That would be great, thanks! I tried merging them myself but didn't get very far. I'll test it and let you know if I come across any problems.
Thanks! I have tested this branch; unfortunately, the issue is still present. It seems like it takes longer to fail, but it eventually does. The player events continue to show in stdout - perhaps that information helps.
@NNEU-1 It happened to me again today and I investigated a bit: It seems that this has something to do with token expiration for the Spotify API. I added some debug information to the output of spotifyd:
So the DbusServer is properly requesting a new token after the old one has expired, but apparently the other components don't really take notice of this (?) and continue using the invalid token. Strangely, the DBus loop doesn't print the
Ahh, I forgot that Spotifyd still communicates with the Spotify API separately from librespot.
This has been merged together with #1059. 🎉
As outlined in #959 (comment), handling too many events in a row can lead to the channel not being polled often enough, with the events then stacking up in the channel.
This fixes the behavior by implementing the hook process management asynchronously.
After the process has finished, the output is written blocking with io::stdout() to stdout (as before). I'll probably reimplement this in a non-blocking manner, thus this is still a "draft". I would be happy if someone else could already test the PR; for me it's working fine!
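A possible non-blocking variant, for illustration only (not the PR's actual code and with an assumed function name): capture the hook's output via tokio::process and forward it through tokio's async stdout handle instead of the blocking std::io::stdout().

```rust
use std::io;

use tokio::io::AsyncWriteExt;
use tokio::process::Command;

// Spawn the hook, wait for it asynchronously, and forward its captured
// output to stdout without blocking the runtime.
async fn run_hook_and_forward_output(hook: &str) -> io::Result<()> {
    let output = Command::new(hook).output().await?;
    let mut stdout = tokio::io::stdout();
    stdout.write_all(&output.stdout).await?;
    stdout.flush().await
}
```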
related: #959, #913