
replace hook processes with tokio::process #1047

Closed
eladyn wants to merge 4 commits from the process_spawning_fix branch

Conversation

eladyn
Member

@eladyn eladyn commented Jan 26, 2022

As outlined in #959 (comment), handling too many events in a row can lead to the channel not being polled often enough, with events then stacking up in the channel.

This fixes the behavior by managing the hook processes asynchronously. After a process has finished, its output is written to stdout with the blocking io::stdout() (as before). I'll probably reimplement that part in a non-blocking manner, hence this is still a "draft".

I would be happy if someone else could already test the PR; for me it's working fine!

related: #959, #913

Previously, when too many librespot events were handled in a row, the spawned hook might not have finished yet. In that case, the events would stack up, since the event channel was not being polled in the meantime.

By using the tokio::process module, we gain support for waiting on processes asynchronously: even if an event is not handled immediately, it will be handled once the previous hook process has finished.
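
To illustrate the idea, here is a minimal sketch (not the actual spotifyd code; `run_hook` and its parameters are made up):

```rust
use std::collections::HashMap;
use std::io::Write;

use tokio::process::Command;

/// Spawn a hook program for one player event and wait for it without
/// blocking the async runtime (hypothetical helper, not spotifyd's API).
async fn run_hook(program: &str, env: HashMap<String, String>) -> std::io::Result<()> {
    // tokio::process::Command mirrors std::process::Command, but
    // output()/status() return futures, so the event loop can keep
    // running while the child process is still executing.
    let output = Command::new(program).envs(env).output().await?;

    // Write the captured output afterwards (still blocking, as noted above).
    std::io::stdout().write_all(&output.stdout)?;

    Ok(())
}
```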
@eladyn eladyn marked this pull request as ready for review January 26, 2022 17:57
@temax6

temax6 commented Feb 1, 2022

Thanks very much, I needed the endoftrack PLAYER_EVENT for a script. This fixes the player event hook for me.

@robinvd
Contributor

robinvd commented Feb 8, 2022

Nice! Looks good, but I'll have to test it later.

@eladyn
Member Author

eladyn commented Feb 8, 2022

Regarding this PR, I was wondering whether a queue-based model wouldn't make more sense. Currently, if the hook takes incredibly long, the librespot events will still stack up in the channel and there will still be an increasing delay / offset, since we only run a single process at a time. (Of course, if the script takes that long, it should definitely be fixed.)

However, especially in regard to #1025, where the DBUS implementation depends on the hook programs running fast enough (since it uses the same logic as the hooks), it might be preferable to maintain a separate (bounded) queue for the hook programs, to which new events are added and from which the hook programs would then "consume" them one after the other.

This would ensure that the DBUS events are sent as fast as possible and that hook programs can take their time to process the events properly.

All this wouldn't change the fact that we'd still need tokio::process for the loop waking part, so this PR would still be relevant.

If you think that what I wrote above sounds reasonable, I can try to implement something like this. (I would probably wait for #1025 to be merged though, to avoid too many conflicts.)

(Thanks for reviewing the PRs by the way, @robinvd)

@robinvd
Contributor

robinvd commented Feb 9, 2022

If I read it correctly, you mean running the hook asynchronously, so we don't block on the hook? That does sound good, but it does make the implementation a bit more complicated.

@eladyn
Member Author

eladyn commented Feb 9, 2022

Sorry, I could have phrased my proposal more clearly.

What I actually meant to say is:

  • when new events arrive, push them into a VecDeque; if the VecDeque exceeds a certain length (e.g. 5), discard the oldest events
  • when no process is running (or the last one has finished), remove the oldest event from the VecDeque and start a new hook with this event (see the sketch after the next list)

This way, we would ensure that:

  • even if the hook program does take a bit longer, no events are discarded
  • if the hook program is taking way too long, the "event delay" never grows too big
  • MPRIS signals are sent as fast as possible
  • only one process is running at a time (prevents possible race conditions in the hook, keeps the logic simple and close to the current implementation)
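
Roughly, in code, the bounded queue could look like this (just a sketch with made-up names, not actual spotifyd code):

```rust
use std::collections::VecDeque;

/// Example bound from the proposal above; anything beyond this is discarded.
const MAX_QUEUED_EVENTS: usize = 5;

/// Hypothetical bounded queue of player events waiting for the hook.
struct HookQueue<E> {
    events: VecDeque<E>,
}

impl<E> HookQueue<E> {
    fn new() -> Self {
        Self { events: VecDeque::new() }
    }

    /// Called for every incoming librespot event.
    fn push(&mut self, event: E) {
        self.events.push_back(event);
        // Drop the oldest events once the bound is exceeded, so the
        // "event delay" never grows too big.
        while self.events.len() > MAX_QUEUED_EVENTS {
            self.events.pop_front();
        }
    }

    /// Called whenever no hook process is running (or the previous one
    /// has just finished): take the oldest event and start the next hook.
    fn next_event(&mut self) -> Option<E> {
        self.events.pop_front()
    }
}
```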

I hope that this was somewhat understandable. 😅

@robinvd
Contributor

robinvd commented Feb 24, 2022

Tested, works well!

@robinvd robinvd requested review from robinvd, JojiiOfficial, slondr and a team February 24, 2022 15:59
@NNEU-1
Contributor

NNEU-1 commented Mar 3, 2022

> Regarding this PR, I was wondering whether a queue-based model wouldn't make more sense. Currently, if the hook takes incredibly long, the librespot events will still stack up in the channel and there will still be an increasing delay / offset, since we only run a single process at a time. (Of course, if the script takes that long, it should definitely be fixed.)
>
> However, especially in regard to #1025, where the DBUS implementation depends on the hook programs running fast enough (since it uses the same logic as the hooks), it might be preferable to maintain a separate (bounded) queue for the hook programs, to which new events are added and from which the hook programs would then "consume" them one after the other.
>
> This would ensure that the DBUS events are sent as fast as possible and that hook programs can take their time to process the events properly.
>
> All this wouldn't change the fact that we'd still need tokio::process for the loop waking part, so this PR would still be relevant.
>
> If you think that what I wrote above sounds reasonable, I can try to implement something like this. (I would probably wait for #1025 to be merged though, to avoid too many conflicts.)
>
> (Thanks for reviewing the PRs by the way, @robinvd)

Thanks for this PR!
I have noticed that (using #1025) the DBus PropertiesChanged messages seem to stop after Spotifyd has been running for some time.

It sounds like this PR would address this issue.

Unfortunately it seems that Spotifyd is getting neglected a bit, with the best PRs taking ages to get merged.

Since this PR and #1025 are so related, could you please consider implementing this PR in your fork and then making a new unified PR?

That might help getting these two PRs merged more quickly.

Thanks!

@eladyn
Member Author

eladyn commented Mar 4, 2022

> Thanks for this PR! I have noticed that (using #1025) the DBus PropertiesChanged messages seem to stop after Spotifyd has been running for some time.

Yeah, this also happened to me once, but since I'm currently not using that branch daily, I have not yet had time to investigate and did not assume that it was related to this PR. If it happens to me again, I might investigate a bit.

> It sounds like this PR would address this issue.

I don't think so, since this PR only helps if you have on_song_change_hook enabled, and even then it should not stop firing the events; it would just fire them with an increasing offset. The two PRs are definitely related, as #1025 does rely on the events being polled in the same loop that is also used for the hook.

Edit: I don't know much about the internals of async Rust / tokio, so I imagine that something weird could indeed be happening if the event channel isn't polled properly. Having the two of them enabled at the same time might indeed have an impact (?) and in some cases cause the issue you were describing.

> Since this PR and #1025 are so related, could you please consider implementing this PR in your fork and then making a new unified PR?
>
> That might help getting these two PRs merged more quickly.

I actually was considering merging some of the recent PRs into a branch to be able to use them all on my machine. If you are interested in using this, I could publish that in my fork. However, I don't think that creating one big PR really helps with getting them merged more quickly, since it only makes them more difficult for the maintainers to review.

@NNEU-1
Contributor

NNEU-1 commented Mar 4, 2022

> If you are interested in using this, I could publish that in my fork.

That would be great, thanks!

I tried merging them myself but didn’t get very far.

I'll test it and let you know if I come across any problems.

@eladyn
Member Author

eladyn commented Mar 4, 2022

@NNEU-1 FYI, I pushed the branch to https://github.com/eladyn/spotifyd/tree/pr_collection. It contains this PR, #1025, #1039, #1059 and #954.

@NNEU-1
Contributor

NNEU-1 commented Mar 5, 2022

Thanks! I have tested this branch; unfortunately the issue is still present. It seems to take longer to fail, but it eventually does.

The player events continue to show in stdout - perhaps that information helps.

@eladyn
Member Author

eladyn commented Mar 10, 2022

@NNEU-1 It happened to me again today and I investigated a bit:

It seems that this has something to do with token expiration for the Spotify API. I added some debug information to the output of spotifyd:

...
spotifyd[75027]: <Song> (220000 ms) loaded
spotifyd[75027]: loop <- this means that the dbus loop is receiving and handling the player events
spotifyd[75027]: loop
spotifyd[75027]: loop
spotifyd[75027]: loop
spotifyd[75027]: token is expired
spotifyd[75027]: starting token request
spotifyd[75027]: token is expired
spotifyd[75027]: token result: Pending
spotifyd[75027]: token is expired
spotifyd[75027]: token result: Pending
spotifyd[75027]: token is expired
spotifyd[75027]: token result: Ready(Ok(Token { ... }))
spotifyd[75027]: loop
spotifyd[75027]: Loading <Song> with Spotify URI <spotify:track:0D93F938DEADNB938Dxnd2>
spotifyd[75027]: <Song> (220000 ms) loaded
spotifyd[75027]: loop
spotifyd[75027]: loop
spotifyd[75027]: loop
spotifyd[75027]: loop
spotifyd[75027]: loop
spotifyd[75027]: Loading <Song> with Spotify URI <spotify:track:0D93F938DEADNB938Dxnd2>
spotifyd[75027]: <Song> (220000 ms) loaded
spotifyd[75027]: loop
# manually requesting the Metadata over dbus leads to the following
spotifyd[75027]: Couldn't fetch metadata from spotify: Err(Unauthorized)
spotifyd[75027]: loop
spotifyd[75027]: loop
...

So the DbusServer is properly requesting a new token after the old one has expired, but apparently the other parts don't really take notice of this (?) and continue using the invalid token. Strangely, the DBus loop itself doesn't print the "Couldn't fetch metadata from spotify" error. I might dig further into this, but this is almost definitely not a problem with this PR or #1025 (I had actually noticed previously, even before #1025, that the DBus API stopped returning sensible results).
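
To illustrate the kind of bug I suspect (purely hypothetical, this is not spotifyd's actual code): if a consumer captures the token value once instead of reading it from the shared state that the DbusServer refreshes, it will keep sending the expired token.

```rust
use std::sync::Arc;
use tokio::sync::RwLock;

#[derive(Clone)]
struct Token {
    access_token: String,
}

// Hypothetical sketch: the refresh task writes new tokens into the shared
// handle, so consumers must read through it instead of keeping an old clone.
struct ApiClient {
    shared_token: Arc<RwLock<Token>>,
}

impl ApiClient {
    /// Always reads the most recently refreshed token. Cloning the token
    /// once at startup and reusing it would reproduce the Unauthorized errors.
    async fn current_token(&self) -> Token {
        (*self.shared_token.read().await).clone()
    }
}
```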

@NNEU-1
Contributor

NNEU-1 commented Mar 11, 2022

Ahh, I forgot that Spotifyd still communicates with the Spotify API separately from librespot.
Good to know where the error lies; it would be great if it were fixed.

@eladyn
Member Author

eladyn commented Sep 8, 2022

This has been merged together with #1059. 🎉

@eladyn eladyn closed this Sep 8, 2022
@eladyn eladyn deleted the process_spawning_fix branch September 8, 2022 23:54