observe could be simplified #2778
Comments
So, some doubts about the details of the proposed change. The main point, to simplify the implementation to remove one queue, is very valuable. A Pull Request would be welcome :)
Thanks for clarifying @diesalbla, I'll leave that part out. I'm a bit cautious of the refactoring, so I'm taking a bit of a digression to model this in PlusCal to find the interesting edge cases. I'll let you know what I find.
Apologies for the delay, I've had a lot of fun modeling this with TLA+ and PlusCal. It took a bit of effort to model the different state changes, including interruptions and error states, but I think it was worthwhile.

It turns out that there's at least one difference in behaviour between the proposed and current code. The proposed code swallows errors if the observer handles them. In the following snippet, the error isn't propagated to the output stream:

```scala
val output = input.observe(_.handleErrorWith(handler))
```

There are also a few odd behaviours relating to interruption. The following code causes the input stream to be cancelled, but the output stream to succeed:

```scala
val output = input.observe(_.interruptAfter(1.second))
```

These behaviours both have the same cause: by removing the extra channel, we introduce a relationship between the scope of the observer and the scopes of the input and output streams. With that in mind, I think it would be better to leave the code as is.

Other interesting behaviours

The model checker also picked up on a few incorrect assumptions I had. In particular, I assumed that if the output and observer streams terminated successfully, all elements pulled from the input would be received by the output. This is not the case for finite observers. The following code succeeds, but never prints "Pulled through output" (a runnable version is sketched after this comment):

```scala
val input: Stream[IO, Int] = Stream(1).covary[IO].debug(_ => "Pulled from input")
val observer: Pipe[IO, Int, Nothing] = _.take(1).debug(_ => "Pulled through observer").drain

input.observe(observer)
  .take(1).debug(_ => "Pulled through output")
  .compile.toList
```

There's an existing test for this behaviour:

```scala
group("handle finite observing sink") {
  test("2") {
    forAllF { (s: Stream[Pure, Int]) =>
      observer(Stream(1, 2) ++ s.covary[IO])(_.take(1).drain).compile.toList
        .assertEquals(Nil)
    }
  }
}
```

Is this behaviour intended? If not, it would be possible to avoid it by sending the element to the output channel before it is output to the observer.

Termination

Somewhat reassuringly, the model checker proved that the code always terminates. It also proved a bunch of other invariants, at least for the simplified system I coded up. If you're interested, you can have a look at the PlusCal code (I'm fairly new to it, so the code isn't as clean as it could be, and likely has a few bugs).
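For reference, the finite-observer example above can be packaged into a small standalone program. The sketch below is a hedged completion of that snippet; the `IOApp` wrapper, the object name, and the final `IO.println` are additions for illustration, not part of the original code.

```scala
import cats.effect.{IO, IOApp}
import fs2.{Pipe, Stream}

object FiniteObserverExample extends IOApp.Simple {
  val input: Stream[IO, Int] =
    Stream(1).covary[IO].debug(_ => "Pulled from input")

  val observer: Pipe[IO, Int, Nothing] =
    _.take(1).debug(_ => "Pulled through observer").drain

  // Runs to completion, but "Pulled through output" is never printed:
  // the single element is consumed by the finite observer and never reaches the output.
  val run: IO[Unit] =
    input.observe(observer)
      .take(1).debug(_ => "Pulled through output")
      .compile.toList
      .flatMap(out => IO.println(s"output elements: $out"))
}
```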
@zainab-ali Agreed on leaving the code as is given your discoveries.
This is a follow-up of #2765.
I think it's possible to simplify the implementation of observe to use a single channel instead of two, as demonstrated in this branch. @diesalbla also raised the question of how observe should behave for non-standard pipes, and whether the current implementation was intuitive.
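For context, here is a minimal sketch of how observe is used: elements reach the output unchanged while also being fed to the observer pipe. The object name and the printSink pipe are illustrative assumptions, not code from the linked branch.

```scala
import cats.effect.{IO, IOApp}
import fs2.{Pipe, Stream}

object ObserveUsage extends IOApp.Simple {
  // Hypothetical observer pipe: prints each element and produces no output of its own.
  val printSink: Pipe[IO, Int, Nothing] =
    _.evalMap(n => IO.println(s"observed: $n")).drain

  // Elements pass through to the output list while also being observed.
  val run: IO[Unit] =
    Stream.range(1, 4)
      .covary[IO]
      .observe(printSink)
      .compile
      .toList
      .flatMap(out => IO.println(s"output: $out")) // expected: output: List(1, 2, 3)
}
```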
A brief history of observe
From trawling through the code and issues, this is how I gather the current version came into being.
- Observe was first implemented using diamond. It was then reimplemented using a bounded and an unbounded queue in 9f97145.
- The semaphore was introduced in 619c21e as part of "Bugfix/hanging nested observe" (#1067).
- It was bugfixed to not be eager in 1292fe1 as part of "Fixes eager observe that was causing early cleanup of resources." (#1106).
- It was rewritten to be clearer in 01153df.
- It was ported to cats effect 3 in 6f95533.
- It was modified to use channels in d0cc740.
I'm not sure why two queues were used in the first place.