
real-time synchronous streams #18

Closed
ahdinosaur opened this issue Aug 28, 2016 · 46 comments
Comments

@ahdinosaur
Member

ahdinosaur commented Aug 28, 2016

hey @audiojs/core @audiojs/devs, thanks for the invite.

i'm Mikey. i do audio / visual programming whenever i feel like it, i host a local meetup called Art~Hack Wellington, @mmckegg and i collaborate in a livejs org, i use ndarray as a common data structure between these modules to represent ndsamples or ndpixels, and (mostly thanks to Matt playing Loop Drop) it turns out like this, this, or this.

so anyways, why did i make this issue? basically, i like what's happening here, but i have a tiny concern about exporting modules as node streams, because that's what i used to do as well.

why are node streams bad? a few reasons, most relevant here is that node streams are meant to be asynchronous over many event ticks. this is a problem in my most common use case, where the moment i read a "frame" of audio i want to immediately process it and output pixels to the screen (or LEDs). i want all of this to happen synchronously, because the faster the pixels are displayed the better (a few milliseconds of delay and everyone will notice it's not "on time", even if only subconsciously). even a single process.nextTick along the pipeline is wasted time, zalgo be damned, although i can see y'all have worked around this (set highWaterMark to zero, etc).

what do i recommend instead? so in my modules, i've started doing two things: either a) export a simple function, even if it will commonly be used as a stream. examples: pixels-apa102, rainbow-pixels, ndsamples-frequencies, ndpixels-convert. or b) export a pull-stream, which by default is an unbuffered synchronous stream (and has many other benefits). example: read-audio.

exporting a simple function is nice, because then it's useful for anyone regardless of which stream implementation they favor (@mmckegg uses observ-style observables for performance reasons, and i want Matt to use my modules 😄). this is becoming my preference now. exporting a pull stream is nice because it's a consistent interface, handles back-pressure and error propagation very well, can be used synchronously or asynchronously, and has a vibrant ecosystem. luckily it's trivial to wrap simple functions as pull streams (using pull.map, pull.infinite, pull.drain, etc), as in the sketch below.
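to make that concrete, here's a minimal sketch of wrapping a plain sync function as a pull-stream through (the applyGain helper is made up for illustration):

const pull = require('pull-stream')

// a plain sync function (hypothetical gain helper)
function applyGain (samples) {
  return samples.map(v => v * 0.5)
}

// wrapping it as a pull-stream through is one line
const gain = pull.map(applyGain)

// quick pipeline to exercise it
pull(
  pull.values([[0.2, 0.4], [0.6, 0.8]]),
  gain,
  pull.drain(frame => console.log(frame))
)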

anyways, lots of text. curious to hear what y'all think. thanks again for the invite. 😺

@jamen
Contributor

jamen commented Aug 28, 2016

First off, awesome work in livejs, and glad you joined. 😄

If I understand correctly: Node streams are good at chunking things for larger amounts of data, but for some use cases in audio they would be too slow regardless, and a synchronous version would be faster.

I like your idea of using individual synchronous functions, and to some degree I think we already have that with things like pcm-util (and the *-util packages in general). I'm curious to see what @dfcreative thinks about creating more of these maybe, and then basing the stream components around them.

I still think it is important to support some level of Node streams so they can work inside the ecosystem (i.e. with sockets or fs streams), but I'm not against the idea of using pull-stream either if there is a nice way to include it (I haven't used it before, so I'm open to ideas).

I'm still kind of new to these packages myself, so what are your thoughts @dfcreative?

@ahdinosaur
Member Author

ahdinosaur commented Aug 28, 2016

So, if I understand correctly: Node streams are good at chunking things for larger amounts of data, but for some use cases (especially in audio) they would be too slow.

yes, Node streams buffer data by default and in some cases add unnecessary clock ticks when processing a pipeline of streams (in the worst case each transform adds a clock tick). neither is particularly helpful for a real-time audio use case, but i think it's possible to work around, as it seems y'all have done. in pull streams this becomes explicit, since the default is unbuffered synchronous streams (and waaaaay less magic happening, the source code is so small ✨).

I still think it is important to support some level of Node streams so they can work inside the ecosystem (i.e. with sockets or fs streams)

yep agree, for this we have stream-to-pull-stream and pull-stream-to-stream. or there's also pull-net, pull-file and pull-write-file, or pull-ws for the same in native pull streams.
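for example, pulling from a node readable with stream-to-pull-stream looks roughly like this (the ./track.pcm path is just a placeholder):

const fs = require('fs')
const pull = require('pull-stream')
const toPull = require('stream-to-pull-stream')

// wrap a node readable as a pull-stream source,
// then consume it with a plain pull-stream sink
pull(
  toPull.source(fs.createReadStream('./track.pcm')),
  pull.drain(chunk => console.log('got', chunk.length, 'bytes'))
)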

@dy
Member

dy commented Aug 28, 2016

@ahdinosaur oh thanks for the info and links, much appreciated. I've read them, very nice!

That delay is indeed bad, and tbh I don't like how streams perform along with web-audio: occasional glitches etc. But it might not be the streams themselves, rather syncing, GC etc; scriptProcessor is glitchy regardless of the code we use, and I hope the upcoming audio-worker could help with that. Also there might be browserify-caused delays etc; I am going to rewrite mcjs to get smaller and faster bundles to see.

Some time ago, in audio-through or thereabouts, I had tests of function-call sequences vs raw/object streams, and as I understood it, streams in flowing mode do not necessarily cause processor ticks and can be synchronous; performance was actually identical to plain functions, so I decided that streams were the better choice due to the conventional API, inheriting Emitter, builtin pressure control etc.

As for pull-stream, zen-observable, rxjs, cycle.js, observable-proposal, etc - there are too many and honestly I am lost. I tried to research and compare, but that was difficult, so I opted for the most standard solution. Actually my hope still is to use as few wrappers and be as unopinionated as possible, in favor of native/polyfill solutions. That is why AudioBuffer :)

I like the idea of a function interface and pull-streams, they are definitely worth a try. But I don't clearly see why streams are bad. With audio-through they are reduced to a single function, which can be sync or async as decided by the user, in mocha-test style (tbh I am not sold on the Zalgo concern, mocha's sync/async API is very nice):

const Through = require('audio-through');
const Speaker = require('audio-speaker');
const util = require('audio-buffer-utils');

//sync style
Through(buffer => util.fill(buffer, Math.random))
.pipe(Speaker());

//async style
Through((buffer, done) => processOnGPU(buffer, done))
.pipe(Speaker());

Async style here is not applicable for real-time use, but it might be useful for offline sound rendering, preset-based post-processing, decoding/encoding, network streaming etc.

So I don't see apparent benefits of pull-streams compared to regular streams in terms of performance here, nor in terms of API simplicity, but they are definitely nicer in terms of code simplicity and lack of magic. It needs research/comparison/benchmarks.

But it seems reasonable to provide simple function exports for packages, if possible, and a stream wrapper like audio-*/stream, as it is done in audio-decode. Not all modules can be done that way, though.

So what I would suggest is to hold off on any refactoring until there are tangible numbers proving why streams are bad and what is better.

@dy
Member

dy commented Aug 28, 2016

@ahdinosaur hi, could you help with composing a simple pull-stream sine generator → volume control → speaker?
I'm trying to make the readme example run and I don't get it:

const pull = require('pull-stream');

//pull-stream

//a stream of random numbers.
function source (n) {
    return function (end, cb) {
        if(end) return cb(end)

        //only read n times, then stop.
        if(0 > --n) return cb(true)

        cb(null, Math.random())
    }
}

//volume changer
function through (read, map) {
    //return a readable function!
    return function (end, cb) {
        read(end, function (end, data) {
            cb(end, data != null ? map(data) : null)
        });
    }
}

//read source and log it.
function sink () {
    return function (read) {
        read(null, function next(end, data) {
            if(end === true) return
            if(end) throw end

            console.log(data)
            read(null, next)
        });
    }
}

pull(source(), through(), sink());

Gives an error: "read is not a function". What should I pass as the read argument? How am I supposed to create a through instance? This functional thing creates a bit of confusion about which function call is what.

UPD. Got it: https://github.com/dominictarr/pull-stream-examples/blob/master/pull.js; the readme example is broken. (See the corrected curried shape sketched below.)
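For the record, the shape that works is a curried through: the map is applied first, and the resulting function takes read.

// curried through: take the map first, return a function of `read`
function through (map) {
    return function (read) {
        // return a readable function
        return function (end, cb) {
            read(end, function (end, data) {
                cb(end, data != null ? map(data) : null)
            })
        }
    }
}

// now pull() can compose it, e.g.
// pull(source(5), through(x => x * 2), sink())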

@dy
Member

dy commented Aug 28, 2016

@ahdinosaur here is the setup I have for streams https://github.com/dfcreative/stream-contest/blob/gh-pages/stream.js, I want to come up with the same for pull-stream. I think pull-stream should be better for speaker’s case.

@ahdinosaur
Member Author

@dfcreative awesome! for most real-world use cases i'd recommend using the helper functions within pull-stream:

const pull = require('pull-stream')

//pull-stream

//a stream of random numbers.
function source (n) {
    return pull(
        pull.infinite(Math.random),
        pull.take(n)
    )
}

//volume changer
function through (map) {
    return pull.map(map)
}

//read source and log it.
function sink () {
    return pull.log()
}

pull(source(10), through(data => data), sink())

you can read those functions' source code for how to do this 'right': pull.infinite, pull.take, pull.map, pull.log (which calls pull.drain)

@ahdinosaur
Member Author

ahdinosaur commented Aug 28, 2016

here's a first pass of your stream-contest:

const pull = require('pull-stream');
const context = require('audio-context');
const util = require('audio-buffer-utils');

let frameSize = 1024;

function sine () {
  return pull.infinite(function () {
    return util.noise(util.create(frameSize))
  })
}

function volume () {
  return pull.map(function (data) {
    util.fill(data, v => v * .01);
    return data
  })
}

//create speaker routine
function speaker (context) {
  return function (read) {
    var bufferNode = context.createBufferSource()
    bufferNode.loop = true;
    bufferNode.buffer = util.create(2, frameSize)
    var node = context.createScriptProcessor(frameSize)
    node.addEventListener('audioprocess', function (e) {
      read(null, function (err, data) {
        util.copy(data, e.outputBuffer)
      })
    })
    bufferNode.connect(node)
    node.connect(context.destination)
    bufferNode.start()
  }
}

pull(sine(), volume(), speaker(context))

@dy
Member

dy commented Aug 28, 2016

Alright, some preliminary results show that pull-streams have about 35% less overhead than streams, and ~40% more overhead than plain functions. stream-contest

@dy
Member

dy commented Aug 28, 2016

@ahdinosaur I am trying to add a pull-stream-to-stream test, converting the sink stream to a node stream, but I can't wrap my head around it:

let toStream = require('pull-stream-to-stream');
toStream.source(sine).pipe(toStream(volume)).pipe(toStream(speaker));

Does not work.
What is the best way to do so?

@ahdinosaur
Member Author

toStream is by default for duplex streams, but also has methods .source and .sink for readable and writable, respectively.

it looks like toStream.through doesn't exist, probably because there's been no reason to implement it yet.

so maybe try:

let toStream = require('pull-stream-to-stream');
toStream.source(sine).pipe(toStream.sink(speaker));

@dy
Member

dy commented Aug 29, 2016

Nope, neither this nor sine.pipe(volume).pipe(toStream.sink(speaker)); works, where sine and volume are node streams. Feels like pull-streams don't really want to be compatible.

Btw @ahdinosaur @mmckegg @jamen could you look at the numbers in stream-contest? I can't work out their significance, i.e. whether refactoring streams to pull-streams is a good idea for audiojs. The overhead is constant; it is not going to blow up with data size or processing intensity.

Also I really want to estimate the price of pull-stream → stream conversion, in case we need to output sound in a stream-compatible way, but it seems that right now it is a bit tricky. Probably it is the sum of both latencies.

@jamen
Contributor

jamen commented Aug 29, 2016

Those numbers look good to me and for what I would use this for. However, I wouldn't mind transitioning for the little added benefit, but only if everyone else wanted to as well.

Edit: If those numbers don't change like you say, then I don't know if it is worth the time, but I still wouldn't personally mind. Pull-streams do look nice though. :'(

@ahdinosaur
Member Author

Nope, neither this nor sine.pipe(volume).pipe(toStream.sink(speaker)); work, where sine and volume are node streams. Feels like pull-streams don’t really want to be compatible.

hmm, yeah seems like a bug, we'll want to get that working if we do make this transition.

i think with regards to the benchmarks, a relevant number to compare against is how much time you have to render each frame given 60 frames per second: 16.67 milliseconds. @dfcreative is the 0.5 ms increase in time per stream in a pipeline or per pipeline? as in, if you had many transform streams in between, would that be an extra 0.5 ms per transform?

i really appreciate you all listening and even considering something like a transition to pull streams, but i don't want to add more work on you all if it doesn't feel right. on the flip side if it does feel right i'd like to make it work as best as possible, given my own adoption of pull streams. at the very least, i'm very happy with an outcome of exporting more simple functions that are not tied to a particular stream implementation. 😄

@dy
Member

dy commented Aug 29, 2016

One more thing to note - web-audio-api does not actually care how long it takes for js to process a buffer, it will just spit out whatever data we managed to put into outputBuffer, whether it was in time or not, and as for node there is a lot of buffering in speaker anyway, at least on windows.

But it appears that streams are a bit of a heavy interface for common use, and pull-streams are hip but unproven. Basically there is no big difference between the two, it is like left or right hand in theory: the first just comes with heritage and the second with error/end propagation.

We can place the stream implementation in a ./stream file and the pull-stream one in ./pull for each component (or, crazier, ./left and ./right? or ./pull and ./push?), but what should remain in ./index.js?

Many audio-* modules right now use audio-through as a dependency; it is possible to make them like:

//audio-smth/stream.js
module.exports = Through(require('./index'));

//audio-smth/pull.js
//through wrapper putting ./index.js in use

//audio-smth - direct processor or constructor of processor if stateful impl needed
module.exports = function (buffer, cb) {
  return util.noise(buffer);
}
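For instance, ./pull could be little more than a wrapper around the plain export (a rough sketch, assuming ./index exports the plain sync per-buffer function above):

//audio-smth/pull.js (sketch)
const pull = require('pull-stream');
const fn = require('./index');

module.exports = function () {
  return pull.map(fn);
};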

@ahdinosaur yeah, I think it would be great to have pull-streams supported; we should try some components, the main ones being audio-speaker, web-audio-stream, audio-generator.
The only point: doesn't it seem a bit cumbersome to have audio-thingy, audio-thingy/pull and audio-thingy/stream? (For me not really :)

@ahdinosaur
Member Author

ahdinosaur commented Aug 29, 2016

@dfcreative there are three general types of through streams i'm aware of:

sync map

function (buffer) {
  return buffer
}

async map

function (buffer, cb) {
  cb(null, buffer)
}

through (like through, through2, or pull-through), for when you want to queue multiple outputs per input and/or push something at the end of a stream (this could also be done as a flat map: returning a stream in the map and flattening to items).

function write (buffer, cb) {
  cb(null, buffer)
}

function end (err, cb) {
  cb(true)
}

either we export the one that makes the most sense and have audio-through convert this appropriately to a node stream or export the through version or ...?

@dy
Member

dy commented Aug 29, 2016

audio-through now supports both the sync and async map formats, depending on the number of arguments.
If it is possible to make them work well with pull-streams, that would be awesome:

//sync
module.exports = function (buffer) {
  return buffer; //← enough just to modify btw, not necessary to return
}

//async
module.exports = function (buffer, cb) {
  cb(null, buffer);
}

//stateful
module.exports = function (opts) {
  //some state
  let count = 0;
  return (buffer, cb?) => {
    count++;
    return buffer;
  }
}

@dy
Member

dy commented Aug 29, 2016

It’s alright then? I will try to refactor audio-generator. (audio-generate would be a better name though.)

@ahdinosaur
Member Author

ahdinosaur commented Aug 29, 2016

yep, exporting a factory function that returns a sync or async map function is perfect. i'm also happy with ./stream and ./pull if we want to add stream exports using ./, although in practice i stopped doing that in favor of just the function exports, which i'd wrap myself.

i'll add to my TODO: implement pull-stream compatible versions of the following:

  • audio-speaker (sink)
  • web-audio-stream (sink and source)
  • audio-generator (source)
  • audio-through (through)

this will give us another alternative stream implementation to play with while exporting modules.

although, not sure when i'll do this, since i should be working on a full-time contract at the moment, and am playing with LEDs in my free time 😉 but definitely keen to contribute here!
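as a teaser, here's a rough sketch of the shape a pull-stream sink like audio-speaker could take (a sketch only: createWrite and its write(buffer, cb) signature are hypothetical, not the actual audio-speaker API):

const createWrite = require('audio-speaker/direct') // hypothetical plain writer factory

module.exports = function (opts) {
  const write = createWrite(opts)
  // a pull-stream sink is just a function of `read`
  return function sink (read) {
    read(null, function next (end, buffer) {
      if (end === true) return
      if (end) throw end
      write(buffer, function (err) {
        if (err) return read(err, function () {})
        // pull the next buffer only once this one has been accepted
        read(null, next)
      })
    })
  }
}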

@dy
Member

dy commented Aug 29, 2016

Sure, I am glad that we came up with a convention, and actually happy to have a more-or-less universal interface for modules. I think to make it the main package export and stream/pull secondary. Though it will make greenkeeper shit bricks again, sorry for today's spam.

In the meantime I will try to bring these 4 modules to the map fn.

@dy
Member

dy commented Aug 29, 2016

To recap convention:

//sync
function write (buffer) {
  return buffer;
}
function end (buffer) {
  return null; //or true?
}

//async
function write (buffer, cb) {
  cb(null, buffer)
}

function end (err, cb) {
  cb(true)
}

@ahdinosaur right?

@ahdinosaur
Member Author

ahdinosaur commented Aug 29, 2016

@dfcreative if everything works as either a sync or async map, i'd suggest the convention is to export:

// sync through
module.exports = function (options) {
  return function (buffer) {
    return buffer
  }
}

// async through
module.exports = function (options) {
  return function (buffer, cb) {
    cb(null, buffer)
  }
}

which can be used as node streams in the same way as through2-map and through2-asyncmap.
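for instance (a sketch; the audio-smth component name and its options are made up):

const map = require('through2-map')
const createProcessor = require('audio-smth') // hypothetical component following this convention

// objectMode so AudioBuffers pass through untouched
const stream = map({ objectMode: true }, createProcessor({ /* options */ }))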

@dfcreative are there use cases that require the more advanced features that through2 allows for, namely this.push of multiple outputs per input or pushing output when the stream has ended? that was where the write and end functions come in, but if we can avoid that, even better.

@ahdinosaur
Member Author

ahdinosaur commented Aug 29, 2016

above comment is for transform / through streams.

for a readable / source stream like audio-generator, we'd want something that works with from2:

// sync source
module.exports = function (options) {
  return function (size) {
    return buffer
  }
}

// async source
module.exports = function (options) {
  return function (size, cb) {
    cb(null, buffer)
  }
}
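wiring that up with from2 could look roughly like this (a sketch, assuming audio-generator ends up following the async source convention above):

const from = require('from2')
const createGenerator = require('audio-generator') // assuming the factory convention above

const generate = createGenerator({ /* options */ })

// object-mode readable that asks the generator for a frame on every read
const readable = from.obj(function (size, next) {
  // next(null, buffer) pushes a frame, next(null, null) ends the stream
  generate(size, next)
})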

@dy
Member

dy commented Aug 29, 2016

What is the way to end these streams? return null and cb(true)?

There are plans for audio-splitter and audio-merger components, also audio-mixer. But the API is not designed yet.
I am not sure I understand this.push of multiple outputs per input - you mean that a stream outputs more items than it gets from a single input, right? Like some decoder?

@jamen
Contributor

jamen commented Aug 29, 2016

I think you just return without doing cb?

@ahdinosaur
Member Author

ahdinosaur commented Aug 29, 2016

and lastly, for writable / sink streams like audio-speaker:

// sync sink
module.exports = function (options) {
  return function (buffer) {
    // do stuff
  }
}

// async sink
module.exports = function (options) {
  return function (buffer, cb) {
    // do stuff
    cb()
  }
}

What is the way to end these streams?

what use cases for ending a stream do you have in mind?

so far, for sync functions you can end by throwing an error, and for async functions you can end by calling back with an error, but i can imagine the stream wrappers could add external .abort (or otherwise named) functions (like pull-abortable) to abort from the outside.

I am not sure I understand this.push of multiple outputs per input, you mean that stream outputs more items than it gets?

yep, simple example:

const pipe = require('pump')
const stream = require('stream')

pipe(
  process.stdin,
  new stream.Transform({
    transform: function (chunk, enc, cb) {
      this.push(chunk)
      this.push(chunk)
      cb()
    }
  }),
  process.stdout
)
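and the rough pull-stream analogue of the same idea, using map plus flatten (just a sketch with plain numbers, not audio buffers):

const pull = require('pull-stream')

pull(
  pull.values([1, 2, 3]),
  pull.map(x => [x, x]),  // one input item becomes two output items
  pull.flatten(),         // expand the arrays back into individual items
  pull.drain(x => console.log(x))
)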

@dy
Member

dy commented Aug 29, 2016

We need to put this in the wiki.

use cases for ending a stream

E.g. audio-slice, which ends the stream after the nth second (similar to take in pull-streams).

Generate().pipe(Slice(2)).pipe(Speaker())

I usually do that by returning null or cb(null, null); indeed, very similar to pull-streams as I understand it.

@ahdinosaur
Member Author

ahdinosaur commented Aug 29, 2016

okay cool. so one important difference between node streams and pull streams is how to signal the end without an error.

in node streams, you signal end with cb(null, null). the first slot of the callback is only meant for errors, the second slot is meant for data, with the exception of null meaning "the end of the stream" (like EOF in unix).

in pull streams, you signal end with cb(true). the first slot of the callback is meant for the end, whether an error or "the end of the stream", which means the second slot is free to have null data.

personally i prefer the latter (cb(true) to signal end), but this would be a slight deviation from raw node streams, so we'd have to make our node stream wrappers support this appropriately.
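side by side (just restating the above as code):

// node-stream style callback
cb(err)          // error
cb(null, data)   // data
cb(null, null)   // end of stream (EOF-like)

// pull-stream style callback
cb(err)          // error (any truthy non-true value)
cb(null, data)   // data
cb(true)         // end of stream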

@dy
Member

dy commented Aug 29, 2016

What if in functional code we just consider returned null data as an end? It would be similar to the sync style, where we return null for end:

//sync
return <data|null>;

//async
cb(null, <data|null>);

It takes a bit of parsing in the pull-stream wrapper, though.

@ahdinosaur
Member Author

What if in functional code we just consider returned null-data as an end?

yeah, i noticed that's what you are doing already, is good with me. 👍

@dy
Member

dy commented Aug 29, 2016

Added a wiki page. Feel free to correct me.
How do we end sink streams? And for throwing errors in sync streams - just a usual throw? Or return an error?
Tbh if I were to design streams from scratch, I would get rid of the 2 args in the callback in favor of:

cb(null) //end stream
cb(err) //propagate error
cb(data) //push chunk

@dy
Member

dy commented Aug 29, 2016

@ahdinosaur I've refactored audio-generator, so now it exports a simple function as audio-generator/index; the main entry is left as ./stream for now.

I think we'll make the major switch when all components have pull-streams.

@ahdinosaur
Member Author

How do we end sink streams?

i guess return null like the other streams? in pull-stream, since sinks read from a source (hence the name pull streams: the sinks pull the data from the sources), they end simply by no longer reading from the source.

And throwing errors in sync streams - just usual throw? Or return an error?

yeah, i reckon throw errors. personally i'm experimenting with returning errors, but i think it's far more common to throw. however i could be convinced otherwise, since throwing implies a catch which will also catch "programmer errors" (incorrect arguments to a function) which is the problem with promises.

Tbh if I were to design streams from scratch, I would get rid of 2 args in callback in favor of

haha, maybe. but what if your data is an error or null? i personally am okay with this separation, hehe.

@ahdinosaur I’ve refactored audio-generator, so now it exports simple function as audio-generator/index, main file is left for ./stream for now.

woo awesome! 🎉

@dy
Member

dy commented Aug 29, 2016

I am fine with throwing errors and sticking with the node convention of 2-arg callbacks; that was just a quick idea.

@dy
Member

dy commented Aug 31, 2016

So far I've rewritten audio-generator, audio-sink and web-audio-stream in functional style, and it feels great.
Somewhat inspired by Crockford's better js (no self and stuff). That is one of the best decisions of this year. Thanks @ahdinosaur!

@dy
Member

dy commented Sep 2, 2016

I've faced an issue while implementing web-audio-stream/reader.
The code is the following:

function Reader(options) {
    //...skipped inits
    let node = context.createScriptProcessor(samplesPerFrame, channels, channels);
    let release;
    node.addEventListener('audioprocess', e => {
        let cb = release;
        release = null;
        cb && cb(null, e.inputBuffer);
    });
    return function read (cb) {
        release = cb;
    }
}

From outside I call it like so:

let read = Reader();
let count = 0;
read(function getData(err, buffer) {
    if (count++ < 5) {
        read(getData);
    }
    //here we need to end read stream
    else {
    }
});

In that case I would follow the usual pull-stream style and just stop reading, but I also want to disconnect and dispose of the node. What is a good way to do that? Passing null as the callback? Or do I have to use a through, where I can pass null to indicate the end?

Also some sources, like audio-generator, might actually take an already-created buffer to fill, instead of creating a new buffer each call. That is a practice to avoid memory churn; e.g. scriptProcessorNode just swaps inputBuffer and outputBuffer instead of creating new ones.

So I would suggest extending the sources convention to something like:

// sync source
module.exports = function (options) {
  return function read (buffer?) {
    //return null to end
    return buffer
  }
}

// async source
module.exports = function (options) {
  return function read (cb, buffer?) {
    //cb(null, null) to end
    //cb(error) to throw error
    cb(null, buffer)
  }
}

Idk actually, I don't like the uncertainty of arguments here. But swapping the argument order to read(cb, buffer?) interferes with the rest of the API. So I'm puzzled about what to do in this case.

UPD. read(null) is fine for indicating the end. And read(buffer) is fine for a synchronous reader. Rubber duck decision making in action :)

@ahdinosaur
Member Author

@dfcreative great to see your progress. 😃

i like everything you've said above, would be happy with any approach you have in mind.

another option is being able to optionally attach an .abort(err, cb?) function to the return value of the constructor (so the read(cb?), transform(buffer, cb?), or write(buffer, cb?) functions).

prior art:

@ahdinosaur
Member Author

Also some sources, like audio-generator, might actually take already created buffer to fill, instead of creating a new buffer each call. That is a practice to avoid memory churn, e. g. scriptProcessorNode just swaps inputBuffer and outputBuffer instead of creating a new one.

is it possible for audio-generator to handle this on its own? such as by taking a buffer in the constructor to be returned on each read, or if no buffer is given then creating its own buffers. in the case of a scriptProcessorNode (like your new web-audio-stream/reader) the source stream does this within the read function and doesn't need to be passed a buffer, yeah? i'm only wondering if it's possible to keep read(cb?) instead of read(buffer, cb?), i ran into this with audio-generator/pull.

@dy
Member

dy commented Sep 2, 2016

Sounds very nice! Agreed about the generator.
Glad to know about the '.abort' method, it makes closing a stream explicit. I would add an 'end' method to every one of them.

@dy
Member

dy commented Sep 2, 2016

So I changed the generator notation; now buffer is optional. The returned function also provides an end method. (Should we call it close, or is end fine?)

@jamen
Contributor

jamen commented Sep 2, 2016

Nice work :o and .end seems good to me.

@ahdinosaur
Member Author

hey @dominictarr, just wondering: i seem to recall you saying one time that you were considering making it part of the pull-stream spec that every sink stream had a .abort function; was this real life and if so do you care to unpack the story more for us? thanks. 🐱

@dominictarr

wow, very interesting thread.

First let me clear up a few things. @dfcreative in #18 (comment), regarding pull-streams/node-streams compatibility: the problem is currently just converting pull-streams to node-streams. I wrote that module a long time ago, but mostly did it the other way around: I converted node-streams to pull-streams with stream-to-pull-stream (which is rock solid). Recently, people transitioning to pull-streams have found that pull-stream-to-stream had some bugs, but we are fixing those currently.

And on the single arg: #18 (comment) - the problem with a single argument is that you then have to use type checking. Type checking in javascript is reliable for primitive types, but because of the way node modules work, you might get two different versions of a module, which can make instanceof not work as expected. https://github.com/Gozala/reducers used a single-arg pattern and got into trouble because of it.

@ahdinosaur pull.drain already has abort, https://github.com/pull-stream/pull-stream/blob/master/sinks/drain.js#L40-L45, and so does any sink which is implemented via drain. also, if you have a handle on a source stream, you can do source(true, cb) to abort it.
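for example (a quick sketch of aborting a source from the outside, and a drain aborting itself):

const pull = require('pull-stream')

// aborting a source directly: call it with a truthy first argument
const source = pull.values([1, 2, 3])
source(true, function (end) {
  // end === true once the source has released its resources
})

// drain-based sinks abort themselves when the op returns false
pull(
  pull.infinite(Math.random),
  pull.drain(function (value) {
    if (value > 0.9) return false // returning false aborts the pipeline
  })
)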

@dy
Member

dy commented Sep 4, 2016

@dominictarr #18 (comment) is no longer a big deal since we decided to provide both pull and stream entry points for each audio component. Btw thanks for pull-streams, they are incredibly good!

@dominictarr

oh, awesome!

@dy
Member

dy commented Sep 7, 2016

So it seems we've decided on this; other concerns are in separate tickets and @ahdinosaur's list seems to be done. Can we close the issue?
