Rendering to loopback w/ simultaneous playback on hardware device #182
So if I'm understanding right, you want to play and spatialize the audio from the video objects completely separately from the audio logic of the rest of the app, just using the same audio output? Most audio systems are capable of opening the same device multiple times and mixing what they're given to the output, even if the hardware itself isn't capable of mixing. In that case, you should be able to simply call alcOpenDevice a second time, so the video audio and the main audio each get their own device and context.

The main thing to watch out for with this is the current context. If your audio handling is all on one thread (i.e. both the video object audio and the main audio are handled on the same thread), just make sure the correct context is current before making the corresponding AL/ALC calls.

Or, instead of opening two independent devices, you could create a second context on the one device and keep the video audio's sources there.

If I'm misunderstanding, or if you still have questions about what exactly to do, feel free to ask.
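To make the "open the output twice and switch contexts" idea concrete, here is a minimal sketch; the device/context names are purely illustrative and error checking is omitted:

```c
#include <AL/al.h>
#include <AL/alc.h>

int main(void)
{
    /* Two independent handles on the same output; the system mixes both. */
    ALCdevice  *engineDev = alcOpenDevice(NULL);   /* default output, for the audio engine */
    ALCcontext *engineCtx = alcCreateContext(engineDev, NULL);

    ALCdevice  *videoDev  = alcOpenDevice(NULL);   /* same output, opened a second time */
    ALCcontext *videoCtx  = alcCreateContext(videoDev, NULL);

    /* Before touching the audio engine's buffers/sources: */
    alcMakeContextCurrent(engineCtx);
    /* ... alGenBuffers / alGenSources / alSourcePlay / alSource3f ... */

    /* Before touching the video object's audio: */
    alcMakeContextCurrent(videoCtx);
    /* ... the same AL calls, now applied to the video's context ... */

    /* Cleanup. */
    alcMakeContextCurrent(NULL);
    alcDestroyContext(videoCtx);
    alcDestroyContext(engineCtx);
    alcCloseDevice(videoDev);
    alcCloseDevice(engineDev);
    return 0;
}
```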
Thanks for the reply -- yes for the most part we have a common understanding here, but there are a few points of confusion:
Thanks!
I see. I suppose it's not possible to replace/override the video object's audio output component with something that's controlled by the audio engine? Things like DirectShow and GStreamer can do that using a custom sink, which gets the synchronized samples and handles the audio output.

Aside from simplifying the audio engine, it would also avoid another potential issue: the video's audio format. If the video is stereo and the video object plays it as stereo, then you'll have to use stereo spatialization regardless of what the hardware output actually is. Or worse, if the video is mono and is played back as mono, you won't be able to give it spatialized audio (at most it could have mono effects and distance attenuation, but no panning).
It won't cause any glitches or gaps for the audio engine, no. For normal playback devices, OpenAL Soft mixes and feeds samples to the hardware asynchronously, and isn't affected by what the current context is (the current context just influences which context the al* calls apply to).

Without knowing anything about how the video object actually works, I have no idea how it will react if it takes a bit longer than usual for it to get the replaced audio samples.
When you install OpenAL using Creative's installer (oalinst.exe), it provides two files: OpenAL32.dll (the router) and wrap_oal.dll (Creative's implementation, which the router loads). Using OpenAL Soft directly simply means you don't use the router.
Yeah, that would be ideal. I can't seem to find a way to do that at the moment; the only point at which I seem to be able to access samples is prior to the sync. I'll keep digging, but for now it seems unlikely.
Yeah you're absolutely right; I'm currently 'forced' to not support video with mono tracks...
OK, along these lines -- I'm also curious/concerned about introducing performance issues, if any. I will likely have multiple loopback devices, and multiple sources playing in the audio engine, so say for instance I have 10 videos and 50 sounds playing from the audio engine. That's 11 devices open and 11 contexts to continually switch between. I have no idea what the memory/performance impact of that would be, if any. Do you have a rough guesstimate of the kind of lag in the audio engine update / perf impact with that number of devices & contexts?
OK, great -- so I am not using the router DLL. Does this mean I can use the thread-local context, and then not have to worry about context switching at all? I think I saw somewhere that the router doesn't support the thread-local context extension.
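For what it's worth, here is a rough sketch of how the thread-local context extension (ALC_EXT_thread_local_context) can be used, assuming the audio engine and each video run their AL calls on their own threads; the helper name and the fallback behaviour are just illustrative:

```c
#include <AL/al.h>
#include <AL/alc.h>

/* Function-pointer type for alcSetThreadContext from
 * ALC_EXT_thread_local_context (alext.h also provides typedefs for it). */
typedef ALCboolean (ALC_APIENTRY *alcSetThreadContextFn)(ALCcontext *context);

/* Make `context` current for the calling thread only, if the extension is
 * available; otherwise fall back to the process-wide current context. */
static void make_context_current_for_thread(ALCdevice *device, ALCcontext *context)
{
    if (alcIsExtensionPresent(device, "ALC_EXT_thread_local_context"))
    {
        alcSetThreadContextFn setThreadContext =
            (alcSetThreadContextFn)alcGetProcAddress(device, "alcSetThreadContext");
        setThreadContext(context);       /* other threads' contexts are unaffected */
    }
    else
    {
        alcMakeContextCurrent(context);  /* global current context */
    }
}
```

With this, each thread can bind its own context once up front and then never call alcMakeContextCurrent again, instead of switching the global current context before every batch of calls.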
Not sure if you're still following this thread, but I had a couple more questions in addition to my last comment here:
Sorry for the lack of response.
11 devices/contexts will certainly have higher memory/cpu use, since each one maintains a dry buffer (the "master" mixing buffer all its contexts' sources and effects write to) which is then processed and written to the output. How much impact that'll have will depend on your hardware, and how much load the system is already under.
It can change the position of some source in whatever context happens to be current, generate an error, or do nothing. In theory it could also crash if there is no current context, but I don't think that will actually happen with any current implementation.
All contexts associated with the device, not just the current context. This mirrors the behavior of normal playback devices, which continue mixing the sources and effects for all of their contexts as well, not just the current one (for OpenAL Soft, at least).
Thanks a lot for the replies, kcat. Will re-open if I have more questions. Cheers.
I have the following situation, and I'm curious to know what the 'correct' way to implement a solution is:
I have a video object that streams (outputs) video and audio data - I am able to grab the audio data (samples) from this video object. I would like to spatialize this audio. To that end, I create an OpenAL loopback device, fill an OpenAL buffer and a source with the audio data (from the video object), play the source, render it on the loopback device, and transfer the rendered (i.e. now spatialized) samples back to the video object. The video object then plays this audio, spatialized, with no issues on my default playback hardware device. Cool.
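For concreteness, here is a rough sketch of that per-video loopback path using OpenAL Soft's ALC_SOFT_loopback extension; the 48kHz stereo float format and the 512-frame block size are just example values, and error checking is omitted:

```c
/* Expose extension prototypes from alext.h; this is fine when linking
 * OpenAL Soft directly, otherwise load the functions via alcGetProcAddress. */
#define AL_ALEXT_PROTOTYPES
#include <AL/al.h>
#include <AL/alc.h>
#include <AL/alext.h>

int main(void)
{
    /* Open a loopback device and a context that renders 48kHz stereo float. */
    ALCdevice *loopDev = alcLoopbackOpenDeviceSOFT(NULL);
    const ALCint attrs[] = {
        ALC_FREQUENCY,            48000,
        ALC_FORMAT_CHANNELS_SOFT, ALC_STEREO_SOFT,
        ALC_FORMAT_TYPE_SOFT,     ALC_FLOAT_SOFT,
        0
    };
    ALCcontext *loopCtx = alcCreateContext(loopDev, attrs);
    alcMakeContextCurrent(loopCtx);

    /* ... create a source, queue buffers filled with the video's audio,
     * set the source's 3D position, and alSourcePlay() it as usual ... */

    /* Whenever the video object wants more output audio, pull a block of
     * spatialized samples out of the loopback device (512 frames here). */
    float rendered[512 * 2];  /* 512 frames of stereo float */
    alcRenderSamplesSOFT(loopDev, rendered, 512);
    /* ... hand `rendered` back to the video object for playback ... */

    alcMakeContextCurrent(NULL);
    alcDestroyContext(loopCtx);
    alcCloseDevice(loopDev);
    return 0;
}
```

The extension also provides alcIsRenderFormatSupportedSOFT for checking whether a given frequency/channel/sample-type combination can be rendered before creating the context.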
Now: I also have an audio engine that also uses OpenAL, but is completely separate from the video object. The audio engine uses the default playback hardware device directly, with regular "non-streaming" files (.wav, etc.) to create buffers and sources and to spatialize and play them. Note that the audio engine handles multiple buffers and multiple sources. Cool.
Here's my eventual goal: I will have multiple video objects. I would like the audio from each to be spatialized, and for all objects to play back without sync issues etc. At the same time, the audio engine should also concurrently be able to play its own sources & buffers and not 'interfere' with the audio being output from all the video objects. All audio, from the audio engine and the video objects, is eventually output via the same and only available hardware device (i.e. my laptop speakers, with a single sound card).
This is achievable, right?
My main confusion lies with the number of contexts -- or loopback devices -- I should be creating. Here's my current understanding, and I would love some clarity on it:
The audio engine opens and initializes the default device. It has a single context. Let's call this Context A. So any time the audio engine needs to create a buffer or play a source etc., it makes Context A the active context, creates the buffer/calls play on the source etc.
For the video objects, I need to either:
OR
So my main question is whether I should go ahead with solution #1 or #2 above. Again, I'm assuming either can be implemented concurrently with the audio engine doing its thing on the playback hardware directly. Please let me know if my understanding of all this makes sense!
Thanks