################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

Prerequisites:
- DeepStreamSDK 6.1.1
- Python 3.8
- Gst-python

To run:
 $ python3 deepstream_demux_multi_in_multi_out.py -i <uri1> [uri2] ... [uriN]
e.g.
 $ python3 deepstream_demux_multi_in_multi_out.py -i file:///home/ubuntu/video1.mp4 file:///home/ubuntu/video2.mp4
 $ python3 deepstream_demux_multi_in_multi_out.py -i rtsp://127.0.0.1/video1 rtsp://127.0.0.1/video2

This document describes the sample deepstream_demux_multi_in_multi_out application.

This sample builds on top of the deepstream-test3 sample to demonstrate how to:

* Use multiple sources in the pipeline (see the batching sketch after this list).
* Use `nvstreamdemux` to split the batch and output a separate buffer/stream per source.

`nvstreamdemux` helps when a separate output is required for each input stream.

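The multi-source half can be illustrated with a minimal sketch (not the full sample): `uris` is assumed to hold the command-line URIs, and `create_source_bin` is a hypothetical stand-in for the sample's uridecodebin-based source bin helper.

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new("demux-multi-in-multi-out")

# One batch slot per input URI.
streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
streammux.set_property("batch-size", len(uris))
pipeline.add(streammux)

for i, uri in enumerate(uris):
    source_bin = create_source_bin(i, uri)              # hypothetical uridecodebin wrapper
    pipeline.add(source_bin)
    sinkpad = streammux.get_request_pad("sink_%u" % i)  # request pads sink_0, sink_1, ...
    source_bin.get_static_pad("src").link(sinkpad)
```
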
Refer to the deepstream-test1 sample documentation for an example of simple
single-stream inference, bounding-box overlay, and rendering.

Nvstreamdemux reference - https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvstreamdemux.html

This sample accepts one or more H.264/H.265 video streams as input. It creates
a source bin for each input and connects the bins to an instance of the
"nvstreammux" element, which forms the batch of frames. The batch of frames is
fed to "nvinfer" for batched inferencing. "nvstreamdemux" then demuxes the
batched frames back into individual buffers, creating a separate Gst Buffer for
each frame in the batch. For each input, a separate branch is created with the
following elements in series:
`nvstreamdemux -> queue -> nvvidconv -> nvosd -> nveglglessink`
So for two inputs, two separate output windows are created; likewise, N inputs
produce N output windows.
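
Below is a rough sketch of how these per-input branches could be attached to the demuxer's request pads, continuing the sketch above (so `pipeline`, `uris`, and the Gst import are assumed to exist). The actual element factories behind the `nvvidconv`/`nvosd` shorthand are `nvvideoconvert`/`nvdsosd`, and the `nvinfer` stage between the muxer and the demuxer is omitted for brevity.

```python
# Split the batch back out and build one output branch per input.
streamdemux = Gst.ElementFactory.make("nvstreamdemux", "stream-demuxer")
pipeline.add(streamdemux)

for i in range(len(uris)):
    queue = Gst.ElementFactory.make("queue", "queue_%u" % i)
    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor_%u" % i)
    nvosd = Gst.ElementFactory.make("nvdsosd", "osd_%u" % i)
    sink = Gst.ElementFactory.make("nveglglessink", "sink_%u" % i)
    for elem in (queue, nvvidconv, nvosd, sink):
        pipeline.add(elem)

    # nvstreamdemux exposes one request src pad per batched stream: src_0, src_1, ...
    demux_srcpad = streamdemux.get_request_pad("src_%u" % i)
    demux_srcpad.link(queue.get_static_pad("sink"))

    queue.link(nvvidconv)
    nvvidconv.link(nvosd)
    nvosd.link(sink)
```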

The "width" and "height" properties must be set on the stream-muxer to set the
output resolution. If the input frame resolution is different from the
stream-muxer's "width" and "height", the input frame will be scaled to the
muxer's output resolution.

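For example (the values below are illustrative, not mandated by the sample):

```python
# Every batched frame is scaled to the muxer's output resolution.
streammux.set_property("width", 1920)
streammux.set_property("height", 1080)
```
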
The stream-muxer waits for a user-defined timeout before forming the batch. The
timeout is set using the "batched-push-timeout" property. If the complete batch
is formed before the timeout is reached, the batch is pushed to the downstream
element. If the timeout is reached before the complete batch can be formed
(which can happen with RTSP sources), the batch is formed from the available
input buffers and pushed. Ideally, the timeout of the stream-muxer should be
set based on the framerate of the fastest source. It can also be set to -1 to
make the stream-muxer wait indefinitely.

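For example (the property is specified in microseconds; the value shown is only illustrative):

```python
# Push a possibly incomplete batch after 4000000 us (4 s); -1 waits indefinitely.
streammux.set_property("batched-push-timeout", 4000000)
```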