This example demonstrates how to create a robust system using iceoryx2. A central daemon pre-creates all communication resources to ensure that every required resource, such as memory, is available as soon as the application starts. Additionally, the subscriber is immediately informed if one of the processes it depends on has crashed. Even if the central daemon itself crashes, communication can continue without any restrictions. Thanks to the decentralized API of iceoryx2, the subscriber can take over the role of the central daemon and continue monitoring all processes.
The communication must also be reliable, and we expect publishers to provide updates at regular intervals. If a publisher misses a deadline, we want to be informed immediately. This situation can occur if the system is under heavy load or if a process has crashed.
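To make this setup concrete, the sketch below shows how a daemon might pre-create the publish-subscribe and event services for one service name. It is a condensed, hedged sketch, not the example's exact source: the `u64` payload, the event id value, and the `notifier_dead_event`/`deadline` builder options are assumptions to verify against the iceoryx2 version you use.

```rust
use core::time::Duration;
use iceoryx2::prelude::*;

const CYCLE_TIME: Duration = Duration::from_millis(100);

fn main() -> Result<(), Box<dyn core::error::Error>> {
    let node = NodeBuilder::new()
        .name(&"central daemon".try_into()?)
        .create::<ipc::Service>()?;

    let service_name = ServiceName::new("service_1")?;

    // Pre-create the payload channel so that publishers and subscribers only
    // have to open it. open_or_create lets a restarted daemon reattach.
    let _pubsub = node
        .service_builder(&service_name)
        .publish_subscribe::<u64>()
        .open_or_create()?;

    // Pre-create the event channel. A notifier identified as dead triggers
    // the configured event id; one that stays silent longer than the
    // deadline violates the contract.
    let _event = node
        .service_builder(&service_name)
        .event()
        .notifier_dead_event(EventId::new(2)) // placeholder for PubSub::ProcessDied
        .deadline(CYCLE_TIME)
        .open_or_create()?;

    // ... monitor all nodes in a loop here ...
    Ok(())
}
```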
This example is more advanced and consists of four components:

- `central_daemon` - Must run first. It creates all communication resources
  and monitors all nodes/processes.
- `publisher_1` - Sends data at a specific frequency on `service_1`.
- `publisher_2` - Sends data at a specific frequency on `service_2`.
- `subscriber` - Connects to `service_1` and `service_2` and expects new
  samples within a specific time. If no sample arrives, it proactively checks
  for dead nodes.
```
+----------------+  creates   ...........................
| central_daemon | ---------> : communication resources :
+----------------+            ...........................
        |                                  ^
        |                                  | opens
        |             +--------------------+----------------+
        |             |                    |                |
        |      +-------------+      +-------------+   +------------+
        |      | publisher_1 |      | publisher_2 |   | subscriber |
        |      +-------------+      +-------------+   +------------+
        |             ^                    ^                ^
        |  monitors   |                    |                |
        +-------------+--------------------+----------------+
```
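The takeover mentioned above works because any process can inspect the node registry, so the watchdog role is not tied to the daemon. A minimal sketch of such a dead-node scan, assuming the `Node::list`/`NodeState::Dead` API of recent iceoryx2 releases:

```rust
use iceoryx2::prelude::*;

// Scan all nodes known to the system; for every node whose process has died,
// report it and reclaim the shared-memory resources it left behind.
fn find_and_cleanup_dead_nodes() -> Result<(), Box<dyn core::error::Error>> {
    Node::<ipc::Service>::list(Config::global_config(), |node_state| {
        if let NodeState::Dead(view) = node_state {
            println!(
                "detected dead node: {:?}",
                view.details().as_ref().map(|details| details.name())
            );
            // Best effort: remove the stale resources of the dead node.
            let _ = view.remove_stale_resources();
        }
        CallbackProgression::Continue
    })?;
    Ok(())
}
```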
> [!CAUTION]
> Every payload you transmit with iceoryx2 must be compatible with shared
> memory. Specifically, it must:
>
> - be self-contained: no heap, no pointers to external sources
> - have a uniform memory representation -> `#[repr(C)]`
> - not use pointers to manage its internal structure
>
> Data types like `String` or `Vec` will cause undefined behavior and may
> result in segmentation faults. We provide alternative data types that are
> compatible with shared memory. See the complex data types example for
> guidance on how to use them.
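For illustration, a payload that satisfies these rules could look like the hypothetical type below; depending on your iceoryx2 version, payload types may additionally need to implement the `ZeroCopySend` marker trait.

```rust
// Hypothetical payload: self-contained, with a uniform memory layout.
#[repr(C)]
#[derive(Debug, Clone, Copy)]
pub struct SensorReading {
    pub timestamp_ns: u64,
    // Fixed-size array instead of Vec<f32>, which would own heap memory.
    pub values: [f32; 8],
}
```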
For this example, you need to open five separate terminals.

In the first terminal, run the central daemon, which sets up all
communication resources and monitors the processes:

```sh
cargo run --example health_monitoring_central_daemon
```

In the second terminal, run the first publisher, which sends data on
`service_1`:

```sh
cargo run --example health_monitoring_publisher_1
```

In the third terminal, run the second publisher, which sends data on
`service_2`:

```sh
cargo run --example health_monitoring_publisher_2
```

In the fourth terminal, run the subscriber, which listens to both `service_1`
and `service_2`:

```sh
cargo run --example health_monitoring_subscriber
```
In the fifth terminal, send a `SIGKILL` signal to `publisher_1` to simulate a
fatal crash. This ensures that the process is unable to clean up any
resources:

```sh
killall -9 health_monitoring_publisher_1
```
After running this command:

- The `central_daemon` will detect that the process has crashed and print:

  ```
  detected dead node: Some(NodeName { value: "publisher 1" })
  ```

  The event service is configured to emit a `PubSub::ProcessDied` event when
  a process is identified as dead.
- On the `subscriber` side, you will see the message:

  ```
  ServiceName { value: "service_1" }: process died!
  ```

- Since `publisher_1` is no longer sending messages, the subscriber will also
  regularly print a message indicating that `service_1` has violated the
  contract because no new samples are arriving.
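On the subscriber side, reacting to the dead-process notification amounts to draining the listener and comparing event ids. A minimal sketch, assuming the `try_wait_one` listener API and using a placeholder event id for `PubSub::ProcessDied`:

```rust
use iceoryx2::prelude::*;

// Drain all pending events on the listener and react to the
// dead-process notification configured by the central daemon.
fn handle_events(
    listener: &Listener<ipc::Service>,
) -> Result<(), Box<dyn core::error::Error>> {
    while let Some(event_id) = listener.try_wait_one()? {
        if event_id == EventId::new(2) {
            // PubSub::ProcessDied in this example
            println!("process died!");
        }
    }
    Ok(())
}
```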
Feel free to run multiple instances of publisher or subscriber processes simultaneously to explore how iceoryx2 handles publisher-subscriber communication efficiently.
If you run too many publisher or subscriber processes at once, you may hit the maximum supported number of ports. Take a look at the iceoryx2 config to raise the limits globally, or at the API of the `Service` builder to raise them for a single service; a sketch of the latter follows.
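A sketch of raising the port limits at service creation, assuming the `max_publishers`/`max_subscribers` builder options of the publish-subscribe service builder:

```rust
use iceoryx2::prelude::*;

fn main() -> Result<(), Box<dyn core::error::Error>> {
    let node = NodeBuilder::new().create::<ipc::Service>()?;

    // Raise the port limits for this one service; global defaults are
    // taken from the iceoryx2 config.
    let _service = node
        .service_builder(&ServiceName::new("service_1")?)
        .publish_subscribe::<u64>()
        .max_publishers(8)
        .max_subscribers(16)
        .open_or_create()?;

    Ok(())
}
```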