The Eclipse Zenoh: Zero Overhead Pub/sub, Store/Query and Compute.
Zenoh (pronounced /zeno/) unifies data in motion, data at rest, and computations. It carefully blends traditional pub/sub with geo-distributed storage, queries, and computations, while retaining a level of time and space efficiency that is well beyond any of the mainstream stacks.
Check the website zenoh.io and the roadmap for more detailed information.
zenoh-pico is the Eclipse Zenoh implementation that targets constrained devices, offering a native C API. It is fully compatible with the main Rust Zenoh implementation, providing a lightweight implementation of most functionalities.
Currently, zenoh-pico provides support for the following (RT)OSs and protocols:
(RT)OS | Transport Layer | Network Layer | Data Link Layer |
---|---|---|---|
Unix | UDP (unicast and multicast), TCP | IPv4, IPv6, 6LoWPAN | WiFi, Ethernet, Thread |
Windows | UDP (unicast and multicast), TCP | IPv4, IPv6 | WiFi, Ethernet |
Zephyr | UDP (unicast and multicast), TCP | IPv4, IPv6, 6LoWPAN | WiFi, Ethernet, Thread, Serial |
Arduino | UDP (unicast and multicast), TCP | IPv4, IPv6 | WiFi, Ethernet, Bluetooth (Serial profile), Serial |
ESP-IDF | UDP (unicast and multicast), TCP | IPv4, IPv6 | WiFi, Ethernet, Serial |
MbedOS | UDP (unicast and multicast), TCP | IPv4, IPv6 | WiFi, Ethernet, Serial |
OpenCR | UDP (unicast and multicast), TCP | IPv4 | WiFi |
Emscripten | Websocket | IPv4, IPv6 | WiFi, Ethernet |
FreeRTOS-Plus-TCP | UDP (unicast), TCP | IPv4 | Ethernet |
The Eclipse zenoh-pico library is available as Debian, RPM, and tgz packages in the Eclipse zenoh-pico download area. Those packages are built using manylinux2010 x86-32 and x86-64 for compatibility with most Linux platforms. There are two kinds of packages:
- libzenohpico: only contains the library file (.so)
- libzenohpico-dev: contains the zenoh-pico header files for development and depends on the libzenohpico package
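For example, on a Debian-based system you can install both packages with apt once you have downloaded the .deb files; the glob below assumes both files sit in the current directory:

$ sudo apt install ./libzenohpico*.deb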
For other platforms, such as RTOSs for embedded systems and microcontrollers, you will need to clone and build the sources. Check below for more details.
⚠️ WARNING ⚠️: Zenoh and its ecosystem are under active development. When you build from git, make sure you also build from git any other Zenoh repository you plan to use (e.g. binding, plugin, backend, etc.). It may happen that some changes in git are not compatible with the most recent packaged Zenoh release (e.g. deb, docker, pip). We put particular effort into maintaining compatibility between the various git repositories in the Zenoh project.
To build the zenoh-pico library, you need to ensure that CMake is available on your platform; if it is not, please install it.
Once the CMake dependency is satisfied, just do the following for CMake version 3 and higher:
$ cd /path/to/zenoh-pico
$ make
$ make install # on Linux use **sudo**
If you want to build with debug symbols, set the BUILD_TYPE=Debug environment variable before running make:
$ cd /path/to/zenoh-pico
$ BUILD_TYPE=Debug make
$ make install # on Linux use **sudo**
For those who still have CMake version 2.8, run the following commands:
$ cd /path/to/zenoh-pico
$ mkdir build
$ cd build
$ cmake -DCMAKE_BUILD_TYPE=Release ../cmake-2.8
$ make
$ make install # on Linux use **sudo**
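Once the library and headers are installed, you can compile an application against zenoh-pico by linking with -lzenohpico. This is a minimal sketch assuming a system-wide install in standard paths (my_app.c here is a placeholder for your own source file; non-standard prefixes may need extra -I/-L flags):

$ cc -o my_app my_app.c -lzenohpico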
To manage and ease the process of building and deploying onto a variety of platforms and frameworks for embedded systems and microcontrollers, PlatformIO can be used as a supporting platform.
Once the PlatformIO dependency is satisfied, follow the steps below for the tested microcontrollers.
Note: tested with reel_board, nucleo-f767zi, nucleo-f429zi, and nRF52840 boards.
A typical PlatformIO project for Zephyr framework must have the following structure:
project_dir
├── include
├── lib
├── src
│ └── main.c
├── zephyr
│ ├── prj.conf
│ └── CMakeLists.txt
└── platformio.ini
To initialize this project structure, execute the following commands:
$ mkdir -p /path/to/project_dir
$ cd /path/to/project_dir
$ platformio init -b reel_board
$ platformio run
Include the CMakeLists.txt and prj.conf in the project_dir/zephyr folder as shown in the structure above,
$ cp /path/to/zenoh-pico/docs/zephyr/reel_board/CMakeLists.txt /path/to/project_dir/zephyr/
$ cp /path/to/zenoh-pico/docs/zephyr/reel_board/prj.conf /path/to/project_dir/zephyr/
and add zenoh-pico as a library by doing:
$ ln -s /path/to/zenoh-pico /path/to/project_dir/lib/zenoh-pico
or just include the following line in platformio.ini:
lib_deps = https://github.com/eclipse-zenoh/zenoh-pico
Finally, your code should go into project_dir/src/main.c. Check the examples provided in the examples directory.
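For instance, a quick way to get started is to copy one of the provided examples over main.c. The path below is illustrative and assumes a per-platform layout of the examples directory in your zenoh-pico checkout, so adjust it to the actual layout:

$ cp /path/to/zenoh-pico/examples/zephyr/z_pub.c /path/to/project_dir/src/main.c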
To build and upload the code onto the board, run the following commands:
$ platformio run
$ platformio run -t upload
Note: tested with az-delivery-devkit-v4 ESP32 board
A typical PlatformIO project for Arduino framework must have the following structure:
project_dir
├── include
├── lib
├── src
│ └── main.ino
└── platformio.ini
To initialize this project structure, execute the following commands:
$ mkdir -p /path/to/project_dir
$ cd /path/to/project_dir
$ platformio init -b az-delivery-devkit-v4
$ platformio run
Add zenoh-pico as a library by doing:
$ ln -s /path/to/zenoh-pico /path/to/project_dir/lib/zenoh-pico
or just include the following line in platformio.ini:
lib_deps = https://github.com/eclipse-zenoh/zenoh-pico
Finally, your code should go into project_dir/src/main.ino. Check the examples provided in the examples directory.
To build and upload the code onto the board, run the following commands:
$ platformio run
$ platformio run -t upload
Note: tested with az-delivery-devkit-v4 ESP32 board
A typical PlatformIO project for ESP-IDF framework must have the following structure:
project_dir
├── include
├── lib
├── src
│ ├── CMakeLists.txt
│ └── main.c
├── CMakeLists.txt
└── platformio.ini
To initialize this project structure, execute the following commands:
$ mkdir -p /path/to/project_dir
$ cd /path/to/project_dir
$ platformio init -b az-delivery-devkit-v4
$ platformio run
Add zenoh-pico as a library by doing:
$ ln -s /path/to/zenoh-pico /path/to/project_dir/lib/zenoh-pico
or just include the following line in platformio.ini:
lib_deps = https://github.com/eclipse-zenoh/zenoh-pico
Finally, your code should go into project_dir/src/main.c. Check the examples provided in the examples directory.
To build and upload the code onto the board, run the following commands:
$ platformio run
$ platformio run -t upload
Note: tested with nucleo-f767zi and nucleo-f429zi boards.
A typical PlatformIO project for MbedOS framework must have the following structure:
project_dir
├── include
├── lib
├── src
│ └── main.cpp
└── platformio.ini
To initialize this project structure, execute the following commands:
$ mkdir -p /path/to/project_dir
$ cd /path/to/project_dir
$ platformio init -b nucleo_f429zi
$ platformio run
Add zenoh-pico as a library by doing:
$ ln -s /path/to/zenoh-pico /path/to/project_dir/lib/zenoh-pico
or just include the following line in platformio.ini:
lib_deps = https://github.com/eclipse-zenoh/zenoh-pico
Finally, your code should go into project_dir/src/main.cpp. Check the examples provided in the examples directory.
To build and upload the code onto the board, run the following commands:
$ platformio run
$ platformio run -t upload
Note: tested with ROBOTIS OpenCR 1.0 board
A typical PlatformIO project for OpenCR framework must have the following structure:
project_dir
├── include
├── lib
├── src
│ └── main.ino
└── platformio.ini
Note: to add support for OpenCR in PlatformIO, follow the steps presented in our blog.
To initialize this project structure, execute the following commands:
$ mkdir -p /path/to/project_dir
$ cd /path/to/project_dir
$ platformio init -b opencr
$ platformio run
Add zenoh-pico as a library by doing:
$ ln -s /path/to/zenoh-pico /path/to/project_dir/lib/zenoh-pico
or just include the following line in platformio.ini:
lib_deps = https://github.com/eclipse-zenoh/zenoh-pico
Finally, your code should go into project_dir/src/main.ino. Check the examples provided in the examples directory.
To build and upload the code onto the board, run the following commands:
$ platformio run
$ platformio run -t upload
The simplest way to run some of the examples is to get a Docker image of the zenoh router (see http://zenoh.io/docs/getting-started/quick-test/) and then run the examples on your machine.
Assuming you've pulled the Docker image of the zenoh router on a Linux host (to leverage UDP multicast scouting, as explained below), simply do:
$ docker run --init --net host eclipse/zenoh:main
To see the zenoh manual page, simply do:
$ docker run --init --net host eclipse/zenoh:main --help
Note that the --net host option in Docker is restricted to Linux only. The cause is that Docker doesn't support UDP multicast between a container and its host (see moby/moby#23659, moby/libnetwork#2397, or moby/libnetwork#552). The only known way to make it work is to use the --net host option, which is only supported on Linux hosts.
Assuming that (1) you are running the zenoh router, and (2) you are under the build directory, do:
$ ./z_sub
And on another shell, do:
$ ./z_pub
Assuming you are running the zenoh router, do:
$ ./z_queryable
And on another shell, do:
$ ./z_get
Zenoh-Pico can also work in P2P mode over UDP multicast. This allows a Zenoh-Pico application to communicate directly with another Zenoh-Pico application without requiring a Zenoh Router.
Assuming you are under the build directory, do:
$ ./z_sub -m peer -l udp/224.0.0.123:7447#iface=lo0
And on another shell, do:
$ ./z_pub -m peer -l udp/224.0.0.123:7447#iface=lo0
where lo0 is the network interface you want to use for multicast communication.
⚠️ WARNING ⚠️: Multicast communication does not perform any negotiation upon group joining. Because of that, it is important that all transport parameters are the same to make sure all the nodes in your system can communicate. One common parameter to configure is the batch size, since its default value depends on the actual platform when operating on multicast (e.g., 65535 bytes on Linux and Windows, 9216 on Mac OS X, and 8192 anywhere else):
- with zenoh-pico, you can configure it via the BATCH_MULTICAST_SIZE build option (see below);
- with other Zenoh APIs, set the "transport/link/tx/batch_size" value in the configuration file.
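As an illustration, if the other peers in your system run with a 9216-byte batch size (the Mac OS X default mentioned above), a native zenoh-pico build could be aligned by passing the build option to CMake; the exact value here is only an example:

$ cd /path/to/zenoh-pico
$ mkdir build && cd build
$ cmake -DBATCH_MULTICAST_SIZE=9216 ..
$ make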
To allow Zenoh-Pico unicast clients to talk to Zenoh-Pico multicast peers, as well as with any other Zenoh client/peer, you need to start a Zenoh Router that listens on both multicast and unicast:
$ docker run --init --net host eclipse/zenoh:main -l udp/224.0.0.123:7447#iface=lo0 -l tcp/127.0.0.1:7447
Assuming that (1) you are running the zenoh router as indicated above, and (2) you are under the build directory, do:
$ ./z_sub -m client -e tcp/127.0.0.1:7447
A subscriber will connect in client mode to the zenoh router over TCP unicast.
And on another shell, do:
$ ./z_pub -m peer -l udp/224.0.0.123:7447#iface=lo0
A publisher will start publishing over UDP multicast and the zenoh router will take care of forwarding data from the Zenoh-Pico publisher to the Zenoh-Pico subscriber.
By default, debug logs are deactivated, but if you're encountering issues they can help you find the cause. To activate them, you need to pass the -DZENOH_DEBUG=3 build flag.
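For a native build, one (illustrative) way to pass the flag is directly on the CMake command line, run from a separate build directory:

$ cmake -DZENOH_DEBUG=3 -DCMAKE_BUILD_TYPE=Debug /path/to/zenoh-pico
$ make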
If you get an error when opening the session even though everything is set up correctly, it might be because the default buffer sizes are too large for the limited memory available on your system.
The first thing to try is to reduce the values of the following configuration options (found in CMakeLists.txt):
- BATCH_UNICAST_SIZE: The maximum size of a packet in client mode.
- BATCH_MULTICAST_SIZE: The maximum size of a packet in peer mode.
- FRAG_MAX_SIZE: The maximum size of a message that can be fragmented into multiple packets.
Reduce them until you find values that suit both your application requirements and your system memory constraints.
These values can also be passed directly as CMake args. For example, in a platformio.ini you might write:
board_build.cmake_extra_args=
-DBATCH_UNICAST_SIZE=1024
-DFRAG_MAX_SIZE=2048
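For a native (non-PlatformIO) build, the same options can instead be passed directly on the CMake command line; the values below are illustrative only:

$ cmake -DBATCH_UNICAST_SIZE=1024 -DFRAG_MAX_SIZE=2048 /path/to/zenoh-pico
$ make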