License

MIT (LICENSE) and Apache-2.0 (LICENSE.md)

ros2_nanollm

ROS2 nodes for LLM, VLM, VLA

NanoLLM optimizes many different models to run on the NVIDIA Jetson Orin. This project provides a ROS 2 package for scene description using NanoLLM.

Setup

  1. Run through the jetson-containers setup steps to get your Docker engine configured.
  2. Set up your ROS 2 development environment by following the ROS 2 installation instructions.
  3. Install the NanoLLM Docker container by following its installation instructions.

Dev Mode

By default, the ros2_nanollm package is built into the container and installed under /ros2_workspace (which is automatically sourced on container startup). If you would like to edit this package interactively, you can clone it into an external workspace on your host device and mount that workspace over the original location.

# make a ROS workspace somewhere outside container, and clone ros2_nanollm
mkdir -p ~/ros2_workspace/src
cd ~/ros2_workspace/src
git clone https://github.com/NVIDIA-AI-IOT/ros2_nanollm

# start nano_llm:humble container, mounting in your own workspace
jetson-containers run -v ~/ros2_workspace:/ros2_workspace $(autotag nano_llm:humble)

# build the mounted workspace (this is running inside container at this point)
cd /ros2_workspace
colcon build --symlink-install --base-paths src
source /ros2_workspace/install/setup.bash

# check that the nodes are still there
ros2 pkg list | grep ros2_nanollm
ros2 pkg executables ros2_nanollm

Usage

jetson-containers run $(autotag nano_llm:humble) \
    ros2 launch ros2_nanollm camera_input_example.launch.py \
        model:=<path-to-model> api:=<model-backend> quantization:=<method>
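For example, a launch with every argument spelled out explicitly, using the package's default values (the same defaults listed in the ROS Parameters table):

```shell
# launch the scene-description node with the default VILA model,
# the MLC backend, and q4f16_ft quantization made explicit
jetson-containers run $(autotag nano_llm:humble) \
    ros2 launch ros2_nanollm camera_input_example.launch.py \
        model:=Efficient-Large-Model/Llama-3-VILA1.5-8B \
        api:=mlc \
        quantization:=q4f16_ft
```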

ROS Parameters

| ROS Parameter | Type | Default | Description |
|---------------|------|---------|-------------|
| model | string | Efficient-Large-Model/Llama-3-VILA1.5-8B | The model to use |
| api | string | mlc | The model backend to use |
| quantization | string | q4f16_ft | Quantization method to use |

Topics Subscribed

| ROS Topic | Interface | Description |
|-----------|-----------|-------------|
| input_image | sensor_msgs/Image | The image on which analysis is to be performed |
| input_query | std_msgs/String | A prompt for the model to generate a corresponding response |
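Once the node is running, the subscribed topics can be exercised from another shell with the standard ros2 CLI. A minimal sketch, assuming the node runs in the root namespace (so the topics resolve to /input_image and /input_query):

```shell
# confirm the node is subscribed to the image topic
ros2 topic info /input_image

# send a one-shot text prompt to the model on input_query
ros2 topic pub --once /input_query std_msgs/msg/String "{data: 'Describe the scene.'}"
```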

Topics Published

| ROS Topic | Interface | Description |
|-----------|-----------|-------------|
| output_msg | std_msgs/String | The output message summarizing the model's conclusions from the inputs |
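To watch the model's responses as they are generated, you can echo the published topic from another shell (again assuming the root namespace):

```shell
# stream the model's text output as it is published
ros2 topic echo /output_msg

# or check how often output messages are being published
ros2 topic hz /output_msg
```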
