
The official code for "Olympus: A Universal Task Router for Computer Vision Tasks"



Olympus: A Universal Task Router for Computer Vision Tasks

Resources: PDF | arXiv | Project Page | Weights

Official implementation of "Olympus: A Universal Task Router for Computer Vision Tasks"

♥️ If you find our project helpful for your research, please give us a 🌟 and cite our paper 📑 : )

📣 News

  • Released the training code.
  • Released the Olympus datasets.
  • Released the inference code of Olympus.
  • Released the weights of Olympus.

📄 Abstract

We introduce Olympus, a new approach that transforms Multimodal Large Language Models (MLLMs) into a unified framework capable of handling a wide array of computer vision tasks. Utilizing a controller MLLM, Olympus delegates over 20 specialized tasks across images, videos, and 3D objects to dedicated modules. This instruction-based routing enables complex workflows through chained actions without the need for training heavy generative models. Olympus easily integrates with existing MLLMs, expanding their capabilities with comparable performance. Experimental results demonstrate that Olympus achieves an average routing accuracy of 94.75% across 20 tasks and precision of 91.82% in chained action scenarios, showcasing its effectiveness as a universal task router that can solve a diverse range of computer vision tasks.
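The instruction-based routing described above can be sketched in a few lines. This is a minimal illustration, not the actual Olympus implementation: the real controller is an MLLM that predicts routing tokens, whereas here a trivial keyword matcher stands in for it, and the task names and specialist modules are hypothetical placeholders.

```python
# Minimal sketch of instruction-based task routing with chained actions.
# A keyword matcher stands in for the controller MLLM; module names and
# routing tokens below are hypothetical illustrations.

from typing import Callable, Dict, List

# Hypothetical registry mapping routing tokens to specialist modules.
MODULES: Dict[str, Callable[[str], str]] = {
    "<image_gen>": lambda prompt: f"image generated for: {prompt}",
    "<video_gen>": lambda prompt: f"video generated for: {prompt}",
    "<depth_est>": lambda prompt: f"depth map estimated for: {prompt}",
}

def route(instruction: str) -> List[str]:
    """Stand-in for the controller: predict routing tokens for an instruction."""
    tokens = []
    if "image" in instruction:
        tokens.append("<image_gen>")
    if "video" in instruction:
        tokens.append("<video_gen>")
    if "depth" in instruction:
        tokens.append("<depth_est>")
    return tokens

def execute(instruction: str) -> List[str]:
    """Chain the predicted tokens: each token dispatches to its module."""
    return [MODULES[token](instruction) for token in route(instruction)]

# A chained action: one instruction triggers two specialist modules in order.
results = execute("make an image of a cat, then estimate its depth")
```

The key design point is that the controller never runs the heavy generative models itself; it only emits routing tokens, so new capabilities are added by registering new modules.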

🔅 Overview

(figure: framework overview)

🔮 Supported Capabilities (Covering 20 tasks)

(figure: supported capabilities)

🏂 Diverse Applications

(figure: diverse applications)

Citation

If you find our work useful in your research or applications, please consider citing our paper using the following BibTeX:

@article{lin2024olympus,
  title={Olympus: A Universal Task Router for Computer Vision Tasks},
  author={Lin, Yuanze and Li, Yunsheng and Chen, Dongdong and Xu, Weijian and Clark, Ronald and Torr, Philip HS},
  journal={arXiv preprint arXiv:2412.09612},
  year={2024}
}