<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Prompt Depth Anything

## Overview

The Prompt Depth Anything model was introduced in [Prompting Depth Anything for 4K Resolution Accurate Metric Depth Estimation](https://arxiv.org/abs/2412.14015) by Haotong Lin, Sida Peng, Jingxiao Chen, Songyou Peng, Jiaming Sun, Minghuan Liu, Hujun Bao, Jiashi Feng, Xiaowei Zhou, Bingyi Kang.

The abstract from the paper is as follows:

*Prompts play a critical role in unleashing the power of language and vision foundation models for specific tasks. For the first time, we introduce prompting into depth foundation models, creating a new paradigm for metric depth estimation termed Prompt Depth Anything. Specifically, we use a low-cost LiDAR as the prompt to guide the Depth Anything model for accurate metric depth output, achieving up to 4K resolution. Our approach centers on a concise prompt fusion design that integrates the LiDAR at multiple scales within the depth decoder. To address training challenges posed by limited datasets containing both LiDAR depth and precise GT depth, we propose a scalable data pipeline that includes synthetic data LiDAR simulation and real data pseudo GT depth generation. Our approach sets new state-of-the-arts on the ARKitScenes and ScanNet++ datasets and benefits downstream applications, including 3D reconstruction and generalized robotic grasping.*

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/prompt_depth_anything_architecture.jpg"
alt="drawing" width="600"/>

<small> Prompt Depth Anything overview. Taken from the <a href="https://arxiv.org/pdf/2412.14015">original paper</a>.</small>

## Usage example

The Transformers library allows you to use the model with just a few lines of code:

```python
>>> import torch
>>> import requests
>>> import numpy as np

>>> from PIL import Image
>>> from transformers import AutoImageProcessor, AutoModelForDepthEstimation

>>> url = "https://github.com/DepthAnything/PromptDA/blob/main/assets/example_images/image.jpg?raw=true"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("depth-anything/prompt-depth-anything-vits-hf")
>>> model = AutoModelForDepthEstimation.from_pretrained("depth-anything/prompt-depth-anything-vits-hf")

>>> prompt_depth_url = "https://github.com/DepthAnything/PromptDA/blob/main/assets/example_images/arkit_depth.png?raw=true"
>>> prompt_depth = Image.open(requests.get(prompt_depth_url, stream=True).raw)
>>> # the prompt depth can be None, in which case the model outputs a monocular relative depth

>>> # prepare the image and the LiDAR prompt for the model
>>> inputs = image_processor(images=image, return_tensors="pt", prompt_depth=prompt_depth)

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # interpolate the prediction back to the original image size
>>> post_processed_output = image_processor.post_process_depth_estimation(
...     outputs,
...     target_sizes=[(image.height, image.width)],
... )

>>> # visualize the prediction
>>> predicted_depth = post_processed_output[0]["predicted_depth"]
>>> depth = predicted_depth * 1000  # convert meters to millimeters
>>> depth = depth.detach().cpu().numpy()
>>> depth = Image.fromarray(depth.astype("uint16"))  # 16-bit depth map in millimeters
```
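
If no LiDAR measurement is available, the `prompt_depth` argument can simply be omitted and the model falls back to monocular relative depth estimation, as noted in the comment above. The snippet below is a minimal sketch of that case; it reuses the same checkpoint and the same API calls as the example above and does not assume any additional arguments:

```python
>>> import torch
>>> import requests

>>> from PIL import Image
>>> from transformers import AutoImageProcessor, AutoModelForDepthEstimation

>>> url = "https://github.com/DepthAnything/PromptDA/blob/main/assets/example_images/image.jpg?raw=true"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("depth-anything/prompt-depth-anything-vits-hf")
>>> model = AutoModelForDepthEstimation.from_pretrained("depth-anything/prompt-depth-anything-vits-hf")

>>> # no prompt_depth is passed, so the prediction is relative (not metric) depth
>>> inputs = image_processor(images=image, return_tensors="pt")

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> post_processed_output = image_processor.post_process_depth_estimation(
...     outputs,
...     target_sizes=[(image.height, image.width)],
... )
>>> relative_depth = post_processed_output[0]["predicted_depth"]
```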

## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Prompt Depth Anything.

- [Prompt Depth Anything Demo](https://huggingface.co/spaces/depth-anything/PromptDA)
- [Prompt Depth Anything Interactive Results](https://promptda.github.io/interactive.html)

If you are interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

## PromptDepthAnythingConfig

[[autodoc]] PromptDepthAnythingConfig

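As a quick orientation, the sketch below follows the usual Transformers pattern of building a randomly initialized model from a default configuration; the default hyperparameter values come from the config class itself and are not spelled out here:

```python
>>> from transformers import PromptDepthAnythingConfig, PromptDepthAnythingForDepthEstimation

>>> # instantiate a configuration with default values
>>> config = PromptDepthAnythingConfig()

>>> # build a randomly initialized model from that configuration
>>> model = PromptDepthAnythingForDepthEstimation(config)

>>> # the configuration can be accessed back from the model
>>> config = model.config
```
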
## PromptDepthAnythingForDepthEstimation

[[autodoc]] PromptDepthAnythingForDepthEstimation
    - forward

## PromptDepthAnythingImageProcessor

[[autodoc]] PromptDepthAnythingImageProcessor
    - preprocess
    - post_process_depth_estimation