[notebooks] Update installation instructions for colab #1297

Merged · 2 commits · Jan 28, 2025
16 changes: 12 additions & 4 deletions notebooks/Oumi - Deploying a Job.ipynb
@@ -51,11 +51,19 @@
"metadata": {},
"source": [
"### Oumi Installation\n",
"First, let's install Oumi. You can find detailed instructions [here](https://github.com/oumi-ai/oumi/blob/main/README.md), but it should be as simple as:\n",
"First, let's install Oumi. You can find more detailed instructions [here](https://oumi.ai/docs/en/latest/get_started/installation.html). \n",
"\n",
"```bash\n",
"pip install -e \".[dev]\"\n",
"```\n"
"If you have a GPU, you can run the following commands to install Oumi:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install uv -q\n",
"!uv pip install oumi[gpu] --no-progress --system"
]
},
{
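The new install cells target GPU Colab runtimes via the `oumi[gpu]` extra. The previous instructions also covered machines without a GPU; a minimal sketch of the equivalent uv-based CPU-only install (an assumption, not part of this PR's diff) would be:

```bash
# Assumed CPU-only variant of the new install cell: install the base oumi
# package without the [gpu] extra, mirroring the notebooks' previous
# non-GPU instructions.
pip install uv
uv pip install oumi --no-progress --system
```

The `--system` flag matches the Colab-oriented cells above, where installing into the interpreter's system environment is intended; inside a local virtual environment it can be dropped.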
19 changes: 13 additions & 6 deletions notebooks/Oumi - Distill a Large Model.ipynb
@@ -55,13 +55,20 @@
"We recommend 8xA100-80GB GPUs to complete in a timely manner with adequate performance.\n",
"\n",
"## Oumi Installation\n",
"First, let's install Oumi. You can find detailed instructions [here](https://github.com/oumi-ai/oumi/blob/main/README.md), but it should be as simple as:\n",
"\n",
"```bash\n",
"pip install -e \".[gpu]\" # if you have an nvidia or AMD GPU\n",
"# OR\n",
"pip install -e \".\" # if you don't have a GPU\n",
"```"
"First, let's install Oumi. You can find more detailed instructions [here](https://oumi.ai/docs/en/latest/get_started/installation.html). \n",
"\n",
"If you have a GPU, you can run the following commands to install Oumi:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install uv -q\n",
"!uv pip install oumi[gpu] --no-progress --system"
]
},
{
19 changes: 13 additions & 6 deletions notebooks/Oumi - Finetuning Tutorial.ipynb
@@ -48,13 +48,20 @@
"source": [
"# Prerequisites\n",
"## Oumi Installation\n",
"First, let's install Oumi. You can find detailed instructions [here](https://github.com/oumi-ai/oumi/blob/main/README.md), but it should be as simple as:\n",
"\n",
"```bash\n",
"pip install -e \".[gpu]\" # if you have an nvidia or AMD GPU\n",
"# OR\n",
"pip install -e \".\" # if you don't have a GPU\n",
"```"
"First, let's install Oumi. You can find more detailed instructions [here](https://oumi.ai/docs/en/latest/get_started/installation.html). \n",
"\n",
"If you have a GPU, you can run the following commands to install Oumi:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install uv -q\n",
"!uv pip install oumi[gpu] vllm --no-progress --system"
]
},
{
19 changes: 11 additions & 8 deletions notebooks/Oumi - Launching Jobs on Custom Clusters.ipynb
@@ -45,19 +45,22 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Prerequisites\n"
"# Prerequisites\n",
"## Oumi Installation\n",
"\n",
"First, let's install Oumi. You can find more detailed instructions [here](https://oumi.ai/docs/en/latest/get_started/installation.html). \n",
"\n",
"If you have a GPU, you can run the following commands to install Oumi:"
]
},
{
"cell_type": "markdown",
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"## Oumi Installation\n",
"First, let's install Oumi. You can find detailed instructions [here](https://github.com/oumi-ai/oumi/blob/main/README.md), but it should be as simple as:\n",
"\n",
"```bash\n",
"pip install -e \".[dev]\"\n",
"```"
"%pip install uv -q\n",
"!uv pip install oumi[gpu] --no-progress --system"
]
},
{
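After any of these install cells completes, a quick sanity check (a sketch, assuming the package exposes an importable `oumi` module and a console script of the same name) could be:

```bash
# Assumed post-install check: the import should succeed and the CLI should
# print its usage text if the installation worked.
python -c "import oumi"
oumi --help
```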
21 changes: 14 additions & 7 deletions notebooks/Oumi - Training CNN on Custom Dataset.ipynb
@@ -45,15 +45,22 @@
"id": "fHDr11SqSgtP"
},
"source": [
"# Prerequisites\n",
"# 📋 Prerequisites\n",
"## Oumi Installation\n",
"First, let's install Oumi. You can find detailed instructions [here](https://github.com/oumi-ai/oumi/blob/main/README.md), but it should be as simple as:\n",
"\n",
"```bash\n",
"pip install -e \".[gpu]\" # if you have an nvidia or AMD GPU\n",
"# OR\n",
"pip install -e \".\" # if you don't have a GPU\n",
"```"
"First, let's install Oumi. You can find more detailed instructions [here](https://oumi.ai/docs/en/latest/get_started/installation.html). \n",
"\n",
"If you have a GPU, you can run the following commands to install Oumi:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install uv -q\n",
"!uv pip install oumi[gpu] --no-progress --system"
]
},
{
23 changes: 16 additions & 7 deletions notebooks/Oumi - Using vLLM Engine for Inference.ipynb
@@ -48,16 +48,25 @@
"make gcpcode ARGS=\"--resources.accelerators A100:4\" # 4 A100-40GB GPUs, enough for 70B model. Can also use 2x \"A100-80GB\"\n",
"```\n",
"\n",
"## Oumi Installation\n",
"First, let's install Oumi and other dependencies needed for this notebook. You can find detailed instructions [here](https://github.com/oumi-ai/oumi/blob/main/README.md), but it should be as simple as:\n",
"## Llama Access\n",
"\n",
"```bash\n",
"pip install oumi vllm # Install Oumi, vLLM, and Ray\n",
"```\n",
"Llama 3.3 70B is a gated model on HuggingFace Hub. To run this notebook, you must first complete the [agreement](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) on HuggingFace, and wait for it to be accepted. Then, specify `HF_TOKEN` below to enable access to the model.\n",
"\n",
"## Llama Access\n",
"## Oumi Installation\n",
"\n",
"Llama 3.3 70B is a gated model on HuggingFace Hub. To run this notebook, you must first complete the [agreement](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) on HuggingFace, and wait for it to be accepted. Then, specify `HF_TOKEN` below to enable access to the model."
"First, let's install Oumi. You can find more detailed instructions [here](https://oumi.ai/docs/en/latest/get_started/installation.html). \n",
"\n",
"If you have a GPU, you can run the following commands to install Oumi:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install uv -q\n",
"!uv pip install oumi[gpu] vllm --no-progress --system"
]
},
{
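For the gated Llama 3.3 70B checkpoint, the markdown above asks the reader to specify `HF_TOKEN`; one hypothetical way to supply it before running the notebook (the exact cell is not shown in this diff) is:

```bash
# Hypothetical example: export the HuggingFace access token so that
# huggingface_hub and transformers can authenticate to the gated repo.
# Replace the placeholder with your own token.
export HF_TOKEN="hf_..."
# Or log in explicitly with the HuggingFace CLI:
huggingface-cli login --token "$HF_TOKEN"
```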
21 changes: 14 additions & 7 deletions notebooks/Oumi - Vision Language Models.ipynb
@@ -61,15 +61,22 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Prerequisites\n",
"# 📋 Prerequisites\n",
"## Oumi Installation\n",
"First, let's install Oumi and some additional packages. For this notebook you will need access to at least one NVIDIA or AMD GPU **with ~30GBs of memory**.\n",
"\n",
"You can find detailed installation instructions [here](https://github.com/oumi-ai/oumi/blob/main/README.md), but for Oumi it should be as simple as:\n",
"First, let's install Oumi. You can find more detailed instructions [here](https://oumi.ai/docs/en/latest/get_started/installation.html). \n",
"\n",
"```bash\n",
"pip install -e \".[gpu]\"\n",
"```"
"If you have a GPU, you can run the following commands to install Oumi:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install uv -q\n",
"!uv pip install oumi[gpu] --no-progress --system"
]
},
{
@@ -79,7 +86,7 @@
"outputs": [],
"source": [
"# Additionally, install the following packages for widget visualization.\n",
"!pip install ipywidgets\n",
"%pip install ipywidgets\n",
"\n",
"# And deactivate the parallelism warning from the tokenizers library.\n",
"import os\n",