From 360eb108686cea513f644f3d2c1bd0117c33fbb5 Mon Sep 17 00:00:00 2001
From: Oussama Elachqar
Date: Tue, 28 Jan 2025 15:33:05 -0800
Subject: [PATCH 1/2] update

---
 notebooks/Oumi - Deploying a Job.ipynb        | 16 ++++++++++----
 notebooks/Oumi - Distill a Large Model.ipynb  | 19 +++++++++++------
 ... - Launching Jobs on Custom Clusters.ipynb | 19 ++++++++++++-------
 ...umi - Training CNN on Custom Dataset.ipynb | 21 ++++++++++++-------
 notebooks/Oumi - Vision Language Models.ipynb | 21 ++++++++++++-------
 5 files changed, 64 insertions(+), 32 deletions(-)

diff --git a/notebooks/Oumi - Deploying a Job.ipynb b/notebooks/Oumi - Deploying a Job.ipynb
index c4f6bcf4e..a7ae2f357 100644
--- a/notebooks/Oumi - Deploying a Job.ipynb
+++ b/notebooks/Oumi - Deploying a Job.ipynb
@@ -51,11 +51,19 @@
 "metadata": {},
 "source": [
 "### Oumi Installation\n",
- "First, let's install Oumi. You can find detailed instructions [here](https://github.com/oumi-ai/oumi/blob/main/README.md), but it should be as simple as:\n",
+ "First, let's install Oumi. You can find more detailed instructions [here](https://oumi.ai/docs/en/latest/get_started/installation.html). \n",
 "\n",
- "```bash\n",
- "pip install -e \".[dev]\"\n",
- "```\n"
+ "If you have a GPU, you can run the following commands to install Oumi:\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%pip install uv -q\n",
+ "!uv pip install oumi[gpu] --no-progress --system"
 ]
 },
 {
diff --git a/notebooks/Oumi - Distill a Large Model.ipynb b/notebooks/Oumi - Distill a Large Model.ipynb
index 38c776079..693390ef0 100644
--- a/notebooks/Oumi - Distill a Large Model.ipynb
+++ b/notebooks/Oumi - Distill a Large Model.ipynb
@@ -55,13 +55,20 @@
 "We recommend 8xA100-80GB GPUs to complete in a timely manner with adequate performance.\n",
 "\n",
 "## Oumi Installation\n",
- "First, let's install Oumi. You can find detailed instructions [here](https://github.com/oumi-ai/oumi/blob/main/README.md), but it should be as simple as:\n",
 "\n",
- "```bash\n",
- "pip install -e \".[gpu]\" # if you have an nvidia or AMD GPU\n",
- "# OR\n",
- "pip install -e \".\" # if you don't have a GPU\n",
- "```"
+ "First, let's install Oumi. You can find more detailed instructions [here](https://oumi.ai/docs/en/latest/get_started/installation.html). \n",
+ "\n",
+ "If you have a GPU, you can run the following commands to install Oumi:\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%pip install uv -q\n",
+ "!uv pip install oumi[gpu] --no-progress --system"
 ]
 },
 {
diff --git a/notebooks/Oumi - Launching Jobs on Custom Clusters.ipynb b/notebooks/Oumi - Launching Jobs on Custom Clusters.ipynb
index 9a775d77d..8626e2a54 100644
--- a/notebooks/Oumi - Launching Jobs on Custom Clusters.ipynb
+++ b/notebooks/Oumi - Launching Jobs on Custom Clusters.ipynb
@@ -45,19 +45,22 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "# Prerequisites\n"
+ "# Prerequisites\n",
+ "## Oumi Installation\n",
+ "\n",
+ "First, let's install Oumi. You can find more detailed instructions [here](https://oumi.ai/docs/en/latest/get_started/installation.html). \n",
+ "\n",
+ "If you have a GPU, you can run the following commands to install Oumi:"
 ]
 },
 {
- "cell_type": "markdown",
+ "cell_type": "code",
+ "execution_count": null,
 "metadata": {},
+ "outputs": [],
 "source": [
- "## Oumi Installation\n",
- "First, let's install Oumi. You can find detailed instructions [here](https://github.com/oumi-ai/oumi/blob/main/README.md), but it should be as simple as:\n",
- "\n",
- "```bash\n",
- "pip install -e \".[dev]\"\n",
- "```"
+ "%pip install uv -q\n",
+ "!uv pip install oumi[gpu] --no-progress --system"
 ]
 },
 {
diff --git a/notebooks/Oumi - Training CNN on Custom Dataset.ipynb b/notebooks/Oumi - Training CNN on Custom Dataset.ipynb
index 09ee34c3b..74f1c25ea 100644
--- a/notebooks/Oumi - Training CNN on Custom Dataset.ipynb
+++ b/notebooks/Oumi - Training CNN on Custom Dataset.ipynb
@@ -45,15 +45,22 @@
 "id": "fHDr11SqSgtP"
 },
 "source": [
- "# Prerequisites\n",
+ "# 📋 Prerequisites\n",
 "## Oumi Installation\n",
- "First, let's install Oumi. You can find detailed instructions [here](https://github.com/oumi-ai/oumi/blob/main/README.md), but it should be as simple as:\n",
 "\n",
- "```bash\n",
- "pip install -e \".[gpu]\" # if you have an nvidia or AMD GPU\n",
- "# OR\n",
- "pip install -e \".\" # if you don't have a GPU\n",
- "```"
+ "First, let's install Oumi. You can find more detailed instructions [here](https://oumi.ai/docs/en/latest/get_started/installation.html). \n",
+ "\n",
+ "If you have a GPU, you can run the following commands to install Oumi:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%pip install uv -q\n",
+ "!uv pip install oumi[gpu] --no-progress --system"
 ]
 },
 {
diff --git a/notebooks/Oumi - Vision Language Models.ipynb b/notebooks/Oumi - Vision Language Models.ipynb
index 7bc8f386b..34e2a336c 100644
--- a/notebooks/Oumi - Vision Language Models.ipynb
+++ b/notebooks/Oumi - Vision Language Models.ipynb
@@ -61,15 +61,22 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "# Prerequisites\n",
+ "# 📋 Prerequisites\n",
 "## Oumi Installation\n",
- "First, let's install Oumi and some additional packages. For this notebook you will need access to at least one NVIDIA or AMD GPU **with ~30GBs of memory**.\n",
 "\n",
- "You can find detailed installation instructions [here](https://github.com/oumi-ai/oumi/blob/main/README.md), but for Oumi it should be as simple as:\n",
+ "First, let's install Oumi. You can find more detailed instructions [here](https://oumi.ai/docs/en/latest/get_started/installation.html). \n",
 "\n",
- "```bash\n",
- "pip install -e \".[gpu]\"\n",
- "```"
+ "If you have a GPU, you can run the following commands to install Oumi:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%pip install uv -q\n",
+ "!uv pip install oumi[gpu] --no-progress --system"
 ]
 },
 {
@@ -79,7 +86,7 @@
 "outputs": [],
 "source": [
 "# Additionally, install the following packages for widget visualization.\n",
- "!pip install ipywidgets\n",
+ "%pip install ipywidgets\n",
 "\n",
 "# And deactivate the parallelism warning from the tokenizers library.\n",
 "import os\n",

From 31a6d66b2cc672fd0283618a03867a591fcc69a5 Mon Sep 17 00:00:00 2001
From: Oussama Elachqar
Date: Tue, 28 Jan 2025 15:39:51 -0800
Subject: [PATCH 2/2] missing notebooks

---
 notebooks/Oumi - Finetuning Tutorial.ipynb    | 19 ++++++++++-----
 ...mi - Using vLLM Engine for Inference.ipynb | 23 +++++++++++++------
 2 files changed, 29 insertions(+), 13 deletions(-)

diff --git a/notebooks/Oumi - Finetuning Tutorial.ipynb b/notebooks/Oumi - Finetuning Tutorial.ipynb
index d0c7d8d67..9f7536906 100644
--- a/notebooks/Oumi - Finetuning Tutorial.ipynb
+++ b/notebooks/Oumi - Finetuning Tutorial.ipynb
@@ -48,13 +48,20 @@
 "source": [
 "# Prerequisites\n",
 "## Oumi Installation\n",
- "First, let's install Oumi. You can find detailed instructions [here](https://github.com/oumi-ai/oumi/blob/main/README.md), but it should be as simple as:\n",
 "\n",
- "```bash\n",
- "pip install -e \".[gpu]\" # if you have an nvidia or AMD GPU\n",
- "# OR\n",
- "pip install -e \".\" # if you don't have a GPU\n",
- "```"
+ "First, let's install Oumi. You can find more detailed instructions [here](https://oumi.ai/docs/en/latest/get_started/installation.html). \n",
+ "\n",
+ "If you have a GPU, you can run the following commands to install Oumi:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%pip install uv -q\n",
+ "!uv pip install oumi[gpu] vllm --no-progress --system"
 ]
 },
 {
diff --git a/notebooks/Oumi - Using vLLM Engine for Inference.ipynb b/notebooks/Oumi - Using vLLM Engine for Inference.ipynb
index d8399865d..331d5a713 100644
--- a/notebooks/Oumi - Using vLLM Engine for Inference.ipynb
+++ b/notebooks/Oumi - Using vLLM Engine for Inference.ipynb
@@ -48,16 +48,25 @@
 "make gcpcode ARGS=\"--resources.accelerators A100:4\" # 4 A100-40GB GPUs, enough for 70B model. Can also use 2x \"A100-80GB\"\n",
 "```\n",
 "\n",
- "## Oumi Installation\n",
- "First, let's install Oumi and other dependencies needed for this notebook. You can find detailed instructions [here](https://github.com/oumi-ai/oumi/blob/main/README.md), but it should be as simple as:\n",
+ "## Llama Access\n",
 "\n",
- "```bash\n",
- "pip install oumi vllm # Install Oumi, vLLM, and Ray\n",
- "```\n",
+ "Llama 3.3 70B is a gated model on HuggingFace Hub. To run this notebook, you must first complete the [agreement](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) on HuggingFace, and wait for it to be accepted. Then, specify `HF_TOKEN` below to enable access to the model.\n",
 "\n",
- "## Llama Access\n",
+ "## Oumi Installation\n",
 "\n",
- "Llama 3.3 70B is a gated model on HuggingFace Hub. To run this notebook, you must first complete the [agreement](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) on HuggingFace, and wait for it to be accepted. Then, specify `HF_TOKEN` below to enable access to the model."
+ "First, let's install Oumi. You can find more detailed instructions [here](https://oumi.ai/docs/en/latest/get_started/installation.html). \n",
+ "\n",
+ "If you have a GPU, you can run the following commands to install Oumi:\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%pip install uv -q\n",
+ "!uv pip install oumi[gpu] vllm --no-progress --system"
 ]
 },
 {