
Main #28 — Merged (2 commits, Sep 13, 2024)
17 changes: 9 additions & 8 deletions Instructions/Exercises/01-analyze-images.md
@@ -24,18 +24,19 @@ If you have not already cloned the **Azure AI Vision** code repository to the en
 If you don't already have one in your subscription, you'll need to provision an **Azure AI Services** resource.

 1. Open the Azure portal at `https://portal.azure.com`, and sign in using the Microsoft account associated with your Azure subscription.
-2. In the top search bar, search for *Azure AI services*, select **Azure AI Services**, and create an Azure AI services multi-service account resource with the following settings:
+2. Select **Create a resource**.
+3. In the search bar, search for *Azure AI services*, select **Azure AI Services**, and create an Azure AI services multi-service account resource with the following settings:
     - **Subscription**: *Your Azure subscription*
     - **Resource group**: *Choose or create a resource group (if you are using a restricted subscription, you may not have permission to create a new resource group - use the one provided)*
-    - **Region**: *Choose from East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US, or East Asia\**
+    - **Region**: *Choose from East US, West US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, or East Asia\**
     - **Name**: *Enter a unique name*
     - **Pricing tier**: Standard S0

-\*Azure AI Vision 4.0 features are currently only available in these regions.
+\*Azure AI Vision 4.0 full feature sets are currently only available in these regions.

-3. Select the required checkboxes and create the resource.
-4. Wait for deployment to complete, and then view the deployment details.
-5. When the resource has been deployed, go to it and view its **Keys and Endpoint** page. You will need the endpoint and one of the keys from this page in the next procedure.
+4. Select the required checkboxes and create the resource.
+5. Wait for deployment to complete, and then view the deployment details.
+6. When the resource has been deployed, go to it and view its **Keys and Endpoint** page. You will need the endpoint and one of the keys from this page in the next procedure.
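The endpoint and key from the **Keys and Endpoint** page are what the lab code reads at startup. A minimal Python sketch of loading them follows; the environment-variable names are an assumption for illustration, not something this PR defines — the labs' own configuration files may use different keys:

```python
import os


def load_ai_services_config() -> tuple[str, str]:
    """Read the Azure AI Services endpoint and key from environment
    variables (variable names are assumed for this sketch)."""
    endpoint = os.environ["AI_SERVICE_ENDPOINT"]
    key = os.environ["AI_SERVICE_KEY"]
    if not endpoint.startswith("https://"):
        # The portal shows the endpoint as a full HTTPS URL, e.g.
        # https://<name>.cognitiveservices.azure.com/
        raise ValueError("endpoint should be a full https:// URL")
    return endpoint, key
```

Keeping the key out of source code and in environment variables (or a git-ignored config file) is the usual practice for these labs.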

## Prepare to use the Azure AI Vision SDK

@@ -49,15 +50,15 @@ In this exercise, you'll complete a partially implemented client application tha
 **C#**

 ```
-dotnet add package Azure.AI.Vision.ImageAnalysis -v 1.0.0-beta.1
+dotnet add package Azure.AI.Vision.ImageAnalysis -v 1.0.0-beta.3
 ```

 > **Note**: If you are prompted to install dev kit extensions, you can safely close the message.

 **Python**

 ```
-pip install azure-ai-vision-imageanalysis==1.0.0b1
+pip install azure-ai-vision-imageanalysis==1.0.0b3
 ```

 > **Tip**: If you are doing this lab on your own machine, you'll also need to install `matplotlib` and `pillow`.
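For readers curious what the pinned SDK package wraps, here is a hedged sketch of the underlying Image Analysis 4.0 REST call using only the Python standard library. The route and `api-version` shown are the documented 4.0 values at the time of writing; treat them as assumptions if the service has since moved on:

```python
import json
import urllib.request


def analysis_url(endpoint: str, features: list[str]) -> str:
    # Compose the Image Analysis 4.0 REST URL for the requested visual features.
    return (f"{endpoint.rstrip('/')}/computervision/imageanalysis:analyze"
            f"?api-version=2023-10-01&features={','.join(features)}")


def analyze_image(endpoint: str, key: str, image_bytes: bytes,
                  features: list[str]) -> dict:
    # POST raw image bytes; the resource key goes in the
    # Ocp-Apim-Subscription-Key header.
    request = urllib.request.Request(
        analysis_url(endpoint, features),
        data=image_bytes,
        headers={"Ocp-Apim-Subscription-Key": key,
                 "Content-Type": "application/octet-stream"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)
```

The SDK's `ImageAnalysisClient` issues essentially this request on your behalf, which is why the exercise needs only the endpoint and key from the portal.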
9 changes: 6 additions & 3 deletions Instructions/Exercises/02-image-classification.md
@@ -44,6 +44,8 @@ We also need a storage account to store the training images.
 - **Storage Account Name**: customclassifySUFFIX
 - *note: replace the `SUFFIX` token with your initials or another value to ensure the resource name is globally unique.*
 - **Region**: *Choose the same region you used for your Azure AI Service resource*
+- **Primary service**: Azure Blob Storage or Azure Data Lake Storage Gen 2
+- **Primary workload**: Other
 - **Performance**: Standard
 - **Redundancy**: Locally-redundant storage (LRS)
 1. While your storage account is being created, go to Visual studio code, and expand the **Labfiles/02-image-classification** folder.
@@ -58,18 +60,19 @@ We also need a storage account to store the training images.
 1. Close both the JSON and PowerShell file, and go back to your browser window.
 1. Your storage account should be complete. Go to your storage account.
 1. Enable public access on the storage account. In the left pane, navigate to **Configuration** in the **Settings** group, and enable *Allow Blob anonymous access*. Select **Save**
-1. In the left pane, select **Containers** and create a new container named `fruit`, and set **Anonymous access level** to *Container (anonymous read access for containers and blobs)*.
+1. In the left pane, in **Data storage**, select **Containers** and create a new container named `fruit`, and set **Anonymous access level** to *Container (anonymous read access for containers and blobs)*.

     > **Note**: If the **Anonymous access level** is disabled, refresh the browser page.

-1. Navigate to `fruit`, and upload the images (and the one JSON file) in **Labfiles/02-image-classification/training-images** to that container.
+1. Navigate to `fruit`, select **Upload**, and upload the images (and the one JSON file) in **Labfiles/02-image-classification/training-images** to that container.

 ## Create a custom model training project

 Next, you will create a new training project for custom image classification in Vision Studio.

 1. In the web browser, navigate to `https://portal.vision.cognitive.azure.com/` and sign in with the Microsoft account where you created your Azure AI resource.
-1. Select the **Customize models with images** tile (can be found in the **Image analysis** tab if it isn't showing in your default view), and if prompted select the Azure AI resource you created.
+1. Select the **Customize models with images** tile (can be found in the **Image analysis** tab if it isn't showing in your default view).
+1. Select the Azure AI Services account you created.
 1. In your project, select **Add new dataset** on the top. Configure with the following settings:
     - **Dataset name**: training_images
     - **Model type**: Image classification
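Because the `fruit` container allows anonymous read access, every uploaded training image becomes reachable at a predictable public URL, which is what lets Vision Studio and the dataset JSON reference the blobs directly. A small sketch (the account name used below is a hypothetical stand-in for your `customclassifySUFFIX` value):

```python
def blob_url(account: str, container: str, blob_name: str) -> str:
    # Public URL of a blob in an anonymous-read container; assumes the
    # default blob endpoint suffix for the global Azure cloud.
    return f"https://{account}.blob.core.windows.net/{container}/{blob_name}"
```

For example, `blob_url("customclassifyxyz", "fruit", "apple_1.jpg")` yields the address a browser (or the training service) could fetch without credentials.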
4 changes: 2 additions & 2 deletions Instructions/Exercises/04-face-service.md
@@ -50,13 +50,13 @@ In this exercise, you'll complete a partially implemented client application tha
 **C#**

 ```
-dotnet add package Azure.AI.Vision.ImageAnalysis -v 1.0.0-beta.1
+dotnet add package Azure.AI.Vision.ImageAnalysis -v 1.0.0-beta.3
 ```

 **Python**

 ```
-pip install azure-ai-vision==1.0.0b1
+pip install azure-ai-vision==1.0.0b3
 ```

 3. View the contents of the **computer-vision** folder, and note that it contains a file for configuration settings:
6 changes: 3 additions & 3 deletions Instructions/Exercises/05-ocr.md
@@ -49,15 +49,15 @@ In this exercise, you'll complete a partially implemented client application tha
 **C#**

 ```
-dotnet add package Azure.AI.Vision.ImageAnalysis -v 1.0.0-beta.1
+dotnet add package Azure.AI.Vision.ImageAnalysis -v 1.0.0-beta.3
 ```

 > **Note**: If you are prompted to install dev kit extensions, you can safely close the message.

 **Python**

 ```
-pip install azure-ai-vision-imageanalysis==1.0.0b1
+pip install azure-ai-vision-imageanalysis==1.0.0b3
 ```

 3. View the contents of the **read-text** folder, and note that it contains a file for configuration settings:

@@ -70,7 +70,7 @@ In this exercise, you'll complete a partially implemented client application tha

 ## Use the Azure AI Vision SDK to read text from an image

-one of the features of the **Azure AI Vision SDK** is to read text from an image. In this exercise, you'll complete a partially implemented client application that uses the Azure AI Vision SDK to read text from an image.
+One of the features of the **Azure AI Vision SDK** is to read text from an image. In this exercise, you'll complete a partially implemented client application that uses the Azure AI Vision SDK to read text from an image.

 1. The **read-text** folder contains a code file for the client application:
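When the completed client requests the `read` feature, the text comes back nested under `readResult` → `blocks` → `lines`. A sketch of flattening that structure into plain strings — the nesting assumed here matches the Image Analysis 4.0 REST response shape, and the sample dict is illustrative, not real service output:

```python
def lines_of_text(analysis: dict) -> list[str]:
    # Flatten the text lines out of an Image Analysis "read" result.
    # Shape assumed: readResult -> blocks -> lines -> text.
    read = analysis.get("readResult") or {}
    return [line["text"]
            for block in read.get("blocks", [])
            for line in block.get("lines", [])]


# Illustrative sample in the assumed response shape:
sample = {
    "readResult": {
        "blocks": [
            {"lines": [
                {"text": "Northwind Traders", "boundingPolygon": []},
                {"text": "Nutrition Facts", "boundingPolygon": []},
            ]}
        ]
    }
}
# lines_of_text(sample) -> ["Northwind Traders", "Nutrition Facts"]
```

Each line object also carries a `boundingPolygon` and per-word detail, which the exercise uses to draw boxes on the source image.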
109 changes: 109 additions & 0 deletions mslearn-ai-vision.sln
@@ -0,0 +1,109 @@

Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio Version 17
VisualStudioVersion = 17.5.002.0
MinimumVisualStudioVersion = 10.0.40219.1
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "Labfiles", "Labfiles", "{5B272ED6-5862-4898-99E0-B672CCFB612F}"
EndProject
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "01-analyze-images", "01-analyze-images", "{0D3BAB32-052B-4783-AE3E-9AD7882F8842}"
EndProject
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "C-Sharp", "C-Sharp", "{3A5BBE9E-D705-4995-9CAF-009CE6781CCF}"
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "image-analysis", "Labfiles\01-analyze-images\C-Sharp\image-analysis\image-analysis.csproj", "{83384744-8C11-443E-AE51-0699C8702D63}"
EndProject
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "03-object-detection", "03-object-detection", "{147930F4-0459-462A-A38C-7BD28172F308}"
EndProject
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "C-Sharp", "C-Sharp", "{0D8C647B-B810-46C0-8084-5B51D0A75B0B}"
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "test-detector", "Labfiles\03-object-detection\C-Sharp\test-detector\test-detector.csproj", "{BDA624FD-9297-4C7F-B094-D6B25B4E6902}"
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "train-detector", "Labfiles\03-object-detection\C-Sharp\train-detector\train-detector.csproj", "{96A7C936-BB14-4B7C-87E6-A222390FC0B9}"
EndProject
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "04-face", "04-face", "{0AE5599D-08D9-40E9-98BF-76CD1B27BCF9}"
EndProject
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "C-Sharp", "C-Sharp", "{20ADEA87-2B57-422B-8EC6-13F22164CFE3}"
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "detect-people", "Labfiles\04-face\C-Sharp\computer-vision\detect-people.csproj", "{BFAB01D7-6628-4EE1-B1C9-CAF464CA2A97}"
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "analyze-faces", "Labfiles\04-face\C-Sharp\face-api\analyze-faces.csproj", "{B1B5A1D2-32B5-4B74-A4BC-224FCFB6AF7A}"
EndProject
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "05-ocr", "05-ocr", "{2C893AD4-1D89-46D7-B901-648E5C1BE74D}"
EndProject
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "C-Sharp", "C-Sharp", "{2D8CA890-F9DF-47DA-AA43-B5889449EAB7}"
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "read-text", "Labfiles\05-ocr\C-Sharp\read-text\read-text.csproj", "{EFDA4501-E8F5-4582-983B-23A8FF0AB350}"
EndProject
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "07-custom-vision-image-classification", "07-custom-vision-image-classification", "{59C6F11C-C57B-4E2D-8DD1-755C65DB9628}"
EndProject
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "C-Sharp", "C-Sharp", "{CBA71DC4-7D22-46B0-830C-B911BD94AF9F}"
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "test-classifier", "Labfiles\07-custom-vision-image-classification\C-Sharp\test-classifier\test-classifier.csproj", "{674B2C54-E2E9-446D-A5A7-5DD8BF219AEE}"
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "train-classifier", "Labfiles\07-custom-vision-image-classification\C-Sharp\train-classifier\train-classifier.csproj", "{D8EBE32A-417C-4C13-AFF4-4FF4A3D6C84D}"
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|Any CPU = Debug|Any CPU
Release|Any CPU = Release|Any CPU
EndGlobalSection
GlobalSection(ProjectConfigurationPlatforms) = postSolution
{83384744-8C11-443E-AE51-0699C8702D63}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{83384744-8C11-443E-AE51-0699C8702D63}.Debug|Any CPU.Build.0 = Debug|Any CPU
{83384744-8C11-443E-AE51-0699C8702D63}.Release|Any CPU.ActiveCfg = Release|Any CPU
{83384744-8C11-443E-AE51-0699C8702D63}.Release|Any CPU.Build.0 = Release|Any CPU
{BDA624FD-9297-4C7F-B094-D6B25B4E6902}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{BDA624FD-9297-4C7F-B094-D6B25B4E6902}.Debug|Any CPU.Build.0 = Debug|Any CPU
{BDA624FD-9297-4C7F-B094-D6B25B4E6902}.Release|Any CPU.ActiveCfg = Release|Any CPU
{BDA624FD-9297-4C7F-B094-D6B25B4E6902}.Release|Any CPU.Build.0 = Release|Any CPU
{96A7C936-BB14-4B7C-87E6-A222390FC0B9}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{96A7C936-BB14-4B7C-87E6-A222390FC0B9}.Debug|Any CPU.Build.0 = Debug|Any CPU
{96A7C936-BB14-4B7C-87E6-A222390FC0B9}.Release|Any CPU.ActiveCfg = Release|Any CPU
{96A7C936-BB14-4B7C-87E6-A222390FC0B9}.Release|Any CPU.Build.0 = Release|Any CPU
{BFAB01D7-6628-4EE1-B1C9-CAF464CA2A97}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{BFAB01D7-6628-4EE1-B1C9-CAF464CA2A97}.Debug|Any CPU.Build.0 = Debug|Any CPU
{BFAB01D7-6628-4EE1-B1C9-CAF464CA2A97}.Release|Any CPU.ActiveCfg = Release|Any CPU
{BFAB01D7-6628-4EE1-B1C9-CAF464CA2A97}.Release|Any CPU.Build.0 = Release|Any CPU
{B1B5A1D2-32B5-4B74-A4BC-224FCFB6AF7A}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{B1B5A1D2-32B5-4B74-A4BC-224FCFB6AF7A}.Debug|Any CPU.Build.0 = Debug|Any CPU
{B1B5A1D2-32B5-4B74-A4BC-224FCFB6AF7A}.Release|Any CPU.ActiveCfg = Release|Any CPU
{B1B5A1D2-32B5-4B74-A4BC-224FCFB6AF7A}.Release|Any CPU.Build.0 = Release|Any CPU
{EFDA4501-E8F5-4582-983B-23A8FF0AB350}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{EFDA4501-E8F5-4582-983B-23A8FF0AB350}.Debug|Any CPU.Build.0 = Debug|Any CPU
{EFDA4501-E8F5-4582-983B-23A8FF0AB350}.Release|Any CPU.ActiveCfg = Release|Any CPU
{EFDA4501-E8F5-4582-983B-23A8FF0AB350}.Release|Any CPU.Build.0 = Release|Any CPU
{674B2C54-E2E9-446D-A5A7-5DD8BF219AEE}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{674B2C54-E2E9-446D-A5A7-5DD8BF219AEE}.Debug|Any CPU.Build.0 = Debug|Any CPU
{674B2C54-E2E9-446D-A5A7-5DD8BF219AEE}.Release|Any CPU.ActiveCfg = Release|Any CPU
{674B2C54-E2E9-446D-A5A7-5DD8BF219AEE}.Release|Any CPU.Build.0 = Release|Any CPU
{D8EBE32A-417C-4C13-AFF4-4FF4A3D6C84D}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{D8EBE32A-417C-4C13-AFF4-4FF4A3D6C84D}.Debug|Any CPU.Build.0 = Debug|Any CPU
{D8EBE32A-417C-4C13-AFF4-4FF4A3D6C84D}.Release|Any CPU.ActiveCfg = Release|Any CPU
{D8EBE32A-417C-4C13-AFF4-4FF4A3D6C84D}.Release|Any CPU.Build.0 = Release|Any CPU
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE
EndGlobalSection
GlobalSection(NestedProjects) = preSolution
{0D3BAB32-052B-4783-AE3E-9AD7882F8842} = {5B272ED6-5862-4898-99E0-B672CCFB612F}
{3A5BBE9E-D705-4995-9CAF-009CE6781CCF} = {0D3BAB32-052B-4783-AE3E-9AD7882F8842}
{83384744-8C11-443E-AE51-0699C8702D63} = {3A5BBE9E-D705-4995-9CAF-009CE6781CCF}
{147930F4-0459-462A-A38C-7BD28172F308} = {5B272ED6-5862-4898-99E0-B672CCFB612F}
{0D8C647B-B810-46C0-8084-5B51D0A75B0B} = {147930F4-0459-462A-A38C-7BD28172F308}
{BDA624FD-9297-4C7F-B094-D6B25B4E6902} = {0D8C647B-B810-46C0-8084-5B51D0A75B0B}
{96A7C936-BB14-4B7C-87E6-A222390FC0B9} = {0D8C647B-B810-46C0-8084-5B51D0A75B0B}
{0AE5599D-08D9-40E9-98BF-76CD1B27BCF9} = {5B272ED6-5862-4898-99E0-B672CCFB612F}
{20ADEA87-2B57-422B-8EC6-13F22164CFE3} = {0AE5599D-08D9-40E9-98BF-76CD1B27BCF9}
{BFAB01D7-6628-4EE1-B1C9-CAF464CA2A97} = {20ADEA87-2B57-422B-8EC6-13F22164CFE3}
{B1B5A1D2-32B5-4B74-A4BC-224FCFB6AF7A} = {20ADEA87-2B57-422B-8EC6-13F22164CFE3}
{2C893AD4-1D89-46D7-B901-648E5C1BE74D} = {5B272ED6-5862-4898-99E0-B672CCFB612F}
{2D8CA890-F9DF-47DA-AA43-B5889449EAB7} = {2C893AD4-1D89-46D7-B901-648E5C1BE74D}
{EFDA4501-E8F5-4582-983B-23A8FF0AB350} = {2D8CA890-F9DF-47DA-AA43-B5889449EAB7}
{59C6F11C-C57B-4E2D-8DD1-755C65DB9628} = {5B272ED6-5862-4898-99E0-B672CCFB612F}
{CBA71DC4-7D22-46B0-830C-B911BD94AF9F} = {59C6F11C-C57B-4E2D-8DD1-755C65DB9628}
{674B2C54-E2E9-446D-A5A7-5DD8BF219AEE} = {CBA71DC4-7D22-46B0-830C-B911BD94AF9F}
{D8EBE32A-417C-4C13-AFF4-4FF4A3D6C84D} = {CBA71DC4-7D22-46B0-830C-B911BD94AF9F}
EndGlobalSection
GlobalSection(ExtensibilityGlobals) = postSolution
SolutionGuid = {B57D6255-F0F8-49F6-A0C3-66F9CE9E4F6B}
EndGlobalSection
EndGlobal