Train YOLO v7 instance segmentation models.
We strongly recommend using a virtual environment. If you're not sure where to start, we offer a tutorial here.
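A minimal sketch of creating and activating a virtual environment with Python's built-in `venv` module (the environment name `ikomia-env` is just an example):

```shell
# Create a virtual environment named ikomia-env
python3 -m venv ikomia-env
# Activate it (macOS/Linux)
source ikomia-env/bin/activate
# On Windows use: ikomia-env\Scripts\activate
```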
```shell
pip install ikomia
```
```python
from ikomia.dataprocess.workflow import Workflow

# Init your workflow
wf = Workflow()

# Add dataset loader
coco = wf.add_task(name="dataset_coco")

coco.set_parameters({
    "json_file": "path/to/json/annotation/file",
    "image_folder": "path/to/image/folder",
    "task": "instance_segmentation",
})

# Add training algorithm
train = wf.add_task(name="train_yolo_v7_instance_segmentation", auto_connect=True)

# Launch your training on your data
wf.run()
```
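Training fails quickly if the annotation file or image folder path is wrong, so it can be worth validating the dataset before launching the workflow. The sketch below uses only the Python standard library; `check_coco_dataset` is a hypothetical helper, not part of the Ikomia API:

```python
import json
import os


def check_coco_dataset(json_file, image_folder):
    # Hypothetical helper: verify the COCO annotation file and image folder
    # exist and are well-formed before starting a (potentially long) training run.
    if not os.path.isfile(json_file):
        raise FileNotFoundError(f"Annotation file not found: {json_file}")
    if not os.path.isdir(image_folder):
        raise NotADirectoryError(f"Image folder not found: {image_folder}")

    with open(json_file) as f:
        coco = json.load(f)

    # A COCO annotation file must define these three top-level keys
    for key in ("images", "annotations", "categories"):
        if key not in coco:
            raise KeyError(f"Missing '{key}' section in {json_file}")

    return len(coco["images"]), len(coco["annotations"])
```

For example, `check_coco_dataset("annotations.json", "images/")` returns the number of images and annotations, or raises a descriptive error if something is missing.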
Ikomia Studio offers a friendly UI with the same features as the API.
- If you haven't started using Ikomia Studio yet, download and install it from this page.
- For additional guidance on getting started with Ikomia Studio, check out this blog post.
- train_imgsz (int) - default '640': Size of the training images.
- test_imgsz (int) - default '640': Size of the evaluation images.
- epochs (int) - default '10': Number of complete passes through the training dataset.
- batch_size (int) - default '16': Number of samples processed before the model is updated.
- dataset_split_ratio (float) - default '90': Percentage of the dataset used for training; the remainder is used for evaluation. Must lie in the open interval ]0, 100[.
- output_folder (str, optional): Path to the folder where the trained model will be saved.
- config_file (str, optional): Path to a .yaml hyperparameters configuration file.
Parameters should be passed as strings when added to the dictionary.
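To make the split ratio concrete, the hypothetical function below (not part of the Ikomia API) illustrates how a `dataset_split_ratio` of 90 divides a dataset between training and evaluation:

```python
def split_counts(n_images, dataset_split_ratio=90.0):
    # Illustration only: a ratio of 90 means roughly 90% of the images
    # go to the training set and the remaining 10% to evaluation.
    n_train = round(n_images * dataset_split_ratio / 100)
    return n_train, n_images - n_train
```

For a dataset of 200 images, `split_counts(200)` gives 180 training images and 20 evaluation images.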
```python
from ikomia.dataprocess.workflow import Workflow

# Init your workflow
wf = Workflow()

# Add dataset loader
coco = wf.add_task(name="dataset_coco")

coco.set_parameters({
    "json_file": "path/to/json/annotation/file",
    "image_folder": "path/to/image/folder",
    "task": "instance_segmentation",
})

# Add training algorithm
train = wf.add_task(name="train_yolo_v7_instance_segmentation", auto_connect=True)

train.set_parameters({
    "batch_size": "4",
    "epochs": "5",
    "train_imgsz": "640",
    "test_imgsz": "640",
    "dataset_split_ratio": "90"
})

# Launch your training on your data
wf.run()
```