- Automated ML Introduction
- Running samples in Azure Notebooks
- Running samples in a Local Conda environment
- Automated ML SDK Sample Notebooks
- Documentation
- Running using python command
- Troubleshooting
Automated machine learning (automated ML) builds high-quality machine learning models for you by automating model and hyperparameter selection. Bring a labelled dataset that you want to build a model for, and automated ML will give you a high-quality machine learning model that you can use for predictions.

If you are new to data science, AutoML will give you a jump start by simplifying machine learning model building. It abstracts away model selection and hyperparameter selection, and in one step creates a high-quality trained model for you to use.

If you are an experienced data scientist, AutoML will increase your productivity by intelligently performing model and hyperparameter selection for your training, and it generates high-quality models much more quickly than manually specifying several combinations of parameters and running training jobs. AutoML provides visibility into and access to all the training jobs and the performance characteristics of the models, to help you further tune the pipeline if you desire.
- Import the sample notebooks into Azure Notebooks if they are not already there.
- Create a workspace and its configuration file (config.json) using these instructions.
- Select +New in the Azure Notebooks toolbar to add your config.json file to the imported folder.
- Open the notebook. Make sure the Azure Notebooks kernel is set to Python 3.6 when you open a notebook.
To run these notebooks on your own notebook server, use these installation instructions.

The instructions below install everything you need and then start a Jupyter notebook. To start your Jupyter notebook manually, use:

    conda activate azure_automl
    jupyter notebook

or on Mac:

    source activate azure_automl
    jupyter notebook
1. Install miniconda from here; choose Python 3.7 or higher.
   - Note: if you already have conda installed, you can keep using it, but it should be version 4.4.10 or later (as shown by: conda -V). If you have an older version installed, you can update it with the command: conda update conda. In that case there is no need to install miniconda specifically.
2. Download the sample notebooks from GitHub as a zip file and extract the contents to a local directory. The AutoML sample notebooks are in the "automl" folder.
The automl/automl_setup script creates a new conda environment, installs the necessary packages, configures the widget, and starts a Jupyter notebook. It takes the conda environment name as an optional parameter; the default environment name is azure_automl. The exact command depends on the operating system, and setup can take about 30 minutes to run.
On Windows, start a conda command window, cd to the automl folder where the sample notebooks were extracted, and then run:

    automl_setup
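For example, running automl_setup my_automl_env (where my_automl_env is just a placeholder name) would create the conda environment under that name instead of the default azure_automl.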
Install "Command line developer tools" if it is not already installed (you can use the command: xcode-select --install
).
Start a Terminal windows, cd to the automl folder where the sample notebooks were extracted and then run:
bash automl_setup_mac.sh
On Linux, cd to the automl folder where the sample notebooks were extracted and then run:

    bash automl_setup_linux.sh
- Before running any samples, first run the configuration notebook: open the 00.configuration.ipynb notebook.
- Execute the cells in the notebook to register the Machine Learning Services resource provider and create a workspace (instructions are in the notebook; a conceptual sketch follows this list).
- Make sure you use the Python [conda env:azure_automl] kernel when trying the sample notebooks.
- Follow the instructions in the individual notebooks to explore the various features of AutoML.
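For reference, a conceptual sketch of what the configuration notebook does, assuming the azureml.core SDK; all names and values below are placeholders for your own subscription details:

```python
from azureml.core import Workspace

# Create (or retrieve) an Azure ML workspace and save its config.json so the
# other notebooks can load it later. All values here are placeholders.
ws = Workspace.create(
    name="my_workspace",                  # placeholder workspace name
    subscription_id="<subscription-id>",  # your Azure subscription id
    resource_group="my_resource_group",   # placeholder resource group
    location="eastus2",                   # any supported Azure region
    exist_ok=True,                        # reuse the workspace if it exists
)
ws.write_config()                         # writes config.json for later use
```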
- 00.configuration.ipynb
  - Register Machine Learning Services Resource Provider
  - Create new Azure ML Workspace
  - Save Workspace configuration file
- 01.auto-ml-classification.ipynb
  - Dataset: scikit-learn's digits dataset
  - Simple example of using automated ML for classification
  - Uses local compute for training
- 02.auto-ml-regression.ipynb
  - Dataset: scikit-learn's diabetes dataset
  - Simple example of using automated ML for regression
  - Uses local compute for training
- 03.auto-ml-remote-execution.ipynb
  - Dataset: scikit-learn's digits dataset
  - Example of using automated ML for classification using a remote Linux DSVM for training
  - Parallel execution of iterations
  - Async tracking of progress
  - Cancelling individual iterations or the entire run
  - Retrieving models for any iteration or logged metric
  - Specify automl settings as kwargs
- 03b.auto-ml-remote-batchai.ipynb
  - Dataset: scikit-learn's digits dataset
  - Example of using automated ML for classification using remote Batch AI compute for training
  - Parallel execution of iterations
  - Async tracking of progress
  - Cancelling individual iterations or the entire run
  - Retrieving models for any iteration or logged metric
  - Specify automl settings as kwargs
- 04.auto-ml-remote-execution-text-data-blob-store.ipynb
  - Dataset: Burning Man 2016 dataset
  - Handling text data with the preprocess flag
  - Reading data from a blob store for remote executions
  - Using pandas DataFrames for reading data
- 05.auto-ml-missing-data-blacklist-early-termination.ipynb
  - Dataset: scikit-learn's digits dataset
  - Blacklisting certain pipelines
  - Specifying a target metric to indicate stopping criteria
  - Handling missing data in the input
- 06.auto-ml-sparse-data-custom-cv-split.ipynb
  - Dataset: scikit-learn's 20 newsgroups dataset
  - Handling sparse datasets
  - Specifying custom train and validation sets
- 07.auto-ml-exploring-previous-runs.ipynb
  - List all projects for the workspace
  - List all AutoML runs for a given project
  - Get details for an AutoML run (AutoML settings, run widget, and all metrics)
  - Download the fitted pipeline for any iteration
- 08.auto-ml-remote-execution-with-text-file-on-DSVM
  - Dataset: scikit-learn's digits dataset
  - Download the data and store it on the DSVM to improve performance
- 09.auto-ml-classification-with-deployment.ipynb
  - Dataset: scikit-learn's digits dataset
  - Simple example of using automated ML for classification
  - Registering the model
  - Creating an image and creating an ACI service
  - Testing the ACI service
- 10.auto-ml-multi-output-example.ipynb
  - Dataset: scikit-learn's random multi-output regression example (http://scikit-learn.org/stable/auto_examples/ensemble/plot_random_forest_regression_multioutput.html#sphx-glr-auto-examples-ensemble-plot-random-forest-regression-multioutput-py)
  - Simple example of using automated ML for multi-output regression
  - Handles both dense and sparse matrices
- 11.auto-ml-sample-weight.ipynb
  - How to specify sample_weight
  - The difference it makes to test results
- 12.auto-ml-retrieve-the-training-sdk-versions.ipynb
  - How to get the current and training environment SDK versions
- Using DataPrep for reading data

The documentation below covers:
- Automated ML settings
- Cross validation split options
- Get data syntax
- Data pre-processing and featurization
Property | Description | Default |
---|---|---|
primary_metric | The metric that you want to optimize.<br>Classification supports the following primary metrics: accuracy, AUC_weighted, balanced_accuracy, average_precision_score_weighted, precision_score_weighted.<br>Regression supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error, normalized_root_mean_squared_log_error | Classification: accuracy<br>Regression: spearman_correlation |
max_time_sec | Time limit in seconds for each iteration | None |
iterations | Number of iterations. Each iteration trains the data with a specific pipeline. To get the best result, use at least 100. | 100 |
n_cross_validations | Number of cross validation splits | None |
validation_size | Size of the validation set as a percentage of all training samples | None |
concurrent_iterations | Maximum number of iterations executed in parallel | 1 |
preprocess | True/False. Setting this to True enables preprocessing on the input to handle missing data and perform some common feature extraction.<br>Note: if the input data is sparse, you cannot use preprocess=True | False |
max_cores_per_iteration | How many cores on the compute target are used to train a single pipeline. Set it to -1 to use all cores | 1 |
exit_score | Double value indicating the target for primary_metric. Once the target is surpassed, the run terminates | None |
blacklist_algos | Array of strings indicating pipelines to ignore for automated ML.<br>Allowed values for classification: LogisticRegression, SGDClassifierWrapper, NBWrapper, BernoulliNB, SVCWrapper, LinearSVMWrapper, KNeighborsClassifier, DecisionTreeClassifier, RandomForestClassifier, ExtraTreesClassifier, gradient boosting, LightGBMClassifier.<br>Allowed values for regression: ElasticNet, GradientBoostingRegressor, DecisionTreeRegressor, KNeighborsRegressor, LassoLars, SGDRegressor, RandomForestRegressor, ExtraTreesRegressor | None |
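For reference, a minimal sketch of how a few of these settings might be passed to the AutoMLConfig class from the azureml.train.automl SDK used by these notebooks; the values are illustrative, and X and y are feature and label arrays you have already loaded:

```python
from azureml.train.automl import AutoMLConfig

# A sketch using a handful of the settings from the table above.
automl_config = AutoMLConfig(
    task="classification",
    primary_metric="AUC_weighted",
    iterations=100,
    max_time_sec=3600,         # one-hour limit per iteration
    n_cross_validations=5,
    preprocess=False,
    concurrent_iterations=1,   # local runs execute one iteration at a time
    X=X,                       # feature matrix, loaded elsewhere
    y=y,                       # label vector, loaded elsewhere
)
```

The configured object is then submitted through an Experiment, as shown in the sample notebooks.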
Use the n_cross_validations setting to specify the number of cross validations. The training data set will be randomly split into n_cross_validations folds of equal size. During each cross validation round, one of the folds is used for validation of the model trained on the remaining folds. This process repeats for n_cross_validations rounds until each fold has been used once as the validation set. Finally, the average scores across all n_cross_validations rounds are reported, and the corresponding model is retrained on the whole training data set.

Use validation_size to specify the percentage of the training data set that should be used for validation, and use n_cross_validations to specify the number of cross validation rounds. During each round, a subset of size validation_size is randomly selected for validation of the model trained on the remaining data. Finally, the average scores across all n_cross_validations rounds are reported, and the corresponding model is retrained on the whole training data set.
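As an illustration, the two split options could be expressed as automated ML settings like this (a sketch; check the SDK reference for whether validation_size expects a fraction or a percentage):

```python
# Option 1: k-fold style -- five cross validation folds over the training data.
kfold_settings = {
    "n_cross_validations": 5,
}

# Option 2: Monte Carlo style -- five rounds, each holding out a random subset
# (here 0.2, i.e. 20% of the training samples) for validation.
monte_carlo_settings = {
    "n_cross_validations": 5,
    "validation_size": 0.2,
}
```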
You can specify separate train and validation sets either through get_data() or directly to the fit method.
The get_data() function can be used to return a dictionary with these values:
Key | Type | Dependency | Mutually Exclusive with | Description |
---|---|---|---|---|
X | Pandas DataFrame or Numpy Array | y | data_train, label, columns | All features to train with |
y | Pandas DataFrame or Numpy Array | X | label | Label data to train with. For classification, this should be an array of integers. |
X_valid | Pandas DataFrame or Numpy Array | X, y, y_valid | data_train, label | Optional. All features to validate with. If this is not specified, X is split between train and validate. |
y_valid | Pandas DataFrame or Numpy Array | X, y, X_valid | data_train, label | Optional. The label data to validate with. If this is not specified, y is split between train and validate. |
sample_weight | Pandas DataFrame or Numpy Array | y | data_train, label, columns | Optional. A weight value for each label. Higher values indicate that the sample is more important. |
sample_weight_valid | Pandas DataFrame or Numpy Array | y_valid | data_train, label, columns | Optional. A weight value for each validation label. Higher values indicate that the sample is more important. If this is not specified, sample_weight is split between train and validate. |
data_train | Pandas DataFrame | label | X, y, X_valid, y_valid | All data (features + label) to train with |
label | string | data_train | X, y, X_valid, y_valid | Which column in data_train represents the label |
columns | Array of strings | data_train | | Optional. Whitelist of columns to use for features |
cv_splits_indices | Array of integers | data_train | | Optional. List of indexes to split the data for cross validation |
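For example, a minimal get_data.py sketch that returns the simplest combination of keys from the table above (the scikit-learn digits dataset is only an illustrative choice):

```python
# get_data.py -- a minimal sketch of the get_data() syntax described above.
from sklearn import datasets

def get_data():
    digits = datasets.load_digits()
    return {
        "X": digits.data,    # all features to train with
        "y": digits.target,  # labels; integers, as expected for classification
    }
```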
If you use preprocess=True, the following data preprocessing steps are performed automatically for you:
- Dropping high-cardinality or no-variance features
  - Features with no useful information are dropped from training and validation sets. These include features with all values missing, the same value across all rows, or extremely high cardinality (for example, hashes, IDs, or GUIDs).
- Missing value imputation
  - For numerical features, missing values are imputed with the average of the values in the column.
  - For categorical features, missing values are imputed with the most frequent value.
- Generating additional features
  - For DateTime features: year, month, day, day of week, day of year, quarter, week of the year, hour, minute, second.
  - For text features: term frequency based on bi-grams and tri-grams; count vectorizer.
- Transformations and encodings
  - Numeric features with very few unique values are transformed into categorical features.
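To turn this behaviour on, set the preprocess flag described in the settings table (a sketch, again assuming AutoMLConfig; remember that preprocess=True is not supported for sparse input data):

```python
from azureml.train.automl import AutoMLConfig

# Enable automatic preprocessing; not valid when the input data is sparse.
automl_config = AutoMLConfig(
    task="classification",
    preprocess=True,
    X=X,   # assumed dense feature matrix
    y=y,
)
```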
Jupyter notebook provides a File / Download as / Python (.py) option for saving the notebook as a Python file. You can then run this file using the python command. However, on Windows the file needs to be modified before it can be run. The following condition must be added to the main code in the file:

    if __name__ == "__main__":

The main code of the file must be indented so that it is under this condition.
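For example, the exported .py file should end up with a structure like this (a sketch; the body is whatever code the notebook exported):

```python
# On Windows, indent the code exported from the notebook under this condition
# so that it only runs when the file is executed directly with `python`.
if __name__ == "__main__":
    # ... exported notebook code goes here, indented one level ...
    pass
```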
This can be caused by insufficient memory on the DSVM. AutoML loads all training data into memory, so the available memory should be larger than the training data size. If you are using a remote DSVM, memory is needed for each concurrent iteration. The concurrent_iterations setting specifies the maximum number of concurrent iterations. For example, if the training data size is 8 GB and concurrent_iterations is set to 10, the minimum memory required is at least 80 GB. To resolve this issue, allocate a DSVM with more memory or reduce the value specified for concurrent_iterations.
This can be caused by too many concurrent iterations for a remote DSVM. Each concurrent iteration usually takes 100% of a core when it is running. Some iterations can use multiple cores. So, the concurrent_iterations setting should always be less than the number of cores of the DSVM. To resolve this issue, try reducing the value specified for the concurrent_iterations setting.
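For example, on a 16-core DSVM you might cap concurrency well below the core count (a sketch; the values are illustrative and would be passed to the run configuration as settings):

```python
# Keep concurrent_iterations below the number of cores on the remote DSVM,
# leaving headroom for iterations that use more than one core.
automl_settings = {
    "concurrent_iterations": 8,      # illustrative value for a 16-core DSVM
    "max_cores_per_iteration": 1,
}
```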