This repository was archived by the owner on Jun 2, 2020. It is now read-only.

Dev #34 (Open)

Wants to merge 12 commits into base: master

21 changes: 0 additions & 21 deletions .github/ISSUE_TEMPLATE/bug_report.md

This file was deleted.

10 changes: 0 additions & 10 deletions .github/ISSUE_TEMPLATE/docs-issue.md

This file was deleted.

16 changes: 0 additions & 16 deletions .github/ISSUE_TEMPLATE/feature_request.md

This file was deleted.

5 changes: 2 additions & 3 deletions .gitignore
@@ -9,7 +9,6 @@ workdir
doctrees/
.buildinfo
docs/source/_*
dist
/build
dist/
build/
neural_pipeline.egg-info

3 changes: 2 additions & 1 deletion .travis.yml
@@ -5,6 +5,7 @@ matrix:
- python: 3.5
- python: 3.6
- python: 3.7
- python: 3.8
dist: xenial
sudo: true

@@ -14,7 +15,7 @@ install:
- pip install coveralls

script:
- coverage run --source=neural_pipeline -m unittest -v tests/test.py
- coverage run --source=piepline -m unittest -v tests/test.py

after_success:
coveralls
44 changes: 23 additions & 21 deletions README.md
@@ -1,11 +1,11 @@
# Neural Pipeline
# PiePline

Neural networks training pipeline based on PyTorch. Designed to standardize the training process and accelerate experiments.

[![Build Status](https://travis-ci.org/toodef/neural-pipeline.svg?branch=master)](https://travis-ci.org/toodef/neural-pipeline)
[![Coverage Status](https://coveralls.io/repos/github/toodef/neural-pipeline/badge.svg?branch=master)](https://coveralls.io/github/toodef/neural-pipeline?branch=master)
[![Maintainability](https://api.codeclimate.com/v1/badges/1feaafcc614adf27c30f/maintainability)](https://codeclimate.com/github/toodef/neural-pipeline/maintainability)
[![Gitter chat](https://badges.gitter.im/neural-pipeline/gitter.png)](https://gitter.im/neural-pipeline/community)
[![Build Status](https://travis-ci.org/PiePline/piepline.svg?branch=master)](https://travis-ci.org/PiePline/piepline)
[![Coverage Status](https://coveralls.io/repos/github/PiePline/piepline/badge.svg?branch=master)](https://coveralls.io/github/PiePline/piepline?branch=master)
[![Maintainability](https://api.codeclimate.com/v1/badges/7da18cb28e7e7dc13268/maintainability)](https://codeclimate.com/github/PiePline/piepline/maintainability)
[![Gitter chat](https://badges.gitter.im/piepline/gitter.png)](https://gitter.im/piepline/community)

* The core is about 2K lines, covered by tests, that you don't need to write again
* Flexible and customizable training process
@@ -16,50 +16,52 @@ Neural networks training pipeline based on PyTorch. Designed to standardize trai

# Getting started:
### Documentation
[![Documentation Status](https://readthedocs.org/projects/neural-pipeline/badge/?version=master)](https://neural-pipeline.readthedocs.io/en/master/?badge=master)
* [See the full documentation there](https://neural-pipeline.readthedocs.io/en/master/)
* [Read getting started guide](https://neural-pipeline.readthedocs.io/en/master/getting_started/index.html)
[![Documentation Status](https://readthedocs.org/projects/piepline/badge/?version=stable)](https://piepline.readthedocs.io/en/stable/?badge=stable)
* [See the full documentation here](https://piepline.readthedocs.io/en/stable/)
* [Read the getting started guide](https://piepline.readthedocs.io/en/stable/getting_started/index.html)

### See the examples
* MNIST classification - [notebook](https://github.com/toodef/neural-pipeline/blob/master/examples/notebooks/img_classification.ipynb), [file](https://github.com/toodef/neural-pipeline/blob/master/examples/files/img_classification.py), [Kaggle kernel](https://www.kaggle.com/toodef/cnn-training-with-less-code)
* Segmentation - [notebook](https://github.com/toodef/neural-pipeline/blob/master/examples/notebooks/img_segmentation.ipynb), [file](https://github.com/toodef/neural-pipeline/blob/master/examples/files/img_segmentation.py)
* Resume training process - [file](https://github.com/toodef/neural-pipeline/blob/master/examples/files/resume_train.py)
* MNIST classification - [notebook](https://github.com/toodef/piepline/blob/master/examples/notebooks/img_classification.ipynb), [file](https://github.com/toodef/piepline/blob/master/examples/files/img_classification.py), [Kaggle kernel](https://www.kaggle.com/toodef/cnn-training-with-less-code)
* Segmentation - [notebook](https://github.com/toodef/piepline/blob/master/examples/notebooks/img_segmentation.ipynb), [file](https://github.com/toodef/piepline/blob/master/examples/files/img_segmentation.py)
* Resume training process - [file](https://github.com/toodef/piepline/blob/master/examples/files/resume_train.py)

### Neural Pipeline short overview:
### PiePline short overview:
```python
import torch

from neural_pipeline.builtin.monitors.tensorboard import TensorboardMonitor
from neural_pipeline import DataProducer, AbstractDataset, TrainConfig, TrainStage,\
from neural_pipeline.monitoring import LogMonitor
from neural_pipeline import DataProducer, TrainConfig, TrainStage,\
ValidationStage, Trainer, FileStructManager

from something import MyNet, MyDataset

fsm = FileStructManager(base_dir='data', is_continue=False)
model = MyNet()
model = MyNet().cuda()

train_dataset = DataProducer([MyDataset()], batch_size=4, num_workers=2)
validation_dataset = DataProducer([MyDataset()], batch_size=4, num_workers=2)

train_config = TrainConfig([TrainStage(train_dataset), ValidationStage(validation_dataset)], torch.nn.NLLLoss(),
train_config = TrainConfig(model, [TrainStage(train_dataset),
ValidationStage(validation_dataset)], torch.nn.NLLLoss(),
torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.5))

trainer = Trainer(model, train_config, fsm, torch.device('cuda:0')).set_epoch_num(50)
trainer = Trainer(train_config, fsm, torch.device('cuda:0')).set_epoch_num(50)
trainer.monitor_hub.add_monitor(TensorboardMonitor(fsm, is_continue=False))\
.add_monitor(LogMonitor(fsm))
trainer.train()
```
This example trains MyNet on MyDataset with visualisation in TensorBoard and metrics logging for further comparison of experiments.
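`MyNet` and `MyDataset` come from a placeholder module; a minimal sketch of what they might look like so the snippet above runs end to end (hypothetical classes, fake MNIST-shaped data assumed):

```python
import torch
from torch import nn


class MyDataset:
    """Minimal dataset: every item is a dict with 'data' and 'target' keys."""
    def __init__(self, size: int = 128):
        self._data = torch.randn(size, 1, 28, 28)      # fake MNIST-like images
        self._targets = torch.randint(0, 10, (size,))  # fake class labels

    def __len__(self):
        return len(self._data)

    def __getitem__(self, idx):
        return {'data': self._data[idx], 'target': self._targets[idx]}


class MyNet(nn.Module):
    """Tiny classifier producing log-probabilities, matching the NLLLoss above."""
    def __init__(self):
        super().__init__()
        self._net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10), nn.LogSoftmax(dim=1))

    def forward(self, x):
        return self._net(x)
```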

# Installation:
[![PyPI version](https://badge.fury.io/py/neural-pipeline.svg)](https://badge.fury.io/py/neural-pipeline)
[![PyPI Downloads/Month](https://pepy.tech/badge/neural-pipeline/month)](https://pepy.tech/project/neural-pipeline)
[![PyPI Downloads](https://pepy.tech/badge/neural-pipeline)](https://pepy.tech/project/neural-pipeline)
[![PyPI version](https://badge.fury.io/py/piepline.svg)](https://badge.fury.io/py/piepline)
[![PyPI Downloads/Month](https://pepy.tech/badge/piepline/month)](https://pepy.tech/project/piepline)
[![PyPI Downloads](https://pepy.tech/badge/piepline)](https://pepy.tech/project/piepline)

`pip install neural-pipeline`
`pip install piepline`

##### To use the `builtin` module, install:
`pip install tensorboardX matplotlib`

##### Install the latest version before it's published on PyPI
`pip install -U git+https://github.com/toodef/neural-pipeline`
`pip install -U git+https://github.com/PiePline/piepline`
2 changes: 1 addition & 1 deletion docs/requirements.txt
@@ -4,4 +4,4 @@ sphinx_rtd_theme
tqdm
tensorboardX
numpy
https://download.pytorch.org/whl/cpu/torch-1.0.0-cp37-cp37m-linux_x86_64.whl
torch==1.5.0+cpu
1 change: 0 additions & 1 deletion docs/source/api/builtin/index.rst
@@ -5,4 +5,3 @@ In builtin module contains all modules that can't be tested, or have specific fi
.. toctree::

monitors
models
4 changes: 0 additions & 4 deletions docs/source/api/builtin/models.rst

This file was deleted.

4 changes: 2 additions & 2 deletions docs/source/api/builtin/monitors.rst
@@ -1,9 +1,9 @@
Tensorboard
==============================
.. automodule:: neural_pipeline.builtin.monitors.tensorboard
.. automodule:: piepline.builtin.monitors.tensorboard
:members:

Matplotlib
==============================
.. automodule:: neural_pipeline.builtin.monitors.mpl
.. automodule:: piepline.builtin.monitors.mpl
:members:
4 changes: 2 additions & 2 deletions docs/source/api/data_processor.rst
@@ -1,9 +1,9 @@
Data Processor
==============================
.. automodule:: neural_pipeline.data_processor.data_processor
.. automodule:: piepline.data_processor.data_processor
:members:

Model
==============================
.. automodule:: neural_pipeline.data_processor.model
.. automodule:: piepline.data_processor.model
:members:
2 changes: 1 addition & 1 deletion docs/source/api/data_producer.rst
@@ -1,4 +1,4 @@
Data Producer
==============================
.. automodule:: neural_pipeline.data_producer.data_producer
.. automodule:: piepline.data_producer.data_producer
:members:
2 changes: 1 addition & 1 deletion docs/source/api/monitoring.rst
@@ -1,4 +1,4 @@
Monitoring
==============================
.. automodule:: neural_pipeline.monitoring
.. automodule:: piepline.monitoring
:members:
2 changes: 1 addition & 1 deletion docs/source/api/predict.rst
@@ -1,4 +1,4 @@
Predictor
==============================
.. automodule:: neural_pipeline.predict
.. automodule:: piepline.predict
:members:
2 changes: 1 addition & 1 deletion docs/source/api/train.rst
@@ -1,4 +1,4 @@
Trainer
==============================
.. automodule:: neural_pipeline.train
.. automodule:: piepline.train
:members:
2 changes: 1 addition & 1 deletion docs/source/api/train_config.rst
@@ -1,4 +1,4 @@
Train Config
==============================
.. automodule:: neural_pipeline.train_config.train_config
.. automodule:: piepline.train_config.train_config
:members:
2 changes: 1 addition & 1 deletion docs/source/api/utils.rst
@@ -1,4 +1,4 @@
File structure management utils
===============================
.. automodule:: neural_pipeline.utils.file_structure_manager
.. automodule:: piepline.utils.fsm
:members:
24 changes: 12 additions & 12 deletions docs/source/conf.py
@@ -19,22 +19,22 @@

# -- Project information -----------------------------------------------------

project = 'Neural Pipeline'
project = 'PiePline'
copyright = '2019, Anton Fedotov'
author = 'Anton Fedotov'

# import importlib.util
#
# spec = importlib.util.spec_from_file_location("neural_pipeline",
# spec = importlib.util.spec_from_file_location("piepline",
# os.path.join(os.path.dirname(os.path.abspath(__file__)),
# '..', '..', 'neural_pipeline', '__init__.py'))
# neural_pipeline = importlib.util.module_from_spec(spec)
# spec.loader.exec_module(neural_pipeline)
# '..', '..', 'piepline', '__init__.py'))
# piepline = importlib.util.module_from_spec(spec)
# spec.loader.exec_module(piepline)

import neural_pipeline
import piepline

# The short X.Y version
version = neural_pipeline.__version__
version = piepline.__version__
# The full version, including alpha/beta/rc tags
release = version

@@ -127,7 +127,7 @@
# -- Options for HTMLHelp output ---------------------------------------------

# Output file base name for HTML help builder.
htmlhelp_basename = 'NeuralPipelinedoc'
htmlhelp_basename = 'PiePLinedoc'

# -- Options for LaTeX output ------------------------------------------------

@@ -153,7 +153,7 @@
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'NeuralPipeline.tex', 'Neural Pipeline Documentation',
(master_doc, 'PiePline.tex', 'PiePline Documentation',
'Anton Fedotov', 'manual'),
]

@@ -162,7 +162,7 @@
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'neuralpipeline', 'Neural Pipeline Documentation',
(master_doc, 'neuralpipeline', 'PiePline Documentation',
[author], 1)
]

@@ -172,8 +172,8 @@
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'NeuralPipeline', 'Neural Pipeline Documentation',
author, 'NeuralPipeline', 'Neural networks training pipeline based on PyTorch. Designed to standardize training process and to increase coding preformance.',
(master_doc, 'PiePline', 'PiePline Documentation',
author, 'PiePline', 'Neural networks training pipeline based on PyTorch. Designed to standardize training process and to increase coding performance.',
'Miscellaneous'),
]

2 changes: 1 addition & 1 deletion docs/source/getting_started/continue.rst
@@ -25,6 +25,6 @@ If we need to do some more training epochs but doesn't have previously defined o


The ``from_best_checkpoint=False`` parameter tells the Trainer that it needs to continue from the last checkpoint.
Neural Pipeline can save best checkpoints by specified rule. For more information about it read about `enable_lr_decaying <https://neural-pipeline.readthedocs.io/en/master/api/train.html#neural_pipeline.train.Trainer.enable_best_states_saving>`_ method of `Trainer`.
PiePline can save the best checkpoints by a specified rule. For more information, read about the `enable_best_states_saving <https://piepline.readthedocs.io/en/master/api/train.html#piepline.train.Trainer.enable_best_states_saving>`_ method of `Trainer`.

Don't worry about the training history displaying incorrectly: if a history already exists, the monitors just append new data to it.
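A rough sketch of a resumed run under these assumptions (same ``base_dir`` as the original experiment, ``train_config`` already rebuilt as in the getting-started example; the ``resume`` call and its ``from_best_checkpoint`` argument are taken on trust from the description above, so check the ``Trainer`` API for the exact entry point):

.. code:: python

    import torch

    from piepline import FileStructManager, Trainer

    # "continue" mode picks up the checkpoints and monitor history already in 'data'
    fsm = FileStructManager(base_dir='data', is_continue=True)

    trainer = Trainer(train_config, fsm, torch.device('cuda:0')).set_epoch_num(10)

    # assumed entry point: continue from the last, not the best, checkpoint
    trainer.resume(from_best_checkpoint=False)
    trainer.train()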
4 changes: 2 additions & 2 deletions docs/source/getting_started/dataset.rst
@@ -1,7 +1,7 @@
Implement dataset class
=======================

In Neural Pipeline dataset is iterable class. This means, that class need contain ``__getitem__`` and ``__len__`` methods.
In PiePline a dataset is an iterable class. This means that the class needs to contain ``__getitem__`` and ``__len__`` methods.

For every i-th item, the dataset needs to produce a Python ``dict`` with the keys 'data' and 'target'.

@@ -32,7 +32,7 @@ For work with this dataset we need wrap it by ``DataProducer``:

.. code:: python

from neural_pipeline import DataProducer
from piepline import DataProducer

# create train and validation datasets objects
train_dataset = DataProducer([MNISTDataset('data/dataset', True)], batch_size=4, num_workers=2)
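A minimal sketch of what such an ``MNISTDataset`` could look like (assuming torchvision is available; the real example in the repository may differ slightly):

.. code:: python

    from torchvision import datasets, transforms


    class MNISTDataset:
        """Wraps torchvision MNIST and exposes the dict interface PiePline expects."""

        def __init__(self, data_dir: str, is_train: bool):
            self._dataset = datasets.MNIST(data_dir, train=is_train, download=True,
                                           transform=transforms.ToTensor())

        def __len__(self):
            return len(self._dataset)

        def __getitem__(self, item):
            img, target = self._dataset[item]
            return {'data': img, 'target': target}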
14 changes: 7 additions & 7 deletions docs/source/getting_started/index.rst
@@ -1,13 +1,13 @@
Getting started guide
=====================

First of all look at main classes of Neural Pipeline:
First of all, look at the main classes of PiePline:

* `Trainer <https://neural-pipeline.readthedocs.io/en/master/api/train.html#neural_pipeline.train.Trainer>`_ - class, that implements training process
* `TrainConfig <https://neural-pipeline.readthedocs.io/en/master/api/train_config.html#neural_pipeline.train_config.train_config.TrainConfig>`_ - class, that store hyperparameters
* `AbstractTrainStage <https://neural-pipeline.readthedocs.io/en/master/api/train_config.html#neural_pipeline.train_config.train_config.AbstractStage>`_ - base class for single stage of training process. Don't worry, Neural Pipeline have predefined classes for common use cases: `TrainStage <https://neural-pipeline.readthedocs.io/en/master/api/train_config.html#neural_pipeline.train_config.train_config.TrainStage>`_, `ValidationStage <https://neural-pipeline.readthedocs.io/en/master/api/train_config.html#neural_pipeline.train_config.train_config.ValidationStage>`_ and more common - `StandardStage <https://neural-pipeline.readthedocs.io/en/master/api/train_config.html#neural_pipeline.train_config.train_config.StandardStage>`_
* `DataProducer <https://neural-pipeline.readthedocs.io/en/master/api/data_producer.html#neural_pipeline.data_producer.data_producer.DataProducer>`_ - class, that unite datasets and unite it's interface
* `FileStructManager <https://neural-pipeline.readthedocs.io/en/master/api/utils.html#neural_pipeline.utils.file_structure_manager.FileStructManager>`_ - class, that manage file structure
* `Trainer <https://piepline.readthedocs.io/en/master/api/train.html#piepline.train.Trainer>`_ - class that implements the training process
* `TrainConfig <https://piepline.readthedocs.io/en/master/api/train_config.html#piepline.train_config.train_config.TrainConfig>`_ - class that stores the hyperparameters
* `AbstractTrainStage <https://piepline.readthedocs.io/en/master/api/train_config.html#piepline.train_config.train_config.AbstractStage>`_ - base class for a single stage of the training process. Don't worry, PiePline has predefined classes for common use cases: `TrainStage <https://piepline.readthedocs.io/en/master/api/train_config.html#piepline.train_config.train_config.TrainStage>`_, `ValidationStage <https://piepline.readthedocs.io/en/master/api/train_config.html#piepline.train_config.train_config.ValidationStage>`_ and the more general `StandardStage <https://piepline.readthedocs.io/en/master/api/train_config.html#piepline.train_config.train_config.StandardStage>`_
* `DataProducer <https://piepline.readthedocs.io/en/master/api/data_producer.html#piepline.data_producer.data_producer.DataProducer>`_ - class that unites datasets under a single interface
* `FileStructManager <https://piepline.readthedocs.io/en/master/api/utils.html#piepline.utils.file_structure_manager.FileStructManager>`_ - class that manages the file structure

Training stages are needed to customize the training process. With them, `Trainer` works by this scheme (dataflow scheme for a single epoch):

@@ -23,4 +23,4 @@ Training stages needed for customize training process. With it `Trainer` work by
training
continue

After this tutorial look to `segmentation example <https://github.com/toodef/neural-pipeline/blob/master/examples/notebooks/img_segmentation.ipynb>`_ for explore how to work with specific metrics.
After this tutorial, look at the `segmentation example <https://github.com/toodef/piepline/blob/master/examples/notebooks/img_segmentation.ipynb>`_ to explore how to work with specific metrics.
2 changes: 1 addition & 1 deletion docs/source/getting_started/train_config.rst
@@ -8,7 +8,7 @@ Respectively ``ValidatioStage`` do same but in ``eval()`` mode.

.. code:: python

from neural_pipeline import TrainConfig, TrainStage, ValidationStage
from piepline import TrainConfig, TrainStage, ValidationStage

# define train stages
train_stages = [TrainStage(train_dataset), ValidationStage(validation_dataset)]
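Putting the stages into a ``TrainConfig`` might then look like this (a sketch that follows the README example above; ``model``, ``train_dataset`` and ``validation_dataset`` are assumed to exist already):

.. code:: python

    import torch

    from piepline import TrainConfig, TrainStage, ValidationStage

    train_stages = [TrainStage(train_dataset), ValidationStage(validation_dataset)]

    # model first, then the stages, the loss and the optimizer,
    # mirroring the signature used in the README example
    train_config = TrainConfig(model, train_stages, torch.nn.NLLLoss(),
                               torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.5))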
2 changes: 1 addition & 1 deletion docs/source/getting_started/trainer.rst
@@ -28,7 +28,7 @@ Now we need build our training process. It's done by implements ``Trainer`` clas

.. code:: python

from neural_pipeline import FileStructManager, Trainer
from piepline import FileStructManager, Trainer

# define file structure for experiment
fsm = FileStructManager(base_dir='data', is_continue=False)
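A sketch of how the trainer itself might then be created and started, following the README example above (``train_config`` is assumed to come from the previous step; the CUDA device and epoch count are illustrative):

.. code:: python

    import torch

    from piepline import FileStructManager, Trainer

    # define file structure for experiment
    fsm = FileStructManager(base_dir='data', is_continue=False)

    trainer = Trainer(train_config, fsm, torch.device('cuda:0')).set_epoch_num(50)
    trainer.train()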
4 changes: 2 additions & 2 deletions docs/source/getting_started/training.rst
@@ -14,7 +14,7 @@ That's all. Console output will look like that:
| Epoch: [3]; train: [0.000182, 0.180328, 5.218509]; validation: [0.000135, 0.155546, 2.512275]
| train: 31%|███ | 4651/15000 [00:31<01:07, 154.06it/s, loss=[0.154871]]

First 3 lines is standard output of `ConsoleMonitor <https://neural-pipeline.readthedocs.io/en/master/api/monitoring.html#neural_pipeline.monitoring.ConsoleMonitor>`_.
The first 3 lines are the standard output of `ConsoleMonitor <https://piepline.readthedocs.io/en/master/api/monitoring.html#piepline.monitoring.ConsoleMonitor>`_.
This monitor is included in ``MonitorHub`` by default.
Every line shows the loss values of the corresponding stage in the format [min, mean, max].

@@ -29,7 +29,7 @@ For do it we need before training connect builtin `TensorboardMonitor` to `Train

.. code:: python

from neural_pipeline.builtin.monitors.tensorboard import TensorboardMonitor
from piepline.builtin.monitors.tensorboard import TensorboardMonitor

trainer.monitor_hub.add_monitor(TensorboardMonitor(fsm, is_continue=False))
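
Several monitors can be attached at once. A sketch that mirrors the README example and additionally writes metric logs to files (the ``piepline.monitoring`` import path for ``LogMonitor`` is an assumption based on the renamed package):

.. code:: python

    from piepline.builtin.monitors.tensorboard import TensorboardMonitor
    from piepline.monitoring import LogMonitor

    # TensorBoard for visualisation, LogMonitor for file-based metric logs
    # that can be compared between experiments
    trainer.monitor_hub.add_monitor(TensorboardMonitor(fsm, is_continue=False)) \
        .add_monitor(LogMonitor(fsm))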
