Maintainer Notes


How to create a release

This should be very simple: create a release using the GitHub interface, and a corresponding package will be uploaded to PyPI and Conda.
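
Equivalently, a release tag can be created and pushed from the command line (a minimal sketch; vX.Y.Z is a placeholder version), since the Travis deploy below runs on tags:

git tag vX.Y.Z
git push origin vX.Y.Z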

Automatic PyPI wheel and tarball upload

PyPI wheels and tarballs can be built and uploaded by Travis as a deploy phase in the test stage:

# PyPI Deployment: https://docs.travis-ci.com/user/deployment/pypi/
deploy:
  provider: pypi
  user: vfdev-5
  # If the password contains non-alphanumeric characters,
  # pass it as a secured environment variable:
  # https://github.com/travis-ci/dpl/issues/377
  password: $PYPI_TOKEN
  # Otherwise, follow "How to encrypt the password": https://docs.travis-ci.com/user/encryption-keys/
  # `travis encrypt deploy.password="password"`
#    secure: "secured_password"

  skip_cleanup: true
  distributions: "sdist bdist_wheel"
  on:
    tags: true
    python: "3.5"
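
Before tagging, it can be worth sanity-checking the distributions locally (a sketch; twine check is assumed to be available, which requires twine >= 1.12):

python setup.py sdist bdist_wheel
twine check dist/*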

Automatic Conda package upload

TODO

Automatic documentation generation

The documentation is automatically built and deployed when a PR is merged to master. It is deployed at https://pytorch.org/ignite and corresponds to the master branch of the repository, not to the latest stable version. Build history is not preserved, so any changes pushed manually will be overwritten by the next docs deployment.

Automatic deployment is configured in .travis.yml, in the docs stage:

    # GitHub Pages Deployment: https://docs.travis-ci.com/user/deployment/pages/
    - stage: docs
      python: "3.5"
      install:
        # Minimal install: ignite and dependencies, just enough to build the docs
        - pip install -r docs/requirements.txt
        - pip install http://download.pytorch.org/whl/cpu/torch-0.4.1-cp35-cp35m-linux_x86_64.whl
        # `pip install .` vs `python setup.py install`: the former works better to produce _modules/ignite with source links
        - pip install .
      script:
        - cd docs && make html
        # Create a .nojekyll file so that _static and friends are served correctly
        - touch build/html/.nojekyll
      after_success: # Nothing to do
      deploy:
        provider: pages
        skip-cleanup: true
        github-token: $GITHUB_TOKEN  # Set in the settings page of your repository, as a secure variable
        keep-history: false
        local_dir: docs/build/html
        on:
          branch: master
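
To preview locally what will be deployed, the built pages can be served with Python's built-in HTTP server (a sketch; any static file server would do):

cd docs && make html
cd build/html && python -m http.server 8000

Then open http://localhost:8000 in a browser.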

How to manually create a release

How to create and upload pip/conda builds

First, build a universal wheel and a source tarball:

git checkout vX.Y.Z
python setup.py sdist bdist_wheel
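
If everything went well, dist/ should contain two files similar to the following (names are illustrative; the py2.py3 tag appears only if universal wheels are enabled in setup.cfg):

ls dist/
# pytorch-ignite-X.Y.Z.tar.gz
# pytorch_ignite-X.Y.Z-py2.py3-none-any.whl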

Upload to PyPI

twine upload dist/*

or, for testing purposes, upload to test.pypi.org instead:

twine upload --repository-url https://test.pypi.org/legacy/ dist/*
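
To verify the test upload, the package can then be installed from test.pypi.org into a fresh environment (a sketch; --extra-index-url lets dependencies resolve from the regular index):

pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ pytorch-ignite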

Build and upload conda package

TODO: find a better way to build this. For now, generate a recipe from PyPI:

conda skeleton pypi pytorch-ignite

Since the conda dependency name for PyTorch (pytorch) differs from the pip dependency name (torch), edit the generated pytorch-ignite/meta.yaml file and replace torch with pytorch. When building for Python 3, also comment out the enum34 dependency.
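
For example, the substitution can be scripted (a sketch; it assumes the generated recipe lists the dependency exactly as "- torch"):

cd pytorch-ignite
sed -i 's/- torch$/- pytorch/' meta.yaml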

conda config --set anaconda_upload yes
anaconda login
conda build . --python 3.6
conda build . --python 3.5
# Do not forget to uncomment enum34 dependency
conda build . --python 2.7
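
If anaconda_upload is not enabled, the built packages can be uploaded by hand afterwards (the path is illustrative and depends on where conda is installed and the target platform):

anaconda upload ~/miniconda3/conda-bld/linux-64/pytorch-ignite-*.tar.bz2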


How to manually update documentation

To update the site manually, modify the gh-pages branch. For example, to regenerate the docs:

cd docs
pip install -r requirements.txt
make clean
make html
# copy build/html into gh-pages branch, commit, push
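
One way to perform the copy/commit step is with a git worktree (a sketch; the temporary path is arbitrary):

# from the docs/ directory, after `make html`
git worktree add /tmp/gh-pages gh-pages
cp -r build/html/. /tmp/gh-pages/
cd /tmp/gh-pages && git add -A && git commit -m "Update docs" && git push origin gh-pages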

README

Side-by-side code comparison

The image is created with PyCharm (Dracula theme) using the "Compare files" function, and the screenshot is resized to 1248px wide.

Ignite (left side):

model = Net()

train_loader, val_loader = get_data_loaders(train_batch_size, val_batch_size)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.8)

criterion = torch.nn.NLLLoss()

max_epochs = 10
validate_every = 100
checkpoint_every = 100


trainer = create_supervised_trainer(model, optimizer, criterion)

evaluator = create_supervised_evaluator(model, metrics={'accuracy': BinaryAccuracy()})


@trainer.on(Events.ITERATION_COMPLETED)
def validate(trainer):
    if trainer.state.iteration % validate_every == 0:
        evaluator.run(val_loader)
        metrics = evaluator.state.metrics
        print("After {} iterations, binary accuracy = {:.2f}"
              .format(trainer.state.iteration, metrics['accuracy']))


checkpointer = ModelCheckpoint(checkpoint_dir, 'my_model',
                               save_interval=checkpoint_every, create_dir=True)
trainer.add_event_handler(Events.ITERATION_COMPLETED, checkpointer, {'mymodel': model})


trainer.run(train_loader, max_epochs=max_epochs)

and the bare PyTorch snippet (right side):

model = Net()

train_loader, val_loader = get_data_loaders(train_batch_size, val_batch_size)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.8)

criterion = torch.nn.NLLLoss()

max_epochs = 10
validate_every = 100
checkpoint_every = 100


def validate(model, val_loader):
    model = model.eval()
    num_correct = 0
    num_examples = 0
    for batch in val_loader:
        input, target = batch
        output = model(input)
        correct = torch.eq(torch.round(output).type(target.type()), target).view(-1)
        num_correct += torch.sum(correct).item()
        num_examples += correct.shape[0]
    return num_correct / num_examples


def checkpoint(model, optimizer, checkpoint_dir):
    # ...
    pass


def train(model, optimizer, criterion,
          train_loader, val_loader,
          max_epochs, validate_every,
          checkpoint_every, checkpoint_dir):
    model = model.train()
    iteration = 0

    for epoch in range(max_epochs):
        for batch in train_loader:
            optimizer.zero_grad()
            input, target = batch
            output = model(input)
            loss = criterion(output, target)
            loss.backward()
            optimizer.step()

            if iteration % validate_every == 0:
                binary_accuracy = validate(model, val_loader)
                print("After {} iterations, binary accuracy = {:.2f}"
                      .format(iteration, binary_accuracy))
                model.train()  # validate() left the model in eval mode

            if iteration % checkpoint_every == 0:
                checkpoint(model, optimizer, checkpoint_dir)
            iteration += 1