Maintainer Notes

How to create and upload pip/conda builds

First, build a universal wheel and a source tarball:

git checkout vX.Y.Z
python setup.py sdist bdist_wheel
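After this step, dist/ should contain a universal wheel and a source tarball, roughly like the following (exact file names depend on the tag being built):

# illustrative check of the build artifacts
ls dist/
# pytorch_ignite-X.Y.Z-py2.py3-none-any.whl
# pytorch-ignite-X.Y.Z.tar.gz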

Upload to PyPI

twine upload dist/*

or, for testing purposes, it is possible to upload to test.pypi instead:

twine upload --repository-url https://test.pypi.org/legacy/ dist/*
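To sanity-check a test upload, the package can be installed back from test.pypi; a minimal sketch (the real PyPI is added as an extra index so that dependencies such as torch can still be resolved):

pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ pytorch-ignite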

Build and upload conda package

conda skeleton pypi pytorch-ignite

Since the conda dependency name for PyTorch (pytorch) differs from the pip dependency name (torch), we need to edit the generated pytorch-ignite/meta.yaml and replace torch -> pytorch. If building with Python 3, also remove enum34.
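Both edits can be done by hand in an editor, or with a couple of sed one-liners, sketched below (this assumes GNU sed and that the generated recipe lists the dependencies simply as "- torch" and "- enum34"):

# swap the pip dependency name for the conda one in the generated recipe
sed -i 's/- torch$/- pytorch/' pytorch-ignite/meta.yaml
# for a Python 3 build, drop the enum34 backport as well
sed -i '/- enum34/d' pytorch-ignite/meta.yaml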

conda config --set anaconda_upload yes
anaconda login
conda build pytorch-ignite
conda build pytorch-ignite --python 2.7
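Once the build and upload succeed, the package can be sanity-checked by installing it from the personal Anaconda channel it was uploaded to (the channel placeholder below is the account used with anaconda login; the package name follows the generated recipe):

conda install -c <your-anaconda-username> pytorch-ignite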

More info here

How to update documentation

All you have to do to update the site is modify the gh-pages branch. For example, to regenerate the docs:

cd docs
pip install -r requirements.txt
make clean
make html
# copy build/html into gh-pages branch, commit, push
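The last step in the comment above is manual; one way to do it is sketched below, assuming docs/build is not tracked by git so it survives the branch switch (paths and commit message are illustrative):

cd ..                      # back to the repository root
git checkout gh-pages
cp -r docs/build/html/* .
git add .
git commit -m "Update documentation"
git push origin gh-pages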

README

Side-by-side code compare

The image is created with PyCharm (Dracula theme) using the "Compare files" function, with the screenshot resized to a width of 1248px.

Ignite (left side):

model = Net()

train_loader, val_loader = get_data_loaders(train_batch_size, val_batch_size)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.8)

criterion = torch.nn.NLLLoss()

max_epochs = 10
validate_every = 100
checkpoint_every = 100


trainer = create_supervised_trainer(model, optimizer, criterion)

evaluator = create_supervised_evaluator(model, metrics={'accuracy': BinaryAccuracy()})


@trainer.on(Events.ITERATION_COMPLETED)
def validate(trainer):
    if trainer.state.iteration % validate_every == 0:
        evaluator.run(val_loader)
        metrics = evaluator.state.metrics
        print("After {} iterations, binary accuracy = {:.2f}"
              .format(trainer.state.iteration, metrics['accuracy']))


checkpointer = ModelCheckpoint(checkpoint_dir, 'my_model',
                               save_interval=checkpoint_every, create_dir=True)
trainer.add_event_handler(Events.ITERATION_COMPLETED, checkpointer, {'mymodel': model})


trainer.run(train_loader, max_epochs=max_epochs)

and the bare PyTorch snippet (right side):

model = Net()

train_loader, val_loader = get_data_loaders(train_batch_size, val_batch_size)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.8)

criterion = torch.nn.NLLLoss()

max_epochs = 10
validate_every = 100
checkpoint_every = 100


def validate(model, val_loader):
    model = model.eval()
    num_correct = 0
    num_examples = 0
    for batch in val_loader:
        input, target = batch
        output = model(input)
        correct = torch.eq(torch.round(output).type(target.type()), target).view(-1)
        num_correct += torch.sum(correct).item()
        num_examples += correct.shape[0]
    return num_correct / num_examples


def checkpoint(model, optimizer, checkpoint_dir):
    # ...
    pass


def train(model, optimizer, criterion,
          train_loader, val_loader,
          max_epochs, validate_every,
          checkpoint_every, checkpoint_dir):
    model = model.train()
    iteration = 0

    for epoch in range(max_epochs):
        for batch in train_loader:
            optimizer.zero_grad()
            input, target = batch
            output = model(input)
            loss = criterion(output, target)
            loss.backward()
            optimizer.step()

            if iteration % validate_every == 0:
                binary_accuracy = validate(model, val_loader)
                model.train()  # validate() switched the model to eval mode
                print("After {} iterations, binary accuracy = {:.2f}"
                      .format(iteration, binary_accuracy))

            if iteration % checkpoint_every == 0:
                checkpoint(model, optimizer, checkpoint_dir)
            iteration += 1