Graphcore feature examples

These code examples demonstrate features that will enable you to make the most of the IPU. They are part of the Developer resources provided by Graphcore: https://www.graphcore.ai/developer.

Each of the examples contains its own README file with full instructions.

PopART

Efficiently use multiple IPUs and handle large models:

  • Phased execution: this example shows how to run a network over two IPUs by splitting it into several execution phases.
  • Pipelining: a simple model made of two dense layers, pipelined over two IPUs.
  • Recomputing: a demonstration of manual and automatic recomputing on the IPU.
  • Sharding: a simple model sharded on two IPUs (a minimal PopART sketch of this approach follows the list).
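
As a rough illustration of the sharding approach, the sketch below uses the PopART Python API to build a two-layer matrix-multiply model and pins each layer to its own virtual graph, one per IPU. The tensor shapes are arbitrary, and some keyword arguments (such as dataFlow) have changed name between Poplar SDK releases, so treat this as a starting point rather than the exact code of the example.

```python
import numpy as np
import popart

builder = popart.Builder()

# Input and two weight matrices, one per layer/IPU.
x = builder.addInputTensor(popart.TensorInfo("FLOAT", [1, 4]))
w0 = builder.addInitializedInputTensor(np.random.rand(4, 4).astype(np.float32))
w1 = builder.addInitializedInputTensor(np.random.rand(4, 4).astype(np.float32))

# First layer on virtual graph 0 (IPU 0), second on virtual graph 1 (IPU 1).
h = builder.aiOnnx.matmul([x, w0])
builder.virtualGraph(h, 0)
y = builder.aiOnnx.matmul([h, w1])
builder.virtualGraph(y, 1)

opts = popart.SessionOptions()
opts.virtualGraphMode = popart.VirtualGraphMode.Manual

session = popart.InferenceSession(
    fnModel=builder.getModelProto(),
    dataFlow=popart.DataFlow(1, {y: popart.AnchorReturnType("ALL")}),
    userOptions=opts,
    deviceInfo=popart.DeviceManager().acquireAvailableDevice(2))

session.prepareDevice()
anchors = session.initAnchorArrays()
stepio = popart.PyStepIO({x: np.random.rand(1, 4).astype(np.float32)}, anchors)
session.run(stepio)
print(anchors[y])
```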

Exchange data between host and IPU efficiently:

  • Callbacks: a simple computation graph that uses callbacks to feed data and retrieve the results.

Define custom operators:

Poplar

Exchange data between host and IPU efficiently:

  • Prefetch: a demonstration of prefetching data when a program runs several times.

Demonstrate advanced features of Poplar:

  • Advanced example: an example demonstrating several advanced features of Poplar, including saving and restoring Poplar executables, moving I/O into separate Poplar programs, and using our PopLibs framework.

TensorFlow 1

Debugging and analysis:

  • Inspecting tensors: an example that shows how outfeed queues can be used to return activation and gradient tensors to the host for inspection.

Efficiently use multiple IPUs and handle large models:

  • Pipelining: a simple model made of two dense layers, pipelined over two IPUs.
  • PopDist: an example showing how to make an application ready for distributed training and inference by using the PopDist library, and how to launch it with the PopRun distributed launcher.
  • Replication: an example showing how to use replication in TensorFlow to train a simple CIFAR-10 convolution model.
  • Sharding: a simple model sharded on two IPUs (a minimal sketch of manual sharding follows the list).
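
As a rough sketch of manual sharding in TensorFlow 1 (assuming the Graphcore TensorFlow 1.15 wheel and the pre-IPUConfig configuration helpers), the snippet below places one dense layer on each of two shards; the layer sizes and batch shapes are placeholders.

```python
import numpy as np
import tensorflow as tf
from tensorflow.python import ipu

def model(x):
    # Place the first layer on shard 0 and the second on shard 1.
    with ipu.scopes.ipu_shard(0):
        h = tf.layers.dense(x, 128, activation=tf.nn.relu)
    with ipu.scopes.ipu_shard(1):
        y = tf.layers.dense(h, 10)
    return y

x = tf.placeholder(tf.float32, [16, 64])

with ipu.scopes.ipu_scope("/device:IPU:0"):
    out = ipu.ipu_compiler.compile(model, [x])

# Request two IPUs so each shard gets its own device.
cfg = ipu.utils.create_ipu_config()
cfg = ipu.utils.auto_select_ipus(cfg, 2)
ipu.utils.configure_ipu_system(cfg)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(out, {x: np.random.rand(16, 64).astype(np.float32)}))
```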

Use estimators:

  • IPU Estimator: an example showing how to use the IPUEstimator to train and evaluate a simple CNN (a rough sketch of the setup follows).
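
A minimal sketch of the IPUEstimator flow is shown below. It assumes the TensorFlow 1 API with the older create_ipu_config helpers; the model, dataset, and hyperparameters are placeholders, and argument names may differ between SDK releases.

```python
import tensorflow as tf
from tensorflow.python import ipu

def model_fn(features, labels, mode):
    logits = tf.layers.dense(tf.layers.flatten(features), 10)
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    train_op = tf.train.AdamOptimizer().minimize(
        loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

def input_fn():
    (x, y), _ = tf.keras.datasets.mnist.load_data()
    dataset = tf.data.Dataset.from_tensor_slices(
        (x.astype("float32") / 255.0, y.astype("int32")))
    # Fixed batch shapes are required on the IPU.
    return dataset.shuffle(1000).batch(32, drop_remainder=True).repeat()

ipu_options = ipu.utils.create_ipu_config()
ipu_options = ipu.utils.auto_select_ipus(ipu_options, 1)

config = ipu.ipu_run_config.RunConfig(
    ipu_run_config=ipu.ipu_run_config.IPURunConfig(
        iterations_per_loop=100, ipu_options=ipu_options))

estimator = ipu.ipu_estimator.IPUEstimator(config=config, model_fn=model_fn)
estimator.train(input_fn=input_fn, steps=1000)
```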

Control IPU use:

  • Connection Type: a demonstration of controlling if and when an IPU device is acquired, using the set_ipu_connection_type option (see the sketch below).
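
The sketch below shows the general shape of this control, assuming the TensorFlow 1 configuration helpers; the exact signature of set_ipu_connection_type (for example whether an ipu_version argument is required) varies between SDK releases.

```python
from tensorflow.python import ipu

cfg = ipu.utils.create_ipu_config()
cfg = ipu.utils.auto_select_ipus(cfg, 1)

# Defer attaching to a physical IPU until a program actually runs.
# Other options include ALWAYS (attach at configuration time) and
# NEVER (compile only, without attaching to hardware).
cfg = ipu.utils.set_ipu_connection_type(
    cfg, ipu.utils.DeviceConnectionType.ON_DEMAND)

ipu.utils.configure_ipu_system(cfg)
```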

Define custom operators:

  • Custom operator: a simple custom operator that adds two vectors of arbitrary size, created in Poplar and used in a TensorFlow model.
  • Custom gradient: a custom operator for the batched dot product, defining both the forward operator and its gradient in Poplar, then used in a TensorFlow model.

TensorFlow 2

Debugging and analysis:

  • Inspecting tensors: an example that shows how outfeed queues can be used to return activation and gradient tensors to the host for inspection (a minimal sketch of the pattern follows).
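
A minimal sketch of the outfeed pattern in TensorFlow 2 is shown below, assuming the IPUConfig/IPUStrategy API from SDK 2.x; the layer and input shapes are placeholders.

```python
import tensorflow as tf
from tensorflow.python import ipu

# Configure a single IPU.
cfg = ipu.config.IPUConfig()
cfg.auto_select_ipus = 1
cfg.configure_ipu_system()

strategy = ipu.ipu_strategy.IPUStrategy()
outfeed = ipu.ipu_outfeed_queue.IPUOutfeedQueue()

with strategy.scope():
    dense = tf.keras.layers.Dense(10)

    @tf.function(experimental_compile=True)
    def step(x):
        activations = dense(x)
        # Send the intermediate tensor back to the host for inspection.
        outfeed.enqueue(activations)
        return tf.reduce_sum(activations)

    strategy.run(step, args=(tf.random.uniform([4, 16]),))

# On the host, dequeue everything enqueued so far.
activations = outfeed.dequeue()
print(activations.shape)  # leading dimension = number of enqueues
```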

Use estimators:

  • IPU Estimator: an example showing how to use the IPUEstimator to train and evaluate a simple CNN.

Specific layers:

  • Embeddings: an example of a model with an embedding layer and an LSTM, trained on the IPU to predict the sentiment of an IMDB review (a rough sketch of the model structure follows).
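
As a rough sketch of that model structure under an IPUStrategy (the vocabulary size, layer widths, and optimizer are placeholders; training would additionally need fixed-length, padded IMDB sequences):

```python
import tensorflow as tf
from tensorflow.python import ipu

cfg = ipu.config.IPUConfig()
cfg.auto_select_ipus = 1
cfg.configure_ipu_system()

strategy = ipu.ipu_strategy.IPUStrategy()
with strategy.scope():
    # Embedding layer feeding an LSTM, as in the IMDB sentiment example.
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(input_dim=20000, output_dim=128),
        tf.keras.layers.LSTM(128),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(loss="binary_crossentropy", optimizer="adam")
    # model.fit(...) would follow, using fixed-length padded sequences.
```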

PyTorch

Define custom operators:

  • Custom operators: an example showing how to make a PopART custom operator available to PopTorch and how to use it in a model (the PopTorch side is sketched below).
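
A minimal sketch of the PopTorch side is shown below; the shared-library path, operator name, and domain are placeholders for whatever your PopART custom operator build registers, and the custom_op signature may differ slightly between PopTorch releases.

```python
import ctypes
import torch
import poptorch

# Load the shared object produced by building the PopART custom operator
# (path, op name, and domain below are placeholders).
ctypes.cdll.LoadLibrary("build/custom_ops.so")

class ModelWithCustomOp(torch.nn.Module):
    def forward(self, x):
        # Invoke the operator by the name/domain it registered with PopART;
        # example_outputs tells PopTorch the output types and shapes.
        out = poptorch.custom_op(
            [x],
            name="LeakyRelu",
            domain="custom.ops",
            domain_version=1,
            example_outputs=[x])
        return out[0]

model = poptorch.inferenceModel(ModelWithCustomOp())
print(model(torch.randn(4)))
```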

Specific layers:

  • Octconv: an example showing how to use Octave Convolutions in PopTorch training and inference models.