Tutorial Documentation touch-up #155

Merged: 5 commits, Feb 22, 2022
8 changes: 7 additions & 1 deletion Makefile
@@ -111,9 +111,15 @@ cov:
# help: tutorials-dev - Build and start a docker container to run the tutorials
.PHONY: tutorials-dev
tutorials-dev:
-	@docker compose build tutorials
+	@docker compose build tutorials-dev
	@docker run -p 8888:8888 smartsim-tutorials:dev-latest

+# help: tutorials-prod - Build and start a docker container to run the tutorials (v0.4.0)
+.PHONY: tutorials-prod
+tutorials-prod:
+	@docker compose build tutorials-prod
+	@docker run -p 8888:8888 smartsim-tutorials:v0.4.0


# help:
# help: Test
18 changes: 9 additions & 9 deletions README.md
@@ -30,8 +30,8 @@

# SmartSim

-SmartSim makes it easier to use common Machine Learning (ML) libraries
-like PyTorch and TensorFlow, in High Performance Computing (HPC) simulations
+SmartSim is a workflow library that makes it easier to use common Machine Learning (ML)
+libraries, like PyTorch and TensorFlow, in High Performance Computing (HPC) simulations
and applications.

SmartSim provides an API to connect HPC workloads, particularly (MPI + X) simulations,
@@ -59,27 +59,27 @@ SmartSim supports the following ML libraries.
<tr>
<td rowspan="3">1.2.3-1.2.4</td>
<td>PyTorch</td>
-<td>1.7.0</td>
+<td>1.7.x</td>
</tr>
<tr>
<td>TensorFlow\Keras</td>
-<td>2.5.2</td>
+<td>2.4.x-2.5.x</td>
</tr>
<tr>
<td>ONNX</td>
-<td>1.7.0</td>
+<td>1.9.x</td>
</tr>
<tr>
<td rowspan="3">1.2.5</td>
<td>PyTorch</td>
-<td>1.9.1</td>
+<td>1.9.x</td>
</tr>
<tr>
<td>TensorFlow\Keras</td>
-<td>2.6.2</td>
+<td>2.6.x</td>
</tr>
<tr>
<td>ONNX</td>
-<td>1.9.0</td>
+<td>1.9.x</td>
</tr>
</tbody>
</table>
@@ -569,7 +569,7 @@ For more information on the API, see the
## Examples

Although clients rely on the Orchestrator database to be running, it can be helpful
to see examples of how the API is used without concerning ourselves with the
infrastructure code. The following examples provide samples of client usage
across different languages.

18 changes: 10 additions & 8 deletions doc/index.rst
@@ -3,7 +3,7 @@

.. toctree::
:maxdepth: 2
-   :caption: Overview of SmartSim
+   :caption: Getting Started

overview
installation
@@ -13,12 +13,11 @@
:maxdepth: 2
:caption: Tutorials

-   tutorials/01_getting_started/01_getting_started
-   tutorials/02_using_clients
-   tutorials/03_lattice_boltz_analysis
-   tutorials/04_inference
-   tutorials/05_training
-   tutorials/06_starting_ray/06_starting_ray_builtin
+   tutorials/getting_started/getting_started
+   tutorials/online_analysis/lattice/online_analysis
+   tutorials/ml_inference/Inference-in-SmartSim
+   tutorials/training
+   tutorials/ray/starting_ray


.. toctree::
@@ -35,6 +34,9 @@
:caption: SmartRedis

   smartredis
+   sr_python_walkthrough
+   sr_cpp_walkthrough
+   sr_fortran_walkthrough
   sr_data_structures
   sr_runtime
   api/smartredis_api
@@ -43,9 +45,9 @@
:maxdepth: 2
:caption: Reference

-   changelog
   code_of_conduct
   developer
+   changelog


Indices and tables
55 changes: 12 additions & 43 deletions doc/installation.rst
@@ -4,9 +4,6 @@ Installation

The following will show how to install both SmartSim and SmartRedis

-For instructions on installing SmartSim once for multiple users of
-a shared system, see :ref:`this section below <site-wide>`.

=============
Prerequisites
=============
@@ -55,10 +52,13 @@ Supported Versions
- Nvidia
- 3.7 - 3.9


.. note::

   Windows is not supported and there are currently no plans
   to support Windows.



SmartSim supports multiple machine learning libraries through
the use of RedisAI_. The following libraries are supported.
@@ -151,12 +151,14 @@ To see all the installation options:

smart

.. note::

   If the ``smart`` tool is not found, look for it in places like
   ``~/.local/bin`` and other ``bin`` locations and add it to your
   ``$PATH``.



CPU Install
-----------

@@ -177,12 +179,14 @@ To install the default ML backends for CPU, run
By default, ``smart`` will install PyTorch and TensorFlow backends
for use in SmartSim.

.. note::

   If a re-build is needed for any reason, ``smart clean`` will remove
   all of the previous installs for the ML backends and ``smart clobber`` will
   remove all pre-built dependencies as well as the ML backends.



GPU Install
-----------

@@ -208,47 +212,12 @@ For example, for bash do
smart build --device gpu --onnx # install all backends (PT, TF, ONNX) on gpu


.. note::

   Currently, SmartSim is solely compatible with NVIDIA GPUs on Linux systems
   and ``CUDA >= 11`` is required to build.


-Site-wide Installation
-======================
-
-.. _site-wide:
-
-Some users may wish to build SmartSim once, and have it available to
-all users of a system. When done, users will only ever have to install
-the Python package for smartsim which is fairly quick and painless.
-The following can be done by both a non-root and root user but for
-shared sites, it is highly recommended to consult with the site admins.
-
-To have a site wide install, do the following.
-
-1. Build SmartSim from source :ref:`as shown here <install-source>` for
-   the desired ML backend and device (GPU/CPU) platform your users are expected to use.
-2. Locate the `bin` and `lib` folders in `smartsim/_core/` and copy them
-   into a directory where you would like them to reside. Be sure this is a
-   location available to all compute nodes on the system (i.e. on the shared filesystem).
-3. Create a bash profile or modulefile that will set the user SmartSim environment as follows
-
-.. code-block:: bash
-
-   export RAI_PATH=/path/to/lib/redisai.so
-   export REDIS_PATH=/path/to/bin/redis-server
-   export REDIS_CLI_PATH=/path/to/bin/redis-cli
-
-   # optional settings
-   export SMARTSIM_LOG_LEVEL=debug  # (more verbose outputs)
-   export SMARTSIM_JM_INTERVAL=20   # (control how often SmartSim pings schedulers like Slurm)
-
-
-4. Lastly, have all users put this file into their .bashrc or .bash_profile
-   file. From then on, users will only have to run ``pip install smartsim`` and
-   everything will be installed each time.


----------------------------------------------------------------------

106 changes: 106 additions & 0 deletions doc/sr_cpp_walkthrough.rst
@@ -0,0 +1,106 @@
***
C++
***


In this section, examples are presented that use the SmartRedis C++
API to interact with the RedisAI tensor, model, and script
data types. An example of using the SmartRedis ``DataSet``
API is also provided.



.. note::

   The C++ API examples rely on the ``SSDB`` environment
   variable being set to the address and port of the Redis database.
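
For example, ``SSDB`` can be exported in the shell that launches the
examples; the address and port below are illustrative placeholders for
a running database shard, not values from the examples themselves:

```shell
# Hypothetical address/port; replace with the host:port of your Redis shard
export SSDB="127.0.0.1:6379"
```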


.. note::

   The C++ API examples are written
   to connect to a clustered database or clustered SmartSim Orchestrator.
   Update the ``Client`` constructor ``cluster`` flag to ``false``
   to connect to a single shard (single compute host) database.




Tensors
=======

The following example shows how to send and receive a tensor using the
SmartRedis C++ client API.

.. literalinclude:: ../smartredis/examples/serial/cpp/smartredis_put_get_3D.cpp
   :linenos:
   :language: C++

DataSets
========

The C++ client can store and retrieve tensors and metadata in datasets.
For further information about datasets, please refer to the :ref:`Dataset
section of the Data Structures documentation page <data_structures_dataset>`.

The code below shows how to store and retrieve tensors and metadata
which belong to a ``DataSet``.

.. literalinclude:: ../smartredis/examples/serial/cpp/smartredis_dataset.cpp
   :linenos:
   :language: C++

.. _SR CPP Models:


Models
======

The following example shows how to store and use a DL model
in the database with the C++ Client. The model is stored as a file
in the ``../../../common/mnist_data/`` path relative to the
compiled executable. Note that this example also sets and
executes a preprocessing script.

.. literalinclude:: ../smartredis/examples/serial/cpp/smartredis_model.cpp
   :linenos:
   :language: C++

.. _SR CPP Scripts:

Scripts
=======

The example in :ref:`SR CPP Models` shows how to store and use a PyTorch script
in the database with the C++ Client. The script is stored as a file
in the ``../../../common/mnist_data/`` path relative to the
compiled executable. Note that this example also sets and
executes a PyTorch model.

.. _SR CPP Parallel MPI:

Parallel (MPI) execution
========================

Here, the example shown in :ref:`SR CPP Models` and
:ref:`SR CPP Scripts` is adapted to run in parallel using MPI.
The functionality is the same; however, this version shows how
keys can be prefixed to prevent key collisions across MPI ranks.
Note that only one model and one script are set, shared across
all ranks.

For completeness, the pre-processing script
source code is also shown.

**C++ program**

.. literalinclude:: ../smartredis/examples/parallel/cpp/smartredis_mnist.cpp
   :linenos:
   :language: C++

**Python Pre-Processing**

.. literalinclude:: ../smartredis/examples/common/mnist_data/data_processing_script.txt
   :linenos:
   :language: Python
   :lines: 15-20
