change FedML into TensorOpera AI #104

Merged: 1 commit, May 10, 2024.

Changes from all commits
10 changes: 5 additions & 5 deletions docs/federate/cli.md
@@ -26,7 +26,7 @@ fedml version

```

## 1. Login to the FedML MLOps platform (fedml.ai)
## 1. Login to the TensorOpera AI platform (fedml.ai)
Log in as a client with local pip mode:
```
fedml login userid(or API Key)
@@ -52,19 +52,19 @@ login as edge server with docker mode:
fedml login userid(or API Key) -s --docker --docker-rank rank_index
```

### 1.1. Examples for Logging in to the FedML MLOps platform (fedml.ai)
### 1.1. Examples for Logging in to the TensorOpera AI platform (fedml.ai)

```
fedml login 90
Notes: this will login the production environment for FedML MLOps platform
Notes: this will login the production environment for TensorOpera AI platform
```

```
fedml login 90 --docker --docker-rank 1
Notes: this will login the production environment with docker mode for FedML MLOps platform
Notes: this will login the production environment with docker mode for TensorOpera AI platform
```

## 2. Build the client and server package in the FedML MLOps platform (fedml.ai)
## 2. Build the client and server package in the TensorOpera AI platform (fedml.ai)

```
fedml build -t client(or server) -sf source_folder -ep entry_point_file -cf config_folder -df destination_package_folder --ignore ignore_file_and_directory(concat with ,)
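# A hypothetical concrete invocation of the general form above (folder names, entry point,
# and ignore list are placeholders, not values from the docs):
#   fedml build -t client -sf ./my_client -ep torch_client.py -cf ./config -df ./dist --ignore __pycache__,.git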
20 changes: 10 additions & 10 deletions docs/federate/cross-device/tutorial.md
@@ -67,12 +67,12 @@ Next show you the step-by-step user experiment of using FedML Beehive.

![./../_static/image/launch_android_app.png](./../_static/image/launch_android_app.png)

## 2. Bind FedML Android App to FedML MLOps Platform
## 2. Bind FedML Android App to TensorOpera AI Platform

This section guides you through 1) installing Android Apk, 2) binding your Android smartphone devices to FedML MLOps Platform, and 3) set the data path for training.
This section guides you through 1) installing Android Apk, 2) binding your Android smartphone devices to TensorOpera AI Platform, and 3) set the data path for training.

### 2.1 Connect Android App with FedML MLOps Platform
After installing FedML Android App ([https://github.com/FedML-AI/FedML/tree/master/android/app](https://github.com/FedML-AI/FedML/tree/master/android/app)), please go to the MLOps platform ([https://open.fedml.ai](https://open.fedml.ai)) - Beehive and switch to the `Edge Devices` page, you can see a list of **My Edge Devices** at the bottom, as well as a QR code and **Account Key** at the top right.
### 2.1 Connect Android App with TensorOpera AI Platform
After installing FedML Android App ([https://github.com/FedML-AI/FedML/tree/master/android/app](https://github.com/FedML-AI/FedML/tree/master/android/app)), please go to the MLOps platform ([https://TensorOpera.ai](https://TensorOpera.ai)) - Beehive and switch to the `Edge Devices` page, you can see a list of **My Edge Devices** at the bottom, as well as a QR code and **Account Key** at the top right.

![./../_static/image/beehive-device.png](./../_static/image/beehive-device.png)

@@ -122,11 +122,11 @@ To set data path on your device, click the top green bar. Set it as the path to

#### 3. **Deploy FL Server**

- Create an account at FedML MLOps Platform ([https://open.fedml.ai](https://open.fedml.ai))
- Create an account at TensorOpera AI Platform ([https://TensorOpera.ai](https://TensorOpera.ai))

- Run local test fo

- Build Python Server Package and Upload to FedML MLOps Platform ("Create Application")
- Build Python Server Package and Upload to TensorOpera AI Platform ("Create Application")

Our example code is provided at:
[https://github.com/FedML-AI/FedML/tree/master/python/examples/federate/quick_start/beehive](https://github.com/FedML-AI/FedML/tree/master/python/examples/federate/quick_start/beehive)
@@ -143,11 +143,11 @@ bash build_mlops_pkg.sh
```
After correct execution, you can find the package `server-package.zip` under `mlops` folder.

3) Then you need to upload the `server-package.zip` package to FedML MLOps Platform as the UI shown below.
3) Then you need to upload the `server-package.zip` package to TensorOpera AI Platform as the UI shown below.

![./../_static/image/android-pkg-uploading.png](./../_static/image/android-pkg-uploading.png)

- Launch the training by using FedML MLOps ([https://open.fedml.ai](https://open.fedml.ai))
- Launch the training by using TensorOpera AI Platform ([https://TensorOpera.ai](https://TensorOpera.ai))

Steps at MLOps: create group -> create project -> create run -> select application (the one we uploaded the Android server package for) -> start run

@@ -188,7 +188,7 @@ or
<meta-data android:name="fedml_account" android:resource="@string/fed_ml_account" />
```

You can find your account ID at FedML Open Platform (https://open.fedml.ai):
You can find your account ID at FedML Open Platform (https://TensorOpera.ai):
![account](./../_static/image/beehive_account.png)

4. Initialize the FedML Android SDK in your `Application` class.
@@ -234,7 +234,7 @@ This is the message flow to interact between FedML Android SDK and your host APP

- ai.fedml.edge.request.RequestManager

This is used to connect your Android SDK with FedML Open Platform (https://open.fedml.ai), which helps you to simplify the deployment, edge collaborative training, experimental tracking, and more.
This is used to connect your Android SDK with TensorOpera AI Platform (https://TensorOpera.ai), which helps you to simplify the deployment, edge collaborative training, experimental tracking, and more.

You can import them in your Java/Android projects as follows. See [https://github.com/FedML-AI/FedML/blob/master/android/fedmlsdk_demo/src/main/java/ai/fedml/edgedemo/ui/main/MainFragment.java](https://github.com/FedML-AI/FedML/blob/master/android/fedmlsdk_demo/src/main/java/ai/fedml/edgedemo/ui/main/MainFragment.java) as an example.
```
@@ -343,9 +343,9 @@ if __name__ == "__main__":
```


## A Better User-experience with FedML MLOps (fedml.ai)
## A Better User-experience with TensorOpera AI (fedml.ai)
To reduce the difficulty and complexity of these CLI commands, we recommend you use our MLOps platform (fedml.ai).
FedML MLOps provides:
TensorOpera AI provides:
- Install Client Agent and Login
- Inviting Collaborators and group management
- Project Management
@@ -283,9 +283,9 @@ if __name__ == "__main__":
```


## A Better User-experience with FedML MLOps (fedml.ai)
## A Better User-experience with TensorOpera AI (fedml.ai)
To reduce the difficulty and complexity of these CLI commands, we recommend you use our MLOps platform (fedml.ai).
FedML MLOps provides:
TensorOpera AI provides:
- Install Client Agent and Login
- Inviting Collaborators and group management
- Project Management
@@ -425,9 +425,9 @@ if __name__ == "__main__":

![img.png](cross_silo_hi_arch_refactored.png)

## A Better User-experience with FedML MLOps (fedml.ai)
## A Better User-experience with TensorOpera AI (fedml.ai)
To reduce the difficulty and complexity of these CLI commands, we recommend you use our MLOps platform (fedml.ai).
FedML MLOps provides:
TensorOpera AI provides:
- Install Client Agent and Login
- Inviting Collaborators and group management
- Project Management
@@ -270,9 +270,9 @@ if __name__ == "__main__":
```


## A Better User-experience with FedML MLOps (fedml.ai)
## A Better User-experience with TensorOpera AI (fedml.ai)
To reduce the difficulty and complexity of these CLI commands, we recommend you use our MLOps platform (fedml.ai).
FedML MLOps provides:
TensorOpera AI provides:
- Install Client Agent and Login
- Inviting Collaborators and group management
- Project Management
2 changes: 1 addition & 1 deletion docs/federate/cross-silo/overview.md
@@ -29,7 +29,7 @@ where different data silos may have different numbers of GPUs or even multiple n
![./../_static/image/cross-silo-hi.png](./../_static/image/cross-silo-hi.png)

FedML Octopus addresses this challenge by enabling a distributed training paradigm (PyTorch DDP, distributed data parallel) to run inside each data silo, and by further orchestrating different silos with asynchronous or synchronous federated optimization methods.
As a result, FedML Octopus can support this scenario in a flexible, secure, and efficient manner. FedML MLOps platform also simplifies its real-world deployment.
As a result, FedML Octopus can support this scenario in a flexible, secure, and efficient manner. TensorOpera AI platform also simplifies its real-world deployment.
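
For intuition, the intra-silo half of that design is ordinary PyTorch DDP. The sketch below is a simplified, hedged illustration of what one training round inside a single silo could look like (it is not FedML Octopus's actual implementation); the cross-silo federated aggregation is handled separately by the platform.

```
# Hedged sketch: plain PyTorch DDP inside a single silo (illustration only, not FedML internals).
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def train_one_round_inside_silo(model, dataloader, loss_fn, lr=0.01):
    if not dist.is_initialized():
        dist.init_process_group(backend="nccl")        # one process per GPU in this silo
    local_rank = dist.get_rank() % torch.cuda.device_count()
    ddp_model = DDP(model.cuda(local_rank), device_ids=[local_rank])
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=lr)
    for x, y in dataloader:
        optimizer.zero_grad()
        loss = loss_fn(ddp_model(x.cuda(local_rank)), y.cuda(local_rank))
        loss.backward()                                # DDP all-reduces gradients across the silo's GPUs
        optimizer.step()
    # Only the resulting silo-level model update is exchanged with the federated server.
    return ddp_model.module.state_dict()
```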


Please read the [examples and tutorial](./example/example.md) for details.
6 changes: 3 additions & 3 deletions docs/federate/cross-silo/user_guide.md
@@ -9,7 +9,7 @@ https://www.youtube.com/embed/Xgm0XEaMlVQ

**Write Once, Run Anywhere: Seamlessly Migrate Your Local Development to the Real-world Edge-cloud Deployment**

- How Does FedML MLOps Platform Work?
- How Does TensorOpera AI Platform Work?
- Local Development and Building MLOps Packages
- Create Application and Upload Local Packages
- Install FedML Agent: fedml login $account_id
@@ -18,7 +18,7 @@ https://www.youtube.com/embed/Xgm0XEaMlVQ
- Experimental Tracking via Simplified Project Management
- FedML OTA (Over-the-Air) upgrade mechanism

### How Does FedML MLOps Platform Work?
### How Does TensorOpera AI Platform Work?

![image](../_static/image/mlops_workflow_new.png) \
Figure 1: the workflow describing how MLOps works
@@ -157,7 +157,7 @@ login: edge_id = 266
subscribe: flserver_agent/266/start_train
subscribe: flserver_agent/266/stop_train
subscribe: fl_client/flclient_agent_266/status
Congratulations, you have logged into the FedML MLOps platform successfully!
Congratulations, you have logged into the TensorOpera AI platform successfully!
Your device id is @0xb6ff42da6a7e.MacOS. You may review the device in the MLOps edge device list.
```

4 changes: 2 additions & 2 deletions docs/federate/getting_started.md
@@ -224,9 +224,9 @@ Hierarchical Federated Learning:

[https://tensoropera.ai](https://tensoropera.ai)

Currently, the project developed based on FedML Octopus (cross-silo) and Beehive (cross-device) can be smoothly deployed into the real-world system using FedML MLOps.
Currently, the project developed based on FedML Octopus (cross-silo) and Beehive (cross-device) can be smoothly deployed into the real-world system using TensorOpera AI.

The FedML MLOps Platform simplifies the workflow of federated learning from anywhere and at any scale.
The TensorOpera AI Platform simplifies the workflow of federated learning from anywhere and at any scale.
It enables zero-code, lightweight, cross-platform, and provably secure federated learning.
It enables machine learning from decentralized data at various users/silos/edge nodes, without the need to centralize any data to the cloud, hence providing maximum privacy and efficiency.

4 changes: 2 additions & 2 deletions docs/launch/on-cloud/cloud-cluster.md
@@ -166,7 +166,7 @@ You can run as many consequent jobs as you like on your cluster now. It will que
Submitting your job to TensorOpera AI Platform: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.92k/2.92k [00:00<00:00, 17.4kB/s]

You can track your run details at this URL:
https://open.fedml.ai/train/project/run?projectId=1717276102352834560&runId=1717314053350756352
https://TensorOpera.ai/train/project/run?projectId=1717276102352834560&runId=1717314053350756352

For querying the realtime status of your run, please run the following command.
fedml run logs -rid 1717314053350756352
@@ -177,7 +177,7 @@ fedml run logs -rid 1717314053350756352
Submitting your job to TensorOpera AI Platform: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.92k/2.92k [00:00<00:00, 11.8kB/s]

You can track your run details at this URL:
https://open.fedml.ai/train/project/run?projectId=1717276102352834560&runId=1717314101526532096
https://TensorOpera.ai/train/project/run?projectId=1717276102352834560&runId=1717314101526532096

For querying the realtime status of your run, please run the following command.
fedml run logs -rid 1717314101526532096
2 changes: 1 addition & 1 deletion docs/launch/on-prem/install.md
@@ -46,7 +46,7 @@ Requirement already satisfied: numpy>=1.21 in ./.pyenv/versions/fedml/lib/python
.
.

Congratulations, your device is connected to the FedML MLOps platform successfully!
Congratulations, your device is connected to the TensorOpera AI platform successfully!
Your FedML Edge ID is 201610, unique device ID is 0xffdc89fad658@Linux.Edge.Device
```

2 changes: 1 addition & 1 deletion docs/launch/share-and-earn.md
@@ -32,7 +32,7 @@ Below is output of command when executed on a TensorOpera® GPU server:

(fedml) alay@a6000:~$

Congratulations, your device is connected to the FedML MLOps platform successfully!
Congratulations, your device is connected to the TensorOpera AI platform successfully!
Your FedML Edge ID is 1717367167533584384, unique device ID is 0xa11081eb21f1@Linux.Edge.GPU.Supplier

You may visit the following url to fill in more information with your device.
2 changes: 1 addition & 1 deletion docs/open-source/api/api-deploy.md
@@ -10,7 +10,7 @@ sidebar_position: 3
:::tip
Before using some of the APIs that require remote operation (e.g. `fedml.api.model_push()`),
please use one of the following methods to log in
to FedML MLOps platform first:
to TensorOpera AI platform first:

1. CLI: `fedml login $api_key`
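
As a rough, non-authoritative sketch: once logged in via the CLI above, the deploy API named in this tip could be called from Python along these lines (the argument shown is a placeholder assumption, not the documented signature).

```
# Hedged sketch only: assumes a prior `fedml login $api_key`; the argument to
# model_push is a placeholder, not the documented signature.
import fedml

fedml.api.model_push(name="my-model")  # model_push is referenced above; its exact arguments are assumed
```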

2 changes: 1 addition & 1 deletion docs/open-source/api/api-launch.md
@@ -11,7 +11,7 @@ Simple launcher APIs for running any AI job across multiple public and/or decent

:::tip
Before using some of the APIs that require remote operation (e.g. `fedml.api.launch_job()`), please use one of the following methods to log in
to FedML MLOps platform first:
to TensorOpera AI platform first:

1. CLI: `fedml login $api_key`
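
For orientation only, a hedged sketch of submitting a job through the Python API after logging in; `fedml.api.fedml_login` and the `yaml_file` keyword are assumptions here, so check the API reference for the authoritative signatures.

```
# Hedged sketch only: function and keyword names below are assumptions, not taken from this diff.
import fedml

fedml.api.fedml_login(api_key="YOUR_API_KEY")        # assumed API-key login helper
result = fedml.api.launch_job(yaml_file="job.yaml")  # launch_job is referenced above; kwarg name assumed
print(result)
```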

2 changes: 1 addition & 1 deletion docs/open-source/api/api-storage.md
@@ -10,7 +10,7 @@ Storage APIs help in managing all the data needs that is typically associated wi

:::tip
Before using some of the APIs that require remote operation (e.g. `fedml.api.launch_job()`), please use one of the following methods to log in
to FedML MLOps platform first:
to TensorOpera AI platform first:

1. CLI: `fedml login $api_key`

2 changes: 1 addition & 1 deletion docs/open-source/cli/fedml-federate.md
@@ -62,7 +62,7 @@ computing:
maximum_cost_per_hour: $3000 # max cost per hour for your job per gpu card
#allow_cross_cloud_resources: true # true, false
#device_type: CPU # options: GPU, CPU, hybrid
resource_type: A100-80G # e.g., A100-80G, please check the resource type list by "fedml show-resource-type" or visiting URL: https://open.fedml.ai/accelerator_resource_type
resource_type: A100-80G # e.g., A100-80G, please check the resource type list by "fedml show-resource-type" or visiting URL: https://TensorOpera.ai/accelerator_resource_type
data_args:
dataset_name: mnist
dataset_path: ./dataset
2 changes: 1 addition & 1 deletion docs/open-source/cli/fedml-model.md
@@ -86,7 +86,7 @@ Check your device id for master role and worker role.
Welcome to FedML.ai!
Start to login the current device to the TensorOpera AI Platform

Congratulations, your device is connected to the FedML MLOps platform successfully!
Congratulations, your device is connected to the TensorOpera AI platform successfully!
Your FedML Edge ID is xxx, unique device ID is xxx, master deploy ID is 31240, worker deploy ID is 31239
```
From the above output, we can see that the master deploy ID is 31240 and the worker deploy ID is 31239.
2 changes: 1 addition & 1 deletion docs/open-source/cli/fedml-train.md
@@ -41,7 +41,7 @@ computing:
maximum_cost_per_hour: $3000 # max cost per hour for your job per gpu card
#allow_cross_cloud_resources: true # true, false
#device_type: CPU # options: GPU, CPU, hybrid
resource_type: A100-80G # e.g., A100-80G, please check the resource type list by "fedml show-resource-type" or visiting URL: https://open.fedml.ai/accelerator_resource_type
resource_type: A100-80G # e.g., A100-80G, please check the resource type list by "fedml show-resource-type" or visiting URL: https://TensorOpera.ai/accelerator_resource_type

data_args:
dataset_name: mnist
2 changes: 1 addition & 1 deletion docs/open-source/installation/docker.md
@@ -46,7 +46,7 @@ ddocker run -v $LOCAL_WORKSPACE:$DOCKER_WORKSPACE --shm-size=64g --ulimit nofile

**(3) Run examples**

Now, you should now be inside the container. First, you need to log into the MLOps platform. The `USERID` placeholder used below refers to your user id in the FedML MLOps platform:
Now, you should now be inside the container. First, you need to log into the MLOps platform. The `USERID` placeholder used below refers to your user id in the TensorOpera AI platform:
```
root@142ffce4cdf8:/#
root@142ffce4cdf8:/# fedml login <USERID>
2 changes: 1 addition & 1 deletion docs/open-source/installation/linux.md
@@ -32,7 +32,7 @@ The entire workflow is as follows:
2. Deploy the fedml client: ```kubectl apply -f ./fedml-edge-client-server/deployment-client.yml```
3. In the file fedml-edge-client-server/deployment-server.yml, modify the variable ACCOUNT_ID to your desired value
4. Deploy the fedml server: ```kubectl apply -f ./fedml-edge-client-server/deployment-server.yml```
5. Login the FedML MLOps platform (https://tensoropera.ai), the above deployed client and server will be found in the edge devices
5. Login the TensorOpera AI platform (https://tensoropera.ai), the above deployed client and server will be found in the edge devices

If you want to scale up or scale down the pods to your desired count, you may run the following command:

4 changes: 2 additions & 2 deletions docs/share-and-earn/share-and-earn.md
@@ -84,7 +84,7 @@ device_count = 0
No GPU devices

======== Network Connection Checking ========
The connection to https://open.fedml.ai is OK.
The connection to https://TensorOpera.ai is OK.

The connection to S3 Object Storage is OK.

@@ -124,7 +124,7 @@ Below is output of command when executed on a FedML® GPU server:

(fedml) alay@a6000:~$

Congratulations, your device is connected to the FedML MLOps platform successfully!
Congratulations, your device is connected to the TensorOpera AI platform successfully!
Your FedML Edge ID is 1717367167533584384, unique device ID is 0xa11081eb21f1@Linux.Edge.GPU.Supplier

You may visit the following url to fill in more information with your device.
Binary file modified (not shown): docs/train/train-on-cloud/static/image/.DS_Store
4 changes: 2 additions & 2 deletions docs/train/train-on-prem/train_on_cloud_cluster.md
@@ -142,7 +142,7 @@ You can run as many consequent jobs as you like on your cluster now. It will que
Submitting your job to TensorOpera AI Platform: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.92k/2.92k [00:00<00:00, 17.4kB/s]

You can track your run details at this URL:
https://open.fedml.ai/train/project/run?projectId=1717276102352834560&runId=1717314053350756352
https://TensorOpera.ai/train/project/run?projectId=1717276102352834560&runId=1717314053350756352

For querying the realtime status of your run, please run the following command.
fedml run logs -rid 1717314053350756352
@@ -153,7 +153,7 @@ fedml run logs -rid 1717314053350756352
Submitting your job to TensorOpera AI Platform: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.92k/2.92k [00:00<00:00, 11.8kB/s]

You can track your run details at this URL:
https://open.fedml.ai/train/project/run?projectId=1717276102352834560&runId=1717314101526532096
https://TensorOpera.ai/train/project/run?projectId=1717276102352834560&runId=1717314101526532096

For querying the realtime status of your run, please run the following command.
fedml run logs -rid 1717314101526532096