diff --git a/docs/federate/cli.md b/docs/federate/cli.md index 199b2c4..fc9bbe6 100644 --- a/docs/federate/cli.md +++ b/docs/federate/cli.md @@ -26,7 +26,7 @@ fedml version ``` -## 1. Login to the FedML MLOps platform (fedml.ai) +## 1. Login to the TensorOpera AI platform (fedml.ai) login as client with local pip mode: ``` fedml login userid(or API Key) @@ -52,19 +52,19 @@ login as edge server with docker mode: fedml login userid(or API Key) -s --docker --docker-rank rank_index ``` -### 1.1. Examples for Logging in to the FedML MLOps platform (fedml.ai) +### 1.1. Examples for Logging in to the TensorOpera AI platform (fedml.ai) ``` fedml login 90 -Notes: this will login the production environment for FedML MLOps platform +Note: this will log in to the production environment of the TensorOpera AI platform ``` ``` fedml login 90 --docker --docker-rank 1 -Notes: this will login the production environment with docker mode for FedML MLOps platform +Note: this will log in to the production environment of the TensorOpera AI platform in docker mode ``` -## 2. Build the client and server package in the FedML MLOps platform (fedml.ai) +## 2. Build the client and server package in the TensorOpera AI platform (fedml.ai) ``` fedml build -t client(or server) -sf source_folder -ep entry_point_file -cf config_folder -df destination_package_folder --ignore ignore_file_and_directory(concat with ,) diff --git a/docs/federate/cross-device/tutorial.md b/docs/federate/cross-device/tutorial.md index a7017e7..2e9bd05 100644 --- a/docs/federate/cross-device/tutorial.md +++ b/docs/federate/cross-device/tutorial.md @@ -67,12 +67,12 @@ Next show you the step-by-step user experiment of using FedML Beehive. ![./../_static/image/launch_android_app.png](./../_static/image/launch_android_app.png) -## 2. Bind FedML Android App to FedML MLOps Platform +## 2. 
Bind FedML Android App to TensorOpera AI Platform -This section guides you through 1) installing Android Apk, 2) binding your Android smartphone devices to FedML MLOps Platform, and 3) set the data path for training. +This section guides you through 1) installing the Android APK, 2) binding your Android smartphone devices to the TensorOpera AI Platform, and 3) setting the data path for training. -### 2.1 Connect Android App with FedML MLOps Platform -After installing FedML Android App ([https://github.com/FedML-AI/FedML/tree/master/android/app](https://github.com/FedML-AI/FedML/tree/master/android/app)), please go to the MLOps platform ([https://open.fedml.ai](https://open.fedml.ai)) - Beehive and switch to the `Edge Devices` page, you can see a list of **My Edge Devices** at the bottom, as well as a QR code and **Account Key** at the top right. +### 2.1 Connect Android App with TensorOpera AI Platform +After installing the FedML Android App ([https://github.com/FedML-AI/FedML/tree/master/android/app](https://github.com/FedML-AI/FedML/tree/master/android/app)), please go to the MLOps platform ([https://tensoropera.ai](https://tensoropera.ai)) - Beehive and switch to the `Edge Devices` page, where you can see a list of **My Edge Devices** at the bottom, as well as a QR code and **Account Key** at the top right. ![./../_static/image/beehive-device.png](./../_static/image/beehive-device.png) @@ -122,11 +122,11 @@ To set data path on your device, click the top green bar. Set it as the path to #### 3. 
**Deploy FL Server** -- Create an account at FedML MLOps Platform ([https://open.fedml.ai](https://open.fedml.ai)) +- Create an account at the TensorOpera AI Platform ([https://tensoropera.ai](https://tensoropera.ai)) - Run local test fo -- Build Python Server Package and Upload to FedML MLOps Platform ("Create Application") +- Build Python Server Package and Upload to TensorOpera AI Platform ("Create Application") Our example code is provided at: [https://github.com/FedML-AI/FedML/tree/master/python/examples/federate/quick_start/beehive]https://github.com/FedML-AI/FedML/tree/master/python/examples/federate/quick_start/beehive) @@ -143,11 +143,11 @@ bash build_mlops_pkg.sh ``` After correct execution, you can find the package `server-package.zip` under `mlops` folder. -3) Then you need to upload the `server-package.zip` package to FedML MLOps Platform as the UI shown below. +3) Then you need to upload the `server-package.zip` package to the TensorOpera AI Platform as shown in the UI below. ![./../_static/image/android-pkg-uploading.png](./../_static/image/android-pkg-uploading.png) -- Launch the training by using FedML MLOps ([https://open.fedml.ai](https://open.fedml.ai)) +- Launch the training by using the TensorOpera AI Platform ([https://tensoropera.ai](https://tensoropera.ai)) Steps at MLOps: create group -> create project -> create run -> select application (the one we uploaded server package for Android) -> start run @@ -188,7 +188,7 @@ or ``` -You can find your account ID at FedML Open Platform (https://open.fedml.ai): +You can find your account ID at the TensorOpera AI Platform (https://tensoropera.ai): ![account](./../_static/image/beehive_account.png) 4. initial FedML Android SDK on your `Application` class. 
@@ -234,7 +234,7 @@ This is the message flow to interact between FedML Android SDK and your host APP - ai.fedml.edge.request.RequestManager -This is used to connect your Android SDK with FedML Open Platform (https://open.fedml.ai), which helps you to simplify the deployment, edge collaborative training, experimental tracking, and more. +This is used to connect your Android SDK with the TensorOpera AI Platform (https://tensoropera.ai), which helps you simplify the deployment, edge collaborative training, experimental tracking, and more. You can import them in your Java/Android projects as follows. See [https://github.com/FedML-AI/FedML/blob/master/android/fedmlsdk_demo/src/main/java/ai/fedml/edgedemo/ui/main/MainFragment.java](https://github.com/FedML-AI/FedML/blob/master/android/fedmlsdk_demo/src/main/java/ai/fedml/edgedemo/ui/main/MainFragment.java) as an example. ``` diff --git a/docs/federate/cross-silo/example/mqtt_s3_fedavg_attack_mnist_lr_example.md b/docs/federate/cross-silo/example/mqtt_s3_fedavg_attack_mnist_lr_example.md index 3c88b91..c462331 100644 --- a/docs/federate/cross-silo/example/mqtt_s3_fedavg_attack_mnist_lr_example.md +++ b/docs/federate/cross-silo/example/mqtt_s3_fedavg_attack_mnist_lr_example.md @@ -343,9 +343,9 @@ if __name__ == "__main__": ``` -## A Better User-experience with FedML MLOps (fedml.ai) +## A Better User-experience with TensorOpera AI (fedml.ai) To reduce the difficulty and complexity of these CLI commands. We recommend you to use our MLOps (fedml.ai). 
-FedML MLOps provides: +TensorOpera AI provides: - Install Client Agent and Login - Inviting Collaborators and group management - Project Management diff --git a/docs/federate/cross-silo/example/mqtt_s3_fedavg_defense_mnist_lr_example.md b/docs/federate/cross-silo/example/mqtt_s3_fedavg_defense_mnist_lr_example.md index acf1744..6122bd9 100644 --- a/docs/federate/cross-silo/example/mqtt_s3_fedavg_defense_mnist_lr_example.md +++ b/docs/federate/cross-silo/example/mqtt_s3_fedavg_defense_mnist_lr_example.md @@ -283,9 +283,9 @@ if __name__ == "__main__": ``` -## A Better User-experience with FedML MLOps (fedml.ai) +## A Better User-experience with TensorOpera AI (fedml.ai) To reduce the difficulty and complexity of these CLI commands. We recommend you to use our MLOps (fedml.ai). -FedML MLOps provides: +TensorOpera AI provides: - Install Client Agent and Login - Inviting Collaborators and group management - Project Management diff --git a/docs/federate/cross-silo/example/mqtt_s3_fedavg_hierarchical_mnist_lr_example.md b/docs/federate/cross-silo/example/mqtt_s3_fedavg_hierarchical_mnist_lr_example.md index 359e1a9..a0cde77 100644 --- a/docs/federate/cross-silo/example/mqtt_s3_fedavg_hierarchical_mnist_lr_example.md +++ b/docs/federate/cross-silo/example/mqtt_s3_fedavg_hierarchical_mnist_lr_example.md @@ -425,9 +425,9 @@ if __name__ == "__main__": ![img.png](cross_silo_hi_arch_refactored.png) -## A Better User-experience with FedML MLOps (fedml.ai) +## A Better User-experience with TensorOpera AI (fedml.ai) To reduce the difficulty and complexity of these CLI commands. We recommend you to use our MLOps (fedml.ai). 
-FedML MLOps provides: +TensorOpera AI provides: - Install Client Agent and Login - Inviting Collaborators and group management - Project Management diff --git a/docs/federate/cross-silo/example/mqtt_s3_fedavg_mnist_lr_example.md b/docs/federate/cross-silo/example/mqtt_s3_fedavg_mnist_lr_example.md index 1c786cd..0ba4030 100644 --- a/docs/federate/cross-silo/example/mqtt_s3_fedavg_mnist_lr_example.md +++ b/docs/federate/cross-silo/example/mqtt_s3_fedavg_mnist_lr_example.md @@ -270,9 +270,9 @@ if __name__ == "__main__": ``` -## A Better User-experience with FedML MLOps (fedml.ai) +## A Better User-experience with TensorOpera AI (fedml.ai) To reduce the difficulty and complexity of these CLI commands. We recommend you to use our MLOps (fedml.ai). -FedML MLOps provides: +TensorOpera AI provides: - Install Client Agent and Login - Inviting Collaborators and group management - Project Management diff --git a/docs/federate/cross-silo/overview.md b/docs/federate/cross-silo/overview.md index fe91900..b0b8e23 100644 --- a/docs/federate/cross-silo/overview.md +++ b/docs/federate/cross-silo/overview.md @@ -29,7 +29,7 @@ where different data silos may have different numbers of GPUs or even multiple n ![./../_static/image/cross-silo-hi.png](./../_static/image/cross-silo-hi.png) FedML Octopus addresses this challenge by enabling a distributed training paradigm (PyTorch DDP, distributed data parallel) to run inside each data-silo, and further orchestrate different silos with asynchronous or synchronous federated optimization method. -As a result, FedML Octopus can support this scenario in a flexible, secure, and efficient manner. FedML MLOps platform also simplifies its real-world deployment. +As a result, FedML Octopus can support this scenario in a flexible, secure, and efficient manner. The TensorOpera AI platform also simplifies its real-world deployment. Please read the detailed [examples and tutorial](./example/example.md) for details. 
diff --git a/docs/federate/cross-silo/user_guide.md b/docs/federate/cross-silo/user_guide.md index 089c388..08a7c66 100644 --- a/docs/federate/cross-silo/user_guide.md +++ b/docs/federate/cross-silo/user_guide.md @@ -9,7 +9,7 @@ https://www.youtube.com/embed/Xgm0XEaMlVQ **Write Once, Run Anywhere: Seamlessly Migrate Your Local Development to the Real-world Edge-cloud Deployment** -- How Does FedML MLOps Platform Work? +- How Does TensorOpera AI Platform Work? - Local Development and Building MLOps Packages - Create Application and Upload Local Packages - Install FedML Agent: fedml login $account_id @@ -18,7 +18,7 @@ https://www.youtube.com/embed/Xgm0XEaMlVQ - Experimental Tracking via Simplified Project Management - FedML OTA (Over-the-Air) upgrade mechanism -### How Does FedML MLOps Platform Work? +### How Does TensorOpera AI Platform Work? ![image](../_static/image/mlops_workflow_new.png) \ Figure 1: the workflow describing how MLOps works @@ -157,7 +157,7 @@ login: edge_id = 266 subscribe: flserver_agent/266/start_train subscribe: flserver_agent/266/stop_train subscribe: fl_client/flclient_agent_266/status -Congratulations, you have logged into the FedML MLOps platform successfully! +Congratulations, you have logged into the TensorOpera AI platform successfully! Your device id is @0xb6ff42da6a7e.MacOS. You may review the device in the MLOps edge device list. ``` diff --git a/docs/federate/getting_started.md b/docs/federate/getting_started.md index 9c4f04a..bb5d3a8 100644 --- a/docs/federate/getting_started.md +++ b/docs/federate/getting_started.md @@ -224,9 +224,9 @@ Hierarchical Federated Learning: [https://tensoropera.ai](https://tensoropera.ai) -Currently, the project developed based on FedML Octopus (cross-silo) and Beehive (cross-device) can be smoothly deployed into the real-world system using FedML MLOps. 
+Currently, projects developed with FedML Octopus (cross-silo) and Beehive (cross-device) can be smoothly deployed into real-world systems using TensorOpera AI. -The FedML MLOps Platform simplifies the workflow of federated learning from anywhere and at any scale. +The TensorOpera AI Platform simplifies the workflow of federated learning from anywhere and at any scale. It enables zero-code, lightweight, cross-platform, and provably secure federated learning. It enables machine learning from decentralized data at various users/silos/edge nodes, without the need to centralize any data to the cloud, hence providing maximum privacy and efficiency. diff --git a/docs/launch/on-cloud/cloud-cluster.md b/docs/launch/on-cloud/cloud-cluster.md index 08be415..fdc58f1 100644 --- a/docs/launch/on-cloud/cloud-cluster.md +++ b/docs/launch/on-cloud/cloud-cluster.md @@ -166,7 +166,7 @@ You can run as many consequent jobs as you like on your cluster now. It will que Submitting your job to TensorOpera AI Platform: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.92k/2.92k [00:00<00:00, 17.4kB/s] You can track your run details at this URL: -https://open.fedml.ai/train/project/run?projectId=1717276102352834560&runId=1717314053350756352 +https://tensoropera.ai/train/project/run?projectId=1717276102352834560&runId=1717314053350756352 For querying the realtime status of your run, please run the following command. 
fedml run logs -rid 1717314053350756352 @@ -177,7 +177,7 @@ fedml run logs -rid 1717314053350756352 Submitting your job to TensorOpera AI Platform: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.92k/2.92k [00:00<00:00, 11.8kB/s] You can track your run details at this URL: -https://open.fedml.ai/train/project/run?projectId=1717276102352834560&runId=1717314101526532096 +https://tensoropera.ai/train/project/run?projectId=1717276102352834560&runId=1717314101526532096 For querying the realtime status of your run, please run the following command. fedml run logs -rid 1717314101526532096 diff --git a/docs/launch/on-prem/install.md b/docs/launch/on-prem/install.md index 3a1a5aa..f6be254 100644 --- a/docs/launch/on-prem/install.md +++ b/docs/launch/on-prem/install.md @@ -46,7 +46,7 @@ Requirement already satisfied: numpy>=1.21 in ./.pyenv/versions/fedml/lib/python . . -Congratulations, your device is connected to the FedML MLOps platform successfully! +Congratulations, your device is connected to the TensorOpera AI platform successfully! Your FedML Edge ID is 201610, unique device ID is 0xffdc89fad658@Linux.Edge.Device ``` diff --git a/docs/launch/share-and-earn.md b/docs/launch/share-and-earn.md index 627dafd..ceb99d3 100644 --- a/docs/launch/share-and-earn.md +++ b/docs/launch/share-and-earn.md @@ -32,7 +32,7 @@ Below is output of command when executed on a TensorOpera® GPU server: (fedml) alay@a6000:~$ -Congratulations, your device is connected to the FedML MLOps platform successfully! +Congratulations, your device is connected to the TensorOpera AI platform successfully! Your FedML Edge ID is 1717367167533584384, unique device ID is 0xa11081eb21f1@Linux.Edge.GPU.Supplier You may visit the following url to fill in more information with your device. 
diff --git a/docs/open-source/api/api-deploy.md b/docs/open-source/api/api-deploy.md index 57145b8..ac8b11f 100644 --- a/docs/open-source/api/api-deploy.md +++ b/docs/open-source/api/api-deploy.md @@ -10,7 +10,7 @@ sidebar_position: 3 :::tip Before using some of the apis that require remote operation (e.g. `fedml.api.model_push()`), please use one of the following methods to login -to FedML MLOps platform first: +to the TensorOpera AI platform first: 1. CLI: `fedml login $api_key` diff --git a/docs/open-source/api/api-launch.md b/docs/open-source/api/api-launch.md index 9dc6d58..4d6760a 100644 --- a/docs/open-source/api/api-launch.md +++ b/docs/open-source/api/api-launch.md @@ -11,7 +11,7 @@ Simple launcher APIs for running any AI job across multiple public and/or decent :::tip Before using some of the apis that require remote operation (e.g. `fedml.api.launch_job()`), please use one of the following methods to login -to FedML MLOps platform first: +to the TensorOpera AI platform first: 1. CLI: `fedml login $api_key` diff --git a/docs/open-source/api/api-storage.md b/docs/open-source/api/api-storage.md index 510e9a5..b9a55cc 100644 --- a/docs/open-source/api/api-storage.md +++ b/docs/open-source/api/api-storage.md @@ -10,7 +10,7 @@ Storage APIs help in managing all the data needs that is typically associated wi :::tip Before using some of the apis that require remote operation (e.g. `fedml.api.launch_job()`), please use one of the following methods to login -to FedML MLOps platform first: +to the TensorOpera AI platform first: 1. 
CLI: `fedml login $api_key` diff --git a/docs/open-source/cli/fedml-federate.md b/docs/open-source/cli/fedml-federate.md index 4fb612c..1840507 100644 --- a/docs/open-source/cli/fedml-federate.md +++ b/docs/open-source/cli/fedml-federate.md @@ -62,7 +62,7 @@ computing: maximum_cost_per_hour: $3000 # max cost per hour for your job per gpu card #allow_cross_cloud_resources: true # true, false #device_type: CPU # options: GPU, CPU, hybrid - resource_type: A100-80G # e.g., A100-80G, please check the resource type list by "fedml show-resource-type" or visiting URL: https://open.fedml.ai/accelerator_resource_type + resource_type: A100-80G # e.g., A100-80G, please check the resource type list by "fedml show-resource-type" or visiting URL: https://tensoropera.ai/accelerator_resource_type data_args: dataset_name: mnist dataset_path: ./dataset diff --git a/docs/open-source/cli/fedml-model.md b/docs/open-source/cli/fedml-model.md index a18f968..d75f2e1 100644 --- a/docs/open-source/cli/fedml-model.md +++ b/docs/open-source/cli/fedml-model.md @@ -86,7 +86,7 @@ Check your device id for master role and worker role. Welcome to FedML.ai! Start to login the current device to the TensorOpera AI Platform -Congratulations, your device is connected to the FedML MLOps platform successfully! +Congratulations, your device is connected to the TensorOpera AI platform successfully! 
Your FedML Edge ID is xxx, unique device ID is xxx, master deploy ID is 31240, worker deploy ID is 31239 ``` From above, we can know that the master ID is 31240, worker deploy ID is 31239 diff --git a/docs/open-source/cli/fedml-train.md b/docs/open-source/cli/fedml-train.md index b65e23f..4e2a2c5 100644 --- a/docs/open-source/cli/fedml-train.md +++ b/docs/open-source/cli/fedml-train.md @@ -41,7 +41,7 @@ computing: maximum_cost_per_hour: $3000 # max cost per hour for your job per gpu card #allow_cross_cloud_resources: true # true, false #device_type: CPU # options: GPU, CPU, hybrid - resource_type: A100-80G # e.g., A100-80G, please check the resource type list by "fedml show-resource-type" or visiting URL: https://open.fedml.ai/accelerator_resource_type + resource_type: A100-80G # e.g., A100-80G, please check the resource type list by "fedml show-resource-type" or visiting URL: https://tensoropera.ai/accelerator_resource_type data_args: dataset_name: mnist diff --git a/docs/open-source/installation/docker.md b/docs/open-source/installation/docker.md index 328146b..ebfbe20 100644 --- a/docs/open-source/installation/docker.md +++ b/docs/open-source/installation/docker.md @@ -46,7 +46,7 @@ ddocker run -v $LOCAL_WORKSPACE:$DOCKER_WORKSPACE --shm-size=64g --ulimit nofile **(3) Run examples** -Now, you should now be inside the container. First, you need to log into the MLOps platform. The `USERID` placeholder used below refers to your user id in the FedML MLOps platform: +You should now be inside the container. First, you need to log into the MLOps platform. 
The `USERID` placeholder used below refers to your user id in the TensorOpera AI platform: ``` root@142ffce4cdf8:/# root@142ffce4cdf8:/# fedml login diff --git a/docs/open-source/installation/linux.md b/docs/open-source/installation/linux.md index 59851ce..48029b9 100644 --- a/docs/open-source/installation/linux.md +++ b/docs/open-source/installation/linux.md @@ -32,7 +32,7 @@ The entire workflow is as follows: 2. Deploy the fedml client: ```kubectl apply -f ./fedml-edge-client-server/deployment-client.yml``` 3. In the file fedml-edge-client-server/deployment-server.yml, modify the variable ACCOUNT_ID to your desired value 4. Deploy the fedml server: ```kubectl apply -f ./fedml-edge-client-server/deployment-server.yml``` -5. Login the FedML MLOps platform (https://tensoropera.ai), the above deployed client and server will be found in the edge devices +5. Log in to the TensorOpera AI platform (https://tensoropera.ai); the client and server deployed above will be found in the edge devices If you want to scale up or scal down the pods to your desired count, you may run the following command: diff --git a/docs/share-and-earn/share-and-earn.md b/docs/share-and-earn/share-and-earn.md index 4103309..20c2330 100644 --- a/docs/share-and-earn/share-and-earn.md +++ b/docs/share-and-earn/share-and-earn.md @@ -84,7 +84,7 @@ device_count = 0 No GPU devices ======== Network Connection Checking ======== -The connection to https://open.fedml.ai is OK. +The connection to https://tensoropera.ai is OK. The connection to S3 Object Storage is OK. @@ -124,7 +124,7 @@ Below is output of command when executed on a FedML® GPU server: (fedml) alay@a6000:~$ -Congratulations, your device is connected to the FedML MLOps platform successfully! +Congratulations, your device is connected to the TensorOpera AI platform successfully! 
Your FedML Edge ID is 1717367167533584384, unique device ID is 0xa11081eb21f1@Linux.Edge.GPU.Supplier You may visit the following url to fill in more information with your device. diff --git a/docs/train/train-on-cloud/static/image/.DS_Store b/docs/train/train-on-cloud/static/image/.DS_Store index f1d9601..b2af0bd 100644 Binary files a/docs/train/train-on-cloud/static/image/.DS_Store and b/docs/train/train-on-cloud/static/image/.DS_Store differ diff --git a/docs/train/train-on-prem/train_on_cloud_cluster.md b/docs/train/train-on-prem/train_on_cloud_cluster.md index a90a858..c7bc60f 100644 --- a/docs/train/train-on-prem/train_on_cloud_cluster.md +++ b/docs/train/train-on-prem/train_on_cloud_cluster.md @@ -142,7 +142,7 @@ You can run as many consequent jobs as you like on your cluster now. It will que Submitting your job to TensorOpera AI Platform: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.92k/2.92k [00:00<00:00, 17.4kB/s] You can track your run details at this URL: -https://open.fedml.ai/train/project/run?projectId=1717276102352834560&runId=1717314053350756352 +https://tensoropera.ai/train/project/run?projectId=1717276102352834560&runId=1717314053350756352 For querying the realtime status of your run, please run the following command. 
fedml run logs -rid 1717314053350756352 @@ -153,7 +153,7 @@ fedml run logs -rid 1717314053350756352 Submitting your job to TensorOpera AI Platform: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.92k/2.92k [00:00<00:00, 11.8kB/s] You can track your run details at this URL: -https://open.fedml.ai/train/project/run?projectId=1717276102352834560&runId=1717314101526532096 +https://tensoropera.ai/train/project/run?projectId=1717276102352834560&runId=1717314101526532096 For querying the realtime status of your run, please run the following command. fedml run logs -rid 1717314101526532096 diff --git a/docs/train/train-on-prem/train_on_premise_cluster.md b/docs/train/train-on-prem/train_on_premise_cluster.md index b7f740a..4d799fc 100644 --- a/docs/train/train-on-prem/train_on_premise_cluster.md +++ b/docs/train/train-on-prem/train_on_premise_cluster.md @@ -41,7 +41,7 @@ Requirement already satisfied: numpy>=1.21 in ./.pyenv/versions/fedml/lib/python . . -Congratulations, your device is connected to the FedML MLOps platform successfully! +Congratulations, your device is connected to the TensorOpera AI platform successfully! Your FedML Edge ID is 201610, unique device ID is 0xffdc89fad658@Linux.Edge.Device ``` diff --git a/static/python/render.py b/static/python/render.py index 2012f8e..98c5502 100644 --- a/static/python/render.py +++ b/static/python/render.py @@ -4,7 +4,7 @@ import requests import time -BACKEND_URL = "https://open.fedml.ai/cheetah/cli/web3/token-node-rel" +BACKEND_URL = "https://tensoropera.ai/cheetah/cli/web3/token-node-rel" TOKEN_MISSING_ERROR_MESSAGE = ("\033[1;31m\u2717 Error: Render Auth Token is missing. Kindly execute the last command again, " "and enter Render Auth Token when prompted\033[0m")
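A rebrand this broad is easier to keep consistent when the substitutions are scripted rather than applied by hand in each file. Below is a minimal sketch of such a pass; the substitution list and the `docs` root are illustrative assumptions mirroring this diff, not tooling that ships with the repository:

```python
# Sketch: apply the FedML MLOps -> TensorOpera AI branding rename in one pass
# over the docs tree, so no occurrence is missed. REPLACEMENTS and the "docs"
# root are assumptions for illustration; per-file exceptions (e.g. headings
# that deliberately keep "(fedml.ai)") still need manual review.
from pathlib import Path

# Order matters: rewrite full URLs before the bare product names.
REPLACEMENTS = [
    ("https://open.fedml.ai", "https://tensoropera.ai"),
    ("FedML MLOps platform", "TensorOpera AI platform"),
    ("FedML MLOps Platform", "TensorOpera AI Platform"),
    ("FedML MLOps", "TensorOpera AI"),
]


def rename_branding(text: str) -> str:
    """Apply each (old, new) pair to the text, in order."""
    for old, new in REPLACEMENTS:
        text = text.replace(old, new)
    return text


def rename_tree(root: str = "docs") -> int:
    """Rewrite every Markdown file under `root`; return how many changed."""
    changed = 0
    for path in Path(root).rglob("*.md"):
        before = path.read_text(encoding="utf-8")
        after = rename_branding(before)
        if after != before:
            path.write_text(after, encoding="utf-8")
            changed += 1
    return changed
```

Running `rename_tree()` from the repository root would regenerate the Markdown portion of a change like this one, after which `git diff` serves as the review surface.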