Run the TensorFlow.js toxicity model with the ScaleDynamics WarpJS SDK.
The toxicity model detects whether text contains toxic content such as threatening language, insults, obscenities, identity-based hate, or sexually explicit language.
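The model scores each input sentence against these labels and reports, per label, the class probabilities and a `match` flag (`true`/`false`, or `null` when the score falls below the confidence threshold). A minimal sketch of interpreting that output, using mock predictions shaped like the `@tensorflow-models/toxicity` result (the mock data and the `matchedLabels` helper are illustrative, not part of the sample):

```javascript
// Mock predictions in the shape returned by the toxicity model:
// one entry per label; each entry's `results` array holds, per input
// sentence, the class probabilities and a `match` flag.
const predictions = [
  { label: 'insult',  results: [{ probabilities: [0.08, 0.92], match: true }] },
  { label: 'threat',  results: [{ probabilities: [0.97, 0.03], match: false }] },
  { label: 'obscene', results: [{ probabilities: [0.55, 0.45], match: null }] },
];

// Collect the labels the model flagged for a given sentence.
function matchedLabels(predictions, sentenceIndex = 0) {
  return predictions
    .filter((p) => p.results[sentenceIndex].match === true)
    .map((p) => p.label);
}

console.log(matchedLabels(predictions)); // [ 'insult' ]
```

A `null` match means the model was not confident either way, so treating only `match === true` as toxic avoids false positives on borderline text.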
👉 Try a live demo
- Clone the project
- Go to the `warp-samples/tensorflowjs-toxicity` directory
- Run the following commands:
```sh
# install deps
$ npm install

# login to ScaleDynamics
$ npx warp login

# run a dev server
$ npm run dev

# build and deploy to production
$ npm run build
$ npm run deploy
```