MAX can accelerate inference for existing PyTorch and ONNX models. The examples below show several common models running in MAX through the Mojo and Python APIs (a minimal Python API sketch follows the list):
- Stable Diffusion, with both the Mojo API and the Python API
- YOLOv8 with the Python API
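
To make the idea concrete, here is a minimal sketch of loading and running an ONNX model with the MAX Engine Python API. The names used (`max.engine`, `InferenceSession`, `load`, `execute`, `input_metadata`) follow the documented Python API, but the model path and input names are placeholders, and exact signatures may differ across MAX versions; the linked examples above are the authoritative references.

```python
# Minimal sketch: run an ONNX model with the MAX Engine Python API.
# The model path and the input names ("input_ids", "attention_mask") are
# placeholders for illustration; substitute your own model and inputs.
import numpy as np
from max import engine

# Create an inference session and compile/load the model.
session = engine.InferenceSession()
model = session.load("path/to/model.onnx")

# Inspect the model's expected inputs (names, shapes, dtypes).
for spec in model.input_metadata:
    print(spec)

# Run inference; inputs are passed as keyword arguments keyed by input name.
outputs = model.execute(
    input_ids=np.zeros((1, 128), dtype=np.int64),
    attention_mask=np.ones((1, 128), dtype=np.int64),
)
print(outputs)
```

The same flow applies to a TorchScript model: point `session.load()` at the serialized model file and pass the inputs the model expects, as shown in the Stable Diffusion and YOLOv8 examples listed above.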