Added proof of concept for torch.fx to MDF #492

Open · wants to merge 1 commit into base: development
Conversation

@parikshit14 parikshit14 commented Oct 23, 2023

Proof of concept for torch.fx to MDF #431

  • Created pytorch_fx_to_mdf(), which takes a PyTorch model and its arguments as input and converts it to an MDF model: first the nodes, then the edges, which are then appended to the current MDF Graph
  • Currently works for examples/PyTorch/simple_pytorch_to_mdf.py https://github.com/ModECI/MDF/blob/main/examples/PyTorch/simple_pytorch_to_mdf.py, and support was added to use a similar approach
  • The conversion still uses ONNX in functions so that we can execute the MDF graph to verify the results, but in the future we will need changes in the execution_engine, or maybe another representation. (Didn't explore actr or ddm, which are present in the execution engine)
  • Because ONNX functions are used, the parameter.args dictionary is restricted to particular keys for the respective functions.
  • Tried to make it as general as possible, following the standard approach in pytorch_to_mdf(); notably, the onnx::Reshape op requires further generalization
  • Minor fix for the ORT 1.9 issue in src/modeci_mdf/functions/onnx.py:
ValueError: This ORT build has ['AzureExecutionProvider', 'CPUExecutionProvider'] enabled.
Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['AzureExecutionProvider', 'CPUExecutionProvider'], ...)
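The fix above can be sketched as a small helper that picks the providers list to pass explicitly (a minimal sketch; `select_providers` is a hypothetical name, not part of the MDF codebase):

```python
def select_providers(available, preferred=("CUDAExecutionProvider", "CPUExecutionProvider")):
    """Choose the providers list to pass explicitly to
    onnxruntime.InferenceSession, as required since ORT 1.9.

    `available` would come from onnxruntime.get_available_providers().
    """
    chosen = [p for p in preferred if p in available]
    # Fall back to CPU, which every ORT build ships with.
    return chosen or ["CPUExecutionProvider"]
```

The session would then be created as `InferenceSession(path, providers=select_providers(get_available_providers()))`.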

@davidt0x @pgleeson

result = next(args_iter)

elif node.op == "call_function":
    func = "onnx::Relu"
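The surrounding node-by-node conversion can be sketched with plain data structures (a hedged sketch; `fx_nodes_to_mdf` and the tuple layout are illustrative stand-ins for torch.fx Node objects and the real MDF classes):

```python
def fx_nodes_to_mdf(nodes):
    """Map traced graph nodes to MDF-style node and edge dicts.

    `nodes` is a list of (name, op, target, input_names) tuples standing in
    for torch.fx Node objects (node.name, node.op, node.target, node.args).
    """
    mdf_nodes, mdf_edges = [], []
    for name, op, target, inputs in nodes:
        if op == "call_function":
            # A general converter would look the ONNX function up from
            # `target` rather than hardcoding "onnx::Relu".
            mdf_nodes.append({"id": name, "function": target})
            for src in inputs:
                mdf_edges.append({"sender": src, "receiver": name})
    return mdf_nodes, mdf_edges
```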
Contributor
I think we need a more general way to identify the appropriate ONNX function for the torch op. This might be something the torch and ONNX folks are working on as well, see here: it looks like they are developing a converter from the torch.fx IR to onnxscript. I didn't know that ONNX now has a thing called ONNX Script, which allows composing ONNX ops in Python; it might be useful for re-writing the execution engine, in fact.

Anyway, I think this issue needs to be fixed before merging.
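One direction for generalizing the hardcoded "onnx::Relu" is a lookup table from torch op names to ONNX functions (the table below is a hypothetical illustration, not an exhaustive or authoritative mapping):

```python
# Hypothetical mapping; real coverage would need far more ops.
TORCH_TO_ONNX = {
    "relu": "onnx::Relu",
    "add": "onnx::Add",
    "matmul": "onnx::MatMul",
    "reshape": "onnx::Reshape",
}

def onnx_func_for(op_name):
    """Resolve a torch op name to its ONNX function, failing loudly on gaps."""
    try:
        return TORCH_TO_ONNX[op_name]
    except KeyError:
        raise NotImplementedError(f"No ONNX mapping for torch op {op_name!r}")
```

Failing with NotImplementedError (instead of silently defaulting to Relu) makes the unsupported-op cases visible during testing.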

Contributor Author

I did go through their implementation, and they seem to get the appropriate function from here. But I cannot figure out how to use a key in the _OP_OVERLOAD_TO_EXPORTER_KEY_TABLE dictionary to get the value.
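If that table is keyed by op objects (as _OP_OVERLOAD_TO_EXPORTER_KEY_TABLE appears to be), the lookup key would be the callable recorded on the node itself, not its string name. A stand-in sketch (the table contents below are dummies, not torch's real entries):

```python
def lookup_exporter_key(node_target, table):
    """Look up an exporter key using the op object itself as the dict key.

    For a torch.fx call_function node, node.target is the callable that was
    traced, so it can index a table keyed by op objects directly.
    """
    if node_target in table:
        return table[node_target]
    raise KeyError(f"no exporter key registered for {node_target!r}")
```

Under that assumption, the call would look like `lookup_exporter_key(node.target, _OP_OVERLOAD_TO_EXPORTER_KEY_TABLE)`.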
