Tensor subnetwork canonicalization. #237

Open
bimalgaudel opened this issue Sep 9, 2024 · 4 comments

Comments

@bimalgaudel
Member

bimalgaudel commented Sep 9, 2024

Tensor subnetwork canonicalization is useful to identify common sub-expressions.
As of now, subnetwork canonicalization for tensors without proto-indices at least does not throw (it has not yet been exercised in common sub-expression elimination to test its robustness). However, canonicalizing subnetworks that involve tensors with proto-indices throws.

Examples are in the linked branch under test_tensor_network.cpp.

@Krzmbrzl
Collaborator

This should probably only be tackled once my canonicalizer revamp has been merged (if it is still a problem).

Also, I think the way to implement CSE would be via a cleverly designed hash function rather than via explicit canonicalization of all pairs of tensors in a network.

@bimalgaudel
Member Author

bimalgaudel commented Sep 10, 2024

@Krzmbrzl A cleverly designed hash function is already used to perform common sub-expression elimination during evaluation. We want to use graph canonicalization so that common sub-expressions are detected at the symbolic level rather than at the evaluation-node level, which makes the CSE even more robust. I do not expect this feature to improve the runtime significantly; however, once it is implemented we will have a uniform way of finding common sub-expressions for evaluation nodes as well as for symbolic refactoring.
Additionally, for tensors with proto-indices it is easy to get the clever hash function wrong, so graph canonicalization is desirable.

Also, looking forward to your canonicalizer revamp!

@Krzmbrzl
Collaborator

Krzmbrzl commented Sep 10, 2024

I actually meant to have a hash function on the symbolic level. Maybe we just put this on the list for our next meeting in order to make sure everyone is speaking about the same thing?

@evaleev
Copy link
Member

evaleev commented Feb 1, 2025

This is addressed by the TensorNetwork*::canonicalize_slots() function introduced in 4f9da5a ... Full canonicalization cannot be avoided in the presence of multiple equivalent tensors, or even with a single tensor that has proto-indices.
