
[Performance] Boost tfq.convert_to_tensor speed #336

Open
MichaelBroughton opened this issue Aug 10, 2020 · 11 comments
Labels
good first issue Good starting issue for someone new to TFQ kind/feature-request New feature or request

Comments

@MichaelBroughton
Collaborator

Currently tfq.convert_to_tensor runs on a single core and relies on Cirq's serialization protocols, which are pretty slow for large circuits. A quick benchmark shows that more than 95% of the time spent in tfq.convert_to_tensor goes to the Cirq serialization logic and the protobuf SerializeToString function. Since it's unlikely we can speed either of those up quickly, perhaps we should look into parallelizing tfq.convert_to_tensor?
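A minimal sketch of the parallelization idea: fan the per-circuit serialization out across cores with a process pool. `serialize_one` below is a hypothetical stand-in (plain JSON on dicts) for the expensive Cirq/protobuf step, so the structure is illustrative rather than the actual TFQ code path.

```python
# Sketch only: parallelize per-circuit serialization across cores.
# serialize_one is a stand-in for the Cirq serialization + protobuf
# SerializeToString step that dominates tfq.convert_to_tensor runtime.
from concurrent.futures import ProcessPoolExecutor
import json


def serialize_one(circuit):
    # Stand-in for the expensive, CPU-bound per-circuit serialization.
    return json.dumps(circuit, sort_keys=True).encode("utf-8")


def parallel_convert(circuits, max_workers=None):
    # A process pool sidesteps the GIL for CPU-bound work; chunksize
    # keeps inter-process overhead low when there are many small circuits.
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(serialize_one, circuits, chunksize=64))


if __name__ == "__main__":
    fake_circuits = [{"moments": list(range(i))} for i in range(100)]
    blobs = parallel_convert(fake_circuits)
    print(len(blobs))
```

Order is preserved by `pool.map`, so the resulting byte strings line up with the input circuits just as the serial implementation would produce them.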

@MichaelBroughton MichaelBroughton added kind/feature-request New feature or request good first issue Good starting issue for someone new to TFQ labels Aug 17, 2020
@MrSaral

MrSaral commented Aug 27, 2020

Hey Michael, I am new to tfq community and I would love to work on this issue!

@MichaelBroughton
Collaborator Author

Welcome! Glad you’ve taken an interest. I’m optimistic we can make things a little quicker :)

@MrSaral

MrSaral commented Aug 29, 2020

@MichaelBroughton Where can I start? I went through the code in tfq.convert_to_tensor.

@MichaelBroughton
Collaborator Author

After you've read the code you could:

  1. Fork the code and work on a local copy.
  2. Time the original implementation and yours on some big reference circuits.
  3. Make the new implementation faster.
  4. Open a pull request here with your changes, showing the numbers behind the performance boost.
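For step 2, a small timing harness like the one below may help; it is a sketch using a stand-in `baseline` function, and the real comparison would call `tfq.convert_to_tensor` and your new implementation on the same large reference circuits.

```python
# Minimal timing harness for comparing implementations (step 2).
# Replace `baseline` with tfq.convert_to_tensor and your variant
# to get real numbers on big reference circuits.
import time


def time_call(fn, *args, repeats=5):
    # Report the best wall-clock time over several runs to reduce noise.
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best


def baseline(data):
    # Stand-in for the current serial conversion path.
    return [str(x) for x in data]


if __name__ == "__main__":
    data = list(range(10_000))
    print(f"baseline: {time_call(baseline, data):.6f}s")
```

Taking the minimum over repeats (rather than the mean) is a common choice for micro-benchmarks, since it filters out one-off interference from the OS scheduler.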

@MrSaral

MrSaral commented Sep 1, 2020

Ok, thanks. I will give this a shot.

@tacho090

@MrSaral are you working on this issue? If so, please request to be assigned to it

@MrSaral

MrSaral commented Sep 27, 2020

Hey @tacho090 , Yes I am working on this. Please assign it to me.

@redayzarra

redayzarra commented Jul 4, 2023

Hi! I'm new to the TFQ community and would love to tackle this problem. Can I be assigned to this issue? I'm more than happy to work on this.

@lockwo
Contributor

lockwo commented Jul 4, 2023

Go for it, feel free to open a PR for it

@redayzarra

redayzarra commented Jul 8, 2023

> After you've read the code you could:
>
>   1. Fork the code and work on a local copy.
>   2. Time the original implementation and yours on some big reference circuits.
>   3. Make the new implementation faster.
>   4. Open a pull request here with your changes, showing the numbers behind the performance boost.

Hi, I've been working on this issue and I had a couple of questions. I read the benchmarks/README.md file and tried to use Bazel for benchmarking, but I ran into a lot of errors.

  1. How would you like me to time my code? Should a custom benchmark file suffice or should I be using the existing benchmarking system (with Bazel)?

  2. Is there any specific reference circuit you would like me to use? I want to try multiple circuits that stress the code in various ways but I'm not sure what to look for.

That's all the questions I have for now. I'm not sure if I'm supposed to be using Bazel in the first place.

@mhucka
Member

mhucka commented Dec 4, 2024

@redayzarra For purposes of planning work and doing repository housekeeping, could you let us know what the status of this is?
