[Core] Ray Collective Types avoids importing torch directly #49153

Open
HollowMan6 wants to merge 2 commits into master

Conversation

@HollowMan6 (Contributor) commented Dec 8, 2024

Why are these changes needed?

(For AMD GPUs only; I'm using an MI250x)

Not exactly sure of the root cause, but something goes wrong behind the scenes (probably a side effect of torch's __init__.py), and NCCL (actually RCCL in AMD's case) errors out and fails to initialize (although it works fine on NVIDIA GPUs):

Traceback (most recent call last):
  File "nccl_allreduce_example.py", line 43, in <module>
    results = ray.get([w.compute.remote() for w in workers])
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "ray/_private/auto_init_hook.py", line 21, in auto_init_wrapper
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "ray/_private/client_mode_hook.py", line 103, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "ray/_private/worker.py", line 2755, in get
    values, debugger_breakpoint = worker.get_objects(object_refs, timeout=timeout)
                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "ray/_private/worker.py", line 906, in get_objects
    raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(NcclError): ray::Worker.compute() (repr=<nccl_allreduce_example.Worker object at 0x14dca472ebd0>)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "nccl_allreduce_example.py", line 22, in compute
    collective.allreduce(self.send, "default")
  File "ray/util/collective/collective.py", line 273, in allreduce
    g.allreduce([tensor], opts)
  File "ray/util/collective/collective_group/nccl_collective_group.py", line 197, in allreduce
    self._collective(tensors, tensors, collective_fn)
  File "ray/util/collective/collective_group/nccl_collective_group.py", line 604, in _collective
    comms = self._get_nccl_collective_communicator(key, devices)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "ray/util/collective/collective_group/nccl_collective_group.py",
line 443, in _get_nccl_collective_communicator
    comms[i] = nccl_util.create_nccl_communicator(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "ray/util/collective/collective_group/nccl_util.py", line 113, in create_nccl_communicator
    comm = NcclCommunicator(world_size, nccl_unique_id, rank)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "cupy_backends/cuda/libs/nccl.pyx", line 283, in cupy_backends.cuda.libs.nccl.NcclCommunicator.__init__
  File "cupy_backends/cuda/libs/nccl.pyx", line 129, in cupy_backends.cuda.libs.nccl.check_status
cupy_backends.cuda.libs.nccl.NcclError: NCCL_ERROR_UNHANDLED_CUDA_ERROR: unhandled cuda error (run with NCCL_DEBUG=INFO for details)
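
The traceback points at the availability check itself: importing torch eagerly at module-import time runs torch's __init__.py on every worker before the RCCL communicator is created. The previous guard presumably followed the common try/except import pattern sketched below (illustrative only; the variable name _TORCH_AVAILABLE is an assumption, not necessarily the exact Ray source):

    # Eager check: merely importing torch executes torch/__init__.py,
    # whose side effects can leave the GPU runtime in a state RCCL rejects.
    try:
        import torch  # noqa: F401
        _TORCH_AVAILABLE = True
    except ImportError:
        _TORCH_AVAILABLE = False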

So, let's check torch's availability with importlib.util.find_spec instead of importing it. Tested, and it fixes the issue here.
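
Here is a minimal sketch of the find_spec-based check (again illustrative; the actual change lives in Ray's collective modules):

    import importlib.util

    # find_spec only inspects module metadata; torch/__init__.py is never
    # executed, so no GPU state is touched at import time.
    _TORCH_AVAILABLE = importlib.util.find_spec("torch") is not None

Code paths that actually need torch can then import it lazily inside a function body, so the import side effect only occurs when a torch tensor is actually handled.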

Related issue number

Combined with #49148, this fixes the issue of running the Ray Collective Communication Lib examples.

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests; see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

This commit fixes an error that occurs on AMD GPUs only (MI250x).

Signed-off-by: Hollow Man <hollowman@opensuse.org>
@jcotant1 added the core (Issues that should be addressed in Ray Core) label on Dec 9, 2024

stale bot commented Jan 22, 2025

This pull request has been automatically marked as stale because it has not had recent activity. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.

  • If you'd like to keep this open, just leave any comment, and the stale label will be removed.

@stale bot added the stale (The issue is stale. It will be closed within 7 days unless there is further conversation) label on Jan 22, 2025
@HollowMan6 (Contributor, Author) commented:

This is necessary to get Ray working on AMD GPUs, so I hope to get this reviewed and merged soon.
