Describe the bug

I set preferred_gpu_toolkit=ComputeResources.GPUToolkit.auto in a particular run, which I believe is meant to automatically select the correct platform in OpenMM. However, when experimenting with an Evaluator run, it gave the error shown in the Output section below.
To Reproduce

Todo: come up with an MWE. The fix should be reasonably simple, i.e. get the value of the enum.
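In the meantime, here is a rough, untested sketch of what an MWE might look like, driving setup_platform_with_resources directly. The ComputeResources constructor arguments used here (number_of_threads, number_of_gpus) are assumptions on my part rather than a verified reproduction:

```python
# Untested sketch of a possible MWE, not a verified reproduction.
# Assumption: ComputeResources accepts these keyword arguments and a GPU is visible.
from openff.evaluator.backends import ComputeResources
from openff.evaluator.utils.openmm import setup_platform_with_resources

resources = ComputeResources(
    number_of_threads=1,
    number_of_gpus=1,
    preferred_gpu_toolkit=ComputeResources.GPUToolkit.auto,
)

# Expected to raise the AssertionError shown under Output, because the
# GPUPrecision enum member (rather than its string value, e.g. "mixed")
# appears to be passed through to openmmtools.
platform = setup_platform_with_resources(resources)
```
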
Output
Traceback (most recent call last):
  File "/opt/conda/lib/python3.11/site-packages/openff/evaluator/workflow/protocols.py", line 1194, in _execute_protocol
    protocol.execute(directory, available_resources)
  File "/opt/conda/lib/python3.11/site-packages/openff/evaluator/workflow/protocols.py", line 681, in execute
    self._execute(directory, available_resources)
  File "/opt/conda/lib/python3.11/site-packages/openff/evaluator/protocols/openmm.py", line 335, in _execute
    platform = setup_platform_with_resources(available_resources)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/site-packages/openff/evaluator/utils/openmm.py", line 65, in setup_platform_with_resources
    platform = get_fastest_platform(minimum_precision=precision_level)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/site-packages/openmmtools/utils/utils.py", line 599, in get_fastest_platform
    platforms = get_available_platforms(minimum_precision=minimum_precision)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/site-packages/openmmtools/utils/utils.py", line 579, in get_available_platforms
    platforms = [ platform for platform in platforms if platform_supports_precision(platform, minimum_precision) ]
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/site-packages/openmmtools/utils/utils.py", line 579, in <listcomp>
    platforms = [ platform for platform in platforms if platform_supports_precision(platform, minimum_precision) ]
                                                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/site-packages/openmmtools/utils/utils.py", line 534, in platform_supports_precision
    assert precision in SUPPORTED_PRECISIONS, f"Precision {precision} must be one of {SUPPORTED_PRECISIONS}"
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: Precision GPUPrecision.mixed must be one of ['single', 'mixed', 'double']
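
The assertion suggests the GPUPrecision enum member itself, rather than its string value, is reaching openmmtools. A minimal sketch of the kind of normalization the fix could apply, assuming the enum is nested as ComputeResources.GPUPrecision and that its .value is the plain string (this is not the actual Evaluator patch):

```python
from openff.evaluator.backends import ComputeResources


def precision_as_string(precision) -> str:
    """Sketch: coerce a GPUPrecision enum member (or a plain string) into the
    "single" / "mixed" / "double" string that openmmtools expects."""
    # Assumption: the enum lives at ComputeResources.GPUPrecision and its
    # .value is the plain string, e.g. GPUPrecision.mixed -> "mixed".
    if isinstance(precision, ComputeResources.GPUPrecision):
        return precision.value
    return str(precision)


# Hypothetical use inside setup_platform_with_resources, before the call to
# get_fastest_platform (the preferred_gpu_precision attribute name is assumed):
# precision_level = precision_as_string(compute_resources.preferred_gpu_precision)
```
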
Computing environment (please complete the following information):
conda list

Additional context

Weird ... an MWE would be nice, of course, but perhaps a minimal-effort stopgap would be me asking: is this a new change, or something that's always been funky?