Describe the bug
This is a really odd corner case, but Spark treats all -0.0 entries as less than 0.0 entries when sorting. It should almost never show up in practice, but because our tests hit corner cases we end up seeing it.
cudf does not even expose a good way for us to extract the sign bit from -0.0 to use as a secondary sort column, so this one might be fun to try and fix.
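For reference, extracting the sign bit is cheap on the CPU side. A minimal Scala sketch of the idea (the `signBit` helper here is hypothetical, not an existing cudf or Spark API):

```scala
import java.lang.Double.doubleToRawLongBits

// The IEEE 754 sign bit is the top bit of the raw encoding; it is the
// only bit that differs between 0.0 and -0.0.
def signBit(d: Double): Long = doubleToRawLongBits(d) >>> 63

assert(signBit(-0.0) == 1L) // sign bit set
assert(signBit(0.0) == 0L)  // sign bit clear
// Sorting descending on this key among otherwise-equal values would
// put the -0.0 rows ahead of the 0.0 rows.
```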
Steps/Code to reproduce bug
Create a dataframe that has both 0.0 and -0.0 values in it, then sort it.
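For example, a minimal spark-shell repro, assuming a SparkSession named `spark` (the column name `x` is arbitrary):

```scala
import spark.implicits._

// Mix positive and negative zeros in one column and sort it.
val df = Seq(0.0, -0.0, 1.0, -0.0).toDF("x")
df.sort("x").show()
// On the CPU, Spark prints the -0.0 rows first:
// -0.0, -0.0, 0.0, 1.0
```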
Expected behavior
-0.0 values come before 0.0 values
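That expectation matches Java's total ordering for doubles, which distinguishes the two zeros even though the plain IEEE 754 comparison does not:

```scala
// Java's total order puts -0.0 strictly before 0.0...
assert(java.lang.Double.compare(-0.0, 0.0) < 0)
// ...while the IEEE 754 comparison treats them as equal.
assert(-0.0 == 0.0)
```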