Proposal: Change the semantics of bitwise operations to be fully platform-dependent #802
Hello!!
In my array implementation, I use bitwise operations for internal index calculations. Bitwise operations are generally considered quite low-level, but they are extremely fast in comparison to their arithmetic equivalents; they can therefore be thought of as a micro-optimisation. For example, a right shift `a >> b` is a hardware implementation of the mathematically equivalent floored division of `a` by `2^b`.

From my understanding, number semantics are platform-defined, except that Gleam defines division by zero to equal zero. Floating-point numbers have slightly different semantics, since Erlang is not fully compliant with IEEE 754. Integers have arbitrary precision on Erlang, while standard floating-point numbers are used on the JavaScript target, even though a native `BigInt` type exists there.
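To make the shift/division equivalence above concrete, here is a minimal TypeScript sketch (my own illustration, not code from the standard library):

```typescript
// Minimal sketch: within the 32-bit integer range, the native
// right shift `a >> b` agrees with floored division by 2^b.
function shiftEqualsDivision(a: number, b: number): boolean {
  return (a >> b) === Math.floor(a / 2 ** b);
}

console.log(shiftEqualsDivision(12, 2)); // true: 12 >> 2 === 3
console.log(shiftEqualsDivision(-5, 1)); // true: -5 >> 1 === -3
```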
It is well known that whole integers can be accurately represented up to 53 bits using 64-bit floating-point numbers, yet the ECMAScript standard defines bitwise operations using the semantics of 32-bit integers. The current implementation of bitwise operations for JavaScript instead tries to support the full range of valid integers, up to 53 bits. To that end, it converts all inputs to `BigInt` and back, which implies a significant overhead (see the sketch below). These are notably the only operations that try to preserve integer semantics beyond what the platform defines; I could not find another use of `BigInt` for any other operation in the standard library.
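To illustrate the trade-off, here is a sketch of my own (not the actual implementation) contrasting a `BigInt` round-trip, which is exact up to 53 bits, with the native operator, which truncates operands to 32 bits:

```typescript
// Sketch of a 53-bit-safe bitwise AND via a BigInt round-trip,
// versus the native ECMAScript operator.
function and53(a: number, b: number): number {
  return Number(BigInt(a) & BigInt(b)); // exact up to 53 bits, but slow
}

function and32(a: number, b: number): number {
  return a & b; // native and fast, but operands are truncated to 32 bits
}

console.log(and53(2 ** 40 + 5, 2 ** 40 + 3)); // 1099511627777
console.log(and32(2 ** 40 + 5, 2 ** 40 + 3)); // 1 (high bits are lost)
```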
This is not what I would have expected. If this is unacceptable, I think a range check that falls back to a mathematically equivalent expression defined on floating-point numbers (as outlined above) would overall still be faster and easier for engines to optimise.
thanks ~ 💜
(I have left the tests intentionally broken for now)