- Feature Name: int128
- Start Date: 2016-02-21
- RFC PR: (leave this empty)
- Rust Issue: (leave this empty)

# Summary
[summary]: #summary

This RFC adds the `i128` and `u128` types to Rust. Because these types are not available on all platforms, a new cfg flag (`target_has_int128`) is added so that users can check whether 128-bit integers are supported. The `i128` and `u128` types are not added to the prelude, and must instead be explicitly imported with `use core::{i128, u128}`.

# Motivation
[motivation]: #motivation

Some algorithms need to work with very large numbers that don't fit in 64 bits, such as certain cryptographic algorithms. One possibility would be to use a bignum library, but these use heap allocation and tend to have high overhead. LLVM has support for very efficient 128-bit integers, which are exposed by Clang in C as the `__int128` type.

# Detailed design
[design]: #detailed-design

From a quick look at Clang's source, 128-bit integers are supported on all 64-bit platforms and on a few 32-bit ones with 64-bit registers (x32 and MIPS n32). To allow users to determine whether 128-bit integers are available, a `target_has_int128` cfg is added. The `i128` and `u128` types are only available when this flag is set.

The `i128` and `u128` types themselves are not added to the Rust prelude, since that would break backwards compatibility. Instead they must be explicitly imported with `use core::{i128, u128}` or `use std::{i128, u128}`. This also catches attempts to use 128-bit integers on platforms that don't support them, since the import will fail if `target_has_int128` is not defined.

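As a sketch of how this could look from user code under the proposed design (the `target_has_int128` cfg and the explicit import are from this RFC; the module and function names are hypothetical):

```rust
// Fast path, compiled only on targets with native 128-bit integers.
#[cfg(target_has_int128)]
mod wide {
    use core::u128; // explicit import, as proposed above

    /// Full 64x64 -> 128-bit multiply, returning the (high, low) halves.
    pub fn mul_wide(a: u64, b: u64) -> (u64, u64) {
        let wide = (a as u128) * (b as u128);
        ((wide >> 64) as u64, wide as u64)
    }
}

// Fallback for targets without 128-bit support.
#[cfg(not(target_has_int128))]
mod wide {
    pub fn mul_wide(_a: u64, _b: u64) -> (u64, u64) {
        unimplemented!() // a real crate would emulate this with u64 pairs
    }
}
```
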
Implementation-wise, this should just be a matter of adding a new primitive type to the compiler and adding trait implementations for `i128`/`u128` in libcore. A new entry will also need to be added to target specifications to indicate whether the target supports 128-bit integers.

One possible complication is that primitive types aren't currently part of the prelude; instead, they are added directly to the global namespace by the compiler. The new `i128` and `u128` types will behave differently from the existing primitive types and will need to be explicitly imported.

Another possible issue is that a `u128` can hold a very large number that doesn't fit in an `f32`. We need to make sure that such conversions don't produce any `undef`s from LLVM.

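A concrete instance of the problem, assuming the types are available and imported as proposed (`u128`'s maximum value, 2^128 - 1 ≈ 3.40282367e38, is just above `f32`'s largest finite value, ≈ 3.40282347e38):

```rust
use core::u128; // explicit import, as proposed

fn example() -> f32 {
    let x: u128 = !0; // 2^128 - 1, slightly larger than f32::MAX
    x as f32          // the result must be defined (e.g. rounding to
                      // f32::INFINITY) rather than becoming an LLVM undef
}
```
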
# Drawbacks
[drawbacks]: #drawbacks

It adds types to the language that may or may not be present depending on the target architecture. This could lead to surprises.

# Alternatives
[alternatives]: #alternatives

There have been several attempts to create `u128`/`i128` wrappers based on two `u64` values, but these can't match the performance of LLVM's native 128-bit integers.

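For illustration, a minimal sketch of such a wrapper (names hypothetical; existing crates differ in detail):

```rust
/// A 128-bit unsigned integer emulated with two `u64` halves.
#[derive(Clone, Copy, PartialEq, Eq)]
struct U128 {
    hi: u64,
    lo: u64,
}

impl U128 {
    /// Wrapping addition: add the low halves, then propagate the
    /// carry into the high halves.
    fn wrapping_add(self, other: U128) -> U128 {
        let (lo, carry) = self.lo.overflowing_add(other.lo);
        let hi = self.hi.wrapping_add(other.hi).wrapping_add(carry as u64);
        U128 { hi, lo }
    }
}
```

Even a simple operation like this compiles to explicit carry handling, whereas LLVM's native 128-bit type can lower directly to the processor's add-with-carry instructions; for multiplication and division the gap is wider still.
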
# Unresolved questions
[unresolved]: #unresolved-questions

How should 128-bit literals be handled? The easiest solution would be to limit integer literals to 64 bits, which is what GCC does (it has no support for `__int128` literals).
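
Under that approach, a 128-bit constant would be assembled from two 64-bit literals, for example (a sketch, assuming the proposed types):

```rust
use core::u128;

fn big_constant() -> u128 {
    let hi: u64 = 0x0123_4567_89ab_cdef;
    let lo: u64 = 0xfedc_ba98_7654_3210;
    ((hi as u128) << 64) | (lo as u128)
}
```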