manylinux_2_34 x86-64 builds produce binaries that are not compatible with all x86-64 CPUs #1725
Comments
The immediate fix here is almost certainly to force the manylinux compilers to default to baseline x86-64. Longer term, maybe there's a way for installers to be cleverer or something, but that's probably beyond the scope of an issue here. |
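For reference, a minimal sketch of what pinning the baseline could look like from a project's side while building in the container. It assumes the project's build system honors CFLAGS/CXXFLAGS; the pip invocation is illustrative:

```sh
# Force baseline x86-64 code generation for this build only
# (assumes the build system honors CFLAGS/CXXFLAGS)
export CFLAGS="-march=x86-64 -mtune=generic ${CFLAGS-}"
export CXXFLAGS="-march=x86-64 -mtune=generic ${CXXFLAGS-}"
python -m pip wheel . -w dist/
```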
Yeah, I think there's a question about how to force the system compiler to have an appropriate default. In terms of installer cleverness, I think that'd require a PEP to standardize a wheel tag for this? |
It is... fascinating... that they are deploying the compiler with a default of x86-64-v2. Overriding CFLAGS to include `-march=x86-64` is one option; a small wrapper script is another:

```sh
#!/bin/sh
exec /path/to/real/gcc -march=x86-64 "$@"
```
|
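For concreteness, one way such a wrapper might be put in place inside an image (paths are assumptions, and cc, c++, gfortran, etc. would need the same treatment):

```sh
# Move the real compiler aside and install a baseline-forcing shim in its place
# (paths are illustrative; repeat for cc, c++, gfortran, etc.)
mv /usr/bin/gcc /usr/bin/gcc.real
printf '#!/bin/sh\nexec /usr/bin/gcc.real -march=x86-64 "$@"\n' > /usr/bin/gcc
chmod +x /usr/bin/gcc
```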
We'd need a PEP to support this properly. |
The compiler calls are all wrapped now. |
Well, it turns out that it's not necessarily true. auditwheel tests (local branch for now) are showing that executables with no dependencies are getting tagged as requiring x86-64-v2. This comes from object files linked into the executable:
Other support object files do not set this (nor does their disassembly contain anything that doesn't look like baseline x86-64).
This requires building with the GCC option |
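For anyone wanting to reproduce the check: the ISA requirement is recorded in a `.note.gnu.property` section that readelf can dump. The object path below is illustrative, not necessarily the file referenced above:

```sh
# Affected objects report a line like:
#   x86 ISA needed: x86-64-baseline, x86-64-v2
# (object path is illustrative)
readelf --notes /usr/lib64/Scrt1.o
```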
We are also affected by this in a different way. Previously we were using these images to produce a prebuilt installation by downloading the x86_64_v3 standalone build, and now resolution with pip cannot download some wheels, for example
Does that mean previous images were using v3+ and now it's based on v2, or was there some sort of CPU-agnostic setup before? I'm not sure how to proceed other than simply using a different image. FYI, this only started happening in the past day or two. Edit: contrary to the title of this issue, we started experiencing this on |
I'm not certain I understand the question. Pip doesn't know or care whether CPython has been compiled with AVX, FMA, MOVBE, etc. What are the relevant manylinux tags available on PyPI, and what glibc version is your image running? |
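A couple of standard ways to answer the glibc half of that question from inside the image:

```sh
# glibc version as reported by the C library itself
ldd --version | head -n1
# ...and as CPython sees it
python -c 'import platform; print(platform.libc_ver())'
```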
Note that CPU microarchitectures aren't encodable in wheel metadata at all, so they cannot affect resolution in any way as far as I'm aware. The only information you're able to encode is:

- the Python implementation and version
- the ABI
- the platform (OS and base architecture, plus the glibc version via the manylinux tag)
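A quick way to see exactly what pip can match on for a given interpreter (note the absence of anything CPU-feature-related):

```sh
# Prints interpreter info plus the full list of compatible wheel tags,
# e.g. cp312-cp312-manylinux_2_34_x86_64; no CPU feature levels appear
python -m pip debug --verbose
```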
If the compiler produces ELF objects that don't actually run on "x86-64" but only on "x86-64, assuming gcc -march=native would produce AVX instructions", then that simply results in what pip considers a perfectly valid x86-64 wheel, one which aborts at runtime when you try to import the code. That's why the issue is such a big one: it "tricks" end-user systems into thinking they can use the wheels, and then they get a broken virtualenv that is difficult to debug, since pip says everything is fine but the interpreter keeps crashing. |
My apologies, you're right of course. A new release of that package coincided with them dropping older tags. |
manylinux_2_34 is built on AlmaLinux 9. AlmaLinux 9 is built for the x86-64-v2 sub-architecture, which assumes a particular set of x86-64 CPU extensions is available. (See https://developers.redhat.com/blog/2021/01/05/building-red-hat-enterprise-linux-9-for-the-x86-64-v2-microarchitecture-level#recommendations_for_rhel_9)
As a result, wheels built in manylinux_2_34 by the system compiler will use these CPU extensions, making the wheels incompatible with some x86-64 CPUs, which of course results in SIGILL at runtime.
Because wheel tags have no awareness of x86-64-v2, this effectively makes binaries built with manylinux_2_34 unusable.
See pyca/cryptography#12069 for an example of the impact of this.
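As a quick way to check whether a given machine is affected, glibc's dynamic loader (2.33+) can report which x86-64 microarchitecture levels the CPU supports; a sketch assuming the standard x86-64 loader path:

```sh
# Lists each level with "(supported, searched)" or not on this CPU
/lib64/ld-linux-x86-64.so.2 --help | grep 'x86-64-v'
```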