There are basically no situations where we want NVK running on Turing
and Ampere without GSP firmware, so we ship a modprobe config file
that overrides the upstream default of leaving GSP disabled, until
upstream enables it by default.
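For reference, the override boils down to a single nouveau module
option; a minimal sketch of such a config file, assuming the NvGspRm
toggle (the file name is illustrative):

# /usr/lib/modprobe.d/nvk-nouveau.conf (illustrative path)
options nouveau config=NvGspRm=1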
While we may eventually want to enable this for RHEL, in its current
form it pulls in OpenCL dependencies (e.g. libclc, SPIRV) and uses
Fedora rust-packaging and crates. Because it does not use the usual
%cargo_* macros, it is not clear how to vendor the Rust dependencies
for ELN and RHEL builds.
LTO was disabled 3 years ago because it was causing issues with certain
applications and games:
https://bugzilla.redhat.com/show_bug.cgi?id=1862771
Since then, support for LTO has improved upstream. Re-enable it.
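Assuming LTO was opted out with the standard Fedora macro override
(the spec could also be using meson's b_lto option instead),
re-enabling it simply means dropping that line:

# previously present to opt out of the distro-wide LTO flags
%global _lto_cflags %{nil}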
With the new nouveau driver coming in Linux 6.7, NVK will be usable
on Turing+ GPUs (GTX 16/RTX 20+), and it will be used by default for
Ada Lovelace+ GPUs (RTX 40+).
Prior to GCC 14, the `__arm_streaming` macro was not spelled as an
attribute using the C++11 syntax (e.g. [[arm::streaming]]), so clang
expanded that macro without issue.
But since GCC 14 it is spelled as `[[arm::streaming]]`, which makes
clang try to expand the attribute again and generate an invalid
preprocessing token due to the nested macro usage:
/usr/include/clang/Basic/AttrTokenKinds.inc:9:1: error: pasting "kw_" and "[" does not give a valid preprocessing token
9 | KEYWORD_ATTRIBUTE(__arm_streaming)
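A stripped-down illustration of the failure mode, using simplified
stand-in macros rather than the actual clang or GCC headers:

/* GCC 14's attribute-style spelling (simplified) */
#define __arm_streaming [[arm::streaming]]

/* stand-in for the token pasting done on clang's side */
#define PASTE(a, b) a ## b
#define KEYWORD_ATTRIBUTE(X) PASTE(kw_, X)

/* X is macro-expanded to [[arm::streaming]] before the paste, so the
 * preprocessor tries to glue "kw_" and "[" together and errors out. */
KEYWORD_ATTRIBUTE(__arm_streaming)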
Signed-off-by: Javier Martinez Canillas <javierm@redhat.com>
Also fix a package build issue on s390x due to the PowerVR vulkan
driver and ICD loader not being built on that arch. Only include those
for aarch64 and x86, where they are built.
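In spec terms the arch gating looks roughly like the following sketch
(the macro name is hypothetical, not the actual spec contents):

%ifarch aarch64 %{ix86} x86_64
%global with_powervr 1
%endif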
This new release contains, among other things, initial support in
powervr for the upstream imagination/powervr DRM driver and kmsro
handling for a bunch of display drivers.
In mesa 23.3.x zink broke on nvidia, crashing in eglCreateContext.
In the same release, zink was added as a fallback between the
hardware drivers and swrast.
Any application that previously fell back to swrast now crashes
instead when using the nvidia vulkan driver.
How exactly do you reach zink (or previously swrast) when using
nvidia, you may ask?
One common path may be EGL applications using EGL_EXT_platform_xcb.
The nvidia driver does not support it, so GLVND tries the next vendor
library, which is mesa; mesa does not find any suitable hardware
driver and thus falls back to zink or swrast.
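As an illustration of that path, here is a minimal EGL client (error
handling omitted) requesting an XCB platform display; only mesa's EGL
implementation handles EGL_PLATFORM_XCB_EXT, so GLVND ends up
dispatching it to mesa:

#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <xcb/xcb.h>

int main(void)
{
    xcb_connection_t *conn = xcb_connect(NULL, NULL);

    /* The nvidia EGL driver rejects the XCB platform, so GLVND hands
     * this to mesa; mesa finds no hardware driver for the nvidia GPU
     * and falls back to zink (previously swrast). */
    EGLDisplay dpy = eglGetPlatformDisplay(EGL_PLATFORM_XCB_EXT,
                                           conn, NULL);
    eglInitialize(dpy, NULL, NULL);

    /* ... eglCreateContext() is where the zink crash shows up ... */

    eglTerminate(dpy);
    xcb_disconnect(conn);
    return 0;
}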
Until zink is stable again on nvidia, we should disable the zink
fallback to prevent applications crashing instead of falling back
to swrast.
There should be no need to also disable the GLX fallback to zink, as
I'm not aware of a call path that would lead to using mesa when the
nvidia drivers are installed.
RHBZ 2255599
RHBZ 2255768
MESA 10340
MESA 10341
An update to the linker means it now refuses to create binaries with
a loadable memory segment that has read, write and execute permissions
set.
mesa creates one unless "glx-read-only-text" is enabled.
Revert commit e2acc882a1 ("Disable rwx segment linker error") and set
"glx-read-only-text" instead.
See Nick's comment for more information about the revert:
https://bugzilla.redhat.com/show_bug.cgi?id=2250927#c10
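In build terms, setting "glx-read-only-text" boils down to passing
mesa's existing meson option, roughly (the exact spec wiring is an
assumption):

%meson ... -Dglx-read-only-text=true ...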
Fix: https://bugzilla.redhat.com/show_bug.cgi?id=2250927
An update to the linker means it now refuses to create binaries with
a loadable memory segment that has read, write and execute permissions
set.
mesa creates one unless "glx-read-only-text" is enabled; however, the
documentation for "glx-read-only-text" reads:
"Disable writable .text section on x86 (decreases performance)"
In order to avoid possible performance regressions, disable the linker
error.
Fix: https://bugzilla.redhat.com/show_bug.cgi?id=2250927
Remove the patch added by commit d0377e3d3b ("Backport MR #24045 to
fix Iris crashes (#2238711)"), as the issue was fixed by upstream mesa
commit 9590bce3e249 ("radeonsi: prefix function with si_ to prevent
name collision"), which is included in 23.3.0-rc1:
$ git tag --contains=9590bce3e249
mesa-23.3.0-rc1
In order to enable hardware acceleration when running x86
applications on AArch64, the drivers that are typically only enabled
on AArch64 need to be built for the x86 architectures too.
This allows us to set up x86 containers on AArch64 hosts that can
interface with the hardware properly.
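As a usage sketch (image name and tooling are illustrative, not part
of this change), an x86_64 container on an AArch64 host with the
render nodes passed through could be started along the lines of:

$ podman run --rm -it --arch x86_64 --device /dev/dri \
      registry.fedoraproject.org/fedora:39 /bin/bash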