glibc/glibc-upstream-2.34-295.patch
Arjun Shankar 668eaab0c7 Import glibc-2.34-40.fc35 from f35
* Fri Jul 22 2022 Arjun Shankar <arjun@redhat.com> - 2.34-40
- Sync with upstream branch release/2.34/master,
  commit b2f32e746492615a6eb3e66fac1e766e32e8deb1:
- malloc: Simplify implementation of __malloc_assert
- Update syscall-names.list for Linux 5.18
- x86: Add missing IS_IN (libc) check to strncmp-sse4_2.S
- x86: Move mem{p}{mov|cpy}_{chk_}erms to its own file
- x86: Move and slightly improve memset_erms
- x86: Add definition for __wmemset_chk AVX2 RTM in ifunc impl list
- x86: Put wcs{n}len-sse4.1 in the sse4.1 text section
- x86: Align entry for memrchr to 64-bytes.
- x86: Add BMI1/BMI2 checks for ISA_V3 check
- x86: Cleanup bounds checking in large memcpy case
- x86: Add bounds `x86_non_temporal_threshold`
- x86: Add sse42 implementation to strcmp's ifunc
- x86: Fix misordered logic for setting `rep_movsb_stop_threshold`
- x86: Align varshift table to 32-bytes
- x86: ZERO_UPPER_VEC_REGISTERS_RETURN_XTEST expect no transactions
- x86: Shrink code size of memchr-evex.S
- x86: Shrink code size of memchr-avx2.S
- x86: Optimize memrchr-avx2.S
- x86: Optimize memrchr-evex.S
- x86: Optimize memrchr-sse2.S
- x86: Add COND_VZEROUPPER that can replace vzeroupper if no `ret`
- x86: Create header for VEC classes in x86 strings library
- x86_64: Add strstr function with 512-bit EVEX
- x86-64: Ignore r_addend for R_X86_64_GLOB_DAT/R_X86_64_JUMP_SLOT
- x86_64: Implement evex512 version of strlen, strnlen, wcslen and wcsnlen
- x86_64: Remove bzero optimization
- x86_64: Remove end of line trailing spaces
- nptl: Fix ___pthread_unregister_cancel_restore asynchronous restore
- linux: Fix mq_timereceive check for 32 bit fallback code (BZ 29304)

Resolves: #2109505
2022-07-22 21:11:19 +02:00


commit d201c59177b98946d7f80145e7b4d02991d04805
Author: Noah Goldstein <goldstein.w.n@gmail.com>
Date: Fri Jun 24 09:42:12 2022 -0700
x86: Align entry for memrchr to 64-bytes.
The function was tuned around 64-byte entry alignment and performs
better for all sizes with it.
As well, different code paths were explicitly written to touch the
minimum number of cache lines, i.e. sizes <= 32 touch only the entry
cache line.
(cherry picked from commit 227afaa67213efcdce6a870ef5086200f1076438)
diff --git a/sysdeps/x86_64/multiarch/memrchr-avx2.S b/sysdeps/x86_64/multiarch/memrchr-avx2.S
index 5f8e0be18cfe4fad..edd8180ba1ede9a5 100644
--- a/sysdeps/x86_64/multiarch/memrchr-avx2.S
+++ b/sysdeps/x86_64/multiarch/memrchr-avx2.S
@@ -35,7 +35,7 @@
 # define VEC_SIZE 32
 # define PAGE_SIZE 4096
 	.section SECTION(.text), "ax", @progbits
-ENTRY(MEMRCHR)
+ENTRY_P2ALIGN(MEMRCHR, 6)
 # ifdef __ILP32__
 	/* Clear upper bits. */
 	and	%RDX_LP, %RDX_LP
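
For context, a simplified sketch of what the two entry macros amount to in
glibc's x86 sysdep.h. The real definitions also emit symbol size and CFI
bookkeeping; the expansion below is an approximation for illustration, not
the exact glibc source:

/* Approximate sketch of glibc's x86 entry macros (details elided).
   ENTRY_P2ALIGN takes an explicit power-of-two alignment for the
   function's first instruction.  */
#define ENTRY_P2ALIGN(name, alignment)	\
	.globl	name;			\
	.type	name, @function;	\
	.p2align alignment;		\
name:	cfi_startproc

/* Plain ENTRY keeps the usual 16-byte (2^4) entry alignment.  */
#define ENTRY(name) ENTRY_P2ALIGN (name, 4)

/* Hence ENTRY_P2ALIGN(MEMRCHR, 6) places the first instruction of
   memrchr on a 64-byte (2^6) boundary, i.e. at the start of a cache
   line, so the short-length paths (sizes <= 32) stay within the entry
   cache line as the commit message describes.  */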