forked from rpms/kernel
commit 4b887b496d
* Wed Dec 22 2021 Herton R. Krzesinski <herton@redhat.com> [5.14.0-37.el9]
- sched,x86: Don't use cluster topology for x86 hybrid CPUs (Phil Auld) [2020279]
- sched/uclamp: Fix rq->uclamp_max not set on first enqueue (Phil Auld) [2020279]
- preempt/dynamic: Fix setup_preempt_mode() return value (Phil Auld) [2020279]
- sched/cputime: Fix getrusage(RUSAGE_THREAD) with nohz_full (Phil Auld) [2020279 2029640]
- sched/scs: Reset task stack state in bringup_cpu() (Phil Auld) [2020279]
- Enable CONFIG_SCHED_CLUSTER for RHEL (Phil Auld) [2020279]
- arch_topology: Fix missing clear cluster_cpumask in remove_cpu_topology() (Phil Auld) [2020279]
- mm: move node_reclaim_distance to fix NUMA without SMP (Phil Auld) [2020279]
- sched/core: Mitigate race cpus_share_cache()/update_top_cache_domain() (Phil Auld) [2020279]
- sched/fair: Prevent dead task groups from regaining cfs_rq's (Phil Auld) [2020279]
- x86/smp: Factor out parts of native_smp_prepare_cpus() (Phil Auld) [2020279]
- sched,x86: Fix L2 cache mask (Phil Auld) [2020279]
- sched/fair: Cleanup newidle_balance (Phil Auld) [2020279]
- sched/fair: Remove sysctl_sched_migration_cost condition (Phil Auld) [2020279]
- sched/fair: Wait before decaying max_newidle_lb_cost (Phil Auld) [2020279]
- sched/fair: Skip update_blocked_averages if we are defering load balance (Phil Auld) [2020279]
- sched/fair: Account update_blocked_averages in newidle_balance cost (Phil Auld) [2020279]
- sched/core: Remove rq_relock() (Phil Auld) [2020279]
- sched: Improve wake_up_all_idle_cpus() take #2 (Phil Auld) [2020279]
- sched: Disable -Wunused-but-set-variable (Phil Auld) [2020279]
- irq_work: Handle some irq_work in a per-CPU thread on PREEMPT_RT (Phil Auld) [2020279]
- irq_work: Also rcuwait for !IRQ_WORK_HARD_IRQ on PREEMPT_RT (Phil Auld) [2020279]
- irq_work: Allow irq_work_sync() to sleep if irq_work() no IRQ support. (Phil Auld) [2020279]
- sched/rt: Annotate the RT balancing logic irqwork as IRQ_WORK_HARD_IRQ (Phil Auld) [2020279]
- sched: Fix DEBUG && !SCHEDSTATS warn (Phil Auld) [2020279]
- sched/numa: Fix a few comments (Phil Auld) [2020279]
- sched/numa: Remove the redundant member numa_group::fault_cpus (Phil Auld) [2020279]
- sched/numa: Replace hard-coded number by a define in numa_task_group() (Phil Auld) [2020279]
- sched: Remove pointless preemption disable in sched_submit_work() (Phil Auld) [2020279]
- sched: Move mmdrop to RCU on RT (Phil Auld) [2020279]
- sched: Move kprobes cleanup out of finish_task_switch() (Phil Auld) [2020279]
- sched: Disable TTWU_QUEUE on RT (Phil Auld) [2020279]
- sched: Limit the number of task migrations per batch on RT (Phil Auld) [2020279]
- sched/fair: Removed useless update of p->recent_used_cpu (Phil Auld) [2020279]
- sched: Add cluster scheduler level for x86 (Phil Auld) [1921343 2020279]
- x86/cpu: Add get_llc_id() helper function (Phil Auld) [2020279]
- x86/smp: Add a per-cpu view of SMT state (Phil Auld) [2020279]
- sched: Add cluster scheduler level in core and related Kconfig for ARM64 (Phil Auld) [2020279]
- topology: Represent clusters of CPUs within a die (Phil Auld) [2020279]
- topology: use bin_attribute to break the size limitation of cpumap ABI (Phil Auld) [2020279]
- cpumask: Omit terminating null byte in cpumap_print_{list,bitmask}_to_buf (Phil Auld) [2020279]
- cpumask: introduce cpumap_print_list/bitmask_to_buf to support large bitmask and list (Phil Auld) [2020279]
- sched: Make cookie functions static (Phil Auld) [2020279]
- sched,livepatch: Use wake_up_if_idle() (Phil Auld) [2020279]
- sched: Simplify wake_up_*idle*() (Phil Auld) [2020279]
- sched,livepatch: Use task_call_func() (Phil Auld) [2020279]
- sched,rcu: Rework try_invoke_on_locked_down_task() (Phil Auld) [2020279]
- sched: Improve try_invoke_on_locked_down_task() (Phil Auld) [2020279]
- kernel/sched: Fix sched_fork() access an invalid sched_task_group (Phil Auld) [2020279]
- sched/topology: Remove unused numa_distance in cpu_attach_domain() (Phil Auld) [2020279]
- sched: Remove unused inline function __rq_clock_broken() (Phil Auld) [2020279]
- sched/fair: Consider SMT in ASYM_PACKING load balance (Phil Auld) [2020279]
- sched/fair: Carve out logic to mark a group for asymmetric packing (Phil Auld) [2020279]
- sched/fair: Provide update_sg_lb_stats() with sched domain statistics (Phil Auld) [2020279]
- sched/fair: Optimize checking for group_asym_packing (Phil Auld) [2020279]
- sched/topology: Introduce sched_group::flags (Phil Auld) [2020279]
- sched/dl: Support schedstats for deadline sched class (Phil Auld) [2020279]
- sched/dl: Support sched_stat_runtime tracepoint for deadline sched class (Phil Auld) [2020279]
- sched/rt: Support schedstats for RT sched class (Phil Auld) [2020279]
- sched/rt: Support sched_stat_runtime tracepoint for RT sched class (Phil Auld) [2020279]
- sched: Introduce task block time in schedstats (Phil Auld) [2020279]
- sched: Make schedstats helpers independent of fair sched class (Phil Auld) [2020279]
- sched: Make struct sched_statistics independent of fair sched class (Phil Auld) [2020279]
- sched/fair: Use __schedstat_set() in set_next_entity() (Phil Auld) [2020279]
- kselftests/sched: cleanup the child processes (Phil Auld) [2020279]
- sched/fair: Add document for burstable CFS bandwidth (Phil Auld) [2020279]
- sched/fair: Add cfs bandwidth burst statistics (Phil Auld) [2020279]
- fs/proc/uptime.c: Fix idle time reporting in /proc/uptime (Phil Auld) [2020279]
- sched: Switch wait_task_inactive to HRTIMER_MODE_REL_HARD (Phil Auld) [2020279]
- sched/core: Simplify core-wide task selection (Phil Auld) [2020279]
- sched/fair: Trigger nohz.next_balance updates when a CPU goes NOHZ-idle (Phil Auld) [2020279]
- sched/fair: Add NOHZ balancer flag for nohz.next_balance updates (Phil Auld) [2020279]
- sched: adjust sleeper credit for SCHED_IDLE entities (Phil Auld) [2020279]
- sched: reduce sched slice for SCHED_IDLE entities (Phil Auld) [2020279]
- sched: Account number of SCHED_IDLE entities on each cfs_rq (Phil Auld) [2020279]
- wait: use LIST_HEAD_INIT() to initialize wait_queue_head (Phil Auld) [2020279]
- kthread: Move prio/affinite change into the newly created thread (Phil Auld) [2020279]

Resolves: rhbz#1921343, rhbz#2020279, rhbz#2029640

Signed-off-by: Herton R. Krzesinski <herton@redhat.com>
SHA512 (linux-5.14.0-37.el9.tar.xz) = 11c4d4bdf6933357ff63fafc7161d719aa9eed736e0b9d19e8e1c484e014b042478699275ecd977ac14aeb930b4700ec61e802ec12530160d41a06a0bd9d1691
SHA512 (kernel-abi-stablelists-5.14.0-37.tar.bz2) = e883e3fd4ae92d367f1bd833a58589fcb490585ca0594733953a0fe5bcd3a1bbcf85baa0f0f563614327e24118e37f95e438f1da73fef865302c5bbf77c125a5
SHA512 (kernel-kabi-dw-5.14.0-37.tar.bz2) = e8772be058ab3289436c4d17dcf8437c882d41794fa715699497a241414e443736b2b69fb93e76762cf4759f076da86a97c1ff871008970b35d8704d0bbb1148
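The SHA512 lines above are the dist-git checksum manifest for the source tarballs fetched from the lookaside cache. As a minimal sketch of how one of those archives can be checked after download (assuming the tarball sits in the current directory, with the expected digest copied from the linux-5.14.0-37.el9.tar.xz entry above; only Python's standard hashlib module is used):

    import hashlib

    def sha512_of(path, chunk_size=1 << 20):
        # Stream the file through SHA-512 so a large tarball is never held in memory.
        digest = hashlib.sha512()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Expected digest copied verbatim from the sources entry above.
    expected = "11c4d4bdf6933357ff63fafc7161d719aa9eed736e0b9d19e8e1c484e014b042478699275ecd977ac14aeb930b4700ec61e802ec12530160d41a06a0bd9d1691"
    actual = sha512_of("linux-5.14.0-37.el9.tar.xz")
    print("OK" if actual == expected else "MISMATCH")

The same check applies to the kernel-abi-stablelists and kernel-kabi-dw archives; dist-git tooling normally performs an equivalent verification when it downloads the tarballs.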