Import of kernel-5.14.0-611.30.1.el9_7

parent 39c4a93a83
commit 56e2fefa0b
@@ -179,7 +179,23 @@ Phase offset measurement and adjustment
Device may provide ability to measure a phase difference between signals
on a pin and its parent dpll device. If pin-dpll phase offset measurement
is supported, it shall be provided with ``DPLL_A_PIN_PHASE_OFFSET``
attribute for each parent dpll device.
attribute for each parent dpll device. The reported phase offset may be
computed as the average of prior values and the current measurement, using
the following formula:

.. math::

    curr\_avg = prev\_avg * \frac{2^N-1}{2^N} + new\_val * \frac{1}{2^N}

where `curr_avg` is the current reported phase offset, `prev_avg` is the
previously reported value, `new_val` is the current measurement, and `N` is
the averaging factor. The configured averaging factor value is provided with
``DPLL_A_PHASE_OFFSET_AVG_FACTOR`` attribute of a device and a value change can
be requested with the same attribute with ``DPLL_CMD_DEVICE_SET`` command.

================================== ======================================
``DPLL_A_PHASE_OFFSET_AVG_FACTOR`` attr configured value of phase offset
                                   averaging factor
================================== ======================================

Device may also provide ability to adjust a signal phase on a pin.
If pin phase adjustment is supported, minimal and maximal values and
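As a quick illustration of the averaging formula above (an editor's sketch, not part of the patch), the exponentially weighted average with factor ``N`` weights the new measurement by ``1/2^N``:

```python
def phase_offset_avg(prev_avg: float, new_val: float, n: int) -> float:
    """curr_avg = prev_avg * (2^n - 1)/2^n + new_val * 1/2^n"""
    return prev_avg * (2**n - 1) / 2**n + new_val / 2**n

# With N = 2 each new measurement contributes 1/4; a step change in the
# measured offset is therefore smoothed over several reporting intervals.
avg = 0.0
for _ in range(10):
    avg = phase_offset_avg(avg, 100.0, 2)
```

A larger ``N`` means heavier smoothing: the reported value reacts more slowly to measurement noise as well as to genuine phase changes.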
@@ -255,6 +271,31 @@ the pin.
``DPLL_A_PIN_ESYNC_PULSE``                pulse type of Embedded SYNC
========================================= =================================

Reference SYNC
==============

The device may support the Reference SYNC feature, which allows the combination
of two inputs into an input pair. In this configuration, clock signals
from both inputs are used to synchronize the DPLL device. The higher frequency
signal is utilized for the loop bandwidth of the DPLL, while the lower frequency
signal is used to syntonize the output signal of the DPLL device. This feature
enables the provision of a high-quality loop bandwidth signal from an external
source.

A capable input provides a list of inputs that it can be bound with to create
a Reference SYNC pair. To control this feature, the user must request a desired
state for a target pin: use ``DPLL_PIN_STATE_CONNECTED`` to enable or
``DPLL_PIN_STATE_DISCONNECTED`` to disable the feature. An input pin can be
bound to only one other pin at any given time.

============================== ==========================================
``DPLL_A_PIN_REFERENCE_SYNC``  nested attribute for providing info or
                               requesting configuration of the Reference
                               SYNC feature
``DPLL_A_PIN_ID``              target pin id for Reference SYNC feature
``DPLL_A_PIN_STATE``           state of Reference SYNC connection
============================== ==========================================

Configuration commands group
============================

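The binding rule described above (an input pin bound to at most one other pin at a time, toggled via the connected/disconnected states) can be modeled in a few lines. This is an editor's illustrative sketch; the class and method names are invented, not from the patch:

```python
class RefSyncPin:
    """Toy model of Reference SYNC pairing: a pin binds to at most one peer."""

    def __init__(self, name: str):
        self.name = name
        self.bound_to = None  # at most one peer at any given time

    def connect(self, peer: "RefSyncPin") -> None:
        # DPLL_PIN_STATE_CONNECTED: refuse if either side is already bound
        if self.bound_to is not None or peer.bound_to is not None:
            raise ValueError("pin already bound")
        self.bound_to, peer.bound_to = peer, self

    def disconnect(self) -> None:
        # DPLL_PIN_STATE_DISCONNECTED: unbind both sides of the pair
        if self.bound_to is not None:
            self.bound_to.bound_to = None
            self.bound_to = None
```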
@@ -315,6 +315,10 @@ attribute-sets:
          If enabled, dpll device shall monitor and notify all currently
          available inputs for changes of their phase offset against the
          dpll device.
      -
        name: phase-offset-avg-factor
        type: u32
        doc: Averaging factor applied to calculation of reported phase offset.
  -
    name: pin
    enum-name: dpll_a_pin
@@ -428,6 +432,14 @@ attribute-sets:
        doc: |
          A ratio of high to low state of a SYNC signal pulse embedded
          into base clock frequency. Value is in percents.
      -
        name: reference-sync
        type: nest
        multi-attr: true
        nested-attributes: reference-sync
        doc: |
          Capable pin provides list of pins that can be bound to create a
          reference-sync pin pair.
      -
        name: phase-adjust-gran
        type: u32
@@ -465,6 +477,14 @@ attribute-sets:
        name: frequency-min
      -
        name: frequency-max
  -
    name: reference-sync
    subset-of: pin
    attributes:
      -
        name: id
      -
        name: state

operations:
  enum-name: dpll_cmd
@@ -513,6 +533,7 @@ operations:
            - clock-id
            - type
            - phase-offset-monitor
            - phase-offset-avg-factor

      dump:
        reply: *dev-attrs
@@ -530,6 +551,7 @@ operations:
          attributes:
            - id
            - phase-offset-monitor
            - phase-offset-avg-factor
    -
      name: device-create-ntf
      doc: Notification about device appearing
@@ -589,6 +611,8 @@ operations:
        reply: &pin-attrs
          attributes:
            - id
            - module-name
            - clock-id
            - board-label
            - panel-label
            - package-label
@@ -606,6 +630,7 @@ operations:
            - esync-frequency
            - esync-frequency-supported
            - esync-pulse
            - reference-sync

      dump:
        request:
@@ -633,6 +658,7 @@ operations:
            - parent-pin
            - phase-adjust
            - esync-frequency
            - reference-sync
    -
      name: pin-create-ntf
      doc: Notification about pin appearing

@@ -550,6 +550,12 @@ lacp_rate

	The default is slow.

broadcast_neighbor

	Option specifying whether to broadcast ARP/ND packets to all
	active slaves. This option has no effect in modes other than
	802.3ad mode. The default is off (0).

max_bonds

	Specifies the number of bonding devices to create for this

@@ -12,7 +12,7 @@ RHEL_MINOR = 7
#
# Use this spot to avoid future merge conflicts.
# Do not trim this comment.
RHEL_RELEASE = 611.27.1
RHEL_RELEASE = 611.30.1

#
# ZSTREAM

@@ -134,7 +134,6 @@ config S390
	select ARCH_WANT_IPC_PARSE_VERSION
	select ARCH_WANT_KERNEL_PMD_MKWRITE
	select ARCH_WANT_LD_ORPHAN_WARN
	select ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP
	select BUILDTIME_TABLE_SORT
	select CLONE_BACKWARDS2
	select DCACHE_WORD_ACCESS if !KMSAN

@@ -15,6 +15,17 @@
#include <linux/mman.h>
#include <linux/sched/mm.h>
#include <linux/security.h>
#include <linux/jump_label.h>

/*
 * RHEL-only: Since the 'hugetlb_optimize_vmemmap_key' static key is part
 * of the kABI, we need stub definitions to avoid breaking the build
 * when CONFIG_ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP=n.
 */
#ifndef CONFIG_ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP
DEFINE_STATIC_KEY_FALSE(hugetlb_optimize_vmemmap_key);
EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
#endif

/*
 * If the bit selected by single-bit bitmask "a" is set within "x", move

@@ -695,7 +695,6 @@ CONFIG_SPARSEMEM=y
CONFIG_SPARSEMEM_EXTREME=y
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_SPARSEMEM_VMEMMAP=y
CONFIG_ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP=y
CONFIG_HAVE_MEMBLOCK_PHYS_MAP=y
CONFIG_HAVE_FAST_GUP=y
CONFIG_NUMA_KEEP_MEMINFO=y
@@ -3124,8 +3123,6 @@ CONFIG_TMPFS_QUOTA=y
CONFIG_ARCH_SUPPORTS_HUGETLBFS=y
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP=y
# CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON is not set
CONFIG_ARCH_HAS_GIGANTIC_PAGE=y
CONFIG_CONFIGFS_FS=y
# end of Pseudo filesystems

@@ -573,7 +573,6 @@ CONFIG_SPARSEMEM=y
CONFIG_SPARSEMEM_EXTREME=y
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_SPARSEMEM_VMEMMAP=y
CONFIG_ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP=y
CONFIG_HAVE_MEMBLOCK_PHYS_MAP=y
CONFIG_HAVE_FAST_GUP=y
CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y

@@ -717,7 +717,6 @@ CONFIG_SPARSEMEM=y
CONFIG_SPARSEMEM_EXTREME=y
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_SPARSEMEM_VMEMMAP=y
CONFIG_ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP=y
CONFIG_HAVE_MEMBLOCK_PHYS_MAP=y
CONFIG_HAVE_FAST_GUP=y
CONFIG_NUMA_KEEP_MEMINFO=y
@@ -3148,8 +3147,6 @@ CONFIG_TMPFS_QUOTA=y
CONFIG_ARCH_SUPPORTS_HUGETLBFS=y
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP=y
# CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON is not set
CONFIG_ARCH_HAS_GIGANTIC_PAGE=y
CONFIG_CONFIGFS_FS=y
# end of Pseudo filesystems

@@ -506,6 +506,7 @@ dpll_pin_alloc(u64 clock_id, u32 pin_idx, struct module *module,
	refcount_set(&pin->refcount, 1);
	xa_init_flags(&pin->dpll_refs, XA_FLAGS_ALLOC);
	xa_init_flags(&pin->parent_refs, XA_FLAGS_ALLOC);
	xa_init_flags(&pin->ref_sync_pins, XA_FLAGS_ALLOC);
	ret = xa_alloc_cyclic(&dpll_pin_xa, &pin->id, pin, xa_limit_32b,
			      &dpll_pin_xa_id, GFP_KERNEL);
	if (ret < 0)
@@ -514,6 +515,7 @@ dpll_pin_alloc(u64 clock_id, u32 pin_idx, struct module *module,
err_xa_alloc:
	xa_destroy(&pin->dpll_refs);
	xa_destroy(&pin->parent_refs);
	xa_destroy(&pin->ref_sync_pins);
	dpll_pin_prop_free(&pin->prop);
err_pin_prop:
	kfree(pin);
@@ -595,6 +597,7 @@ void dpll_pin_put(struct dpll_pin *pin)
		xa_erase(&dpll_pin_xa, pin->id);
		xa_destroy(&pin->dpll_refs);
		xa_destroy(&pin->parent_refs);
		xa_destroy(&pin->ref_sync_pins);
		dpll_pin_prop_free(&pin->prop);
		kfree_rcu(pin, rcu);
	}
@@ -659,11 +662,26 @@ dpll_pin_register(struct dpll_device *dpll, struct dpll_pin *pin,
}
EXPORT_SYMBOL_GPL(dpll_pin_register);

static void dpll_pin_ref_sync_pair_del(u32 ref_sync_pin_id)
{
	struct dpll_pin *pin, *ref_sync_pin;
	unsigned long i;

	xa_for_each(&dpll_pin_xa, i, pin) {
		ref_sync_pin = xa_load(&pin->ref_sync_pins, ref_sync_pin_id);
		if (ref_sync_pin) {
			xa_erase(&pin->ref_sync_pins, ref_sync_pin_id);
			__dpll_pin_change_ntf(pin);
		}
	}
}

static void
__dpll_pin_unregister(struct dpll_device *dpll, struct dpll_pin *pin,
		      const struct dpll_pin_ops *ops, void *priv, void *cookie)
{
	ASSERT_DPLL_PIN_REGISTERED(pin);
	dpll_pin_ref_sync_pair_del(pin->id);
	dpll_xa_ref_pin_del(&dpll->pin_refs, pin, ops, priv, cookie);
	dpll_xa_ref_dpll_del(&pin->dpll_refs, dpll, ops, priv, cookie);
	if (xa_empty(&pin->dpll_refs))
@@ -783,6 +801,33 @@ void dpll_pin_on_pin_unregister(struct dpll_pin *parent, struct dpll_pin *pin,
}
EXPORT_SYMBOL_GPL(dpll_pin_on_pin_unregister);

/**
 * dpll_pin_ref_sync_pair_add - create a reference sync signal pin pair
 * @pin: pin which produces the base frequency
 * @ref_sync_pin: pin which produces the sync signal
 *
 * Once pins are paired, the user-space configuration of reference sync pair
 * is possible.
 * Context: Acquires a lock (dpll_lock)
 * Return:
 * * 0 on success
 * * negative - error value
 */
int dpll_pin_ref_sync_pair_add(struct dpll_pin *pin,
			       struct dpll_pin *ref_sync_pin)
{
	int ret;

	mutex_lock(&dpll_lock);
	ret = xa_insert(&pin->ref_sync_pins, ref_sync_pin->id,
			ref_sync_pin, GFP_KERNEL);
	__dpll_pin_change_ntf(pin);
	mutex_unlock(&dpll_lock);

	return ret;
}
EXPORT_SYMBOL_GPL(dpll_pin_ref_sync_pair_add);

static struct dpll_device_registration *
dpll_device_registration_first(struct dpll_device *dpll)
{

@@ -49,8 +49,8 @@ struct dpll_device {
 * @module: module of creator
 * @dpll_refs: hold references to dplls pin was registered with
 * @parent_refs: hold references to parent pins pin was registered with
 * @ref_sync_pins: hold references to pins for Reference SYNC feature
 * @prop: pin properties copied from the registerer
 * @rclk_dev_name: holds name of device when pin can recover clock from it
 * @refcount: refcount
 * @rcu: rcu_head for kfree_rcu()
 *
@@ -69,6 +69,7 @@ struct dpll_pin {
	struct dpll_pin_properties prop;
	refcount_t refcount;
	struct rcu_head rcu;
	RH_KABI_EXTEND(struct xarray ref_sync_pins)
};

/**
@@ -48,6 +48,24 @@ dpll_msg_add_dev_parent_handle(struct sk_buff *msg, u32 id)
	return 0;
}

static bool dpll_pin_available(struct dpll_pin *pin)
{
	struct dpll_pin_ref *par_ref;
	unsigned long i;

	if (!xa_get_mark(&dpll_pin_xa, pin->id, DPLL_REGISTERED))
		return false;
	xa_for_each(&pin->parent_refs, i, par_ref)
		if (xa_get_mark(&dpll_pin_xa, par_ref->pin->id,
				DPLL_REGISTERED))
			return true;
	xa_for_each(&pin->dpll_refs, i, par_ref)
		if (xa_get_mark(&dpll_device_xa, par_ref->dpll->id,
				DPLL_REGISTERED))
			return true;
	return false;
}

/**
 * dpll_msg_add_pin_handle - attach pin handle attribute to a given message
 * @msg: pointer to sk_buff message to attach a pin handle
@@ -146,6 +164,27 @@ dpll_msg_add_phase_offset_monitor(struct sk_buff *msg, struct dpll_device *dpll,
	return 0;
}

static int
dpll_msg_add_phase_offset_avg_factor(struct sk_buff *msg,
				     struct dpll_device *dpll,
				     struct netlink_ext_ack *extack)
{
	const struct dpll_device_ops *ops = dpll_device_ops(dpll);
	u32 factor;
	int ret;

	if (ops->phase_offset_avg_factor_get) {
		ret = ops->phase_offset_avg_factor_get(dpll, dpll_priv(dpll),
						       &factor, extack);
		if (ret)
			return ret;
		if (nla_put_u32(msg, DPLL_A_PHASE_OFFSET_AVG_FACTOR, factor))
			return -EMSGSIZE;
	}

	return 0;
}

static int
dpll_msg_add_lock_status(struct sk_buff *msg, struct dpll_device *dpll,
			 struct netlink_ext_ack *extack)
@@ -193,8 +232,8 @@ static int
dpll_msg_add_clock_quality_level(struct sk_buff *msg, struct dpll_device *dpll,
				 struct netlink_ext_ack *extack)
{
	DECLARE_BITMAP(qls, DPLL_CLOCK_QUALITY_LEVEL_MAX + 1) = { 0 };
	const struct dpll_device_ops *ops = dpll_device_ops(dpll);
	DECLARE_BITMAP(qls, DPLL_CLOCK_QUALITY_LEVEL_MAX) = { 0 };
	enum dpll_clock_quality_level ql;
	int ret;

@@ -203,7 +242,7 @@ dpll_msg_add_clock_quality_level(struct sk_buff *msg, struct dpll_device *dpll,
	ret = ops->clock_quality_level_get(dpll, dpll_priv(dpll), qls, extack);
	if (ret)
		return ret;
	for_each_set_bit(ql, qls, DPLL_CLOCK_QUALITY_LEVEL_MAX)
	for_each_set_bit(ql, qls, DPLL_CLOCK_QUALITY_LEVEL_MAX + 1)
		if (nla_put_u32(msg, DPLL_A_CLOCK_QUALITY_LEVEL, ql))
			return -EMSGSIZE;

@@ -428,6 +467,47 @@ nest_cancel:
	return -EMSGSIZE;
}

static int
dpll_msg_add_pin_ref_sync(struct sk_buff *msg, struct dpll_pin *pin,
			  struct dpll_pin_ref *ref,
			  struct netlink_ext_ack *extack)
{
	const struct dpll_pin_ops *ops = dpll_pin_ops(ref);
	struct dpll_device *dpll = ref->dpll;
	void *pin_priv, *ref_sync_pin_priv;
	struct dpll_pin *ref_sync_pin;
	enum dpll_pin_state state;
	struct nlattr *nest;
	unsigned long index;
	int ret;

	pin_priv = dpll_pin_on_dpll_priv(dpll, pin);
	xa_for_each(&pin->ref_sync_pins, index, ref_sync_pin) {
		if (!dpll_pin_available(ref_sync_pin))
			continue;
		ref_sync_pin_priv = dpll_pin_on_dpll_priv(dpll, ref_sync_pin);
		if (WARN_ON(!ops->ref_sync_get))
			return -EOPNOTSUPP;
		ret = ops->ref_sync_get(pin, pin_priv, ref_sync_pin,
					ref_sync_pin_priv, &state, extack);
		if (ret)
			return ret;
		nest = nla_nest_start(msg, DPLL_A_PIN_REFERENCE_SYNC);
		if (!nest)
			return -EMSGSIZE;
		if (nla_put_s32(msg, DPLL_A_PIN_ID, ref_sync_pin->id))
			goto nest_cancel;
		if (nla_put_s32(msg, DPLL_A_PIN_STATE, state))
			goto nest_cancel;
		nla_nest_end(msg, nest);
	}
	return 0;

nest_cancel:
	nla_nest_cancel(msg, nest);
	return -EMSGSIZE;
}

static bool dpll_pin_is_freq_supported(struct dpll_pin *pin, u32 freq)
{
	int fs;
@@ -574,6 +654,10 @@ dpll_cmd_pin_get_one(struct sk_buff *msg, struct dpll_pin *pin,
	if (ret)
		return ret;
	ret = dpll_msg_add_pin_esync(msg, pin, ref, extack);
	if (ret)
		return ret;
	if (!xa_empty(&pin->ref_sync_pins))
		ret = dpll_msg_add_pin_ref_sync(msg, pin, ref, extack);
	if (ret)
		return ret;
	if (xa_empty(&pin->parent_refs))
@@ -616,6 +700,9 @@ dpll_device_get_one(struct dpll_device *dpll, struct sk_buff *msg,
	if (nla_put_u32(msg, DPLL_A_TYPE, dpll->type))
		return -EMSGSIZE;
	ret = dpll_msg_add_phase_offset_monitor(msg, dpll, extack);
	if (ret)
		return ret;
	ret = dpll_msg_add_phase_offset_avg_factor(msg, dpll, extack);
	if (ret)
		return ret;

@@ -669,24 +756,6 @@ __dpll_device_change_ntf(struct dpll_device *dpll)
	return dpll_device_event_send(DPLL_CMD_DEVICE_CHANGE_NTF, dpll);
}

static bool dpll_pin_available(struct dpll_pin *pin)
{
	struct dpll_pin_ref *par_ref;
	unsigned long i;

	if (!xa_get_mark(&dpll_pin_xa, pin->id, DPLL_REGISTERED))
		return false;
	xa_for_each(&pin->parent_refs, i, par_ref)
		if (xa_get_mark(&dpll_pin_xa, par_ref->pin->id,
				DPLL_REGISTERED))
			return true;
	xa_for_each(&pin->dpll_refs, i, par_ref)
		if (xa_get_mark(&dpll_device_xa, par_ref->dpll->id,
				DPLL_REGISTERED))
			return true;
	return false;
}

/**
 * dpll_device_change_ntf - notify that the dpll device has been changed
 * @dpll: registered dpll pointer
@@ -749,7 +818,7 @@ int dpll_pin_delete_ntf(struct dpll_pin *pin)
	return dpll_pin_event_send(DPLL_CMD_PIN_DELETE_NTF, pin);
}

static int __dpll_pin_change_ntf(struct dpll_pin *pin)
int __dpll_pin_change_ntf(struct dpll_pin *pin)
{
	return dpll_pin_event_send(DPLL_CMD_PIN_CHANGE_NTF, pin);
}
@@ -798,6 +867,23 @@ dpll_phase_offset_monitor_set(struct dpll_device *dpll, struct nlattr *a,
				      extack);
}

static int
dpll_phase_offset_avg_factor_set(struct dpll_device *dpll, struct nlattr *a,
				 struct netlink_ext_ack *extack)
{
	const struct dpll_device_ops *ops = dpll_device_ops(dpll);
	u32 factor = nla_get_u32(a);

	if (!ops->phase_offset_avg_factor_set) {
		NL_SET_ERR_MSG_ATTR(extack, a,
				    "device not capable of changing phase offset average factor");
		return -EOPNOTSUPP;
	}

	return ops->phase_offset_avg_factor_set(dpll, dpll_priv(dpll), factor,
						extack);
}

static int
dpll_pin_freq_set(struct dpll_pin *pin, struct nlattr *a,
		  struct netlink_ext_ack *extack)
@@ -939,6 +1025,108 @@ rollback:
	return ret;
}

static int
dpll_pin_ref_sync_state_set(struct dpll_pin *pin,
			    unsigned long ref_sync_pin_idx,
			    const enum dpll_pin_state state,
			    struct netlink_ext_ack *extack)
{
	struct dpll_pin_ref *ref, *failed;
	const struct dpll_pin_ops *ops;
	enum dpll_pin_state old_state;
	struct dpll_pin *ref_sync_pin;
	struct dpll_device *dpll;
	unsigned long i;
	int ret;

	ref_sync_pin = xa_find(&pin->ref_sync_pins, &ref_sync_pin_idx,
			       ULONG_MAX, XA_PRESENT);
	if (!ref_sync_pin) {
		NL_SET_ERR_MSG(extack, "reference sync pin not found");
		return -EINVAL;
	}
	if (!dpll_pin_available(ref_sync_pin)) {
		NL_SET_ERR_MSG(extack, "reference sync pin not available");
		return -EINVAL;
	}
	ref = dpll_xa_ref_dpll_first(&pin->dpll_refs);
	ASSERT_NOT_NULL(ref);
	ops = dpll_pin_ops(ref);
	if (!ops->ref_sync_set || !ops->ref_sync_get) {
		NL_SET_ERR_MSG(extack, "reference sync not supported by this pin");
		return -EOPNOTSUPP;
	}
	dpll = ref->dpll;
	ret = ops->ref_sync_get(pin, dpll_pin_on_dpll_priv(dpll, pin),
				ref_sync_pin,
				dpll_pin_on_dpll_priv(dpll, ref_sync_pin),
				&old_state, extack);
	if (ret) {
		NL_SET_ERR_MSG(extack, "unable to get old reference sync state");
		return ret;
	}
	if (state == old_state)
		return 0;
	xa_for_each(&pin->dpll_refs, i, ref) {
		ops = dpll_pin_ops(ref);
		dpll = ref->dpll;
		ret = ops->ref_sync_set(pin, dpll_pin_on_dpll_priv(dpll, pin),
					ref_sync_pin,
					dpll_pin_on_dpll_priv(dpll,
							      ref_sync_pin),
					state, extack);
		if (ret) {
			failed = ref;
			NL_SET_ERR_MSG_FMT(extack, "reference sync set failed for dpll_id:%u",
					   dpll->id);
			goto rollback;
		}
	}
	__dpll_pin_change_ntf(pin);

	return 0;

rollback:
	xa_for_each(&pin->dpll_refs, i, ref) {
		if (ref == failed)
			break;
		ops = dpll_pin_ops(ref);
		dpll = ref->dpll;
		if (ops->ref_sync_set(pin, dpll_pin_on_dpll_priv(dpll, pin),
				      ref_sync_pin,
				      dpll_pin_on_dpll_priv(dpll, ref_sync_pin),
				      old_state, extack))
			NL_SET_ERR_MSG(extack, "set reference sync rollback failed");
	}
	return ret;
}

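The all-or-nothing update in dpll_pin_ref_sync_state_set follows a common pattern: apply the new state to each registration in turn, and on failure restore the old state on everything already updated before propagating the error. An editor's sketch of the pattern (illustrative names, not from the patch):

```python
def set_all_or_rollback(targets, apply_state, new_state, old_state):
    """Apply new_state to every target; on failure, roll the already
    updated targets back to old_state (best effort) and re-raise."""
    done = []
    for t in targets:
        try:
            apply_state(t, new_state)
        except Exception:
            for prev in done:
                apply_state(prev, old_state)  # rollback the earlier updates
            raise
        done.append(t)
```

Like the kernel code, the rollback only walks the targets that were updated before the failing one; the failing target itself is assumed to have been left in its old state by the failed call.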
static int
dpll_pin_ref_sync_set(struct dpll_pin *pin, struct nlattr *nest,
		      struct netlink_ext_ack *extack)
{
	struct nlattr *tb[DPLL_A_PIN_MAX + 1];
	enum dpll_pin_state state;
	u32 sync_pin_id;

	nla_parse_nested(tb, DPLL_A_PIN_MAX, nest,
			 dpll_reference_sync_nl_policy, extack);
	if (!tb[DPLL_A_PIN_ID]) {
		NL_SET_ERR_MSG(extack, "sync pin id expected");
		return -EINVAL;
	}
	sync_pin_id = nla_get_u32(tb[DPLL_A_PIN_ID]);

	if (!tb[DPLL_A_PIN_STATE]) {
		NL_SET_ERR_MSG(extack, "sync pin state expected");
		return -EINVAL;
	}
	state = nla_get_u32(tb[DPLL_A_PIN_STATE]);

	return dpll_pin_ref_sync_state_set(pin, sync_pin_id, state, extack);
}

static int
dpll_pin_on_pin_state_set(struct dpll_pin *pin, u32 parent_idx,
			  enum dpll_pin_state state,
@@ -1251,6 +1439,11 @@ dpll_pin_set_from_nlattr(struct dpll_pin *pin, struct genl_info *info)
			if (ret)
				return ret;
			break;
		case DPLL_A_PIN_REFERENCE_SYNC:
			ret = dpll_pin_ref_sync_set(pin, a, info->extack);
			if (ret)
				return ret;
			break;
		}
	}

@@ -1376,16 +1569,18 @@ int dpll_nl_pin_id_get_doit(struct sk_buff *skb, struct genl_info *info)
		return -EMSGSIZE;
	}
	pin = dpll_pin_find_from_nlattr(info);
	if (!IS_ERR(pin)) {
		if (!dpll_pin_available(pin)) {
			nlmsg_free(msg);
			return -ENODEV;
		}
		ret = dpll_msg_add_pin_handle(msg, pin);
		if (ret) {
			nlmsg_free(msg);
			return ret;
		}
	if (IS_ERR(pin)) {
		nlmsg_free(msg);
		return PTR_ERR(pin);
	}
	if (!dpll_pin_available(pin)) {
		nlmsg_free(msg);
		return -ENODEV;
	}
	ret = dpll_msg_add_pin_handle(msg, pin);
	if (ret) {
		nlmsg_free(msg);
		return ret;
	}
	genlmsg_end(msg, hdr);

@@ -1552,12 +1747,14 @@ int dpll_nl_device_id_get_doit(struct sk_buff *skb, struct genl_info *info)
	}

	dpll = dpll_device_find_from_nlattr(info);
	if (!IS_ERR(dpll)) {
		ret = dpll_msg_add_dev_handle(msg, dpll);
		if (ret) {
			nlmsg_free(msg);
			return ret;
		}
	if (IS_ERR(dpll)) {
		nlmsg_free(msg);
		return PTR_ERR(dpll);
	}
	ret = dpll_msg_add_dev_handle(msg, dpll);
	if (ret) {
		nlmsg_free(msg);
		return ret;
	}
	genlmsg_end(msg, hdr);

@@ -1594,14 +1791,25 @@ int dpll_nl_device_get_doit(struct sk_buff *skb, struct genl_info *info)
static int
dpll_set_from_nlattr(struct dpll_device *dpll, struct genl_info *info)
{
	int ret;
	struct nlattr *a;
	int rem, ret;

	if (info->attrs[DPLL_A_PHASE_OFFSET_MONITOR]) {
		struct nlattr *a = info->attrs[DPLL_A_PHASE_OFFSET_MONITOR];

		ret = dpll_phase_offset_monitor_set(dpll, a, info->extack);
		if (ret)
			return ret;
	nla_for_each_attr(a, genlmsg_data(info->genlhdr),
			  genlmsg_len(info->genlhdr), rem) {
		switch (nla_type(a)) {
		case DPLL_A_PHASE_OFFSET_MONITOR:
			ret = dpll_phase_offset_monitor_set(dpll, a,
							    info->extack);
			if (ret)
				return ret;
			break;
		case DPLL_A_PHASE_OFFSET_AVG_FACTOR:
			ret = dpll_phase_offset_avg_factor_set(dpll, a,
							       info->extack);
			if (ret)
				return ret;
			break;
		}
	}

	return 0;

@@ -11,3 +11,5 @@ int dpll_device_delete_ntf(struct dpll_device *dpll);
int dpll_pin_create_ntf(struct dpll_pin *pin);

int dpll_pin_delete_ntf(struct dpll_pin *pin);

int __dpll_pin_change_ntf(struct dpll_pin *pin);

@@ -24,6 +24,11 @@ const struct nla_policy dpll_pin_parent_pin_nl_policy[DPLL_A_PIN_STATE + 1] = {
	[DPLL_A_PIN_STATE] = NLA_POLICY_RANGE(NLA_U32, 1, 3),
};

const struct nla_policy dpll_reference_sync_nl_policy[DPLL_A_PIN_STATE + 1] = {
	[DPLL_A_PIN_ID] = { .type = NLA_U32, },
	[DPLL_A_PIN_STATE] = NLA_POLICY_RANGE(NLA_U32, 1, 3),
};

/* DPLL_CMD_DEVICE_ID_GET - do */
static const struct nla_policy dpll_device_id_get_nl_policy[DPLL_A_TYPE + 1] = {
	[DPLL_A_MODULE_NAME] = { .type = NLA_NUL_STRING, },
@@ -37,9 +42,10 @@ static const struct nla_policy dpll_device_get_nl_policy[DPLL_A_ID + 1] = {
};

/* DPLL_CMD_DEVICE_SET - do */
static const struct nla_policy dpll_device_set_nl_policy[DPLL_A_PHASE_OFFSET_MONITOR + 1] = {
static const struct nla_policy dpll_device_set_nl_policy[DPLL_A_PHASE_OFFSET_AVG_FACTOR + 1] = {
	[DPLL_A_ID] = { .type = NLA_U32, },
	[DPLL_A_PHASE_OFFSET_MONITOR] = NLA_POLICY_MAX(NLA_U32, 1),
	[DPLL_A_PHASE_OFFSET_AVG_FACTOR] = { .type = NLA_U32, },
};

/* DPLL_CMD_PIN_ID_GET - do */
@@ -63,7 +69,7 @@ static const struct nla_policy dpll_pin_get_dump_nl_policy[DPLL_A_PIN_ID + 1] =
};

/* DPLL_CMD_PIN_SET - do */
static const struct nla_policy dpll_pin_set_nl_policy[DPLL_A_PIN_ESYNC_FREQUENCY + 1] = {
static const struct nla_policy dpll_pin_set_nl_policy[DPLL_A_PIN_REFERENCE_SYNC + 1] = {
	[DPLL_A_PIN_ID] = { .type = NLA_U32, },
	[DPLL_A_PIN_FREQUENCY] = { .type = NLA_U64, },
	[DPLL_A_PIN_DIRECTION] = NLA_POLICY_RANGE(NLA_U32, 1, 2),
@@ -73,6 +79,7 @@ static const struct nla_policy dpll_pin_set_nl_policy[DPLL_A_PIN_ESYNC_FREQUENCY
	[DPLL_A_PIN_PARENT_PIN] = NLA_POLICY_NESTED(dpll_pin_parent_pin_nl_policy),
	[DPLL_A_PIN_PHASE_ADJUST] = { .type = NLA_S32, },
	[DPLL_A_PIN_ESYNC_FREQUENCY] = { .type = NLA_U64, },
	[DPLL_A_PIN_REFERENCE_SYNC] = NLA_POLICY_NESTED(dpll_reference_sync_nl_policy),
};

/* Ops table for dpll */
@@ -106,7 +113,7 @@ static const struct genl_split_ops dpll_nl_ops[] = {
		.doit = dpll_nl_device_set_doit,
		.post_doit = dpll_post_doit,
		.policy = dpll_device_set_nl_policy,
		.maxattr = DPLL_A_PHASE_OFFSET_MONITOR,
		.maxattr = DPLL_A_PHASE_OFFSET_AVG_FACTOR,
		.flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
	},
	{
@@ -140,7 +147,7 @@ static const struct genl_split_ops dpll_nl_ops[] = {
		.doit = dpll_nl_pin_set_doit,
		.post_doit = dpll_pin_post_doit,
		.policy = dpll_pin_set_nl_policy,
		.maxattr = DPLL_A_PIN_ESYNC_FREQUENCY,
		.maxattr = DPLL_A_PIN_REFERENCE_SYNC,
		.flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
	},
};

@@ -14,6 +14,7 @@
/* Common nested types */
extern const struct nla_policy dpll_pin_parent_device_nl_policy[DPLL_A_PIN_PHASE_OFFSET + 1];
extern const struct nla_policy dpll_pin_parent_pin_nl_policy[DPLL_A_PIN_STATE + 1];
extern const struct nla_policy dpll_reference_sync_nl_policy[DPLL_A_PIN_STATE + 1];

int dpll_lock_doit(const struct genl_split_ops *ops, struct sk_buff *skb,
		   struct genl_info *info);

@@ -1,7 +1,8 @@
# SPDX-License-Identifier: GPL-2.0

obj-$(CONFIG_ZL3073X) += zl3073x.o
zl3073x-objs := core.o devlink.o dpll.o flash.o fw.o prop.o
zl3073x-objs := core.o devlink.o dpll.o flash.o fw.o \
		out.o prop.o ref.o synth.o

obj-$(CONFIG_ZL3073X_I2C) += zl3073x_i2c.o
zl3073x_i2c-objs := i2c.o

@@ -129,47 +129,6 @@ const struct regmap_config zl3073x_regmap_config = {
};
EXPORT_SYMBOL_NS_GPL(zl3073x_regmap_config, ZL3073X);

/**
 * zl3073x_ref_freq_factorize - factorize given frequency
 * @freq: input frequency
 * @base: base frequency
 * @mult: multiplier
 *
 * Checks if the given frequency can be factorized using one of the
 * supported base frequencies. If so the base frequency and multiplier
 * are stored into appropriate parameters if they are not NULL.
 *
 * Return: 0 on success, -EINVAL if the frequency cannot be factorized
 */
int
zl3073x_ref_freq_factorize(u32 freq, u16 *base, u16 *mult)
{
	static const u16 base_freqs[] = {
		1, 2, 4, 5, 8, 10, 16, 20, 25, 32, 40, 50, 64, 80, 100, 125,
		128, 160, 200, 250, 256, 320, 400, 500, 625, 640, 800, 1000,
		1250, 1280, 1600, 2000, 2500, 3125, 3200, 4000, 5000, 6250,
		6400, 8000, 10000, 12500, 15625, 16000, 20000, 25000, 31250,
		32000, 40000, 50000, 62500,
	};
	u32 div;
	int i;

	for (i = 0; i < ARRAY_SIZE(base_freqs); i++) {
		div = freq / base_freqs[i];

		if (div <= U16_MAX && (freq % base_freqs[i]) == 0) {
			if (base)
				*base = base_freqs[i];
			if (mult)
				*mult = div;

			return 0;
		}
	}

	return -EINVAL;
}

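The zl3073x_ref_freq_factorize function above (removed here, presumably relocated by this commit's file split) scans the supported base frequencies in ascending order and accepts the first one that divides the input frequency with a quotient fitting in 16 bits. A quick Python model of the same search, as an editor's illustrative sketch:

```python
BASE_FREQS = [1, 2, 4, 5, 8, 10, 16, 20, 25, 32, 40, 50, 64, 80, 100, 125,
              128, 160, 200, 250, 256, 320, 400, 500, 625, 640, 800, 1000,
              1250, 1280, 1600, 2000, 2500, 3125, 3200, 4000, 5000, 6250,
              6400, 8000, 10000, 12500, 15625, 16000, 20000, 25000, 31250,
              32000, 40000, 50000, 62500]

def ref_freq_factorize(freq: int):
    """Return (base, mult) with base * mult == freq and mult <= 0xFFFF,
    using the first (smallest) base that works; None if no factorization."""
    for base in BASE_FREQS:
        mult, rem = divmod(freq, base)
        if rem == 0 and mult <= 0xFFFF:
            return base, mult
    return None
```

For a 10 MHz reference this picks base 160 with multiplier 62500, since every smaller base leaves a quotient larger than 65535.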
|
||||
static bool
|
||||
zl3073x_check_reg(struct zl3073x_dev *zldev, unsigned int reg, size_t size)
|
||||
{
|
||||
@ -593,190 +552,6 @@ int zl3073x_write_hwreg_seq(struct zl3073x_dev *zldev,
|
||||
return rc;
|
||||
}
|
||||
|
||||
/**
|
||||
* zl3073x_ref_state_fetch - get input reference state
|
||||
* @zldev: pointer to zl3073x_dev structure
|
||||
* @index: input reference index to fetch state for
|
||||
*
|
||||
* Function fetches information for the given input reference that are
|
||||
* invariant and stores them for later use.
|
||||
*
|
||||
* Return: 0 on success, <0 on error
|
||||
*/
|
||||
static int
|
||||
zl3073x_ref_state_fetch(struct zl3073x_dev *zldev, u8 index)
|
||||
{
|
||||
struct zl3073x_ref *input = &zldev->ref[index];
|
||||
u8 ref_config;
|
||||
int rc;
|
||||
|
||||
/* If the input is differential then the configuration for N-pin
|
||||
* reference is ignored and P-pin config is used for both.
|
||||
*/
|
||||
if (zl3073x_is_n_pin(index) &&
|
||||
zl3073x_ref_is_diff(zldev, index - 1)) {
|
||||
input->enabled = zl3073x_ref_is_enabled(zldev, index - 1);
|
||||
input->diff = true;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
guard(mutex)(&zldev->multiop_lock);
|
||||
|
||||
/* Read reference configuration */
|
||||
rc = zl3073x_mb_op(zldev, ZL_REG_REF_MB_SEM, ZL_REF_MB_SEM_RD,
|
||||
ZL_REG_REF_MB_MASK, BIT(index));
|
||||
if (rc)
|
||||
return rc;
|
||||
|
||||
/* Read ref_config register */
|
||||
rc = zl3073x_read_u8(zldev, ZL_REG_REF_CONFIG, &ref_config);
|
||||
if (rc)
|
||||
return rc;
|
||||
|
||||
input->enabled = FIELD_GET(ZL_REF_CONFIG_ENABLE, ref_config);
|
||||
input->diff = FIELD_GET(ZL_REF_CONFIG_DIFF_EN, ref_config);
|
||||
|
||||
dev_dbg(zldev->dev, "REF%u is %s and configured as %s\n", index,
|
||||
str_enabled_disabled(input->enabled),
|
||||
input->diff ? "differential" : "single-ended");
|
||||
|
||||
return rc;
|
||||
}
|
||||
|
||||
/**
 * zl3073x_out_state_fetch - get output state
 * @zldev: pointer to zl3073x_dev structure
 * @index: output index to fetch state for
 *
 * Function fetches information for the given output (not output pin)
 * that are invariant and stores them for later use.
 *
 * Return: 0 on success, <0 on error
 */
static int
zl3073x_out_state_fetch(struct zl3073x_dev *zldev, u8 index)
{
        struct zl3073x_out *out = &zldev->out[index];
        u8 output_ctrl, output_mode;
        int rc;

        /* Read output configuration */
        rc = zl3073x_read_u8(zldev, ZL_REG_OUTPUT_CTRL(index), &output_ctrl);
        if (rc)
                return rc;

        /* Store info about output enablement and synthesizer the output
         * is connected to.
         */
        out->enabled = FIELD_GET(ZL_OUTPUT_CTRL_EN, output_ctrl);
        out->synth = FIELD_GET(ZL_OUTPUT_CTRL_SYNTH_SEL, output_ctrl);

        dev_dbg(zldev->dev, "OUT%u is %s and connected to SYNTH%u\n", index,
                str_enabled_disabled(out->enabled), out->synth);

        guard(mutex)(&zldev->multiop_lock);

        /* Read output configuration */
        rc = zl3073x_mb_op(zldev, ZL_REG_OUTPUT_MB_SEM, ZL_OUTPUT_MB_SEM_RD,
                           ZL_REG_OUTPUT_MB_MASK, BIT(index));
        if (rc)
                return rc;

        /* Read output_mode */
        rc = zl3073x_read_u8(zldev, ZL_REG_OUTPUT_MODE, &output_mode);
        if (rc)
                return rc;

        /* Extract and store output signal format */
        out->signal_format = FIELD_GET(ZL_OUTPUT_MODE_SIGNAL_FORMAT,
                                       output_mode);

        dev_dbg(zldev->dev, "OUT%u has signal format 0x%02x\n", index,
                out->signal_format);

        return rc;
}

/**
 * zl3073x_synth_state_fetch - get synth state
 * @zldev: pointer to zl3073x_dev structure
 * @index: synth index to fetch state for
 *
 * Function fetches information for the given synthesizer that are
 * invariant and stores them for later use.
 *
 * Return: 0 on success, <0 on error
 */
static int
zl3073x_synth_state_fetch(struct zl3073x_dev *zldev, u8 index)
{
        struct zl3073x_synth *synth = &zldev->synth[index];
        u16 base, m, n;
        u8 synth_ctrl;
        u32 mult;
        int rc;

        /* Read synth control register */
        rc = zl3073x_read_u8(zldev, ZL_REG_SYNTH_CTRL(index), &synth_ctrl);
        if (rc)
                return rc;

        /* Store info about synth enablement and DPLL channel the synth is
         * driven by.
         */
        synth->enabled = FIELD_GET(ZL_SYNTH_CTRL_EN, synth_ctrl);
        synth->dpll = FIELD_GET(ZL_SYNTH_CTRL_DPLL_SEL, synth_ctrl);

        dev_dbg(zldev->dev, "SYNTH%u is %s and driven by DPLL%u\n", index,
                str_enabled_disabled(synth->enabled), synth->dpll);

        guard(mutex)(&zldev->multiop_lock);

        /* Read synth configuration */
        rc = zl3073x_mb_op(zldev, ZL_REG_SYNTH_MB_SEM, ZL_SYNTH_MB_SEM_RD,
                           ZL_REG_SYNTH_MB_MASK, BIT(index));
        if (rc)
                return rc;

        /* The output frequency is determined by the following formula:
         * base * multiplier * numerator / denominator
         *
         * Read registers with these values
         */
        rc = zl3073x_read_u16(zldev, ZL_REG_SYNTH_FREQ_BASE, &base);
        if (rc)
                return rc;

        rc = zl3073x_read_u32(zldev, ZL_REG_SYNTH_FREQ_MULT, &mult);
        if (rc)
                return rc;

        rc = zl3073x_read_u16(zldev, ZL_REG_SYNTH_FREQ_M, &m);
        if (rc)
                return rc;

        rc = zl3073x_read_u16(zldev, ZL_REG_SYNTH_FREQ_N, &n);
        if (rc)
                return rc;

        /* Check denominator for zero to avoid div by 0 */
        if (!n) {
                dev_err(zldev->dev,
                        "Zero divisor for SYNTH%u retrieved from device\n",
                        index);
                return -EINVAL;
        }

        /* Compute and store synth frequency */
        zldev->synth[index].freq = div_u64(mul_u32_u32(base * m, mult), n);

        dev_dbg(zldev->dev, "SYNTH%u frequency: %u Hz\n", index,
                zldev->synth[index].freq);

        return rc;
}

static int
zl3073x_dev_state_fetch(struct zl3073x_dev *zldev)
{
@@ -816,6 +591,21 @@ zl3073x_dev_state_fetch(struct zl3073x_dev *zldev)
        return rc;
}
static void
zl3073x_dev_ref_status_update(struct zl3073x_dev *zldev)
{
        int i, rc;

        for (i = 0; i < ZL3073X_NUM_REFS; i++) {
                rc = zl3073x_read_u8(zldev, ZL_REG_REF_MON_STATUS(i),
                                     &zldev->ref[i].mon_status);
                if (rc)
                        dev_warn(zldev->dev,
                                 "Failed to get REF%u status: %pe\n", i,
                                 ERR_PTR(rc));
        }
}

/**
 * zl3073x_ref_phase_offsets_update - update reference phase offsets
 * @zldev: pointer to zl3073x_dev structure
@@ -935,6 +725,9 @@ zl3073x_dev_periodic_work(struct kthread_work *work)
        struct zl3073x_dpll *zldpll;
        int rc;

        /* Update input references status */
        zl3073x_dev_ref_status_update(zldev);

        /* Update DPLL-to-connected-ref phase offsets registers */
        rc = zl3073x_ref_phase_offsets_update(zldev, -1);
        if (rc)
@@ -956,6 +749,32 @@ zl3073x_dev_periodic_work(struct kthread_work *work)
                                   msecs_to_jiffies(500));
}

int zl3073x_dev_phase_avg_factor_set(struct zl3073x_dev *zldev, u8 factor)
{
        u8 dpll_meas_ctrl, value;
        int rc;

        /* Read DPLL phase measurement control register */
        rc = zl3073x_read_u8(zldev, ZL_REG_DPLL_MEAS_CTRL, &dpll_meas_ctrl);
        if (rc)
                return rc;

        /* Convert requested factor to register value */
        value = (factor + 1) & 0x0f;

        /* Update phase measurement control register */
        dpll_meas_ctrl &= ~ZL_DPLL_MEAS_CTRL_AVG_FACTOR;
        dpll_meas_ctrl |= FIELD_PREP(ZL_DPLL_MEAS_CTRL_AVG_FACTOR, value);
        rc = zl3073x_write_u8(zldev, ZL_REG_DPLL_MEAS_CTRL, dpll_meas_ctrl);
        if (rc)
                return rc;

        /* Save the new factor */
        zldev->phase_avg_factor = factor;

        return 0;
}

/**
 * zl3073x_dev_phase_meas_setup - setup phase offset measurement
 * @zldev: pointer to zl3073x_dev structure
@@ -972,15 +791,16 @@ zl3073x_dev_phase_meas_setup(struct zl3073x_dev *zldev)
        u8 dpll_meas_ctrl, mask = 0;
        int rc;

        /* Setup phase measurement averaging factor */
        rc = zl3073x_dev_phase_avg_factor_set(zldev, zldev->phase_avg_factor);
        if (rc)
                return rc;

        /* Read DPLL phase measurement control register */
        rc = zl3073x_read_u8(zldev, ZL_REG_DPLL_MEAS_CTRL, &dpll_meas_ctrl);
        if (rc)
                return rc;

        /* Setup phase measurement averaging factor */
        dpll_meas_ctrl &= ~ZL_DPLL_MEAS_CTRL_AVG_FACTOR;
        dpll_meas_ctrl |= FIELD_PREP(ZL_DPLL_MEAS_CTRL_AVG_FACTOR, 3);

        /* Enable DPLL measurement block */
        dpll_meas_ctrl |= ZL_DPLL_MEAS_CTRL_EN;

@@ -1231,6 +1051,9 @@ int zl3073x_dev_probe(struct zl3073x_dev *zldev,
         */
        zldev->clock_id = get_random_u64();

        /* Default phase offset averaging factor */
        zldev->phase_avg_factor = 2;

        /* Initialize mutex for operations where multiple reads, writes
         * and/or polls are required to be done atomically.
         */
@@ -9,7 +9,10 @@
 #include <linux/mutex.h>
 #include <linux/types.h>

#include "out.h"
#include "ref.h"
#include "regs.h"
#include "synth.h"

struct device;
struct regmap;
@@ -27,60 +30,24 @@ struct zl3073x_dpll;
#define ZL3073X_NUM_PINS        (ZL3073X_NUM_INPUT_PINS + \
                                 ZL3073X_NUM_OUTPUT_PINS)

/**
 * struct zl3073x_ref - input reference invariant info
 * @enabled: input reference is enabled or disabled
 * @diff: true if input reference is differential
 * @ffo: current fractional frequency offset
 */
struct zl3073x_ref {
        bool enabled;
        bool diff;
        s64 ffo;
};

/**
 * struct zl3073x_out - output invariant info
 * @enabled: out is enabled or disabled
 * @synth: synthesizer the out is connected to
 * @signal_format: out signal format
 */
struct zl3073x_out {
        bool enabled;
        u8 synth;
        u8 signal_format;
};

/**
 * struct zl3073x_synth - synthesizer invariant info
 * @freq: synthesizer frequency
 * @dpll: ID of DPLL the synthesizer is driven by
 * @enabled: synth is enabled or disabled
 */
struct zl3073x_synth {
        u32 freq;
        u8 dpll;
        bool enabled;
};

/**
 * struct zl3073x_dev - zl3073x device
 * @dev: pointer to device
 * @regmap: regmap to access device registers
 * @multiop_lock: to serialize multiple register operations
 * @clock_id: clock id of the device
 * @ref: array of input references' invariants
 * @out: array of outs' invariants
 * @synth: array of synths' invariants
 * @dplls: list of DPLLs
 * @kworker: thread for periodic work
 * @work: periodic work
 * @clock_id: clock id of the device
 * @phase_avg_factor: phase offset measurement averaging factor
 */
struct zl3073x_dev {
        struct device *dev;
        struct regmap *regmap;
        struct mutex multiop_lock;
        u64 clock_id;

        /* Invariants */
        struct zl3073x_ref ref[ZL3073X_NUM_REFS];
@@ -93,6 +60,10 @@ struct zl3073x_dev {
        /* Monitor */
        struct kthread_worker *kworker;
        struct kthread_delayed_work work;

        /* Devlink parameters */
        u64 clock_id;
        u8 phase_avg_factor;
};

struct zl3073x_chip_info {
@@ -115,6 +86,13 @@ int zl3073x_dev_probe(struct zl3073x_dev *zldev,
int zl3073x_dev_start(struct zl3073x_dev *zldev, bool full);
void zl3073x_dev_stop(struct zl3073x_dev *zldev);

static inline u8 zl3073x_dev_phase_avg_factor_get(struct zl3073x_dev *zldev)
{
        return zldev->phase_avg_factor;
}

int zl3073x_dev_phase_avg_factor_set(struct zl3073x_dev *zldev, u8 factor);

/**********************
 * Registers operations
 **********************/
@@ -164,7 +142,6 @@ int zl3073x_write_hwreg_seq(struct zl3073x_dev *zldev,
 * Misc operations
 *****************/

int zl3073x_ref_freq_factorize(u32 freq, u16 *base, u16 *mult);
int zl3073x_ref_phase_offsets_update(struct zl3073x_dev *zldev, int channel);

static inline bool
@@ -206,172 +183,141 @@ zl3073x_output_pin_out_get(u8 id)
}
/**
 * zl3073x_ref_ffo_get - get current fractional frequency offset
 * zl3073x_dev_ref_freq_get - get input reference frequency
 * @zldev: pointer to zl3073x device
 * @index: input reference index
 *
 * Return: the latest measured fractional frequency offset
 * Return: frequency of given input reference
 */
static inline s64
zl3073x_ref_ffo_get(struct zl3073x_dev *zldev, u8 index)
static inline u32
zl3073x_dev_ref_freq_get(struct zl3073x_dev *zldev, u8 index)
{
        return zldev->ref[index].ffo;
        const struct zl3073x_ref *ref = zl3073x_ref_state_get(zldev, index);

        return zl3073x_ref_freq_get(ref);
}

/**
 * zl3073x_ref_is_diff - check if the given input reference is differential
 * zl3073x_dev_ref_is_diff - check if the given input reference is differential
 * @zldev: pointer to zl3073x device
 * @index: input reference index
 *
 * Return: true if reference is differential, false if reference is single-ended
 */
static inline bool
zl3073x_ref_is_diff(struct zl3073x_dev *zldev, u8 index)
zl3073x_dev_ref_is_diff(struct zl3073x_dev *zldev, u8 index)
{
        return zldev->ref[index].diff;
        const struct zl3073x_ref *ref = zl3073x_ref_state_get(zldev, index);

        return zl3073x_ref_is_diff(ref);
}

/**
 * zl3073x_ref_is_enabled - check if the given input reference is enabled
/*
 * zl3073x_dev_ref_is_status_ok - check the given input reference status
 * @zldev: pointer to zl3073x device
 * @index: input reference index
 *
 * Return: true if input reference is enabled, false otherwise
 * Return: true if the status is ok, false otherwise
 */
static inline bool
zl3073x_ref_is_enabled(struct zl3073x_dev *zldev, u8 index)
zl3073x_dev_ref_is_status_ok(struct zl3073x_dev *zldev, u8 index)
{
        return zldev->ref[index].enabled;
        const struct zl3073x_ref *ref = zl3073x_ref_state_get(zldev, index);

        return zl3073x_ref_is_status_ok(ref);
}

/**
 * zl3073x_synth_dpll_get - get DPLL ID the synth is driven by
 * @zldev: pointer to zl3073x device
 * @index: synth index
 *
 * Return: ID of DPLL the given synthesizer is driven by
 */
static inline u8
zl3073x_synth_dpll_get(struct zl3073x_dev *zldev, u8 index)
{
        return zldev->synth[index].dpll;
}

/**
 * zl3073x_synth_freq_get - get synth current freq
 * zl3073x_dev_synth_freq_get - get synth current freq
 * @zldev: pointer to zl3073x device
 * @index: synth index
 *
 * Return: frequency of given synthesizer
 */
static inline u32
zl3073x_synth_freq_get(struct zl3073x_dev *zldev, u8 index)
zl3073x_dev_synth_freq_get(struct zl3073x_dev *zldev, u8 index)
{
        return zldev->synth[index].freq;
        const struct zl3073x_synth *synth;

        synth = zl3073x_synth_state_get(zldev, index);
        return zl3073x_synth_freq_get(synth);
}

/**
 * zl3073x_synth_is_enabled - check if the given synth is enabled
 * @zldev: pointer to zl3073x device
 * @index: synth index
 *
 * Return: true if synth is enabled, false otherwise
 */
static inline bool
zl3073x_synth_is_enabled(struct zl3073x_dev *zldev, u8 index)
{
        return zldev->synth[index].enabled;
}

/**
 * zl3073x_out_synth_get - get synth connected to given output
 * zl3073x_dev_out_synth_get - get synth connected to given output
 * @zldev: pointer to zl3073x device
 * @index: output index
 *
 * Return: index of synth connected to given output.
 */
static inline u8
zl3073x_out_synth_get(struct zl3073x_dev *zldev, u8 index)
zl3073x_dev_out_synth_get(struct zl3073x_dev *zldev, u8 index)
{
        return zldev->out[index].synth;
        const struct zl3073x_out *out = zl3073x_out_state_get(zldev, index);

        return zl3073x_out_synth_get(out);
}

/**
 * zl3073x_out_is_enabled - check if the given output is enabled
 * zl3073x_dev_out_is_enabled - check if the given output is enabled
 * @zldev: pointer to zl3073x device
 * @index: output index
 *
 * Return: true if the output is enabled, false otherwise
 */
static inline bool
zl3073x_out_is_enabled(struct zl3073x_dev *zldev, u8 index)
zl3073x_dev_out_is_enabled(struct zl3073x_dev *zldev, u8 index)
{
        u8 synth;
        const struct zl3073x_out *out = zl3073x_out_state_get(zldev, index);
        const struct zl3073x_synth *synth;
        u8 synth_id;

        /* Output is enabled only if associated synth is enabled */
        synth = zl3073x_out_synth_get(zldev, index);
        if (zl3073x_synth_is_enabled(zldev, synth))
                return zldev->out[index].enabled;
        synth_id = zl3073x_out_synth_get(out);
        synth = zl3073x_synth_state_get(zldev, synth_id);

        return false;
        return zl3073x_synth_is_enabled(synth) && zl3073x_out_is_enabled(out);
}

/**
 * zl3073x_out_signal_format_get - get output signal format
 * @zldev: pointer to zl3073x device
 * @index: output index
 *
 * Return: signal format of given output
 */
static inline u8
zl3073x_out_signal_format_get(struct zl3073x_dev *zldev, u8 index)
{
        return zldev->out[index].signal_format;
}

/**
 * zl3073x_out_dpll_get - get DPLL ID the output is driven by
 * zl3073x_dev_out_dpll_get - get DPLL ID the output is driven by
 * @zldev: pointer to zl3073x device
 * @index: output index
 *
 * Return: ID of DPLL the given output is driven by
 */
static inline
u8 zl3073x_out_dpll_get(struct zl3073x_dev *zldev, u8 index)
u8 zl3073x_dev_out_dpll_get(struct zl3073x_dev *zldev, u8 index)
{
        u8 synth;
        const struct zl3073x_out *out = zl3073x_out_state_get(zldev, index);
        const struct zl3073x_synth *synth;
        u8 synth_id;

        /* Get synthesizer connected to given output */
        synth = zl3073x_out_synth_get(zldev, index);
        synth_id = zl3073x_out_synth_get(out);
        synth = zl3073x_synth_state_get(zldev, synth_id);

        /* Return DPLL that drives the synth */
        return zl3073x_synth_dpll_get(zldev, synth);
        return zl3073x_synth_dpll_get(synth);
}

/**
 * zl3073x_out_is_diff - check if the given output is differential
 * zl3073x_dev_out_is_diff - check if the given output is differential
 * @zldev: pointer to zl3073x device
 * @index: output index
 *
 * Return: true if output is differential, false if output is single-ended
 */
static inline bool
zl3073x_out_is_diff(struct zl3073x_dev *zldev, u8 index)
zl3073x_dev_out_is_diff(struct zl3073x_dev *zldev, u8 index)
{
        switch (zl3073x_out_signal_format_get(zldev, index)) {
        case ZL_OUTPUT_MODE_SIGNAL_FORMAT_LVDS:
        case ZL_OUTPUT_MODE_SIGNAL_FORMAT_DIFF:
        case ZL_OUTPUT_MODE_SIGNAL_FORMAT_LOWVCM:
                return true;
        default:
                break;
        }
        const struct zl3073x_out *out = zl3073x_out_state_get(zldev, index);

        return false;
        return zl3073x_out_is_diff(out);
}

/**
 * zl3073x_output_pin_is_enabled - check if the given output pin is enabled
 * zl3073x_dev_output_pin_is_enabled - check if the given output pin is enabled
 * @zldev: pointer to zl3073x device
 * @id: output pin id
 *
@@ -381,16 +327,21 @@ zl3073x_out_is_diff(struct zl3073x_dev *zldev, u8 index)
 * Return: true if output pin is enabled, false if output pin is disabled
 */
static inline bool
zl3073x_output_pin_is_enabled(struct zl3073x_dev *zldev, u8 id)
zl3073x_dev_output_pin_is_enabled(struct zl3073x_dev *zldev, u8 id)
{
        u8 output = zl3073x_output_pin_out_get(id);
        u8 out_id = zl3073x_output_pin_out_get(id);
        const struct zl3073x_out *out;

        /* Check if the whole output is enabled */
        if (!zl3073x_out_is_enabled(zldev, output))
        out = zl3073x_out_state_get(zldev, out_id);

        /* Check if the output is enabled - call _dev_ helper that
         * additionally checks for attached synth enablement.
         */
        if (!zl3073x_dev_out_is_enabled(zldev, out_id))
                return false;

        /* Check signal format */
        switch (zl3073x_out_signal_format_get(zldev, output)) {
        switch (zl3073x_out_signal_format_get(out)) {
        case ZL_OUTPUT_MODE_SIGNAL_FORMAT_DISABLED:
                /* Both output pins are disabled by signal format */
                return false;

File diff suppressed because it is too large
@@ -20,6 +20,7 @@
 * @dpll_dev: pointer to registered DPLL device
 * @lock_status: last saved DPLL lock status
 * @pins: list of pins
 * @change_work: device change notification work
 */
struct zl3073x_dpll {
        struct list_head list;
@@ -32,6 +33,7 @@ struct zl3073x_dpll {
        struct dpll_device *dpll_dev;
        enum dpll_lock_status lock_status;
        struct list_head pins;
        struct work_struct change_work;
};

struct zl3073x_dpll *zl3073x_dpll_alloc(struct zl3073x_dev *zldev, u8 ch);

157 drivers/dpll/zl3073x/out.c Normal file
@@ -0,0 +1,157 @@
// SPDX-License-Identifier: GPL-2.0-only

#include <linux/bitfield.h>
#include <linux/cleanup.h>
#include <linux/dev_printk.h>
#include <linux/string.h>
#include <linux/string_choices.h>
#include <linux/types.h>

#include "core.h"
#include "out.h"

/**
 * zl3073x_out_state_fetch - fetch output state from hardware
 * @zldev: pointer to zl3073x_dev structure
 * @index: output index to fetch state for
 *
 * Function fetches state of the given output from hardware and stores it
 * for later use.
 *
 * Return: 0 on success, <0 on error
 */
int zl3073x_out_state_fetch(struct zl3073x_dev *zldev, u8 index)
{
        struct zl3073x_out *out = &zldev->out[index];
        int rc;

        /* Read output configuration */
        rc = zl3073x_read_u8(zldev, ZL_REG_OUTPUT_CTRL(index), &out->ctrl);
        if (rc)
                return rc;

        dev_dbg(zldev->dev, "OUT%u is %s and connected to SYNTH%u\n", index,
                str_enabled_disabled(zl3073x_out_is_enabled(out)),
                zl3073x_out_synth_get(out));

        guard(mutex)(&zldev->multiop_lock);

        /* Read output configuration */
        rc = zl3073x_mb_op(zldev, ZL_REG_OUTPUT_MB_SEM, ZL_OUTPUT_MB_SEM_RD,
                           ZL_REG_OUTPUT_MB_MASK, BIT(index));
        if (rc)
                return rc;

        /* Read output mode */
        rc = zl3073x_read_u8(zldev, ZL_REG_OUTPUT_MODE, &out->mode);
        if (rc)
                return rc;

        dev_dbg(zldev->dev, "OUT%u has signal format 0x%02x\n", index,
                zl3073x_out_signal_format_get(out));

        /* Read output divisor */
        rc = zl3073x_read_u32(zldev, ZL_REG_OUTPUT_DIV, &out->div);
        if (rc)
                return rc;

        if (!out->div) {
                dev_err(zldev->dev, "Zero divisor for OUT%u got from device\n",
                        index);
                return -EINVAL;
        }

        dev_dbg(zldev->dev, "OUT%u divisor: %u\n", index, out->div);

        /* Read output width */
        rc = zl3073x_read_u32(zldev, ZL_REG_OUTPUT_WIDTH, &out->width);
        if (rc)
                return rc;

        rc = zl3073x_read_u32(zldev, ZL_REG_OUTPUT_ESYNC_PERIOD,
                              &out->esync_n_period);
        if (rc)
                return rc;

        if (!out->esync_n_period) {
                dev_err(zldev->dev,
                        "Zero esync divisor for OUT%u got from device\n",
                        index);
                return -EINVAL;
        }

        rc = zl3073x_read_u32(zldev, ZL_REG_OUTPUT_ESYNC_WIDTH,
                              &out->esync_n_width);
        if (rc)
                return rc;

        rc = zl3073x_read_u32(zldev, ZL_REG_OUTPUT_PHASE_COMP,
                              &out->phase_comp);
        if (rc)
                return rc;

        return rc;
}

/**
 * zl3073x_out_state_get - get current output state
 * @zldev: pointer to zl3073x_dev structure
 * @index: output index to get state for
 *
 * Return: pointer to given output state
 */
const struct zl3073x_out *zl3073x_out_state_get(struct zl3073x_dev *zldev,
                                                u8 index)
{
        return &zldev->out[index];
}

int zl3073x_out_state_set(struct zl3073x_dev *zldev, u8 index,
                          const struct zl3073x_out *out)
{
        struct zl3073x_out *dout = &zldev->out[index];
        int rc;

        guard(mutex)(&zldev->multiop_lock);

        /* Read output configuration into mailbox */
        rc = zl3073x_mb_op(zldev, ZL_REG_OUTPUT_MB_SEM, ZL_OUTPUT_MB_SEM_RD,
                           ZL_REG_OUTPUT_MB_MASK, BIT(index));
        if (rc)
                return rc;

        /* Update mailbox with changed values */
        if (dout->div != out->div)
                rc = zl3073x_write_u32(zldev, ZL_REG_OUTPUT_DIV, out->div);
        if (!rc && dout->width != out->width)
                rc = zl3073x_write_u32(zldev, ZL_REG_OUTPUT_WIDTH, out->width);
        if (!rc && dout->esync_n_period != out->esync_n_period)
                rc = zl3073x_write_u32(zldev, ZL_REG_OUTPUT_ESYNC_PERIOD,
                                       out->esync_n_period);
        if (!rc && dout->esync_n_width != out->esync_n_width)
                rc = zl3073x_write_u32(zldev, ZL_REG_OUTPUT_ESYNC_WIDTH,
                                       out->esync_n_width);
        if (!rc && dout->mode != out->mode)
                rc = zl3073x_write_u8(zldev, ZL_REG_OUTPUT_MODE, out->mode);
        if (!rc && dout->phase_comp != out->phase_comp)
                rc = zl3073x_write_u32(zldev, ZL_REG_OUTPUT_PHASE_COMP,
                                       out->phase_comp);
        if (rc)
                return rc;

        /* Commit output configuration */
        rc = zl3073x_mb_op(zldev, ZL_REG_OUTPUT_MB_SEM, ZL_OUTPUT_MB_SEM_WR,
                           ZL_REG_OUTPUT_MB_MASK, BIT(index));
        if (rc)
                return rc;

        /* After successful commit store new state */
        dout->div = out->div;
        dout->width = out->width;
        dout->esync_n_period = out->esync_n_period;
        dout->esync_n_width = out->esync_n_width;
        dout->mode = out->mode;
        dout->phase_comp = out->phase_comp;

        return 0;
}
93 drivers/dpll/zl3073x/out.h Normal file
@@ -0,0 +1,93 @@
/* SPDX-License-Identifier: GPL-2.0-only */

#ifndef _ZL3073X_OUT_H
#define _ZL3073X_OUT_H

#include <linux/bitfield.h>
#include <linux/types.h>

#include "regs.h"

struct zl3073x_dev;

/**
 * struct zl3073x_out - output state
 * @div: output divisor
 * @width: output pulse width
 * @esync_n_period: embedded sync or n-pin period (for n-div formats)
 * @esync_n_width: embedded sync or n-pin pulse width
 * @phase_comp: phase compensation
 * @ctrl: output control
 * @mode: output mode
 */
struct zl3073x_out {
        u32 div;
        u32 width;
        u32 esync_n_period;
        u32 esync_n_width;
        s32 phase_comp;
        u8 ctrl;
        u8 mode;
};

int zl3073x_out_state_fetch(struct zl3073x_dev *zldev, u8 index);
const struct zl3073x_out *zl3073x_out_state_get(struct zl3073x_dev *zldev,
                                                u8 index);

int zl3073x_out_state_set(struct zl3073x_dev *zldev, u8 index,
                          const struct zl3073x_out *out);

/**
 * zl3073x_out_signal_format_get - get output signal format
 * @out: pointer to out state
 *
 * Return: signal format of given output
 */
static inline u8 zl3073x_out_signal_format_get(const struct zl3073x_out *out)
{
        return FIELD_GET(ZL_OUTPUT_MODE_SIGNAL_FORMAT, out->mode);
}

/**
 * zl3073x_out_is_diff - check if the given output is differential
 * @out: pointer to out state
 *
 * Return: true if output is differential, false if output is single-ended
 */
static inline bool zl3073x_out_is_diff(const struct zl3073x_out *out)
{
        switch (zl3073x_out_signal_format_get(out)) {
        case ZL_OUTPUT_MODE_SIGNAL_FORMAT_LVDS:
        case ZL_OUTPUT_MODE_SIGNAL_FORMAT_DIFF:
        case ZL_OUTPUT_MODE_SIGNAL_FORMAT_LOWVCM:
                return true;
        default:
                break;
        }

        return false;
}

/**
 * zl3073x_out_is_enabled - check if the given output is enabled
 * @out: pointer to out state
 *
 * Return: true if output is enabled, false if output is disabled
 */
static inline bool zl3073x_out_is_enabled(const struct zl3073x_out *out)
{
        return !!FIELD_GET(ZL_OUTPUT_CTRL_EN, out->ctrl);
}

/**
 * zl3073x_out_synth_get - get synth connected to given output
 * @out: pointer to out state
 *
 * Return: index of synth connected to given output.
 */
static inline u8 zl3073x_out_synth_get(const struct zl3073x_out *out)
{
        return FIELD_GET(ZL_OUTPUT_CTRL_SYNTH_SEL, out->ctrl);
}

#endif /* _ZL3073X_OUT_H */
@@ -46,10 +46,10 @@ zl3073x_pin_check_freq(struct zl3073x_dev *zldev, enum dpll_pin_direction dir,

        /* Get output pin synthesizer */
        out = zl3073x_output_pin_out_get(id);
        synth = zl3073x_out_synth_get(zldev, out);
        synth = zl3073x_dev_out_synth_get(zldev, out);

        /* Get synth frequency */
        synth_freq = zl3073x_synth_freq_get(zldev, synth);
        synth_freq = zl3073x_dev_synth_freq_get(zldev, synth);

        /* Check the frequency divides synth frequency */
        if (synth_freq % (u32)freq)
@@ -93,13 +93,13 @@ zl3073x_prop_pin_package_label_set(struct zl3073x_dev *zldev,

                prefix = "REF";
                ref = zl3073x_input_pin_ref_get(id);
                is_diff = zl3073x_ref_is_diff(zldev, ref);
                is_diff = zl3073x_dev_ref_is_diff(zldev, ref);
        } else {
                u8 out;

                prefix = "OUT";
                out = zl3073x_output_pin_out_get(id);
                is_diff = zl3073x_out_is_diff(zldev, out);
                is_diff = zl3073x_dev_out_is_diff(zldev, out);
        }

        if (!is_diff)
@@ -217,8 +217,8 @@ struct zl3073x_pin_props *zl3073x_pin_props_get(struct zl3073x_dev *zldev,
         * the synth frequency count.
         */
        out = zl3073x_output_pin_out_get(index);
        synth = zl3073x_out_synth_get(zldev, out);
        f = 2 * zl3073x_synth_freq_get(zldev, synth);
        synth = zl3073x_dev_out_synth_get(zldev, out);
        f = 2 * zl3073x_dev_synth_freq_get(zldev, synth);
        props->dpll_props.phase_gran = f ? div_u64(PSEC_PER_SEC, f) : 1;
}
204
drivers/dpll/zl3073x/ref.c
Normal file
204
drivers/dpll/zl3073x/ref.c
Normal file
@ -0,0 +1,204 @@
|
||||
// SPDX-License-Identifier: GPL-2.0-only

#include <linux/bitfield.h>
#include <linux/cleanup.h>
#include <linux/dev_printk.h>
#include <linux/string.h>
#include <linux/string_choices.h>
#include <linux/types.h>

#include "core.h"
#include "ref.h"

/**
 * zl3073x_ref_freq_factorize - factorize given frequency
 * @freq: input frequency
 * @base: base frequency
 * @mult: multiplier
 *
 * Checks if the given frequency can be factorized using one of the
 * supported base frequencies. If so, the base frequency and multiplier
 * are stored into appropriate parameters if they are not NULL.
 *
 * Return: 0 on success, -EINVAL if the frequency cannot be factorized
 */
int
zl3073x_ref_freq_factorize(u32 freq, u16 *base, u16 *mult)
{
	static const u16 base_freqs[] = {
		1, 2, 4, 5, 8, 10, 16, 20, 25, 32, 40, 50, 64, 80, 100, 125,
		128, 160, 200, 250, 256, 320, 400, 500, 625, 640, 800, 1000,
		1250, 1280, 1600, 2000, 2500, 3125, 3200, 4000, 5000, 6250,
		6400, 8000, 10000, 12500, 15625, 16000, 20000, 25000, 31250,
		32000, 40000, 50000, 62500,
	};
	u32 div;
	int i;

	for (i = 0; i < ARRAY_SIZE(base_freqs); i++) {
		div = freq / base_freqs[i];

		if (div <= U16_MAX && (freq % base_freqs[i]) == 0) {
			if (base)
				*base = base_freqs[i];
			if (mult)
				*mult = div;

			return 0;
		}
	}

	return -EINVAL;
}

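The factorization above can be exercised outside the kernel. A minimal user-space sketch, assuming a truncated copy of the driver's base-frequency table and plain `stdint` types in place of the kernel's `u16`/`u32` (the helper name `ref_freq_factorize` is hypothetical, not the driver symbol):

```c
#include <assert.h>
#include <stdint.h>

/* Truncated copy of the driver's supported base-frequency table. */
static const uint16_t base_freqs[] = {
	1, 2, 4, 5, 8, 10, 16, 25, 40, 100, 125, 160, 1000, 62500,
};

/* Find base and mult such that freq == base * mult with mult <= U16_MAX.
 * Mirrors the loop in zl3073x_ref_freq_factorize(); returns 0 on success,
 * -1 if no supported base divides freq with a 16-bit multiplier.
 */
static int ref_freq_factorize(uint32_t freq, uint16_t *base, uint16_t *mult)
{
	unsigned int i;

	for (i = 0; i < sizeof(base_freqs) / sizeof(base_freqs[0]); i++) {
		uint32_t div = freq / base_freqs[i];

		if (div <= UINT16_MAX && freq % base_freqs[i] == 0) {
			*base = base_freqs[i];
			*mult = (uint16_t)div;
			return 0;
		}
	}
	return -1;
}
```

With the table walked in ascending order, 10 MHz factorizes to the smallest base whose multiplier fits in 16 bits: base 160 Hz with multiplier 62500.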
/**
 * zl3073x_ref_state_fetch - fetch input reference state from hardware
 * @zldev: pointer to zl3073x_dev structure
 * @index: input reference index to fetch state for
 *
 * Function fetches state for the given input reference from hardware and
 * stores it for later use.
 *
 * Return: 0 on success, <0 on error
 */
int zl3073x_ref_state_fetch(struct zl3073x_dev *zldev, u8 index)
{
	struct zl3073x_ref *ref = &zldev->ref[index];
	int rc;

	/* For differential type inputs the N-pin reference shares
	 * part of the configuration with the P-pin counterpart.
	 */
	if (zl3073x_is_n_pin(index) && zl3073x_ref_is_diff(ref - 1)) {
		struct zl3073x_ref *p_ref = ref - 1; /* P-pin counterpart */

		/* Copy the shared items from the P-pin */
		ref->config = p_ref->config;
		ref->esync_n_div = p_ref->esync_n_div;
		ref->freq_base = p_ref->freq_base;
		ref->freq_mult = p_ref->freq_mult;
		ref->freq_ratio_m = p_ref->freq_ratio_m;
		ref->freq_ratio_n = p_ref->freq_ratio_n;
		ref->phase_comp = p_ref->phase_comp;
		ref->sync_ctrl = p_ref->sync_ctrl;

		return 0; /* Finish - no non-shared items for now */
	}

	guard(mutex)(&zldev->multiop_lock);

	/* Read reference configuration */
	rc = zl3073x_mb_op(zldev, ZL_REG_REF_MB_SEM, ZL_REF_MB_SEM_RD,
			   ZL_REG_REF_MB_MASK, BIT(index));
	if (rc)
		return rc;

	/* Read ref_config register */
	rc = zl3073x_read_u8(zldev, ZL_REG_REF_CONFIG, &ref->config);
	if (rc)
		return rc;

	/* Read frequency related registers */
	rc = zl3073x_read_u16(zldev, ZL_REG_REF_FREQ_BASE, &ref->freq_base);
	if (rc)
		return rc;
	rc = zl3073x_read_u16(zldev, ZL_REG_REF_FREQ_MULT, &ref->freq_mult);
	if (rc)
		return rc;
	rc = zl3073x_read_u16(zldev, ZL_REG_REF_RATIO_M, &ref->freq_ratio_m);
	if (rc)
		return rc;
	rc = zl3073x_read_u16(zldev, ZL_REG_REF_RATIO_N, &ref->freq_ratio_n);
	if (rc)
		return rc;

	/* Read eSync and N-div related registers */
	rc = zl3073x_read_u32(zldev, ZL_REG_REF_ESYNC_DIV, &ref->esync_n_div);
	if (rc)
		return rc;
	rc = zl3073x_read_u8(zldev, ZL_REG_REF_SYNC_CTRL, &ref->sync_ctrl);
	if (rc)
		return rc;

	/* Read phase compensation register */
	rc = zl3073x_read_u48(zldev, ZL_REG_REF_PHASE_OFFSET_COMP,
			      &ref->phase_comp);
	if (rc)
		return rc;

	dev_dbg(zldev->dev, "REF%u is %s and configured as %s\n", index,
		str_enabled_disabled(zl3073x_ref_is_enabled(ref)),
		zl3073x_ref_is_diff(ref) ? "differential" : "single-ended");

	return rc;
}

/**
 * zl3073x_ref_state_get - get current input reference state
 * @zldev: pointer to zl3073x_dev structure
 * @index: input reference index to get state for
 *
 * Return: pointer to given input reference state
 */
const struct zl3073x_ref *
zl3073x_ref_state_get(struct zl3073x_dev *zldev, u8 index)
{
	return &zldev->ref[index];
}

int zl3073x_ref_state_set(struct zl3073x_dev *zldev, u8 index,
			  const struct zl3073x_ref *ref)
{
	struct zl3073x_ref *dref = &zldev->ref[index];
	int rc;

	guard(mutex)(&zldev->multiop_lock);

	/* Read reference configuration into mailbox */
	rc = zl3073x_mb_op(zldev, ZL_REG_REF_MB_SEM, ZL_REF_MB_SEM_RD,
			   ZL_REG_REF_MB_MASK, BIT(index));
	if (rc)
		return rc;

	/* Update mailbox with changed values */
	if (dref->freq_base != ref->freq_base)
		rc = zl3073x_write_u16(zldev, ZL_REG_REF_FREQ_BASE,
				       ref->freq_base);
	if (!rc && dref->freq_mult != ref->freq_mult)
		rc = zl3073x_write_u16(zldev, ZL_REG_REF_FREQ_MULT,
				       ref->freq_mult);
	if (!rc && dref->freq_ratio_m != ref->freq_ratio_m)
		rc = zl3073x_write_u16(zldev, ZL_REG_REF_RATIO_M,
				       ref->freq_ratio_m);
	if (!rc && dref->freq_ratio_n != ref->freq_ratio_n)
		rc = zl3073x_write_u16(zldev, ZL_REG_REF_RATIO_N,
				       ref->freq_ratio_n);
	if (!rc && dref->esync_n_div != ref->esync_n_div)
		rc = zl3073x_write_u32(zldev, ZL_REG_REF_ESYNC_DIV,
				       ref->esync_n_div);
	if (!rc && dref->sync_ctrl != ref->sync_ctrl)
		rc = zl3073x_write_u8(zldev, ZL_REG_REF_SYNC_CTRL,
				      ref->sync_ctrl);
	if (!rc && dref->phase_comp != ref->phase_comp)
		rc = zl3073x_write_u48(zldev, ZL_REG_REF_PHASE_OFFSET_COMP,
				       ref->phase_comp);
	if (rc)
		return rc;

	/* Commit reference configuration */
	rc = zl3073x_mb_op(zldev, ZL_REG_REF_MB_SEM, ZL_REF_MB_SEM_WR,
			   ZL_REG_REF_MB_MASK, BIT(index));
	if (rc)
		return rc;

	/* After successful commit store new state */
	dref->freq_base = ref->freq_base;
	dref->freq_mult = ref->freq_mult;
	dref->freq_ratio_m = ref->freq_ratio_m;
	dref->freq_ratio_n = ref->freq_ratio_n;
	dref->esync_n_div = ref->esync_n_div;
	dref->sync_ctrl = ref->sync_ctrl;
	dref->phase_comp = ref->phase_comp;

	return 0;
}
134	drivers/dpll/zl3073x/ref.h	Normal file
@@ -0,0 +1,134 @@
/* SPDX-License-Identifier: GPL-2.0-only */

#ifndef _ZL3073X_REF_H
#define _ZL3073X_REF_H

#include <linux/bitfield.h>
#include <linux/math64.h>
#include <linux/types.h>

#include "regs.h"

struct zl3073x_dev;

/**
 * struct zl3073x_ref - input reference state
 * @ffo: current fractional frequency offset
 * @phase_comp: phase compensation
 * @esync_n_div: divisor for embedded sync or n-divided signal formats
 * @freq_base: frequency base
 * @freq_mult: frequency multiplier
 * @freq_ratio_m: FEC mode multiplier
 * @freq_ratio_n: FEC mode divisor
 * @config: reference config
 * @sync_ctrl: reference sync control
 * @mon_status: reference monitor status
 */
struct zl3073x_ref {
	s64 ffo;
	u64 phase_comp;
	u32 esync_n_div;
	u16 freq_base;
	u16 freq_mult;
	u16 freq_ratio_m;
	u16 freq_ratio_n;
	u8 config;
	u8 sync_ctrl;
	u8 mon_status;
};

int zl3073x_ref_state_fetch(struct zl3073x_dev *zldev, u8 index);

const struct zl3073x_ref *zl3073x_ref_state_get(struct zl3073x_dev *zldev,
						u8 index);

int zl3073x_ref_state_set(struct zl3073x_dev *zldev, u8 index,
			  const struct zl3073x_ref *ref);

int zl3073x_ref_freq_factorize(u32 freq, u16 *base, u16 *mult);

/**
 * zl3073x_ref_ffo_get - get current fractional frequency offset
 * @ref: pointer to ref state
 *
 * Return: the latest measured fractional frequency offset
 */
static inline s64
zl3073x_ref_ffo_get(const struct zl3073x_ref *ref)
{
	return ref->ffo;
}

/**
 * zl3073x_ref_freq_get - get given input reference frequency
 * @ref: pointer to ref state
 *
 * Return: frequency of the given input reference
 */
static inline u32
zl3073x_ref_freq_get(const struct zl3073x_ref *ref)
{
	return mul_u64_u32_div(ref->freq_base * ref->freq_mult,
			       ref->freq_ratio_m, ref->freq_ratio_n);
}

/**
 * zl3073x_ref_freq_set - set given input reference frequency
 * @ref: pointer to ref state
 * @freq: frequency to be set
 *
 * Return: 0 on success, <0 when frequency cannot be factorized
 */
static inline int
zl3073x_ref_freq_set(struct zl3073x_ref *ref, u32 freq)
{
	u16 base, mult;
	int rc;

	rc = zl3073x_ref_freq_factorize(freq, &base, &mult);
	if (rc)
		return rc;

	ref->freq_base = base;
	ref->freq_mult = mult;

	return 0;
}

/**
 * zl3073x_ref_is_diff - check if the given input reference is differential
 * @ref: pointer to ref state
 *
 * Return: true if reference is differential, false if reference is single-ended
 */
static inline bool
zl3073x_ref_is_diff(const struct zl3073x_ref *ref)
{
	return !!FIELD_GET(ZL_REF_CONFIG_DIFF_EN, ref->config);
}

/**
 * zl3073x_ref_is_enabled - check if the given input reference is enabled
 * @ref: pointer to ref state
 *
 * Return: true if input reference is enabled, false otherwise
 */
static inline bool
zl3073x_ref_is_enabled(const struct zl3073x_ref *ref)
{
	return !!FIELD_GET(ZL_REF_CONFIG_ENABLE, ref->config);
}

/**
 * zl3073x_ref_is_status_ok - check the given input reference status
 * @ref: pointer to ref state
 *
 * Return: true if the status is ok, false otherwise
 */
static inline bool
zl3073x_ref_is_status_ok(const struct zl3073x_ref *ref)
{
	return ref->mon_status == ZL_REF_MON_STATUS_OK;
}

#endif /* _ZL3073X_REF_H */
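The effective reference frequency in `zl3073x_ref_freq_get()` is `base * mult * ratio_m / ratio_n`, with the intermediate product widened to 64 bits. A quick user-space sanity check of that arithmetic (a hedged sketch with a hypothetical `ref_freq` helper, not the driver's `mul_u64_u32_div` itself):

```c
#include <assert.h>
#include <stdint.h>

/* Compute base * mult * m / n with a 64-bit intermediate, mirroring
 * the mul_u64_u32_div() expression in zl3073x_ref_freq_get().
 */
static uint32_t ref_freq(uint16_t base, uint16_t mult, uint16_t m, uint16_t n)
{
	uint64_t f = (uint64_t)base * mult;	/* up to 2^32, needs 64 bits */

	return (uint32_t)(f * m / n);
}
```

For example, base 160 Hz with multiplier 62500 gives 10 MHz, and a 66/64 FEC-style ratio scales 25 MHz up to 25.78125 MHz.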
87	drivers/dpll/zl3073x/synth.c	Normal file
@@ -0,0 +1,87 @@
// SPDX-License-Identifier: GPL-2.0-only

#include <linux/bitfield.h>
#include <linux/cleanup.h>
#include <linux/dev_printk.h>
#include <linux/string.h>
#include <linux/string_choices.h>
#include <linux/types.h>

#include "core.h"
#include "synth.h"

/**
 * zl3073x_synth_state_fetch - fetch synth state from hardware
 * @zldev: pointer to zl3073x_dev structure
 * @index: synth index to fetch state for
 *
 * Function fetches state of the given synthesizer from the hardware and
 * stores it for later use.
 *
 * Return: 0 on success, <0 on error
 */
int zl3073x_synth_state_fetch(struct zl3073x_dev *zldev, u8 index)
{
	struct zl3073x_synth *synth = &zldev->synth[index];
	int rc;

	/* Read synth control register */
	rc = zl3073x_read_u8(zldev, ZL_REG_SYNTH_CTRL(index), &synth->ctrl);
	if (rc)
		return rc;

	guard(mutex)(&zldev->multiop_lock);

	/* Read synth configuration */
	rc = zl3073x_mb_op(zldev, ZL_REG_SYNTH_MB_SEM, ZL_SYNTH_MB_SEM_RD,
			   ZL_REG_SYNTH_MB_MASK, BIT(index));
	if (rc)
		return rc;

	/* The output frequency is determined by the following formula:
	 * base * multiplier * numerator / denominator
	 *
	 * Read registers with these values
	 */
	rc = zl3073x_read_u16(zldev, ZL_REG_SYNTH_FREQ_BASE, &synth->freq_base);
	if (rc)
		return rc;

	rc = zl3073x_read_u32(zldev, ZL_REG_SYNTH_FREQ_MULT, &synth->freq_mult);
	if (rc)
		return rc;

	rc = zl3073x_read_u16(zldev, ZL_REG_SYNTH_FREQ_M, &synth->freq_m);
	if (rc)
		return rc;

	rc = zl3073x_read_u16(zldev, ZL_REG_SYNTH_FREQ_N, &synth->freq_n);
	if (rc)
		return rc;

	/* Check denominator for zero to avoid div by 0 */
	if (!synth->freq_n) {
		dev_err(zldev->dev,
			"Zero divisor for SYNTH%u retrieved from device\n",
			index);
		return -EINVAL;
	}

	dev_dbg(zldev->dev, "SYNTH%u frequency: %u Hz\n", index,
		zl3073x_synth_freq_get(synth));

	return rc;
}

/**
 * zl3073x_synth_state_get - get current synth state
 * @zldev: pointer to zl3073x_dev structure
 * @index: synth index to get state for
 *
 * Return: pointer to given synth state
 */
const struct zl3073x_synth *zl3073x_synth_state_get(struct zl3073x_dev *zldev,
						    u8 index)
{
	return &zldev->synth[index];
}
72	drivers/dpll/zl3073x/synth.h	Normal file
@@ -0,0 +1,72 @@
/* SPDX-License-Identifier: GPL-2.0-only */

#ifndef _ZL3073X_SYNTH_H
#define _ZL3073X_SYNTH_H

#include <linux/bitfield.h>
#include <linux/math64.h>
#include <linux/types.h>

#include "regs.h"

struct zl3073x_dev;

/**
 * struct zl3073x_synth - synthesizer state
 * @freq_mult: frequency multiplier
 * @freq_base: frequency base
 * @freq_m: frequency numerator
 * @freq_n: frequency denominator
 * @ctrl: synth control
 */
struct zl3073x_synth {
	u32 freq_mult;
	u16 freq_base;
	u16 freq_m;
	u16 freq_n;
	u8 ctrl;
};

int zl3073x_synth_state_fetch(struct zl3073x_dev *zldev, u8 synth_id);

const struct zl3073x_synth *zl3073x_synth_state_get(struct zl3073x_dev *zldev,
						    u8 synth_id);

int zl3073x_synth_state_set(struct zl3073x_dev *zldev, u8 synth_id,
			    const struct zl3073x_synth *synth);

/**
 * zl3073x_synth_dpll_get - get DPLL ID the synth is driven by
 * @synth: pointer to synth state
 *
 * Return: ID of DPLL the given synthesizer is driven by
 */
static inline u8 zl3073x_synth_dpll_get(const struct zl3073x_synth *synth)
{
	return FIELD_GET(ZL_SYNTH_CTRL_DPLL_SEL, synth->ctrl);
}

/**
 * zl3073x_synth_freq_get - get synth current freq
 * @synth: pointer to synth state
 *
 * Return: frequency of given synthesizer
 */
static inline u32 zl3073x_synth_freq_get(const struct zl3073x_synth *synth)
{
	return mul_u64_u32_div(synth->freq_base * synth->freq_m,
			       synth->freq_mult, synth->freq_n);
}

/**
 * zl3073x_synth_is_enabled - check if the given synth is enabled
 * @synth: pointer to synth state
 *
 * Return: true if synth is enabled, false otherwise
 */
static inline bool zl3073x_synth_is_enabled(const struct zl3073x_synth *synth)
{
	return FIELD_GET(ZL_SYNTH_CTRL_EN, synth->ctrl);
}

#endif /* _ZL3073X_SYNTH_H */
@@ -1352,6 +1352,9 @@ static void ib_device_notify_register(struct ib_device *device)
 
 	down_read(&devices_rwsem);
 
+	/* Mark for userspace that device is ready */
+	kobject_uevent(&device->dev.kobj, KOBJ_ADD);
+
 	ret = rdma_nl_notify_event(device, 0, RDMA_REGISTER_EVENT);
 	if (ret)
 		goto out;
@@ -1468,10 +1471,9 @@ int ib_register_device(struct ib_device *device, const char *name,
 		return ret;
 	}
 	dev_set_uevent_suppress(&device->dev, false);
-	/* Mark for userspace that device is ready */
-	kobject_uevent(&device->dev.kobj, KOBJ_ADD);
-
 	ib_device_notify_register(device);
+
 	ib_device_put(device);
 
 	return 0;
@@ -56,11 +56,8 @@ int rxe_cq_from_init(struct rxe_dev *rxe, struct rxe_cq *cq, int cqe,
 
 	err = do_mmap_info(rxe, uresp ? &uresp->mi : NULL, udata,
 			   cq->queue->buf, cq->queue->buf_size, &cq->queue->ip);
-	if (err) {
-		vfree(cq->queue->buf);
-		kfree(cq->queue);
+	if (err)
 		return err;
-	}
 
 	cq->is_user = uresp;
@@ -454,7 +454,7 @@ static int __init gicv2m_of_init(struct fwnode_handle *parent_handle,
 #ifdef CONFIG_ACPI
 static int acpi_num_msi;
 
-static __init struct fwnode_handle *gicv2m_get_fwnode(struct device *dev)
+static struct fwnode_handle *gicv2m_get_fwnode(struct device *dev)
 {
 	struct v2m_data *data;
@@ -317,6 +317,7 @@ static int start_readonly;
  * so all the races disappear.
  */
 static bool create_on_open = true;
+static bool legacy_async_del_gendisk = true;
 
 /*
  * We have a system wide 'event count' that is incremented
@@ -614,9 +615,12 @@ static void __mddev_put(struct mddev *mddev)
 	    mddev->ctime || mddev->hold_active)
 		return;
 
-	/* Array is not configured at all, and not held active, so destroy it */
+	/*
+	 * If array is freed by stopping array, MD_DELETED is set by
+	 * do_md_stop(), MD_DELETED is still set here in case mddev is freed
+	 * directly by closing a mddev that is created by create_on_open.
+	 */
 	set_bit(MD_DELETED, &mddev->flags);
 
 	/*
 	 * Call queue_work inside the spinlock so that flush_workqueue() after
 	 * mddev_find will succeed in waiting for the work to be done.
@@ -851,6 +855,22 @@ void mddev_unlock(struct mddev *mddev)
 			kobject_del(&rdev->kobj);
 		export_rdev(rdev, mddev);
 	}
+
+	if (!legacy_async_del_gendisk) {
+		/*
+		 * Call del_gendisk after release reconfig_mutex to avoid
+		 * deadlock (e.g. call del_gendisk under the lock and an
+		 * access to sysfs files waits the lock)
+		 * And MD_DELETED is only used for md raid which is set in
+		 * do_md_stop. dm raid only uses md_stop to stop. So dm raid
+		 * doesn't need to check MD_DELETED when getting reconfig lock
+		 */
+		if (test_bit(MD_DELETED, &mddev->flags) &&
+		    !test_and_set_bit(MD_DO_DELETE, &mddev->flags)) {
+			kobject_del(&mddev->kobj);
+			del_gendisk(mddev->gendisk);
+		}
+	}
 }
 EXPORT_SYMBOL_GPL(mddev_unlock);
 
@@ -5760,19 +5780,30 @@ md_attr_store(struct kobject *kobj, struct attribute *attr,
 	struct md_sysfs_entry *entry = container_of(attr, struct md_sysfs_entry, attr);
 	struct mddev *mddev = container_of(kobj, struct mddev, kobj);
 	ssize_t rv;
+	struct kernfs_node *kn = NULL;
 
 	if (!entry->store)
 		return -EIO;
 	if (!capable(CAP_SYS_ADMIN))
 		return -EACCES;
+
+	if (entry->store == array_state_store && cmd_match(page, "clear"))
+		kn = sysfs_break_active_protection(kobj, attr);
+
 	spin_lock(&all_mddevs_lock);
 	if (!mddev_get(mddev)) {
 		spin_unlock(&all_mddevs_lock);
+		if (kn)
+			sysfs_unbreak_active_protection(kn);
 		return -EBUSY;
 	}
 	spin_unlock(&all_mddevs_lock);
 	rv = entry->store(mddev, page, length);
 	mddev_put(mddev);
+
+	if (kn)
+		sysfs_unbreak_active_protection(kn);
+
 	return rv;
 }
 
@@ -5780,12 +5811,13 @@ static void md_kobj_release(struct kobject *ko)
 {
 	struct mddev *mddev = container_of(ko, struct mddev, kobj);
 
-	if (mddev->sysfs_state)
-		sysfs_put(mddev->sysfs_state);
-	if (mddev->sysfs_level)
-		sysfs_put(mddev->sysfs_level);
-
-	del_gendisk(mddev->gendisk);
+	if (legacy_async_del_gendisk) {
+		if (mddev->sysfs_state)
+			sysfs_put(mddev->sysfs_state);
+		if (mddev->sysfs_level)
+			sysfs_put(mddev->sysfs_level);
+		del_gendisk(mddev->gendisk);
+	}
 	put_disk(mddev->gendisk);
 }
 
@@ -5989,6 +6021,9 @@ static int md_alloc_and_put(dev_t dev, char *name)
 {
 	struct mddev *mddev = md_alloc(dev, name);
 
+	if (legacy_async_del_gendisk)
+		pr_warn("md: async del_gendisk mode will be removed in future, please upgrade to mdadm-4.5+\n");
+
 	if (IS_ERR(mddev))
 		return PTR_ERR(mddev);
 	mddev_put(mddev);
@@ -6399,15 +6434,22 @@ static void md_clean(struct mddev *mddev)
 	mddev->persistent = 0;
 	mddev->level = LEVEL_NONE;
 	mddev->clevel[0] = 0;
 
 	/*
-	 * Don't clear MD_CLOSING, or mddev can be opened again.
-	 * 'hold_active != 0' means mddev is still in the creation
-	 * process and will be used later.
+	 * For legacy_async_del_gendisk mode, it can stop the array in the
+	 * middle of assembling it, then it still can access the array. So
+	 * it needs to clear MD_CLOSING. If not legacy_async_del_gendisk,
+	 * it can't open the array again after stopping it. So it doesn't
+	 * clear MD_CLOSING.
 	 */
-	if (mddev->hold_active)
-		mddev->flags = 0;
-	else
+	if (legacy_async_del_gendisk && mddev->hold_active) {
+		clear_bit(MD_CLOSING, &mddev->flags);
+	} else {
+		/* if UNTIL_STOP is set, it's cleared here */
+		mddev->hold_active = 0;
+		/* Don't clear MD_CLOSING, or mddev can be opened again. */
 		mddev->flags &= BIT_ULL_MASK(MD_CLOSING);
+	}
 	mddev->sb_flags = 0;
 	mddev->ro = MD_RDWR;
 	mddev->metadata_type[0] = 0;
@@ -6632,10 +6674,9 @@ static int do_md_stop(struct mddev *mddev, int mode)
 		mddev->bitmap_info.offset = 0;
 
 		export_array(mddev);
-
 		md_clean(mddev);
-		if (mddev->hold_active == UNTIL_STOP)
-			mddev->hold_active = 0;
+		if (!legacy_async_del_gendisk)
+			set_bit(MD_DELETED, &mddev->flags);
 	}
 	md_new_event();
 	sysfs_notify_dirent_safe(mddev->sysfs_state);
@@ -10327,6 +10368,7 @@ module_param_call(start_ro, set_ro, get_ro, NULL, S_IRUSR|S_IWUSR);
 module_param(start_dirty_degraded, int, S_IRUGO|S_IWUSR);
 module_param_call(new_array, add_named_array, NULL, NULL, S_IWUSR);
 module_param(create_on_open, bool, S_IRUSR|S_IWUSR);
+module_param(legacy_async_del_gendisk, bool, 0600);
 
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("MD RAID framework");
@@ -355,6 +355,7 @@ enum mddev_flags {
 	MD_HAS_MULTIPLE_PPLS,
 	MD_NOT_READY,
 	MD_BROKEN,
+	MD_DO_DELETE,
 	MD_DELETED,
 };
 
@@ -699,11 +700,26 @@ static inline bool reshape_interrupted(struct mddev *mddev)
 
 static inline int __must_check mddev_lock(struct mddev *mddev)
 {
-	return mutex_lock_interruptible(&mddev->reconfig_mutex);
+	int ret;
+
+	ret = mutex_lock_interruptible(&mddev->reconfig_mutex);
+
+	/* MD_DELETED is set in do_md_stop with reconfig_mutex.
+	 * So check it here.
+	 */
+	if (!ret && test_bit(MD_DELETED, &mddev->flags)) {
+		ret = -ENODEV;
+		mutex_unlock(&mddev->reconfig_mutex);
+	}
+
+	return ret;
 }
 
 /* Sometimes we need to take the lock in a situation where
  * failure due to interrupts is not acceptable.
+ * It doesn't need to check MD_DELETED here, the owner which
+ * holds the lock here can't be stopped. And all paths can't
+ * call this function after do_md_stop.
  */
 static inline void mddev_lock_nointr(struct mddev *mddev)
 {
@@ -712,7 +728,14 @@ static inline void mddev_lock_nointr(struct mddev *mddev)
 
 static inline int mddev_trylock(struct mddev *mddev)
 {
-	return mutex_trylock(&mddev->reconfig_mutex);
+	int ret;
+
+	ret = mutex_trylock(&mddev->reconfig_mutex);
+	if (!ret && test_bit(MD_DELETED, &mddev->flags)) {
+		ret = -ENODEV;
+		mutex_unlock(&mddev->reconfig_mutex);
+	}
+	return ret;
 }
 extern void mddev_unlock(struct mddev *mddev);
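The pattern added to `mddev_lock()` above — acquire the mutex, then reject the object if a deletion flag (set only while the same mutex is held) is observed — generalizes beyond md. A hedged user-space sketch using pthreads and hypothetical names (`struct obj`, `obj_lock`), not the kernel code itself:

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical stand-in for struct mddev: a lock plus a deleted flag
 * that is only ever set while the lock is held.
 */
struct obj {
	pthread_mutex_t lock;
	bool deleted;
};

/* Mirror of the mddev_lock() change: the lock is handed to the caller
 * only if the object has not been marked deleted; otherwise drop it
 * and return -ENODEV so callers never operate on a dying object.
 */
static int obj_lock(struct obj *o)
{
	int ret = pthread_mutex_lock(&o->lock);

	if (!ret && o->deleted) {
		pthread_mutex_unlock(&o->lock);
		ret = -ENODEV;
	}
	return ret;
}
```

Because the flag is only ever set under the lock, a successful `obj_lock()` guarantees the object stays alive until the caller unlocks.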
@@ -209,6 +209,8 @@ atomic_t netpoll_block_tx = ATOMIC_INIT(0);
 
 unsigned int bond_net_id __read_mostly;
 
+DEFINE_STATIC_KEY_FALSE(bond_bcast_neigh_enabled);
+
 static const struct flow_dissector_key flow_keys_bonding_keys[] = {
 	{
 		.key_id = FLOW_DISSECTOR_KEY_CONTROL,
@@ -2378,7 +2380,9 @@ skip_mac_set:
 		unblock_netpoll_tx();
 	}
 
-	if (bond_mode_can_use_xmit_hash(bond))
+	/* broadcast mode uses the all_slaves to loop through slaves. */
+	if (bond_mode_can_use_xmit_hash(bond) ||
+	    BOND_MODE(bond) == BOND_MODE_BROADCAST)
 		bond_update_slave_arr(bond, NULL);
 
 	if (!slave_dev->netdev_ops->ndo_bpf ||
@@ -2554,7 +2558,8 @@ static int __bond_release_one(struct net_device *bond_dev,
 
 	bond_upper_dev_unlink(bond, slave);
 
-	if (bond_mode_can_use_xmit_hash(bond))
+	if (bond_mode_can_use_xmit_hash(bond) ||
+	    BOND_MODE(bond) == BOND_MODE_BROADCAST)
 		bond_update_slave_arr(bond, slave);
 
 	slave_info(bond_dev, slave_dev, "Releasing %s interface\n",
@@ -4464,6 +4469,9 @@ static int bond_open(struct net_device *bond_dev)
 
 		bond_for_each_slave(bond, slave, iter)
 			dev_mc_add(slave->dev, lacpdu_mcast_addr);
+
+		if (bond->params.broadcast_neighbor)
+			static_branch_inc(&bond_bcast_neigh_enabled);
 	}
 
 	if (bond_mode_can_use_xmit_hash(bond))
@@ -4483,6 +4491,10 @@ static int bond_close(struct net_device *bond_dev)
 		bond_alb_deinitialize(bond);
 	bond->recv_probe = NULL;
 
+	if (BOND_MODE(bond) == BOND_MODE_8023AD &&
+	    bond->params.broadcast_neighbor)
+		static_branch_dec(&bond_bcast_neigh_enabled);
+
 	if (bond_uses_primary(bond)) {
 		rcu_read_lock();
 		slave = rcu_dereference(bond->curr_active_slave);
@@ -5319,6 +5331,37 @@ static struct slave *bond_xdp_xmit_3ad_xor_slave_get(struct bonding *bond,
 	return slaves->arr[hash % count];
 }
 
+static bool bond_should_broadcast_neighbor(struct sk_buff *skb,
+					   struct net_device *dev)
+{
+	struct bonding *bond = netdev_priv(dev);
+	struct {
+		struct ipv6hdr ip6;
+		struct icmp6hdr icmp6;
+	} *combined, _combined;
+
+	if (!static_branch_unlikely(&bond_bcast_neigh_enabled))
+		return false;
+
+	if (!bond->params.broadcast_neighbor)
+		return false;
+
+	if (skb->protocol == htons(ETH_P_ARP))
+		return true;
+
+	if (skb->protocol == htons(ETH_P_IPV6)) {
+		combined = skb_header_pointer(skb, skb_mac_header_len(skb),
+					      sizeof(_combined),
+					      &_combined);
+		if (combined && combined->ip6.nexthdr == NEXTHDR_ICMP &&
+		    (combined->icmp6.icmp6_type == NDISC_NEIGHBOUR_SOLICITATION ||
+		     combined->icmp6.icmp6_type == NDISC_NEIGHBOUR_ADVERTISEMENT))
+			return true;
+	}
+
+	return false;
+}
+
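Stripped of the `sk_buff` plumbing, the classification in `bond_should_broadcast_neighbor()` reduces to: ARP frames always qualify, and IPv6 frames qualify when they carry an ICMPv6 Neighbor Solicitation (type 135) or Neighbor Advertisement (type 136). A stand-alone sketch of that decision with plain scalar inputs (the helper name and signature are hypothetical):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define ETHERTYPE_ARP		0x0806
#define ETHERTYPE_IPV6		0x86DD
#define NEXTHDR_ICMPV6		58	/* IPv6 next-header for ICMPv6 */
#define ND_NEIGHBOR_SOLICIT	135	/* NDISC_NEIGHBOUR_SOLICITATION */
#define ND_NEIGHBOR_ADVERT	136	/* NDISC_NEIGHBOUR_ADVERTISEMENT */

/* Mirror of the decision in bond_should_broadcast_neighbor(): broadcast
 * ARP, plus IPv6 NS/NA, to all usable slaves; everything else is hashed.
 */
static bool should_broadcast_neighbor(uint16_t ethertype, uint8_t ip6_nexthdr,
				      uint8_t icmp6_type)
{
	if (ethertype == ETHERTYPE_ARP)
		return true;

	if (ethertype == ETHERTYPE_IPV6 && ip6_nexthdr == NEXTHDR_ICMPV6)
		return icmp6_type == ND_NEIGHBOR_SOLICIT ||
		       icmp6_type == ND_NEIGHBOR_ADVERT;

	return false;
}
```

Other ICMPv6 types (for instance an Echo Request, type 128) fall through to the normal 802.3ad hashed transmit path.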
 /* Use this Xmit function for 3AD as well as XOR modes. The current
  * usable slave array is formed in the control path. The xmit function
  * just calculates hash and sends the packet out.
@@ -5338,17 +5381,27 @@ static netdev_tx_t bond_3ad_xor_xmit(struct sk_buff *skb,
 		return bond_tx_drop(dev, skb);
 	}
 
-/* in broadcast mode, we send everything to all usable interfaces. */
+/* in broadcast mode, we send everything to all or usable slave interfaces.
+ * under rcu_read_lock when this function is called.
+ */
 static netdev_tx_t bond_xmit_broadcast(struct sk_buff *skb,
-				       struct net_device *bond_dev)
+				       struct net_device *bond_dev,
+				       bool all_slaves)
 {
 	struct bonding *bond = netdev_priv(bond_dev);
-	struct slave *slave = NULL;
-	struct list_head *iter;
+	struct bond_up_slave *slaves;
 	bool xmit_suc = false;
 	bool skb_used = false;
+	int slaves_count, i;
 
-	bond_for_each_slave_rcu(bond, slave, iter) {
+	if (all_slaves)
+		slaves = rcu_dereference(bond->all_slaves);
+	else
+		slaves = rcu_dereference(bond->usable_slaves);
+
+	slaves_count = slaves ? READ_ONCE(slaves->count) : 0;
+	for (i = 0; i < slaves_count; i++) {
+		struct slave *slave = slaves->arr[i];
 		struct sk_buff *skb2;
 
 		if (!(bond_slave_is_up(slave) && slave->link == BOND_LINK_UP))
@@ -5586,10 +5639,13 @@ static netdev_tx_t __bond_start_xmit(struct sk_buff *skb, struct net_device *dev
 	case BOND_MODE_ACTIVEBACKUP:
 		return bond_xmit_activebackup(skb, dev);
 	case BOND_MODE_8023AD:
+		if (bond_should_broadcast_neighbor(skb, dev))
+			return bond_xmit_broadcast(skb, dev, false);
+		fallthrough;
 	case BOND_MODE_XOR:
 		return bond_3ad_xor_xmit(skb, dev);
 	case BOND_MODE_BROADCAST:
-		return bond_xmit_broadcast(skb, dev);
+		return bond_xmit_broadcast(skb, dev, true);
 	case BOND_MODE_ALB:
 		return bond_alb_xmit(skb, dev);
 	case BOND_MODE_TLB:
@@ -6468,6 +6524,7 @@ static int __init bond_check_params(struct bond_params *params)
 	params->ad_actor_sys_prio = ad_actor_sys_prio;
 	eth_zero_addr(params->ad_actor_system);
 	params->ad_user_port_key = ad_user_port_key;
+	params->broadcast_neighbor = 0;
 	if (packets_per_slave > 0) {
 		params->reciprocal_packets_per_slave =
 			reciprocal_value(packets_per_slave);
@@ -122,6 +122,7 @@ static const struct nla_policy bond_policy[IFLA_BOND_MAX + 1] = {
 	[IFLA_BOND_PEER_NOTIF_DELAY]	= NLA_POLICY_FULL_RANGE(NLA_U32, &delay_range),
 	[IFLA_BOND_MISSED_MAX]		= { .type = NLA_U8 },
 	[IFLA_BOND_NS_IP6_TARGET]	= { .type = NLA_NESTED },
+	[IFLA_BOND_BROADCAST_NEIGH]	= { .type = NLA_U8 },
 };
 
 static const struct nla_policy bond_slave_policy[IFLA_BOND_SLAVE_MAX + 1] = {
@@ -549,6 +550,16 @@ static int bond_changelink(struct net_device *bond_dev, struct nlattr *tb[],
 			return err;
 	}
 
+	if (data[IFLA_BOND_BROADCAST_NEIGH]) {
+		int broadcast_neigh = nla_get_u8(data[IFLA_BOND_BROADCAST_NEIGH]);
+
+		bond_opt_initval(&newval, broadcast_neigh);
+		err = __bond_opt_set(bond, BOND_OPT_BROADCAST_NEIGH, &newval,
+				     data[IFLA_BOND_BROADCAST_NEIGH], extack);
+		if (err)
+			return err;
+	}
+
 	return 0;
 }
 
@@ -615,6 +626,7 @@ static size_t bond_get_size(const struct net_device *bond_dev)
 						/* IFLA_BOND_NS_IP6_TARGET */
 		nla_total_size(sizeof(struct nlattr)) +
 		nla_total_size(sizeof(struct in6_addr)) * BOND_MAX_NS_TARGETS +
+		nla_total_size(sizeof(u8)) +	/* IFLA_BOND_BROADCAST_NEIGH */
 		0;
 }
 
@@ -774,6 +786,10 @@ static int bond_fill_info(struct sk_buff *skb,
 		       bond->params.missed_max))
 		goto nla_put_failure;
 
+	if (nla_put_u8(skb, IFLA_BOND_BROADCAST_NEIGH,
+		       bond->params.broadcast_neighbor))
+		goto nla_put_failure;
+
 	if (BOND_MODE(bond) == BOND_MODE_8023AD) {
 		struct ad_info info;
@@ -85,7 +85,8 @@ static int bond_option_ad_user_port_key_set(struct bonding *bond,
const struct bond_opt_value *newval);
static int bond_option_missed_max_set(struct bonding *bond,
const struct bond_opt_value *newval);

static int bond_option_broadcast_neigh_set(struct bonding *bond,
const struct bond_opt_value *newval);

static const struct bond_opt_value bond_mode_tbl[] = {
{ "balance-rr", BOND_MODE_ROUNDROBIN, BOND_VALFLAG_DEFAULT},
@@ -233,6 +234,12 @@ static const struct bond_opt_value bond_missed_max_tbl[] = {
{ NULL, -1, 0},
};

static const struct bond_opt_value bond_broadcast_neigh_tbl[] = {
{ "off", 0, BOND_VALFLAG_DEFAULT},
{ "on", 1, 0},
{ NULL, -1, 0}
};

static const struct bond_option bond_opts[BOND_OPT_LAST] = {
[BOND_OPT_MODE] = {
.id = BOND_OPT_MODE,
@@ -497,6 +504,14 @@ static const struct bond_option bond_opts[BOND_OPT_LAST] = {
.desc = "Delay between each peer notification on failover event, in milliseconds",
.values = bond_peer_notif_delay_tbl,
.set = bond_option_peer_notif_delay_set
},
[BOND_OPT_BROADCAST_NEIGH] = {
.id = BOND_OPT_BROADCAST_NEIGH,
.name = "broadcast_neighbor",
.desc = "Broadcast neighbor packets to all active slaves",
.unsuppmodes = BOND_MODE_ALL_EX(BIT(BOND_MODE_8023AD)),
.values = bond_broadcast_neigh_tbl,
.set = bond_option_broadcast_neigh_set,
}
};

@@ -888,6 +903,13 @@ static int bond_option_mode_set(struct bonding *bond,
bond->params.arp_validate = BOND_ARP_VALIDATE_NONE;
bond->params.mode = newval->value;

/* When changing mode, the bond device is down, we may reduce
* the bond_bcast_neigh_enabled in bond_close() if broadcast_neighbor
* enabled in 8023ad mode. Therefore, only clear broadcast_neighbor
* to 0.
*/
bond->params.broadcast_neighbor = 0;

if (bond->dev->reg_state == NETREG_REGISTERED) {
bool update = false;

@@ -1829,3 +1851,22 @@ static int bond_option_ad_user_port_key_set(struct bonding *bond,
bond->params.ad_user_port_key = newval->value;
return 0;
}

static int bond_option_broadcast_neigh_set(struct bonding *bond,
const struct bond_opt_value *newval)
{
if (bond->params.broadcast_neighbor == newval->value)
return 0;

bond->params.broadcast_neighbor = newval->value;
if (bond->dev->flags & IFF_UP) {
if (bond->params.broadcast_neighbor)
static_branch_inc(&bond_bcast_neigh_enabled);
else
static_branch_dec(&bond_bcast_neigh_enabled);
}

netdev_dbg(bond->dev, "Setting broadcast_neighbor to %s (%llu)\n",
newval->string, newval->value);
return 0;
}

@@ -704,7 +704,7 @@ void ice_lag_move_new_vf_nodes(struct ice_vf *vf)
lag = pf->lag;

mutex_lock(&pf->lag_mutex);
if (!lag->bonded)
if (!lag || !lag->bonded)
goto new_vf_unlock;

pri_port = pf->hw.port_info->lport;

@@ -654,7 +654,7 @@ static int vrf_finish_output6(struct net *net, struct sock *sk,
skb->protocol = htons(ETH_P_IPV6);
skb->dev = dev;

rcu_read_lock_bh();
rcu_read_lock();
nexthop = rt6_nexthop((struct rt6_info *)dst, &ipv6_hdr(skb)->daddr);
neigh = __ipv6_neigh_lookup_noref(dst->dev, nexthop);
if (unlikely(!neigh))
@@ -662,10 +662,10 @@ static int vrf_finish_output6(struct net *net, struct sock *sk,
if (!IS_ERR(neigh)) {
sock_confirm_neigh(skb, neigh);
ret = neigh_output(neigh, skb, false);
rcu_read_unlock_bh();
rcu_read_unlock();
return ret;
}
rcu_read_unlock_bh();
rcu_read_unlock();

IP6_INC_STATS(dev_net(dst->dev),
ip6_dst_idev(dst), IPSTATS_MIB_OUTNOROUTES);
@@ -879,7 +879,7 @@ static int vrf_finish_output(struct net *net, struct sock *sk, struct sk_buff *s
}
}

rcu_read_lock_bh();
rcu_read_lock();

neigh = ip_neigh_for_gw(rt, skb, &is_v6gw);
if (!IS_ERR(neigh)) {
@@ -888,11 +888,11 @@ static int vrf_finish_output(struct net *net, struct sock *sk, struct sk_buff *s
sock_confirm_neigh(skb, neigh);
/* if crossing protocols, can not use the cached header */
ret = neigh_output(neigh, skb, is_v6gw);
rcu_read_unlock_bh();
rcu_read_unlock();
return ret;
}

rcu_read_unlock_bh();
rcu_read_unlock();
vrf_tx_error(skb->dev, skb);
return -EINVAL;
}

@@ -53,6 +53,8 @@ MODULE_PARM_DESC(tls_handshake_timeout,
"nvme TLS handshake timeout in seconds (default 10)");
#endif

static atomic_t nvme_tcp_cpu_queues[NR_CPUS];

#ifdef CONFIG_DEBUG_LOCK_ALLOC
/* lockdep can detect a circular dependency of the form
* sk_lock -> mmap_lock (page fault) -> fs locks -> sk_lock
@@ -126,6 +128,7 @@ enum nvme_tcp_queue_flags {
NVME_TCP_Q_ALLOCATED = 0,
NVME_TCP_Q_LIVE = 1,
NVME_TCP_Q_POLLING = 2,
NVME_TCP_Q_IO_CPU_SET = 3,
};

enum nvme_tcp_recv_state {
@@ -1630,23 +1633,56 @@ static bool nvme_tcp_poll_queue(struct nvme_tcp_queue *queue)
ctrl->io_queues[HCTX_TYPE_POLL];
}

/*
* Track the number of queues assigned to each cpu using a global per-cpu
* counter and select the least used cpu from the mq_map. Our goal is to spread
* different controllers I/O threads across different cpu cores.
*
* Note that the accounting is not 100% perfect, but we don't need to be, we're
* simply putting our best effort to select the best candidate cpu core that we
* find at any given point.
*/
static void nvme_tcp_set_queue_io_cpu(struct nvme_tcp_queue *queue)
{
struct nvme_tcp_ctrl *ctrl = queue->ctrl;
int qid = nvme_tcp_queue_id(queue);
int n = 0;
struct blk_mq_tag_set *set = &ctrl->tag_set;
int qid = nvme_tcp_queue_id(queue) - 1;
unsigned int *mq_map = NULL;
int cpu, min_queues = INT_MAX, io_cpu;

if (wq_unbound)
goto out;

if (nvme_tcp_default_queue(queue))
n = qid - 1;
mq_map = set->map[HCTX_TYPE_DEFAULT].mq_map;
else if (nvme_tcp_read_queue(queue))
n = qid - ctrl->io_queues[HCTX_TYPE_DEFAULT] - 1;
mq_map = set->map[HCTX_TYPE_READ].mq_map;
else if (nvme_tcp_poll_queue(queue))
n = qid - ctrl->io_queues[HCTX_TYPE_DEFAULT] -
ctrl->io_queues[HCTX_TYPE_READ] - 1;
if (wq_unbound)
queue->io_cpu = WORK_CPU_UNBOUND;
else
queue->io_cpu = cpumask_next_wrap(n - 1, cpu_online_mask, -1, false);
mq_map = set->map[HCTX_TYPE_POLL].mq_map;

if (WARN_ON(!mq_map))
goto out;

/* Search for the least used cpu from the mq_map */
io_cpu = WORK_CPU_UNBOUND;
for_each_online_cpu(cpu) {
int num_queues = atomic_read(&nvme_tcp_cpu_queues[cpu]);

if (mq_map[cpu] != qid)
continue;
if (num_queues < min_queues) {
io_cpu = cpu;
min_queues = num_queues;
}
}
if (io_cpu != WORK_CPU_UNBOUND) {
queue->io_cpu = io_cpu;
atomic_inc(&nvme_tcp_cpu_queues[io_cpu]);
set_bit(NVME_TCP_Q_IO_CPU_SET, &queue->flags);
}
out:
dev_dbg(ctrl->ctrl.device, "queue %d: using cpu %d\n",
qid, queue->io_cpu);
}

static void nvme_tcp_tls_done(void *data, int status, key_serial_t pskid)
@@ -1790,7 +1826,7 @@ static int nvme_tcp_alloc_queue(struct nvme_ctrl *nctrl, int qid,

queue->sock->sk->sk_allocation = GFP_ATOMIC;
queue->sock->sk->sk_use_task_frag = false;
nvme_tcp_set_queue_io_cpu(queue);
queue->io_cpu = WORK_CPU_UNBOUND;
queue->request = NULL;
queue->data_remaining = 0;
queue->ddgst_remaining = 0;
@@ -1912,6 +1948,9 @@ static void nvme_tcp_stop_queue_nowait(struct nvme_ctrl *nctrl, int qid)
if (!test_bit(NVME_TCP_Q_ALLOCATED, &queue->flags))
return;

if (test_and_clear_bit(NVME_TCP_Q_IO_CPU_SET, &queue->flags))
atomic_dec(&nvme_tcp_cpu_queues[queue->io_cpu]);

mutex_lock(&queue->queue_lock);
if (test_and_clear_bit(NVME_TCP_Q_LIVE, &queue->flags))
__nvme_tcp_stop_queue(queue);
@@ -1971,9 +2010,10 @@ static int nvme_tcp_start_queue(struct nvme_ctrl *nctrl, int idx)
nvme_tcp_init_recv_ctx(queue);
nvme_tcp_setup_sock_ops(queue);

if (idx)
if (idx) {
nvme_tcp_set_queue_io_cpu(queue);
ret = nvmf_connect_io_queue(nctrl, idx);
else
} else
ret = nvmf_connect_admin_queue(nctrl);

if (!ret) {
@@ -2992,6 +3032,7 @@ static struct nvmf_transport_ops nvme_tcp_transport = {
static int __init nvme_tcp_init_module(void)
{
unsigned int wq_flags = WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_SYSFS;
int cpu;

BUILD_BUG_ON(sizeof(struct nvme_tcp_hdr) != 8);
BUILD_BUG_ON(sizeof(struct nvme_tcp_cmd_pdu) != 72);
@@ -3009,6 +3050,9 @@ static int __init nvme_tcp_init_module(void)
if (!nvme_tcp_wq)
return -ENOMEM;

for_each_possible_cpu(cpu)
atomic_set(&nvme_tcp_cpu_queues[cpu], 0);

nvmf_register_transport(&nvme_tcp_transport);
return 0;
}

@@ -107,8 +107,14 @@ static void usb_parse_ss_endpoint_companion(struct device *ddev, int cfgno,
*/
desc = (struct usb_ss_ep_comp_descriptor *) buffer;

if (desc->bDescriptorType != USB_DT_SS_ENDPOINT_COMP ||
size < USB_DT_SS_EP_COMP_SIZE) {
if (size < USB_DT_SS_EP_COMP_SIZE) {
dev_notice(ddev,
"invalid SuperSpeed endpoint companion descriptor "
"of length %d, skipping\n", size);
return;
}

if (desc->bDescriptorType != USB_DT_SS_ENDPOINT_COMP) {
dev_notice(ddev, "No SuperSpeed endpoint companion for config %d "
" interface %d altsetting %d ep %d: "
"using minimum values\n",

@@ -688,6 +688,12 @@ void pde_put(struct proc_dir_entry *pde)
}
}

static void pde_erase(struct proc_dir_entry *pde, struct proc_dir_entry *parent)
{
rb_erase(&pde->subdir_node, &parent->subdir);
RB_CLEAR_NODE(&pde->subdir_node);
}

/*
* Remove a /proc entry and free it if it's not currently in use.
*/
@@ -710,7 +716,7 @@ void remove_proc_entry(const char *name, struct proc_dir_entry *parent)
WARN(1, "removing permanent /proc entry '%s'", de->name);
de = NULL;
} else {
rb_erase(&de->subdir_node, &parent->subdir);
pde_erase(de, parent);
if (S_ISDIR(de->mode))
parent->nlink--;
}
@@ -754,7 +760,7 @@ int remove_proc_subtree(const char *name, struct proc_dir_entry *parent)
root->parent->name, root->name);
return -EINVAL;
}
rb_erase(&root->subdir_node, &parent->subdir);
pde_erase(root, parent);

de = root;
while (1) {
@@ -766,7 +772,7 @@ int remove_proc_subtree(const char *name, struct proc_dir_entry *parent)
next->parent->name, next->name);
return -EINVAL;
}
rb_erase(&next->subdir_node, &de->subdir);
pde_erase(next, de);
de = next;
continue;
}

@@ -2729,6 +2729,7 @@ wdata_prepare_pages(struct cifs_writedata *wdata, unsigned int found_pages,
* back from swapper_space to tmpfs file mapping
*/

relock_recheck:
if (nr_pages == 0)
lock_page(page);
else if (!trylock_page(page))
@@ -2751,11 +2752,16 @@ wdata_prepare_pages(struct cifs_writedata *wdata, unsigned int found_pages,
break;
}

if (wbc->sync_mode != WB_SYNC_NONE)
wait_on_page_writeback(page);
if (PageWriteback(page)) {
unlock_page(page);
if (wbc->sync_mode != WB_SYNC_NONE) {
wait_on_page_writeback(page);
goto relock_recheck;
}
break;
}

if (PageWriteback(page) ||
!clear_page_dirty_for_io(page)) {
if (!clear_page_dirty_for_io(page)) {
unlock_page(page);
break;
}

@@ -142,10 +142,15 @@ static int squashfs_fill_super(struct super_block *sb, struct fs_context *fc)
unsigned short flags;
unsigned int fragments;
u64 lookup_table_start, xattr_id_table_start, next_table;
int err;
int err, devblksize = sb_min_blocksize(sb, SQUASHFS_DEVBLK_SIZE);

TRACE("Entered squashfs_fill_superblock\n");

if (!devblksize) {
errorf(fc, "squashfs: unable to set blocksize\n");
return -EINVAL;
}

/*
* squashfs provides 'backing_dev_info' in order to disable read-ahead. For
* squashfs, I/O is not deferred, it is done immediately in read_folio,
@@ -169,7 +174,7 @@ static int squashfs_fill_super(struct super_block *sb, struct fs_context *fc)

msblk->panic_on_errors = (opts->errors == Opt_errors_panic);

msblk->devblksize = sb_min_blocksize(sb, SQUASHFS_DEVBLK_SIZE);
msblk->devblksize = devblksize;
msblk->devblksize_log2 = ffz(~msblk->devblksize);

mutex_init(&msblk->meta_index_mutex);

@@ -41,8 +41,12 @@ struct dpll_device_ops {
enum dpll_feature_state *state,
struct netlink_ext_ack *extack);

RH_KABI_RESERVE(1)
RH_KABI_RESERVE(2)
RH_KABI_USE(1, int (*phase_offset_avg_factor_set)(const struct dpll_device *dpll,
void *dpll_priv, u32 factor,
struct netlink_ext_ack *extack))
RH_KABI_USE(2, int (*phase_offset_avg_factor_get)(const struct dpll_device *dpll,
void *dpll_priv, u32 *factor,
struct netlink_ext_ack *extack))
RH_KABI_RESERVE(3)
RH_KABI_RESERVE(4)
RH_KABI_RESERVE(5)
@@ -117,8 +121,18 @@ struct dpll_pin_ops {
struct dpll_pin_esync *esync,
struct netlink_ext_ack *extack);

RH_KABI_RESERVE(1)
RH_KABI_RESERVE(2)
RH_KABI_USE(1, int (*ref_sync_set)(const struct dpll_pin *pin,
void *pin_priv,
const struct dpll_pin *ref_sync_pin,
void *ref_sync_pin_priv,
const enum dpll_pin_state state,
struct netlink_ext_ack *extack))
RH_KABI_USE(2, int (*ref_sync_get)(const struct dpll_pin *pin,
void *pin_priv,
const struct dpll_pin *ref_sync_pin,
void *ref_sync_pin_priv,
enum dpll_pin_state *state,
struct netlink_ext_ack *extack))
RH_KABI_RESERVE(3)
RH_KABI_RESERVE(4)
RH_KABI_RESERVE(5)
@@ -233,6 +247,9 @@ int dpll_pin_on_pin_register(struct dpll_pin *parent, struct dpll_pin *pin,
void dpll_pin_on_pin_unregister(struct dpll_pin *parent, struct dpll_pin *pin,
const struct dpll_pin_ops *ops, void *priv);

int dpll_pin_ref_sync_pair_add(struct dpll_pin *pin,
struct dpll_pin *ref_sync_pin);

int dpll_device_change_ntf(struct dpll_device *dpll);

int dpll_pin_change_ntf(struct dpll_pin *pin);

@@ -70,16 +70,6 @@ static inline bool lockdep_rtnl_is_held(void)
#define rcu_dereference_rtnl(p) \
rcu_dereference_check(p, lockdep_rtnl_is_held())

/**
* rcu_dereference_bh_rtnl - rcu_dereference_bh with debug checking
* @p: The pointer to read, prior to dereference
*
* Do an rcu_dereference_bh(p), but check caller either holds rcu_read_lock_bh()
* or RTNL. Note : Please prefer rtnl_dereference() or rcu_dereference_bh()
*/
#define rcu_dereference_bh_rtnl(p) \
rcu_dereference_bh_check(p, lockdep_rtnl_is_held())

/**
* rtnl_dereference - fetch RCU pointer when updates are prevented by RTNL
* @p: The pointer to read, prior to dereferencing

@@ -38,11 +38,11 @@ static inline struct neighbour *__ipv4_neigh_lookup(struct net_device *dev, u32
{
struct neighbour *n;

rcu_read_lock_bh();
rcu_read_lock();
n = __ipv4_neigh_lookup_noref(dev, key);
if (n && !refcount_inc_not_zero(&n->refcnt))
n = NULL;
rcu_read_unlock_bh();
rcu_read_unlock();

return n;
}
@@ -51,16 +51,10 @@ static inline void __ipv4_confirm_neigh(struct net_device *dev, u32 key)
{
struct neighbour *n;

rcu_read_lock_bh();
rcu_read_lock();
n = __ipv4_neigh_lookup_noref(dev, key);
if (n) {
unsigned long now = jiffies;

/* avoid dirtying neighbour */
if (READ_ONCE(n->confirmed) != now)
WRITE_ONCE(n->confirmed, now);
}
rcu_read_unlock_bh();
neigh_confirm(n);
rcu_read_unlock();
}

void arp_init(void);

@@ -76,6 +76,8 @@ enum {
BOND_OPT_MISSED_MAX,
BOND_OPT_NS_TARGETS,
BOND_OPT_PRIO,
BOND_OPT_COUPLED_CONTROL,
BOND_OPT_BROADCAST_NEIGH,
BOND_OPT_LAST
};

@@ -119,6 +119,8 @@ static inline int is_netpoll_tx_blocked(struct net_device *dev)
#define is_netpoll_tx_blocked(dev) (0)
#endif

DECLARE_STATIC_KEY_FALSE(bond_bcast_neigh_enabled);

struct bond_params {
int mode;
int xmit_policy;
@@ -152,6 +154,7 @@ struct bond_params {
#if IS_ENABLED(CONFIG_IPV6)
struct in6_addr ns_targets[BOND_MAX_NS_TARGETS];
#endif
int broadcast_neighbor;

/* 2 bytes of padding : see ether_addr_equal_64bits() */
u8 ad_actor_system[ETH_ALEN + 2];

@@ -26,7 +26,10 @@
struct sk_buff;

struct dst_entry {
struct net_device *dev;
RH_KABI_REPLACE(struct net_device *dev, union {
struct net_device *dev;
struct net_device __rcu *dev_rcu;
})
struct dst_ops *ops;
unsigned long _metrics;
unsigned long expires;
@@ -579,6 +582,41 @@ static inline void skb_dst_update_pmtu_no_confirm(struct sk_buff *skb, u32 mtu)
dst->ops->update_pmtu(dst, NULL, skb, mtu, false);
}

static inline struct net_device *dst_dev(const struct dst_entry *dst)
{
return READ_ONCE(dst->dev);
}

static inline struct net_device *dst_dev_rcu(const struct dst_entry *dst)
{
return rcu_dereference(dst->dev_rcu);
}

static inline struct net *dst_dev_net_rcu(const struct dst_entry *dst)
{
return dev_net_rcu(dst_dev_rcu(dst));
}

static inline struct net_device *skb_dst_dev(const struct sk_buff *skb)
{
return dst_dev(skb_dst(skb));
}

static inline struct net_device *skb_dst_dev_rcu(const struct sk_buff *skb)
{
return dst_dev_rcu(skb_dst(skb));
}

static inline struct net *skb_dst_dev_net(const struct sk_buff *skb)
{
return dev_net(skb_dst_dev(skb));
}

static inline struct net *skb_dst_dev_net_rcu(const struct sk_buff *skb)
{
return dev_net_rcu(skb_dst_dev_rcu(skb));
}

struct dst_entry *dst_blackhole_check(struct dst_entry *dst, u32 cookie);
void dst_blackhole_update_pmtu(struct dst_entry *dst, struct sock *sk,
struct sk_buff *skb, u32 mtu, bool confirm_neigh);

@@ -443,20 +443,43 @@ static inline bool ip_sk_ignore_df(const struct sock *sk)
static inline unsigned int ip_dst_mtu_maybe_forward(const struct dst_entry *dst,
bool forwarding)
{
struct net *net = dev_net(dst->dev);
unsigned int mtu;
const struct rtable *rt = container_of(dst, struct rtable, dst);
const struct net_device *dev;
unsigned int mtu, res;
struct net *net;

rcu_read_lock();

dev = dst_dev_rcu(dst);
net = dev_net_rcu(dev);
if (READ_ONCE(net->ipv4.sysctl_ip_fwd_use_pmtu) ||
ip_mtu_locked(dst) ||
!forwarding)
return dst_mtu(dst);
!forwarding) {
mtu = rt->rt_pmtu;
if (mtu && time_before(jiffies, rt->dst.expires))
goto out;
}

/* 'forwarding = true' case should always honour route mtu */
mtu = dst_metric_raw(dst, RTAX_MTU);
if (!mtu)
mtu = min(READ_ONCE(dst->dev->mtu), IP_MAX_MTU);
if (mtu)
goto out;

return mtu - lwtunnel_headroom(dst->lwtstate, mtu);
mtu = READ_ONCE(dev->mtu);

if (unlikely(ip_mtu_locked(dst))) {
if (rt->rt_uses_gateway && mtu > 576)
mtu = 576;
}

out:
mtu = min_t(unsigned int, mtu, IP_MAX_MTU);

res = mtu - lwtunnel_headroom(dst->lwtstate, mtu);

rcu_read_unlock();

return res;
}

static inline unsigned int ip_skb_dst_mtu(struct sock *sk,

@@ -337,7 +337,7 @@ static inline unsigned int ip6_dst_mtu_forward(const struct dst_entry *dst)

mtu = IPV6_MIN_MTU;
rcu_read_lock();
idev = __in6_dev_get(dst->dev);
idev = __in6_dev_get(dst_dev_rcu(dst));
if (idev)
mtu = READ_ONCE(idev->cnf.mtu6);
rcu_read_unlock();

@@ -395,11 +395,11 @@ static inline struct neighbour *__ipv6_neigh_lookup(struct net_device *dev, cons
{
struct neighbour *n;

rcu_read_lock_bh();
rcu_read_lock();
n = __ipv6_neigh_lookup_noref(dev, pkey);
if (n && !refcount_inc_not_zero(&n->refcnt))
n = NULL;
rcu_read_unlock_bh();
rcu_read_unlock();

return n;
}
@@ -409,16 +409,10 @@ static inline void __ipv6_confirm_neigh(struct net_device *dev,
{
struct neighbour *n;

rcu_read_lock_bh();
rcu_read_lock();
n = __ipv6_neigh_lookup_noref(dev, pkey);
if (n) {
unsigned long now = jiffies;

/* avoid dirtying neighbour */
if (READ_ONCE(n->confirmed) != now)
WRITE_ONCE(n->confirmed, now);
}
rcu_read_unlock_bh();
neigh_confirm(n);
rcu_read_unlock();
}

static inline void __ipv6_confirm_neigh_stub(struct net_device *dev,
@@ -426,16 +420,10 @@ static inline void __ipv6_confirm_neigh_stub(struct net_device *dev,
{
struct neighbour *n;

rcu_read_lock_bh();
rcu_read_lock();
n = __ipv6_neigh_lookup_noref_stub(dev, pkey);
if (n) {
unsigned long now = jiffies;

/* avoid dirtying neighbour */
if (READ_ONCE(n->confirmed) != now)
WRITE_ONCE(n->confirmed, now);
}
rcu_read_unlock_bh();
neigh_confirm(n);
rcu_read_unlock();
}

/* uses ipv6_stub and is meant for use outside of IPv6 core */

@@ -308,14 +308,14 @@ static inline struct neighbour *___neigh_lookup_noref(
const void *pkey,
struct net_device *dev)
{
struct neigh_hash_table *nht = rcu_dereference_bh(tbl->nht);
struct neigh_hash_table *nht = rcu_dereference(tbl->nht);
struct neighbour *n;
u32 hash_val;

hash_val = hash(pkey, dev, nht->hash_rnd) >> (32 - nht->hash_shift);
for (n = rcu_dereference_bh(nht->hash_buckets[hash_val]);
for (n = rcu_dereference(nht->hash_buckets[hash_val]);
n != NULL;
n = rcu_dereference_bh(n->next)) {
n = rcu_dereference(n->next)) {
if (n->dev == dev && key_eq(n, pkey))
return n;
}
@@ -330,6 +330,17 @@ static inline struct neighbour *__neigh_lookup_noref(struct neigh_table *tbl,
return ___neigh_lookup_noref(tbl, tbl->key_eq, tbl->hash, pkey, dev);
}

static inline void neigh_confirm(struct neighbour *n)
{
if (n) {
unsigned long now = jiffies;

/* avoid dirtying neighbour */
if (READ_ONCE(n->confirmed) != now)
WRITE_ONCE(n->confirmed, now);
}
}

void neigh_table_init(int index, struct neigh_table *tbl);
int neigh_table_clear(int index, struct neigh_table *tbl);
struct neighbour *neigh_lookup(struct neigh_table *tbl, const void *pkey,

@@ -532,29 +532,6 @@ static inline struct fib6_nh *nexthop_fib6_nh(struct nexthop *nh)
return NULL;
}

/* Variant of nexthop_fib6_nh().
* Caller should either hold rcu_read_lock_bh(), or RTNL.
*/
static inline struct fib6_nh *nexthop_fib6_nh_bh(struct nexthop *nh)
{
struct nh_info *nhi;

if (nh->is_group) {
struct nh_group *nh_grp;

nh_grp = rcu_dereference_bh_rtnl(nh->nh_grp);
nh = nexthop_mpath_select(nh_grp, 0);
if (!nh)
return NULL;
}

nhi = rcu_dereference_bh_rtnl(nh->nh_info);
if (nhi->family == AF_INET6)
return &nhi->fib6_nh;

return NULL;
}

static inline struct net_device *fib6_info_nh_dev(struct fib6_info *f6i)
{
struct fib6_nh *fib6_nh;

@@ -359,7 +359,7 @@ static inline int ip4_dst_hoplimit(const struct dst_entry *dst)
const struct net *net;

rcu_read_lock();
net = dev_net_rcu(dst->dev);
net = dst_dev_net_rcu(dst);
hoplimit = READ_ONCE(net->ipv4.sysctl_ip_default_ttl);
rcu_read_unlock();
}

@@ -2236,13 +2236,10 @@ static inline void sock_confirm_neigh(struct sk_buff *skb, struct neighbour *n)
{
if (skb_get_dst_pending_confirm(skb)) {
struct sock *sk = skb->sk;
unsigned long now = jiffies;

/* avoid dirtying neighbour */
if (READ_ONCE(n->confirmed) != now)
WRITE_ONCE(n->confirmed, now);
if (sk && READ_ONCE(sk->sk_dst_pending_confirm))
WRITE_ONCE(sk->sk_dst_pending_confirm, 0);
neigh_confirm(n);
}
}

@@ -216,6 +216,7 @@ enum dpll_a {
DPLL_A_LOCK_STATUS_ERROR,
DPLL_A_CLOCK_QUALITY_LEVEL,
DPLL_A_PHASE_OFFSET_MONITOR,
DPLL_A_PHASE_OFFSET_AVG_FACTOR,

__DPLL_A_MAX,
DPLL_A_MAX = (__DPLL_A_MAX - 1)
@@ -249,7 +250,7 @@ enum dpll_a_pin {
DPLL_A_PIN_ESYNC_FREQUENCY,
DPLL_A_PIN_ESYNC_FREQUENCY_SUPPORTED,
DPLL_A_PIN_ESYNC_PULSE,
__RH_RESERVED_DPLL_A_PIN_REFERENCE_SYNC,
DPLL_A_PIN_REFERENCE_SYNC,
DPLL_A_PIN_PHASE_ADJUST_GRAN,

__DPLL_A_PIN_MAX,

@@ -1496,6 +1496,8 @@ enum {
IFLA_BOND_AD_LACP_ACTIVE,
IFLA_BOND_MISSED_MAX,
IFLA_BOND_NS_IP6_TARGET,
IFLA_BOND_COUPLED_CONTROL,
IFLA_BOND_BROADCAST_NEIGH,
__IFLA_BOND_MAX,
};

@@ -171,9 +171,8 @@ static void __user *io_ring_buffer_select(struct io_kiocb *req, size_t *len,
* the transfer completes (or if we get -EAGAIN and must poll of
* retry).
*/
req->flags &= ~REQ_F_BUFFERS_COMMIT;
io_kbuf_commit(req, bl, 1);
req->buf_list = NULL;
bl->head++;
}
return u64_to_user_ptr(buf->addr);
}
@@ -297,8 +296,8 @@ int io_buffers_select(struct io_kiocb *req, struct buf_sel_arg *arg,
* committed them, they cannot be put back in the queue.
*/
if (ret > 0) {
req->flags |= REQ_F_BL_NO_RECYCLE;
req->buf_list->head += ret;
req->flags |= REQ_F_BUFFERS_COMMIT | REQ_F_BL_NO_RECYCLE;
io_kbuf_commit(req, bl, ret);
}
} else {
ret = io_provided_buffers_select(req, &arg->out_len, bl, arg->iovs);

@@ -117,15 +117,21 @@ static inline bool io_kbuf_recycle(struct io_kiocb *req, unsigned issue_flags)
return false;
}

static inline void io_kbuf_commit(struct io_kiocb *req,
struct io_buffer_list *bl, int nr)
{
if (unlikely(!(req->flags & REQ_F_BUFFERS_COMMIT)))
return;
bl->head += nr;
req->flags &= ~REQ_F_BUFFERS_COMMIT;
}

static inline void __io_put_kbuf_ring(struct io_kiocb *req, int nr)
{
struct io_buffer_list *bl = req->buf_list;

if (bl) {
if (req->flags & REQ_F_BUFFERS_COMMIT) {
bl->head += nr;
req->flags &= ~REQ_F_BUFFERS_COMMIT;
}
io_kbuf_commit(req, bl, nr);
req->buf_index = bl->bgid;
}
req->flags &= ~REQ_F_BUFFER_RING;

@@ -478,6 +478,15 @@ static int io_bundle_nbufs(struct io_async_msghdr *kmsg, int ret)
return nbufs;
}

static int io_net_kbuf_recyle(struct io_kiocb *req,
struct io_async_msghdr *kmsg, int len)
{
req->flags |= REQ_F_BL_NO_RECYCLE;
if (req->flags & REQ_F_BUFFERS_COMMIT)
io_kbuf_commit(req, req->buf_list, io_bundle_nbufs(kmsg, len));
return -EAGAIN;
}

static inline bool io_send_finish(struct io_kiocb *req, int *ret,
struct io_async_msghdr *kmsg,
unsigned issue_flags)
@@ -546,8 +555,7 @@ int io_sendmsg(struct io_kiocb *req, unsigned int issue_flags)
kmsg->msg.msg_controllen = 0;
kmsg->msg.msg_control = NULL;
sr->done_io += ret;
req->flags |= REQ_F_BL_NO_RECYCLE;
return -EAGAIN;
return io_net_kbuf_recyle(req, kmsg, ret);
}
if (ret == -ERESTARTSYS)
ret = -EINTR;
@@ -635,8 +643,7 @@ retry_bundle:
sr->len -= ret;
sr->buf += ret;
sr->done_io += ret;
req->flags |= REQ_F_BL_NO_RECYCLE;
return -EAGAIN;
return io_net_kbuf_recyle(req, kmsg, ret);
}
if (ret == -ERESTARTSYS)
ret = -EINTR;
@@ -1018,8 +1025,8 @@ retry_multishot:
}
if (ret > 0 && io_net_retry(sock, flags)) {
sr->done_io += ret;
req->flags |= REQ_F_BL_NO_RECYCLE;
return -EAGAIN;
return io_net_kbuf_recyle(req, kmsg, ret);
}
if (ret == -ERESTARTSYS)
ret = -EINTR;
@@ -1158,8 +1165,7 @@ retry_multishot:
sr->len -= ret;
sr->buf += ret;
sr->done_io += ret;
req->flags |= REQ_F_BL_NO_RECYCLE;
return -EAGAIN;
return io_net_kbuf_recyle(req, kmsg, ret);
}
if (ret == -ERESTARTSYS)
ret = -EINTR;
@@ -1395,8 +1401,7 @@ int io_send_zc(struct io_kiocb *req, unsigned int issue_flags)
zc->len -= ret;
zc->buf += ret;
zc->done_io += ret;
req->flags |= REQ_F_BL_NO_RECYCLE;
return -EAGAIN;
return io_net_kbuf_recyle(req, kmsg, ret);
}
if (ret == -ERESTARTSYS)
ret = -EINTR;
@@ -1455,8 +1460,7 @@ int io_sendmsg_zc(struct io_kiocb *req, unsigned int issue_flags)

if (ret > 0 && io_net_retry(sock, flags)) {
sr->done_io += ret;
req->flags |= REQ_F_BL_NO_RECYCLE;
return -EAGAIN;
return io_net_kbuf_recyle(req, kmsg, ret);
}
if (ret == -ERESTARTSYS)
ret = -EINTR;

@@ -418,6 +418,8 @@ static int clip_mkip(struct atm_vcc *vcc, int timeout)

if (!vcc->push)
return -EBADFD;
if (vcc->user_back)
return -EINVAL;
clip_vcc = kmalloc(sizeof(struct clip_vcc), GFP_KERNEL);
if (!clip_vcc)
return -ENOMEM;

@ -863,11 +863,17 @@ bool hci_cmd_sync_dequeue_once(struct hci_dev *hdev,
|
||||
{
|
||||
struct hci_cmd_sync_work_entry *entry;
|
||||
|
||||
entry = hci_cmd_sync_lookup_entry(hdev, func, data, destroy);
|
||||
if (!entry)
|
||||
return false;
|
||||
mutex_lock(&hdev->cmd_sync_work_lock);
|
||||
|
||||
hci_cmd_sync_cancel_entry(hdev, entry);
|
||||
entry = _hci_cmd_sync_lookup_entry(hdev, func, data, destroy);
|
||||
if (!entry) {
|
||||
mutex_unlock(&hdev->cmd_sync_work_lock);
|
||||
return false;
|
||||
}
|
||||
|
||||
_hci_cmd_sync_cancel_entry(hdev, entry, -ECANCELED);
|
||||
|
||||
mutex_unlock(&hdev->cmd_sync_work_lock);
|
||||
|
||||
return true;
|
||||
}
|
||||
|
@@ -743,6 +743,13 @@ static void iso_sock_kill(struct sock *sk)

 	BT_DBG("sk %p state %d", sk, sk->sk_state);

+	/* Sock is dead, so set conn->sk to NULL to avoid possible UAF */
+	if (iso_pi(sk)->conn) {
+		iso_conn_lock(iso_pi(sk)->conn);
+		iso_pi(sk)->conn->sk = NULL;
+		iso_conn_unlock(iso_pi(sk)->conn);
+	}
+
 	/* Kill poor orphan */
 	bt_sock_unlink(&iso_sk_list, sk);
 	sock_set_flag(sk, SOCK_DEAD);

@@ -152,7 +152,7 @@ void dst_dev_put(struct dst_entry *dst)
 		dst->ops->ifdown(dst, dev, true);
 	dst->input = dst_discard;
 	dst->output = dst_discard_out;
-	dst->dev = blackhole_netdev;
+	rcu_assign_pointer(dst->dev_rcu, blackhole_netdev);
 	netdev_ref_replace(dev, blackhole_netdev, &dst->dev_tracker,
 			   GFP_ATOMIC);
 }
@@ -265,7 +265,7 @@ unsigned int dst_blackhole_mtu(const struct dst_entry *dst)
 {
 	unsigned int mtu = dst_metric_raw(dst, RTAX_MTU);

-	return mtu ? : dst->dev->mtu;
+	return mtu ? : dst_dev(dst)->mtu;
 }
 EXPORT_SYMBOL_GPL(dst_blackhole_mtu);


@@ -2206,7 +2206,7 @@ static int bpf_out_neigh_v6(struct net *net, struct sk_buff *skb,
 		return -ENOMEM;
 	}

-	rcu_read_lock_bh();
+	rcu_read_lock();
 	if (!nh) {
 		dst = skb_dst(skb);
 		nexthop = rt6_nexthop(container_of(dst, struct rt6_info, dst),
@@ -2219,13 +2219,15 @@ static int bpf_out_neigh_v6(struct net *net, struct sk_buff *skb,
 		int ret;

 		sock_confirm_neigh(skb, neigh);
+		local_bh_disable();
 		dev_xmit_recursion_inc();
 		ret = neigh_output(neigh, skb, false);
 		dev_xmit_recursion_dec();
-		rcu_read_unlock_bh();
+		local_bh_enable();
+		rcu_read_unlock();
 		return ret;
 	}
-	rcu_read_unlock_bh();
+	rcu_read_unlock();
 	if (dst)
 		IP6_INC_STATS(net, ip6_dst_idev(dst), IPSTATS_MIB_OUTNOROUTES);
 out_drop:
@@ -2305,7 +2307,7 @@ static int bpf_out_neigh_v4(struct net *net, struct sk_buff *skb,
 		return -ENOMEM;
 	}

-	rcu_read_lock_bh();
+	rcu_read_lock();
 	if (!nh) {
 		struct dst_entry *dst = skb_dst(skb);
 		struct rtable *rt = container_of(dst, struct rtable, dst);
@@ -2317,7 +2319,7 @@ static int bpf_out_neigh_v4(struct net *net, struct sk_buff *skb,
 	} else if (nh->nh_family == AF_INET) {
 		neigh = ip_neigh_gw4(dev, nh->ipv4_nh);
 	} else {
-		rcu_read_unlock_bh();
+		rcu_read_unlock();
 		goto out_drop;
 	}

@@ -2325,13 +2327,15 @@ static int bpf_out_neigh_v4(struct net *net, struct sk_buff *skb,
 		int ret;

 		sock_confirm_neigh(skb, neigh);
+		local_bh_disable();
 		dev_xmit_recursion_inc();
 		ret = neigh_output(neigh, skb, is_v6gw);
 		dev_xmit_recursion_dec();
-		rcu_read_unlock_bh();
+		local_bh_enable();
+		rcu_read_unlock();
 		return ret;
 	}
-	rcu_read_unlock_bh();
+	rcu_read_unlock();
 out_drop:
 	kfree_skb(skb);
 	return -ENETDOWN;

@@ -571,7 +571,7 @@ struct neighbour *neigh_lookup(struct neigh_table *tbl, const void *pkey,

 	NEIGH_CACHE_STAT_INC(tbl, lookups);

-	rcu_read_lock_bh();
+	rcu_read_lock();
 	n = __neigh_lookup_noref(tbl, pkey, dev);
 	if (n) {
 		if (!refcount_inc_not_zero(&n->refcnt))
@@ -579,7 +579,7 @@ struct neighbour *neigh_lookup(struct neigh_table *tbl, const void *pkey,
 		NEIGH_CACHE_STAT_INC(tbl, hits);
 	}

-	rcu_read_unlock_bh();
+	rcu_read_unlock();
 	return n;
 }
 EXPORT_SYMBOL(neigh_lookup);
@@ -2164,11 +2164,11 @@ static int neightbl_fill_info(struct sk_buff *skb, struct neigh_table *tbl,
 			.ndtc_proxy_qlen	= tbl->proxy_queue.qlen,
 		};

-		rcu_read_lock_bh();
-		nht = rcu_dereference_bh(tbl->nht);
+		rcu_read_lock();
+		nht = rcu_dereference(tbl->nht);
 		ndc.ndtc_hash_rnd = nht->hash_rnd[0];
 		ndc.ndtc_hash_mask = ((1 << nht->hash_shift) - 1);
-		rcu_read_unlock_bh();
+		rcu_read_unlock();

 		if (nla_put(skb, NDTA_CONFIG, sizeof(ndc), &ndc))
 			goto nla_put_failure;
@@ -2678,15 +2678,15 @@ static int neigh_dump_table(struct neigh_table *tbl, struct sk_buff *skb,
 	if (filter->dev_idx || filter->master_idx)
 		flags |= NLM_F_DUMP_FILTERED;

-	rcu_read_lock_bh();
-	nht = rcu_dereference_bh(tbl->nht);
+	rcu_read_lock();
+	nht = rcu_dereference(tbl->nht);

 	for (h = s_h; h < (1 << nht->hash_shift); h++) {
 		if (h > s_h)
 			s_idx = 0;
-		for (n = rcu_dereference_bh(nht->hash_buckets[h]), idx = 0;
+		for (n = rcu_dereference(nht->hash_buckets[h]), idx = 0;
 		     n != NULL;
-		     n = rcu_dereference_bh(n->next)) {
+		     n = rcu_dereference(n->next)) {
 			if (idx < s_idx || !net_eq(dev_net(n->dev), net))
 				goto next;
 			if (neigh_ifindex_filtered(n->dev, filter->dev_idx) ||
@@ -2705,7 +2705,7 @@ next:
 	}
 	rc = skb->len;
 out:
-	rcu_read_unlock_bh();
+	rcu_read_unlock();
 	cb->args[1] = h;
 	cb->args[2] = idx;
 	return rc;
@@ -3050,20 +3050,20 @@ void neigh_for_each(struct neigh_table *tbl, void (*cb)(struct neighbour *, void
 	int chain;
 	struct neigh_hash_table *nht;

-	rcu_read_lock_bh();
-	nht = rcu_dereference_bh(tbl->nht);
+	rcu_read_lock();
+	nht = rcu_dereference(tbl->nht);

-	read_lock(&tbl->lock); /* avoid resizes */
+	read_lock_bh(&tbl->lock); /* avoid resizes */
 	for (chain = 0; chain < (1 << nht->hash_shift); chain++) {
 		struct neighbour *n;

-		for (n = rcu_dereference_bh(nht->hash_buckets[chain]);
+		for (n = rcu_dereference(nht->hash_buckets[chain]);
 		     n != NULL;
-		     n = rcu_dereference_bh(n->next))
+		     n = rcu_dereference(n->next))
 			cb(n, cookie);
 	}
-	read_unlock(&tbl->lock);
-	rcu_read_unlock_bh();
+	read_unlock_bh(&tbl->lock);
+	rcu_read_unlock();
 }
 EXPORT_SYMBOL(neigh_for_each);

@@ -3113,7 +3113,7 @@ int neigh_xmit(int index, struct net_device *dev,
 		tbl = neigh_tables[index];
 		if (!tbl)
 			goto out;
-		rcu_read_lock_bh();
+		rcu_read_lock();
 		if (index == NEIGH_ARP_TABLE) {
 			u32 key = *((u32 *)addr);

@@ -3125,11 +3125,11 @@ int neigh_xmit(int index, struct net_device *dev,
 			neigh = __neigh_create(tbl, addr, dev, false);
 		err = PTR_ERR(neigh);
 		if (IS_ERR(neigh)) {
-			rcu_read_unlock_bh();
+			rcu_read_unlock();
 			goto out_kfree_skb;
 		}
 		err = neigh->output(neigh, skb);
-		rcu_read_unlock_bh();
+		rcu_read_unlock();
 	}
 	else if (index == NEIGH_LINK_TABLE) {
 		err = dev_hard_header(skb, dev, ntohs(skb->protocol),
@@ -3158,7 +3158,7 @@ static struct neighbour *neigh_get_first(struct seq_file *seq)

 	state->flags &= ~NEIGH_SEQ_IS_PNEIGH;
 	for (bucket = 0; bucket < (1 << nht->hash_shift); bucket++) {
-		n = rcu_dereference_bh(nht->hash_buckets[bucket]);
+		n = rcu_dereference(nht->hash_buckets[bucket]);

 		while (n) {
 			if (!net_eq(dev_net(n->dev), net))
@@ -3176,7 +3176,7 @@ static struct neighbour *neigh_get_first(struct seq_file *seq)
 			if (READ_ONCE(n->nud_state) & ~NUD_NOARP)
 				break;
next:
-			n = rcu_dereference_bh(n->next);
+			n = rcu_dereference(n->next);
 		}

 		if (n)
@@ -3200,7 +3200,7 @@ static struct neighbour *neigh_get_next(struct seq_file *seq,
 		if (v)
 			return n;
 	}
-	n = rcu_dereference_bh(n->next);
+	n = rcu_dereference(n->next);

 	while (1) {
 		while (n) {
@@ -3218,7 +3218,7 @@ static struct neighbour *neigh_get_next(struct seq_file *seq,
 			if (READ_ONCE(n->nud_state) & ~NUD_NOARP)
 				break;
next:
-			n = rcu_dereference_bh(n->next);
+			n = rcu_dereference(n->next);
 		}

 		if (n)
@@ -3227,7 +3227,7 @@ next:
 		if (++state->bucket >= (1 << nht->hash_shift))
 			break;

-		n = rcu_dereference_bh(nht->hash_buckets[state->bucket]);
+		n = rcu_dereference(nht->hash_buckets[state->bucket]);
 	}

 	if (n && pos)
@@ -3329,7 +3329,7 @@ static void *neigh_get_idx_any(struct seq_file *seq, loff_t *pos)

 void *neigh_seq_start(struct seq_file *seq, loff_t *pos, struct neigh_table *tbl, unsigned int neigh_seq_flags)
 	__acquires(tbl->lock)
-	__acquires(rcu_bh)
+	__acquires(rcu)
 {
 	struct neigh_seq_state *state = seq->private;

@@ -3337,9 +3337,9 @@ void *neigh_seq_start(struct seq_file *seq, loff_t *pos, struct neigh_table *tbl
 	state->bucket = 0;
 	state->flags = (neigh_seq_flags & ~NEIGH_SEQ_IS_PNEIGH);

-	rcu_read_lock_bh();
-	state->nht = rcu_dereference_bh(tbl->nht);
-	read_lock(&tbl->lock);
+	rcu_read_lock();
+	state->nht = rcu_dereference(tbl->nht);
+	read_lock_bh(&tbl->lock);

 	return *pos ? neigh_get_idx_any(seq, pos) : SEQ_START_TOKEN;
 }
@@ -3374,13 +3374,13 @@ EXPORT_SYMBOL(neigh_seq_next);

 void neigh_seq_stop(struct seq_file *seq, void *v)
 	__releases(tbl->lock)
-	__releases(rcu_bh)
+	__releases(rcu)
 {
 	struct neigh_seq_state *state = seq->private;
 	struct neigh_table *tbl = state->tbl;

-	read_unlock(&tbl->lock);
-	rcu_read_unlock_bh();
+	read_unlock_bh(&tbl->lock);
+	rcu_read_unlock();
 }
 EXPORT_SYMBOL(neigh_seq_stop);


@@ -2341,7 +2341,7 @@ void sk_free_unlock_clone(struct sock *sk)
 }
 EXPORT_SYMBOL_GPL(sk_free_unlock_clone);

-static u32 sk_dst_gso_max_size(struct sock *sk, struct dst_entry *dst)
+static u32 sk_dst_gso_max_size(struct sock *sk, const struct net_device *dev)
 {
 	bool is_ipv6 = false;
 	u32 max_size;
@@ -2351,8 +2351,8 @@ static u32 sk_dst_gso_max_size(struct sock *sk, struct dst_entry *dst)
 		   !ipv6_addr_v4mapped(&sk->sk_v6_rcv_saddr));
 #endif
 	/* pairs with the WRITE_ONCE() in netif_set_gso(_ipv4)_max_size() */
-	max_size = is_ipv6 ? READ_ONCE(dst->dev->gso_max_size) :
-			READ_ONCE(dst->dev->gso_ipv4_max_size);
+	max_size = is_ipv6 ? READ_ONCE(dev->gso_max_size) :
+			READ_ONCE(dev->gso_ipv4_max_size);
 	if (max_size > GSO_LEGACY_MAX_SIZE && !sk_is_tcp(sk))
 		max_size = GSO_LEGACY_MAX_SIZE;

@@ -2361,9 +2361,12 @@ static u32 sk_dst_gso_max_size(struct sock *sk, struct dst_entry *dst)

 void sk_setup_caps(struct sock *sk, struct dst_entry *dst)
 {
+	const struct net_device *dev;
 	u32 max_segs = 1;

-	sk->sk_route_caps = dst->dev->features;
+	rcu_read_lock();
+	dev = dst_dev_rcu(dst);
+	sk->sk_route_caps = dev->features;
 	if (sk_is_tcp(sk))
 		sk->sk_route_caps |= NETIF_F_GSO;
 	if (sk->sk_route_caps & NETIF_F_GSO)
@@ -2375,13 +2378,14 @@ void sk_setup_caps(struct sock *sk, struct dst_entry *dst)
 			sk->sk_route_caps &= ~NETIF_F_GSO_MASK;
 		} else {
 			sk->sk_route_caps |= NETIF_F_SG | NETIF_F_HW_CSUM;
-			sk->sk_gso_max_size = sk_dst_gso_max_size(sk, dst);
+			sk->sk_gso_max_size = sk_dst_gso_max_size(sk, dev);
 			/* pairs with the WRITE_ONCE() in netif_set_gso_max_segs() */
-			max_segs = max_t(u32, READ_ONCE(dst->dev->gso_max_segs), 1);
+			max_segs = max_t(u32, READ_ONCE(dev->gso_max_segs), 1);
 		}
 	}
 	sk->sk_gso_max_segs = max_segs;
 	sk_dst_set(sk, dst);
+	rcu_read_unlock();
 }
 EXPORT_SYMBOL_GPL(sk_setup_caps);


@@ -2221,7 +2221,7 @@ static bool fib_good_nh(const struct fib_nh *nh)
 	if (nh->fib_nh_scope == RT_SCOPE_LINK) {
 		struct neighbour *n;

-		rcu_read_lock_bh();
+		rcu_read_lock();

 		if (likely(nh->fib_nh_gw_family == AF_INET))
 			n = __ipv4_neigh_lookup_noref(nh->fib_nh_dev,
@@ -2234,7 +2234,7 @@ static bool fib_good_nh(const struct fib_nh *nh)
 		if (n)
 			state = READ_ONCE(n->nud_state);

-		rcu_read_unlock_bh();
+		rcu_read_unlock();
 	}

 	return !!(state & NUD_VALID);

@@ -225,7 +225,7 @@ static int ip_finish_output2(struct net *net, struct sock *sk, struct sk_buff *s
 		return res;
 	}

-	rcu_read_lock_bh();
+	rcu_read_lock();
 	neigh = ip_neigh_for_gw(rt, skb, &is_v6gw);
 	if (!IS_ERR(neigh)) {
 		int res;
@@ -233,10 +233,10 @@ static int ip_finish_output2(struct net *net, struct sock *sk, struct sk_buff *s
 		sock_confirm_neigh(skb, neigh);
 		/* if crossing protocols, can not use the cached header */
 		res = neigh_output(neigh, skb, is_v6gw);
-		rcu_read_unlock_bh();
+		rcu_read_unlock();
 		return res;
 	}
-	rcu_read_unlock_bh();
+	rcu_read_unlock();

 	net_dbg_ratelimited("%s: No header cache and no neighbour!\n",
 			    __func__);
@@ -425,15 +425,20 @@ int ip_mc_output(struct net *net, struct sock *sk, struct sk_buff *skb)

 int ip_output(struct net *net, struct sock *sk, struct sk_buff *skb)
 {
-	struct net_device *dev = skb_dst(skb)->dev, *indev = skb->dev;
+	struct net_device *dev, *indev = skb->dev;
+	int ret_val;

+	rcu_read_lock();
+	dev = skb_dst_dev_rcu(skb);
 	skb->dev = dev;
 	skb->protocol = htons(ETH_P_IP);

-	return NF_HOOK_COND(NFPROTO_IPV4, NF_INET_POST_ROUTING,
-			    net, sk, skb, indev, dev,
-			    ip_finish_output,
-			    !(IPCB(skb)->flags & IPSKB_REROUTED));
+	ret_val = NF_HOOK_COND(NFPROTO_IPV4, NF_INET_POST_ROUTING,
+			       net, sk, skb, indev, dev,
+			       ip_finish_output,
+			       !(IPCB(skb)->flags & IPSKB_REROUTED));
+	rcu_read_unlock();
+	return ret_val;
 }
 EXPORT_SYMBOL(ip_output);


@@ -1357,13 +1357,13 @@ static bool ipv6_good_nh(const struct fib6_nh *nh)
 	int state = NUD_REACHABLE;
 	struct neighbour *n;

-	rcu_read_lock_bh();
+	rcu_read_lock();

 	n = __ipv6_neigh_lookup_noref_stub(nh->fib_nh_dev, &nh->fib_nh_gw6);
 	if (n)
 		state = READ_ONCE(n->nud_state);

-	rcu_read_unlock_bh();
+	rcu_read_unlock();

 	return !!(state & NUD_VALID);
 }
@@ -1373,14 +1373,14 @@ static bool ipv4_good_nh(const struct fib_nh *nh)
 	int state = NUD_REACHABLE;
 	struct neighbour *n;

-	rcu_read_lock_bh();
+	rcu_read_lock();

 	n = __ipv4_neigh_lookup_noref(nh->fib_nh_dev,
 				      (__force u32)nh->fib_nh_gw4);
 	if (n)
 		state = READ_ONCE(n->nud_state);

-	rcu_read_unlock_bh();
+	rcu_read_unlock();

 	return !!(state & NUD_VALID);
 }

@@ -422,7 +422,7 @@ static struct neighbour *ipv4_neigh_lookup(const struct dst_entry *dst,
 	struct net_device *dev = dst->dev;
 	struct neighbour *n;

-	rcu_read_lock_bh();
+	rcu_read_lock();

 	if (likely(rt->rt_gw_family == AF_INET)) {
 		n = ip_neigh_gw4(dev, rt->rt_gw4);
@@ -438,7 +438,7 @@ static struct neighbour *ipv4_neigh_lookup(const struct dst_entry *dst,
 	if (!IS_ERR(n) && !refcount_inc_not_zero(&n->refcnt))
 		n = NULL;

-	rcu_read_unlock_bh();
+	rcu_read_unlock();

 	return n;
 }
@@ -1027,9 +1027,9 @@ out:	kfree_skb_reason(skb, reason);
 static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu)
 {
 	struct dst_entry *dst = &rt->dst;
-	struct net *net = dev_net(dst->dev);
 	struct fib_result res;
 	bool lock = false;
+	struct net *net;
 	u32 old_mtu;

 	if (ip_mtu_locked(dst))
@@ -1039,6 +1039,8 @@ static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu)
 	if (old_mtu < mtu)
 		return;

+	rcu_read_lock();
+	net = dst_dev_net_rcu(dst);
 	if (mtu < ip_rt_min_pmtu) {
 		lock = true;
 		mtu = min(old_mtu, ip_rt_min_pmtu);
@@ -1046,9 +1048,8 @@ static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu)

 	if (rt->rt_pmtu == mtu && !lock &&
 	    time_before(jiffies, dst->expires - ip_rt_mtu_expires / 2))
-		return;
+		goto out;

-	rcu_read_lock();
 	if (fib_lookup(net, fl4, &res, 0) == 0) {
 		struct fib_nh_common *nhc;

@@ -1057,6 +1058,7 @@ static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu)
 		update_or_create_fnhe(nhc, fl4->daddr, 0, mtu, lock,
 				      jiffies + ip_rt_mtu_expires);
 	}
+out:
 	rcu_read_unlock();
 }

@@ -1332,26 +1334,7 @@ static unsigned int ipv4_default_advmss(const struct dst_entry *dst)

 INDIRECT_CALLABLE_SCOPE unsigned int ipv4_mtu(const struct dst_entry *dst)
 {
-	const struct rtable *rt = (const struct rtable *)dst;
-	unsigned int mtu = rt->rt_pmtu;
-
-	if (!mtu || time_after_eq(jiffies, rt->dst.expires))
-		mtu = dst_metric_raw(dst, RTAX_MTU);
-
-	if (mtu)
-		goto out;
-
-	mtu = READ_ONCE(dst->dev->mtu);
-
-	if (unlikely(ip_mtu_locked(dst))) {
-		if (rt->rt_uses_gateway && mtu > 576)
-			mtu = 576;
-	}
-
-out:
-	mtu = min_t(unsigned int, mtu, IP_MAX_MTU);
-
-	return mtu - lwtunnel_headroom(dst->lwtstate, mtu);
+	return ip_dst_mtu_maybe_forward(dst, false);
 }
 EXPORT_INDIRECT_CALLABLE(ipv4_mtu);


@@ -1033,7 +1033,7 @@ static int ipv6_add_addr_hash(struct net_device *dev, struct inet6_ifaddr *ifa)
 	unsigned int hash = inet6_addr_hash(dev_net(dev), &ifa->addr);
 	int err = 0;

-	spin_lock(&addrconf_hash_lock);
+	spin_lock_bh(&addrconf_hash_lock);

 	/* Ignore adding duplicate addresses on an interface */
 	if (ipv6_chk_same_addr(dev_net(dev), &ifa->addr, dev, hash)) {
@@ -1043,7 +1043,7 @@ static int ipv6_add_addr_hash(struct net_device *dev, struct inet6_ifaddr *ifa)
 		hlist_add_head_rcu(&ifa->addr_lst, &inet6_addr_lst[hash]);
 	}

-	spin_unlock(&addrconf_hash_lock);
+	spin_unlock_bh(&addrconf_hash_lock);

 	return err;
 }
@@ -1145,15 +1145,15 @@ ipv6_add_addr(struct inet6_dev *idev, struct ifa6_config *cfg,
 	/* For caller */
 	refcount_set(&ifa->refcnt, 1);

-	rcu_read_lock_bh();
+	rcu_read_lock();

 	err = ipv6_add_addr_hash(idev->dev, ifa);
 	if (err < 0) {
-		rcu_read_unlock_bh();
+		rcu_read_unlock();
 		goto out;
 	}

-	write_lock(&idev->lock);
+	write_lock_bh(&idev->lock);

 	/* Add to inet6_dev unicast addr list. */
 	ipv6_link_dev_addr(idev, ifa);
@@ -1164,9 +1164,9 @@ ipv6_add_addr(struct inet6_dev *idev, struct ifa6_config *cfg,
 	}

 	in6_ifa_hold(ifa);
-	write_unlock(&idev->lock);
+	write_unlock_bh(&idev->lock);

-	rcu_read_unlock_bh();
+	rcu_read_unlock();

 	inet6addr_notifier_call_chain(NETDEV_UP, ifa);
 out:

@@ -2499,7 +2499,7 @@ static int ipv6_route_native_seq_show(struct seq_file *seq, void *v)
 	const struct net_device *dev;

 	if (rt->nh)
-		fib6_nh = nexthop_fib6_nh_bh(rt->nh);
+		fib6_nh = nexthop_fib6_nh(rt->nh);

 	seq_printf(seq, "%pi6 %02x ", &rt->fib6_dst.addr, rt->fib6_dst.plen);

@@ -2564,14 +2564,14 @@ static struct fib6_table *ipv6_route_seq_next_table(struct fib6_table *tbl,

 	if (tbl) {
 		h = (tbl->tb6_id & (FIB6_TABLE_HASHSZ - 1)) + 1;
-		node = rcu_dereference_bh(hlist_next_rcu(&tbl->tb6_hlist));
+		node = rcu_dereference(hlist_next_rcu(&tbl->tb6_hlist));
 	} else {
 		h = 0;
 		node = NULL;
 	}

 	while (!node && h < FIB6_TABLE_HASHSZ) {
-		node = rcu_dereference_bh(
+		node = rcu_dereference(
 			hlist_first_rcu(&net->ipv6.fib_table_hash[h++]));
 	}
 	return hlist_entry_safe(node, struct fib6_table, tb6_hlist);
@@ -2601,7 +2601,7 @@ static void *ipv6_route_seq_next(struct seq_file *seq, void *v, loff_t *pos)
 	if (!v)
 		goto iter_table;

-	n = rcu_dereference_bh(((struct fib6_info *)v)->fib6_next);
+	n = rcu_dereference(((struct fib6_info *)v)->fib6_next);
 	if (n)
 		return n;

@@ -2627,12 +2627,12 @@ iter_table:
 }

 static void *ipv6_route_seq_start(struct seq_file *seq, loff_t *pos)
-	__acquires(RCU_BH)
+	__acquires(RCU)
 {
 	struct net *net = seq_file_net(seq);
 	struct ipv6_route_iter *iter = seq->private;

-	rcu_read_lock_bh();
+	rcu_read_lock();
 	iter->tbl = ipv6_route_seq_next_table(NULL, net);
 	iter->skip = *pos;

@@ -2653,7 +2653,7 @@ static bool ipv6_route_iter_active(struct ipv6_route_iter *iter)
 }

 static void ipv6_route_native_seq_stop(struct seq_file *seq, void *v)
-	__releases(RCU_BH)
+	__releases(RCU)
 {
 	struct net *net = seq_file_net(seq);
 	struct ipv6_route_iter *iter = seq->private;
@@ -2661,7 +2661,7 @@ static void ipv6_route_native_seq_stop(struct seq_file *seq, void *v)
 	if (ipv6_route_iter_active(iter))
 		fib6_walker_unlink(net, &iter->w);

-	rcu_read_unlock_bh();
+	rcu_read_unlock();
 }

 #if IS_BUILTIN(CONFIG_IPV6) && defined(CONFIG_BPF_SYSCALL)

@@ -58,18 +58,18 @@ DEFINE_STATIC_KEY_DEFERRED_FALSE(ipv6_flowlabel_exclusive, HZ);
 EXPORT_SYMBOL(ipv6_flowlabel_exclusive);

 #define for_each_fl_rcu(hash, fl)				\
-	for (fl = rcu_dereference_bh(fl_ht[(hash)]);		\
+	for (fl = rcu_dereference(fl_ht[(hash)]);		\
 	     fl != NULL;					\
-	     fl = rcu_dereference_bh(fl->next))
+	     fl = rcu_dereference(fl->next))
 #define for_each_fl_continue_rcu(fl)				\
-	for (fl = rcu_dereference_bh(fl->next);			\
+	for (fl = rcu_dereference(fl->next);			\
 	     fl != NULL;					\
-	     fl = rcu_dereference_bh(fl->next))
+	     fl = rcu_dereference(fl->next))

 #define for_each_sk_fl_rcu(np, sfl)				\
-	for (sfl = rcu_dereference_bh(np->ipv6_fl_list);	\
+	for (sfl = rcu_dereference(np->ipv6_fl_list);		\
 	     sfl != NULL;					\
-	     sfl = rcu_dereference_bh(sfl->next))
+	     sfl = rcu_dereference(sfl->next))

 static inline struct ip6_flowlabel *__fl_lookup(struct net *net, __be32 label)
 {
@@ -86,11 +86,11 @@ static struct ip6_flowlabel *fl_lookup(struct net *net, __be32 label)
 {
 	struct ip6_flowlabel *fl;

-	rcu_read_lock_bh();
+	rcu_read_lock();
 	fl = __fl_lookup(net, label);
 	if (fl && !atomic_inc_not_zero(&fl->users))
 		fl = NULL;
-	rcu_read_unlock_bh();
+	rcu_read_unlock();
 	return fl;
 }

@@ -217,6 +217,7 @@ static struct ip6_flowlabel *fl_intern(struct net *net,

 	fl->label = label & IPV6_FLOWLABEL_MASK;

+	rcu_read_lock();
 	spin_lock_bh(&ip6_fl_lock);
 	if (label == 0) {
 		for (;;) {
@@ -240,6 +241,7 @@ static struct ip6_flowlabel *fl_intern(struct net *net,
 			if (lfl) {
 				atomic_inc(&lfl->users);
 				spin_unlock_bh(&ip6_fl_lock);
+				rcu_read_unlock();
 				return lfl;
 			}
 		}
@@ -249,6 +251,7 @@ static struct ip6_flowlabel *fl_intern(struct net *net,
 	rcu_assign_pointer(fl_ht[FL_HASH(fl->label)], fl);
 	atomic_inc(&fl_size);
 	spin_unlock_bh(&ip6_fl_lock);
+	rcu_read_unlock();
 	return NULL;
 }

@@ -263,17 +266,17 @@ struct ip6_flowlabel *__fl6_sock_lookup(struct sock *sk, __be32 label)

 	label &= IPV6_FLOWLABEL_MASK;

-	rcu_read_lock_bh();
+	rcu_read_lock();
 	for_each_sk_fl_rcu(np, sfl) {
 		struct ip6_flowlabel *fl = sfl->fl;

 		if (fl->label == label && atomic_inc_not_zero(&fl->users)) {
 			fl->lastuse = jiffies;
-			rcu_read_unlock_bh();
+			rcu_read_unlock();
 			return fl;
 		}
 	}
-	rcu_read_unlock_bh();
+	rcu_read_unlock();
 	return NULL;
 }
 EXPORT_SYMBOL_GPL(__fl6_sock_lookup);
@@ -475,10 +478,10 @@ static int mem_check(struct sock *sk)
 	if (room > FL_MAX_SIZE - FL_MAX_PER_SOCK)
 		return 0;

-	rcu_read_lock_bh();
+	rcu_read_lock();
 	for_each_sk_fl_rcu(np, sfl)
 		count++;
-	rcu_read_unlock_bh();
+	rcu_read_unlock();

 	if (room <= 0 ||
 	    ((count >= FL_MAX_PER_SOCK ||
@@ -515,7 +518,7 @@ int ipv6_flowlabel_opt_get(struct sock *sk, struct in6_flowlabel_req *freq,
 		return 0;
 	}

-	rcu_read_lock_bh();
+	rcu_read_lock();

 	for_each_sk_fl_rcu(np, sfl) {
 		if (sfl->fl->label == (np->flow_label & IPV6_FLOWLABEL_MASK)) {
@@ -527,11 +530,11 @@ int ipv6_flowlabel_opt_get(struct sock *sk, struct in6_flowlabel_req *freq,
 			freq->flr_linger = sfl->fl->linger / HZ;

 			spin_unlock_bh(&ip6_fl_lock);
-			rcu_read_unlock_bh();
+			rcu_read_unlock();
 			return 0;
 		}
 	}
-	rcu_read_unlock_bh();
+	rcu_read_unlock();

 	return -ENOENT;
 }
@@ -581,16 +584,16 @@ static int ipv6_flowlabel_renew(struct sock *sk, struct in6_flowlabel_req *freq)
 	struct ipv6_fl_socklist *sfl;
 	int err;

-	rcu_read_lock_bh();
+	rcu_read_lock();
 	for_each_sk_fl_rcu(np, sfl) {
 		if (sfl->fl->label == freq->flr_label) {
 			err = fl6_renew(sfl->fl, freq->flr_linger,
 					freq->flr_expires);
-			rcu_read_unlock_bh();
+			rcu_read_unlock();
 			return err;
 		}
 	}
-	rcu_read_unlock_bh();
+	rcu_read_unlock();

 	if (freq->flr_share == IPV6_FL_S_NONE &&
 	    ns_capable(net->user_ns, CAP_NET_ADMIN)) {
@@ -641,11 +644,11 @@ static int ipv6_flowlabel_get(struct sock *sk, struct in6_flowlabel_req *freq,

 	if (freq->flr_label) {
 		err = -EEXIST;
-		rcu_read_lock_bh();
+		rcu_read_lock();
 		for_each_sk_fl_rcu(np, sfl) {
 			if (sfl->fl->label == freq->flr_label) {
 				if (freq->flr_flags & IPV6_FL_F_EXCL) {
-					rcu_read_unlock_bh();
+					rcu_read_unlock();
 					goto done;
 				}
 				fl1 = sfl->fl;
@@ -654,7 +657,7 @@ static int ipv6_flowlabel_get(struct sock *sk, struct in6_flowlabel_req *freq,
 				break;
 			}
 		}
-		rcu_read_unlock_bh();
+		rcu_read_unlock();

 		if (!fl1)
 			fl1 = fl_lookup(net, freq->flr_label);
@@ -809,7 +812,7 @@ static void *ip6fl_seq_start(struct seq_file *seq, loff_t *pos)

 	state->pid_ns = proc_pid_ns(file_inode(seq->file)->i_sb);

-	rcu_read_lock_bh();
+	rcu_read_lock();
 	return *pos ? ip6fl_get_idx(seq, *pos - 1) : SEQ_START_TOKEN;
 }

@@ -828,7 +831,7 @@ static void *ip6fl_seq_next(struct seq_file *seq, void *v, loff_t *pos)
 static void ip6fl_seq_stop(struct seq_file *seq, void *v)
 	__releases(RCU)
 {
-	rcu_read_unlock_bh();
+	rcu_read_unlock();
 }

 static int ip6fl_seq_show(struct seq_file *seq, void *v)

||||
@ -60,7 +60,7 @@
|
||||
static int ip6_finish_output2(struct net *net, struct sock *sk, struct sk_buff *skb)
|
||||
{
|
||||
struct dst_entry *dst = skb_dst(skb);
|
||||
struct net_device *dev = dst->dev;
|
||||
struct net_device *dev = dst_dev_rcu(dst);
|
||||
struct inet6_dev *idev = ip6_dst_idev(dst);
|
||||
unsigned int hh_len = LL_RESERVED_SPACE(dev);
|
||||
const struct in6_addr *daddr, *nexthop;
|
||||
@ -70,15 +70,12 @@ static int ip6_finish_output2(struct net *net, struct sock *sk, struct sk_buff *
|
||||
|
||||
/* Be paranoid, rather than too clever. */
|
||||
if (unlikely(hh_len > skb_headroom(skb)) && dev->header_ops) {
|
||||
/* Make sure idev stays alive */
|
||||
rcu_read_lock();
|
||||
/* idev stays alive because we hold rcu_read_lock(). */
|
||||
skb = skb_expand_head(skb, hh_len);
|
||||
if (!skb) {
|
||||
IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTDISCARDS);
|
||||
rcu_read_unlock();
|
||||
return -ENOMEM;
|
||||
}
|
||||
rcu_read_unlock();
|
||||
}
|
||||
|
||||
hdr = ipv6_hdr(skb);
|
||||
@ -123,7 +120,6 @@ static int ip6_finish_output2(struct net *net, struct sock *sk, struct sk_buff *
|
||||
|
||||
IP6_UPD_PO_STATS(net, idev, IPSTATS_MIB_OUT, skb->len);
|
||||
|
||||
rcu_read_lock_bh();
|
||||
nexthop = rt6_nexthop((struct rt6_info *)dst, daddr);
|
||||
neigh = __ipv6_neigh_lookup_noref(dev, nexthop);
|
||||
|
||||
@ -131,7 +127,6 @@ static int ip6_finish_output2(struct net *net, struct sock *sk, struct sk_buff *
|
||||
if (unlikely(!neigh))
|
||||
neigh = __neigh_create(&nd_tbl, nexthop, dev, false);
|
||||
if (IS_ERR(neigh)) {
|
||||
rcu_read_unlock_bh();
|
||||
IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTNOROUTES);
|
||||
kfree_skb_reason(skb, SKB_DROP_REASON_NEIGH_CREATEFAIL);
|
||||
return -EINVAL;
|
||||
@ -139,7 +134,6 @@ static int ip6_finish_output2(struct net *net, struct sock *sk, struct sk_buff *
|
||||
}
|
||||
sock_confirm_neigh(skb, neigh);
|
||||
ret = neigh_output(neigh, skb, false);
|
||||
rcu_read_unlock_bh();
|
||||
return ret;
|
||||
}
|
||||
|
||||
@ -225,22 +219,30 @@ static int ip6_finish_output(struct net *net, struct sock *sk, struct sk_buff *s
|
||||
|
||||
int ip6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
|
||||
{
|
||||
struct net_device *dev = skb_dst(skb)->dev, *indev = skb->dev;
|
||||
struct inet6_dev *idev = ip6_dst_idev(skb_dst(skb));
|
||||
struct dst_entry *dst = skb_dst(skb);
|
||||
struct net_device *dev, *indev = skb->dev;
|
||||
struct inet6_dev *idev;
|
||||
int ret;
|
||||
|
||||
skb->protocol = htons(ETH_P_IPV6);
|
||||
rcu_read_lock();
|
||||
dev = dst_dev_rcu(dst);
|
||||
idev = ip6_dst_idev(dst);
|
||||
skb->dev = dev;
|
||||
|
||||
if (unlikely(!idev || READ_ONCE(idev->cnf.disable_ipv6))) {
|
||||
IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTDISCARDS);
|
||||
rcu_read_unlock();
|
||||
kfree_skb_reason(skb, SKB_DROP_REASON_IPV6DISABLED);
|
||||
return 0;
|
||||
}
|
||||
|
||||
return NF_HOOK_COND(NFPROTO_IPV6, NF_INET_POST_ROUTING,
|
||||
net, sk, skb, indev, dev,
|
||||
ip6_finish_output,
|
||||
!(IP6CB(skb)->flags & IP6SKB_REROUTED));
|
||||
ret = NF_HOOK_COND(NFPROTO_IPV6, NF_INET_POST_ROUTING,
|
||||
net, sk, skb, indev, dev,
|
||||
ip6_finish_output,
|
||||
!(IP6CB(skb)->flags & IP6SKB_REROUTED));
|
||||
rcu_read_unlock();
|
||||
return ret;
|
||||
}
|
||||
EXPORT_SYMBOL(ip6_output);
|
||||
|
||||
@@ -261,35 +263,36 @@ bool ip6_autoflowlabel(struct net *net, const struct ipv6_pinfo *np)

 int ip6_xmit(const struct sock *sk, struct sk_buff *skb, struct flowi6 *fl6,
 	     __u32 mark, struct ipv6_txoptions *opt, int tclass, u32 priority)
 {
-	struct net *net = sock_net(sk);
 	const struct ipv6_pinfo *np = inet6_sk(sk);
 	struct in6_addr *first_hop = &fl6->daddr;
 	struct dst_entry *dst = skb_dst(skb);
-	struct net_device *dev = dst->dev;
 	struct inet6_dev *idev = ip6_dst_idev(dst);
 	struct hop_jumbo_hdr *hop_jumbo;
 	int hoplen = sizeof(*hop_jumbo);
+	struct net *net = sock_net(sk);
 	unsigned int head_room;
+	struct net_device *dev;
 	struct ipv6hdr *hdr;
 	u8 proto = fl6->flowi6_proto;
 	int seg_len = skb->len;
-	int hlimit = -1;
+	int ret, hlimit = -1;
 	u32 mtu;

+	rcu_read_lock();
+
+	dev = dst_dev_rcu(dst);
 	head_room = sizeof(struct ipv6hdr) + hoplen + LL_RESERVED_SPACE(dev);
 	if (opt)
 		head_room += opt->opt_nflen + opt->opt_flen;

 	if (unlikely(head_room > skb_headroom(skb))) {
-		/* Make sure idev stays alive */
-		rcu_read_lock();
+		/* idev stays alive while we hold rcu_read_lock(). */
 		skb = skb_expand_head(skb, head_room);
 		if (!skb) {
 			IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTDISCARDS);
-			rcu_read_unlock();
-			return -ENOBUFS;
+			ret = -ENOBUFS;
+			goto unlock;
 		}
-		rcu_read_unlock();
 	}

 	if (opt) {
@@ -351,17 +354,21 @@ int ip6_xmit(const struct sock *sk, struct sk_buff *skb, struct flowi6 *fl6,
 		 * skb to its handler for processing
 		 */
 		skb = l3mdev_ip6_out((struct sock *)sk, skb);
-		if (unlikely(!skb))
-			return 0;
+		if (unlikely(!skb)) {
+			ret = 0;
+			goto unlock;
+		}

 		/* hooks should never assume socket lock is held.
 		 * we promote our socket to non const
 		 */
-		return NF_HOOK(NFPROTO_IPV6, NF_INET_LOCAL_OUT,
-			       net, (struct sock *)sk, skb, NULL, dev,
-			       dst_output);
+		ret = NF_HOOK(NFPROTO_IPV6, NF_INET_LOCAL_OUT,
+			      net, (struct sock *)sk, skb, NULL, dev,
+			      dst_output);
+		goto unlock;
 	}

+	ret = -EMSGSIZE;
 	skb->dev = dev;
 	/* ipv6_local_error() does not require socket lock,
 	 * we promote our socket to non const
@@ -370,7 +377,9 @@ int ip6_xmit(const struct sock *sk, struct sk_buff *skb, struct flowi6 *fl6,

 	IP6_INC_STATS(net, idev, IPSTATS_MIB_FRAGFAILS);
 	kfree_skb(skb);
-	return -EMSGSIZE;
+unlock:
+	rcu_read_unlock();
+	return ret;
 }
 EXPORT_SYMBOL(ip6_xmit);

@@ -1167,11 +1176,11 @@ static int ip6_dst_lookup_tail(struct net *net, const struct sock *sk,
 	 * dst entry of the nexthop router
 	 */
 	rt = (struct rt6_info *) *dst;
-	rcu_read_lock_bh();
+	rcu_read_lock();
 	n = __ipv6_neigh_lookup_noref(rt->dst.dev,
 				      rt6_nexthop(rt, &fl6->daddr));
 	err = n && !(READ_ONCE(n->nud_state) & NUD_VALID) ? -EINVAL : 0;
-	rcu_read_unlock_bh();
+	rcu_read_unlock();

 	if (err) {
 		struct inet6_ifaddr *ifp;

@@ -639,7 +639,7 @@ static void rt6_probe(struct fib6_nh *fib6_nh)

 	nh_gw = &fib6_nh->fib_nh_gw6;
 	dev = fib6_nh->fib_nh_dev;
-	rcu_read_lock_bh();
+	rcu_read_lock();
 	last_probe = READ_ONCE(fib6_nh->last_probe);
 	idev = __in6_dev_get(dev);
 	if (!idev)
@@ -649,7 +649,7 @@ static void rt6_probe(struct fib6_nh *fib6_nh)
 		if (READ_ONCE(neigh->nud_state) & NUD_VALID)
 			goto out;

-		write_lock(&neigh->lock);
+		write_lock_bh(&neigh->lock);
 		if (!(neigh->nud_state & NUD_VALID) &&
 		    time_after(jiffies,
 			       neigh->updated +
@@ -658,7 +658,7 @@ static void rt6_probe(struct fib6_nh *fib6_nh)
 			if (work)
 				__neigh_set_probe_once(neigh);
 		}
-		write_unlock(&neigh->lock);
+		write_unlock_bh(&neigh->lock);
 	} else if (time_after(jiffies, last_probe +
 			      READ_ONCE(idev->cnf.rtr_probe_interval))) {
 		work = kmalloc(sizeof(*work), GFP_ATOMIC);
@@ -676,7 +676,7 @@ static void rt6_probe(struct fib6_nh *fib6_nh)
 	}

 out:
-	rcu_read_unlock_bh();
+	rcu_read_unlock();
 }
 #else
 static inline void rt6_probe(struct fib6_nh *fib6_nh)
@@ -692,25 +692,25 @@ static enum rt6_nud_state rt6_check_neigh(const struct fib6_nh *fib6_nh)
 	enum rt6_nud_state ret = RT6_NUD_FAIL_HARD;
 	struct neighbour *neigh;

-	rcu_read_lock_bh();
+	rcu_read_lock();
 	neigh = __ipv6_neigh_lookup_noref(fib6_nh->fib_nh_dev,
 					  &fib6_nh->fib_nh_gw6);
 	if (neigh) {
-		read_lock(&neigh->lock);
-		if (neigh->nud_state & NUD_VALID)
+		u8 nud_state = READ_ONCE(neigh->nud_state);
+
+		if (nud_state & NUD_VALID)
 			ret = RT6_NUD_SUCCEED;
 #ifdef CONFIG_IPV6_ROUTER_PREF
-		else if (!(neigh->nud_state & NUD_FAILED))
+		else if (!(nud_state & NUD_FAILED))
 			ret = RT6_NUD_SUCCEED;
 		else
 			ret = RT6_NUD_FAIL_PROBE;
 #endif
-		read_unlock(&neigh->lock);
 	} else {
 		ret = IS_ENABLED(CONFIG_IPV6_ROUTER_PREF) ?
 		      RT6_NUD_SUCCEED : RT6_NUD_FAIL_DO_RR;
 	}
-	rcu_read_unlock_bh();
+	rcu_read_unlock();

 	return ret;
 }

@@ -2847,7 +2847,8 @@ static int validate_set(const struct nlattr *a,
 	size_t key_len;

 	/* There can be only one key in a action */
-	if (nla_total_size(nla_len(ovs_key)) != nla_len(a))
+	if (!nla_ok(ovs_key, nla_len(a)) ||
+	    nla_total_size(nla_len(ovs_key)) != nla_len(a))
 		return -EINVAL;

 	key_len = nla_len(ovs_key);

@@ -153,10 +153,19 @@ void ovs_netdev_detach_dev(struct vport *vport)

 static void netdev_destroy(struct vport *vport)
 {
-	rtnl_lock();
-	if (netif_is_ovs_port(vport->dev))
-		ovs_netdev_detach_dev(vport);
-	rtnl_unlock();
+	/* When called from ovs_dp_notify_wq() after a dp_device_event(), the
+	 * port has already been detached, so we can avoid taking the RTNL by
+	 * checking this first.
+	 */
+	if (netif_is_ovs_port(vport->dev)) {
+		rtnl_lock();
+		/* Check again while holding the lock to ensure we don't race
+		 * with the netdev notifier and detach twice.
+		 */
+		if (netif_is_ovs_port(vport->dev))
+			ovs_netdev_detach_dev(vport);
+		rtnl_unlock();
+	}

 	call_rcu(&vport->rcu, vport_netdev_free);
 }

@@ -119,6 +119,8 @@ vmci_transport_packet_init(struct vmci_transport_packet *pkt,
 			   u16 proto,
 			   struct vmci_handle handle)
 {
+	memset(pkt, 0, sizeof(*pkt));
+
 	/* We register the stream control handler as an any cid handle so we
 	 * must always send from a source address of VMADDR_CID_ANY
 	 */
@@ -131,8 +133,6 @@ vmci_transport_packet_init(struct vmci_transport_packet *pkt,
 	pkt->type = type;
 	pkt->src_port = src->svm_port;
 	pkt->dst_port = dst->svm_port;
-	memset(&pkt->proto, 0, sizeof(pkt->proto));
-	memset(&pkt->_reserved2, 0, sizeof(pkt->_reserved2));

 	switch (pkt->type) {
 	case VMCI_TRANSPORT_PACKET_TYPE_INVALID:

@@ -1,3 +1,76 @@
+* Thu Jan 29 2026 CKI KWF Bot <cki-ci-bot+kwf-gitlab-com@redhat.com> [5.14.0-611.30.1.el9_7]
+- io_uring/net: commit partial buffers on retry (Jeff Moyer) [RHEL-137329] {CVE-2025-38730}
+- io_uring/kbuf: add io_kbuf_commit() helper (Jeff Moyer) [RHEL-137329]
+- io_uring/kbuf: use 'bl' directly rather than req->buf_list (Jeff Moyer) [RHEL-137329]
+- ice: prevent NULL deref in ice_lag_move_new_vf_nodes() (Michal Schmidt) [RHEL-143296]
+- net: openvswitch: Avoid needlessly taking the RTNL on vport destroy (Adrian Moreno) [RHEL-141404]
+- atm: clip: Fix infinite recursive call of clip_push(). (Guillaume Nault) [RHEL-137601] {CVE-2025-38459}
+- dpll: zl3073x: Remove unused dev wrappers (Ivan Vecera) [RHEL-139699]
+- dpll: zl3073x: Cache all output properties in zl3073x_out (Ivan Vecera) [RHEL-139699]
+- dpll: zl3073x: Cache all reference properties in zl3073x_ref (Ivan Vecera) [RHEL-139699]
+- dpll: zl3073x: Cache reference monitor status (Ivan Vecera) [RHEL-139699]
+- dpll: zl3073x: Split ref, out, and synth logic from core (Ivan Vecera) [RHEL-139699]
+- dpll: zl3073x: Store raw register values instead of parsed state (Ivan Vecera) [RHEL-139699]
+- dpll: fix device-id-get and pin-id-get to return errors properly (Ivan Vecera) [RHEL-139699]
+- dpll: spec: add missing module-name and clock-id to pin-get reply (Ivan Vecera) [RHEL-139699]
+- dpll: zl3073x: Allow to configure phase offset averaging factor (Ivan Vecera) [RHEL-139699]
+- dpll: add phase_offset_avg_factor_get/set callback ops (Ivan Vecera) [RHEL-139699]
+- dpll: add phase-offset-avg-factor device attribute to netlink spec (Ivan Vecera) [RHEL-139699]
+- dpll: fix clock quality level reporting (Ivan Vecera) [RHEL-139699]
+- dpll: add reference sync get/set (Ivan Vecera) [RHEL-139699]
+- dpll: add reference-sync netlink attribute (Ivan Vecera) [RHEL-139699]
+- dpll: remove documentation of rclk_dev_name (Ivan Vecera) [RHEL-139699]
+- net: use dst_dev_rcu() in sk_setup_caps() (Hangbin Liu) [RHEL-129084] {CVE-2025-40170}
+- ipv4: use RCU protection in ip_dst_mtu_maybe_forward() (Hangbin Liu) [RHEL-129084]
+- net: ipv4: Consolidate ipv4_mtu and ip_dst_mtu_maybe_forward (Hangbin Liu) [RHEL-129084]
+- ipv6: use RCU in ip6_xmit() (Hangbin Liu) [RHEL-129018] {CVE-2025-40135}
+- ipv6: use RCU in ip6_output() (Hangbin Liu) [RHEL-128982] {CVE-2025-40158}
+- net: dst: introduce dst->dev_rcu (Hangbin Liu) [RHEL-128982]
+- ipv4: use RCU protection in __ip_rt_update_pmtu() (Hangbin Liu) [RHEL-128982]
+- net: Add locking to protect skb->dev access in ip_output (Hangbin Liu) [RHEL-128982]
+- net: dst: add four helpers to annotate data-races around dst->dev (Hangbin Liu) [RHEL-128982]
+- bpf: Fix mismatched RCU unlock flavour in bpf_out_neigh_v6 (Hangbin Liu) [RHEL-128982]
+- vrf: Fix lockdep splat in output path (Hangbin Liu) [RHEL-128982]
+- ipv6: remove nexthop_fib6_nh_bh() (Hangbin Liu) [RHEL-128982]
+- net: remove rcu_dereference_bh_rtnl() (Hangbin Liu) [RHEL-128982]
+- neighbour: switch to standard rcu, instead of rcu_bh (Hangbin Liu) [RHEL-128982]
+- ipv6: flowlabel: do not disable BH where not needed (Hangbin Liu) [RHEL-128982]
+- ipv6: remove one read_lock()/read_unlock() pair in rt6_check_neigh() (Hangbin Liu) [RHEL-128982]
+- neigh: introduce neigh_confirm() helper function (Hangbin Liu) [RHEL-128982]
+- net: bonding: update the slave array for broadcast mode (Hangbin Liu) [RHEL-132923]
+- net: bonding: add broadcast_neighbor netlink option (Hangbin Liu) [RHEL-132923]
+- net: bonding: add broadcast_neighbor option for 802.3ad (Hangbin Liu) [RHEL-132923]
+- vsock/vmci: Clear the vmci transport packet properly when initializing it (CKI Backport Bot) [RHEL-137697] {CVE-2025-38403}
+- ALSA: usb-audio: Fix potential overflow of PCM transfer buffer (CKI Backport Bot) [RHEL-136909] {CVE-2025-40269}
+- nvme: tcp: Fix compilation warning with W=1 (John Meneghini) [RHEL-129928]
+- nvme-tcp: Fix I/O queue cpu spreading for multiple controllers (John Meneghini) [RHEL-129928]
+Resolves: RHEL-128982, RHEL-129018, RHEL-129084, RHEL-129928, RHEL-132923, RHEL-136909, RHEL-137329, RHEL-137601, RHEL-137697, RHEL-139699, RHEL-141404, RHEL-143296
+
+* Tue Jan 27 2026 CKI KWF Bot <cki-ci-bot+kwf-gitlab-com@redhat.com> [5.14.0-611.29.1.el9_7]
+- squashfs: fix memory leak in squashfs_fill_super (Abhi Das) [RHEL-138015] {CVE-2025-38415}
+- Squashfs: check return result of sb_min_blocksize (CKI Backport Bot) [RHEL-138015] {CVE-2025-38415}
+- usb: core: config: Prevent OOB read in SS endpoint companion parsing (CKI Backport Bot) [RHEL-137364] {CVE-2025-39760}
+- RDMA/rxe: Fix slab-use-after-free Read in rxe_queue_cleanup bug (CKI Backport Bot) [RHEL-137069] {CVE-2025-38024}
+Resolves: RHEL-137069, RHEL-137364, RHEL-138015
+
+* Thu Jan 22 2026 CKI KWF Bot <cki-ci-bot+kwf-gitlab-com@redhat.com> [5.14.0-611.28.1.el9_7]
+- s390: Disable ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP (Luiz Capitulino) [RHEL-133337]
+- s390: mm: add stub for hugetlb_optimize_vmemmap_key (Luiz Capitulino) [RHEL-133337]
+- fs/proc: fix uaf in proc_readdir_de() (CKI Backport Bot) [RHEL-137098] {CVE-2025-40271}
+- Bluetooth: hci_sync: fix race in hci_cmd_sync_dequeue_once (CKI Backport Bot) [RHEL-136256] {CVE-2025-40318}
+- RDMA/core: Fix "KASAN: slab-use-after-free Read in ib_register_device" problem (CKI Backport Bot) [RHEL-134352] {CVE-2025-38022}
+- cifs: Fix deadlock in cifs_writepages during reconnect (Paulo Alcantara) [RHEL-134234]
+- irqchip/gic-v2m: Prevent use after free of gicv2m_get_fwnode() (CKI Backport Bot) [RHEL-131974] {CVE-2025-37819}
+- net: openvswitch: fix nested key length validation in the set() action (CKI Backport Bot) [RHEL-131801] {CVE-2025-37789}
+- md: avoid repeated calls to del_gendisk (Nigel Croxon) [RHEL-126532]
+- md: delete mddev kobj before deleting gendisk kobj (Nigel Croxon) [RHEL-126532]
+- md: add legacy_async_del_gendisk mode (Nigel Croxon) [RHEL-126532]
+- md: Don't clear MD_CLOSING until mddev is freed (Nigel Croxon) [RHEL-126532]
+- md: fix create on open mddev lifetime regression (Nigel Croxon) [RHEL-126532]
+- md: call del_gendisk in control path (Nigel Croxon) [RHEL-126532]
+- Bluetooth: ISO: Fix possible UAF on iso_conn_free (CKI Backport Bot) [RHEL-128891] {CVE-2025-40141}
+Resolves: RHEL-126532, RHEL-128891, RHEL-131801, RHEL-131974, RHEL-133337, RHEL-134234, RHEL-134352, RHEL-136256, RHEL-137098
+
 * Tue Jan 20 2026 CKI KWF Bot <cki-ci-bot+kwf-gitlab-com@redhat.com> [5.14.0-611.27.1.el9_7]
 - net/sched: mqprio: fix stack out-of-bounds write in tc entry parsing (CKI Backport Bot) [RHEL-136822] {CVE-2025-38568}
 - devlink: rate: Unset parent pointer in devl_rate_nodes_destroy (CKI Backport Bot) [RHEL-134923] {CVE-2025-40251}

@@ -1400,6 +1400,11 @@ int snd_usb_endpoint_set_params(struct snd_usb_audio *chip,
 	ep->sample_rem = ep->cur_rate % ep->pps;
 	ep->packsize[0] = ep->cur_rate / ep->pps;
 	ep->packsize[1] = (ep->cur_rate + (ep->pps - 1)) / ep->pps;
+	if (ep->packsize[1] > ep->maxpacksize) {
+		usb_audio_dbg(chip, "Too small maxpacksize %u for rate %u / pps %u\n",
+			      ep->maxpacksize, ep->cur_rate, ep->pps);
+		return -EINVAL;
+	}

 	/* calculate the frequency in 16.16 format */
 	ep->freqm = ep->freqn;