Import of kernel-6.12.0-124.43.1.el10_1
This commit is contained in:
parent 007e3badbb
commit 2b9158cd78
@@ -179,29 +179,47 @@ Phase offset measurement and adjustment
Device may provide ability to measure a phase difference between signals
on a pin and its parent dpll device. If pin-dpll phase offset measurement
is supported, it shall be provided with ``DPLL_A_PIN_PHASE_OFFSET``
attribute for each parent dpll device.
attribute for each parent dpll device. The reported phase offset may be
computed as the average of prior values and the current measurement, using
the following formula:

.. math::
   curr\_avg = prev\_avg * \frac{2^N-1}{2^N} + new\_val * \frac{1}{2^N}

where `curr_avg` is the current reported phase offset, `prev_avg` is the
previously reported value, `new_val` is the current measurement, and `N` is
the averaging factor. Configured averaging factor value is provided with
``DPLL_A_PHASE_OFFSET_AVG_FACTOR`` attribute of a device and value change can
be requested with the same attribute with ``DPLL_CMD_DEVICE_SET`` command.

================================== ======================================
``DPLL_A_PHASE_OFFSET_AVG_FACTOR`` attr configured value of phase offset
                                   averaging factor
================================== ======================================

Device may also provide ability to adjust a signal phase on a pin.
If pin phase adjustment is supported, minimal and maximal values that pin
handle shall be provided to the user in the ``DPLL_CMD_PIN_GET`` response
with ``DPLL_A_PIN_PHASE_ADJUST_MIN`` and ``DPLL_A_PIN_PHASE_ADJUST_MAX``
If pin phase adjustment is supported, minimal and maximal values and
granularity that pin handle shall be provided to the user in the
``DPLL_CMD_PIN_GET`` response with ``DPLL_A_PIN_PHASE_ADJUST_MIN``,
``DPLL_A_PIN_PHASE_ADJUST_MAX`` and ``DPLL_A_PIN_PHASE_ADJUST_GRAN``
attributes. Configured phase adjust value is provided with
``DPLL_A_PIN_PHASE_ADJUST`` attribute of a pin, and value change can be
requested with the same attribute with ``DPLL_CMD_PIN_SET`` command.

=============================== ======================================
``DPLL_A_PIN_ID``               configured pin id
``DPLL_A_PIN_PHASE_ADJUST_MIN`` attr minimum value of phase adjustment
``DPLL_A_PIN_PHASE_ADJUST_MAX`` attr maximum value of phase adjustment
``DPLL_A_PIN_PHASE_ADJUST``     attr configured value of phase
                                adjustment on parent dpll device
``DPLL_A_PIN_PARENT_DEVICE``    nested attribute for requesting
                                configuration on given parent dpll
                                device
``DPLL_A_PIN_PARENT_ID``        parent dpll device id
``DPLL_A_PIN_PHASE_OFFSET``     attr measured phase difference
                                between a pin and parent dpll device
=============================== ======================================
================================ ==========================================
``DPLL_A_PIN_ID``                configured pin id
``DPLL_A_PIN_PHASE_ADJUST_GRAN`` attr granularity of phase adjustment value
``DPLL_A_PIN_PHASE_ADJUST_MIN``  attr minimum value of phase adjustment
``DPLL_A_PIN_PHASE_ADJUST_MAX``  attr maximum value of phase adjustment
``DPLL_A_PIN_PHASE_ADJUST``      attr configured value of phase
                                 adjustment on parent dpll device
``DPLL_A_PIN_PARENT_DEVICE``     nested attribute for requesting
                                 configuration on given parent dpll
                                 device
``DPLL_A_PIN_PARENT_ID``         parent dpll device id
``DPLL_A_PIN_PHASE_OFFSET``      attr measured phase difference
                                 between a pin and parent dpll device
================================ ==========================================

All phase related values are provided in picoseconds, which represent the
time difference between signal phases. A negative value means that
@@ -253,6 +271,31 @@ the pin.
``DPLL_A_PIN_ESYNC_PULSE``                pulse type of Embedded SYNC
========================================= =================================

Reference SYNC
==============

The device may support the Reference SYNC feature, which allows the combination
of two inputs into an input pair. In this configuration, clock signals
from both inputs are used to synchronize the DPLL device. The higher frequency
signal is utilized for the loop bandwidth of the DPLL, while the lower frequency
signal is used to syntonize the output signal of the DPLL device. This feature
enables the provision of a high-quality loop bandwidth signal from an external
source.

A capable input provides a list of inputs that it can be bound with to create
a Reference SYNC pair. To control this feature, the user must request a desired
state for a target pin: use ``DPLL_PIN_STATE_CONNECTED`` to enable or
``DPLL_PIN_STATE_DISCONNECTED`` to disable the feature. An input pin can be
bound to only one other pin at any given time.

============================== ==========================================
``DPLL_A_PIN_REFERENCE_SYNC``  nested attribute for providing info or
                               requesting configuration of the Reference
                               SYNC feature
``DPLL_A_PIN_ID``              target pin id for Reference SYNC feature
``DPLL_A_PIN_STATE``           state of Reference SYNC connection
============================== ==========================================

Configuration commands group
============================

@@ -343,6 +386,8 @@ according to attribute purpose.
                                     frequencies
``DPLL_A_PIN_ANY_FREQUENCY_MIN``     attr minimum value of frequency
``DPLL_A_PIN_ANY_FREQUENCY_MAX``     attr maximum value of frequency
``DPLL_A_PIN_PHASE_ADJUST_GRAN``     attr granularity of phase
                                     adjustment value
``DPLL_A_PIN_PHASE_ADJUST_MIN``      attr minimum value of phase
                                     adjustment
``DPLL_A_PIN_PHASE_ADJUST_MAX``      attr maximum value of phase

@@ -315,6 +315,10 @@ attribute-sets:
          If enabled, dpll device shall monitor and notify all currently
          available inputs for changes of their phase offset against the
          dpll device.
      -
        name: phase-offset-avg-factor
        type: u32
        doc: Averaging factor applied to calculation of reported phase offset.
  -
    name: pin
    enum-name: dpll_a_pin
@@ -428,6 +432,21 @@ attribute-sets:
        doc: |
          A ratio of high to low state of a SYNC signal pulse embedded
          into base clock frequency. Value is in percents.
      -
        name: reference-sync
        type: nest
        multi-attr: true
        nested-attributes: reference-sync
        doc: |
          Capable pin provides list of pins that can be bound to create a
          reference-sync pin pair.
      -
        name: phase-adjust-gran
        type: u32
        doc: |
          Granularity of phase adjustment, in picoseconds. The value of
          phase adjustment must be a multiple of this granularity.

  -
    name: pin-parent-device
    subset-of: pin
@@ -458,6 +477,14 @@ attribute-sets:
        name: frequency-min
      -
        name: frequency-max
  -
    name: reference-sync
    subset-of: pin
    attributes:
      -
        name: id
      -
        name: state

operations:
  enum-name: dpll_cmd
@@ -506,6 +533,7 @@ operations:
            - clock-id
            - type
            - phase-offset-monitor
            - phase-offset-avg-factor

      dump:
        reply: *dev-attrs
@@ -523,6 +551,7 @@ operations:
          attributes:
            - id
            - phase-offset-monitor
            - phase-offset-avg-factor
    -
      name: device-create-ntf
      doc: Notification about device appearing
@@ -582,6 +611,8 @@ operations:
        reply: &pin-attrs
          attributes:
            - id
            - module-name
            - clock-id
            - board-label
            - panel-label
            - package-label
@@ -591,6 +622,7 @@ operations:
            - capabilities
            - parent-device
            - parent-pin
            - phase-adjust-gran
            - phase-adjust-min
            - phase-adjust-max
            - phase-adjust
@@ -598,6 +630,7 @@ operations:
            - esync-frequency
            - esync-frequency-supported
            - esync-pulse
            - reference-sync

      dump:
        request:
@@ -625,6 +658,7 @@ operations:
            - parent-pin
            - phase-adjust
            - esync-frequency
            - reference-sync
    -
      name: pin-create-ntf
      doc: Notification about pin appearing

@@ -12,7 +12,7 @@ RHEL_MINOR = 1
#
# Use this spot to avoid future merge conflicts.
# Do not trim this comment.
RHEL_RELEASE = 124.40.1
RHEL_RELEASE = 124.43.1

#
# RHEL_REBASE_NUM

@@ -494,11 +494,18 @@ static int kvm_set_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid_entry2 *e2,
	u32 vcpu_caps[NR_KVM_CPU_CAPS];
	int r;

	/*
	 * Apply pending runtime CPUID updates to the current CPUID entries to
	 * avoid false positives due to mismatches on KVM-owned feature flags.
	 */
	if (vcpu->arch.cpuid_dynamic_bits_dirty)
		kvm_update_cpuid_runtime(vcpu);

	/*
	 * Swap the existing (old) entries with the incoming (new) entries in
	 * order to massage the new entries, e.g. to account for dynamic bits
	 * that KVM controls, without clobbering the current guest CPUID, which
	 * KVM needs to preserve in order to unwind on failure.
	 * that KVM controls, without losing the current guest CPUID, which KVM
	 * needs to preserve in order to unwind on failure.
	 *
	 * Similarly, save the vCPU's current cpu_caps so that the capabilities
	 * can be updated alongside the CPUID entries when performing runtime

@@ -498,9 +498,6 @@ CONFIG_PPC_TRANSACTIONAL_MEM=y
CONFIG_PPC_UV=y
# CONFIG_LD_HEAD_STUB_CATCH is not set
CONFIG_MPROFILE_KERNEL=y
CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY=y
CONFIG_PPC_FTRACE_OUT_OF_LINE=y
CONFIG_PPC_FTRACE_OUT_OF_LINE_NUM_RESERVE=32768
CONFIG_HOTPLUG_CPU=y
CONFIG_INTERRUPT_SANITIZE_REGISTERS=y
CONFIG_PPC_QUEUED_SPINLOCKS=y
@@ -724,7 +721,6 @@ CONFIG_FUNCTION_ALIGNMENT_4B=y
CONFIG_FUNCTION_ALIGNMENT=4
CONFIG_CC_HAS_MIN_FUNCTION_ALIGNMENT=y
CONFIG_CC_HAS_SANE_FUNCTION_ALIGNMENT=y
CONFIG_ARCH_WANTS_PRE_LINK_VMLINUX=y
# end of General architecture-dependent options

CONFIG_RT_MUTEXES=y
@@ -5030,7 +5026,6 @@ CONFIG_HID_KUNIT_TEST=m
#
# HID-BPF support
#
CONFIG_HID_BPF=y
# end of HID-BPF support

CONFIG_I2C_HID=y
@@ -7100,8 +7095,6 @@ CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_CALL_OPS=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
@@ -7121,8 +7114,6 @@ CONFIG_FUNCTION_TRACER=y
CONFIG_FUNCTION_GRAPH_TRACER=y
CONFIG_DYNAMIC_FTRACE=y
CONFIG_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y
CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS=y
CONFIG_DYNAMIC_FTRACE_WITH_ARGS=y
CONFIG_FPROBE=y
CONFIG_FUNCTION_PROFILER=y
@@ -7147,7 +7138,7 @@ CONFIG_BPF_EVENTS=y
CONFIG_DYNAMIC_EVENTS=y
CONFIG_PROBE_EVENTS=y
CONFIG_FTRACE_MCOUNT_RECORD=y
CONFIG_FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY=y
CONFIG_FTRACE_MCOUNT_USE_CC=y
CONFIG_TRACING_MAP=y
CONFIG_SYNTH_EVENTS=y
# CONFIG_USER_EVENTS is not set
@@ -7173,8 +7164,6 @@ CONFIG_RV_REACTORS=y
CONFIG_RV_REACT_PRINTK=y
CONFIG_RV_REACT_PANIC=y
# CONFIG_SAMPLES is not set
CONFIG_HAVE_SAMPLE_FTRACE_DIRECT=y
CONFIG_HAVE_SAMPLE_FTRACE_DIRECT_MULTI=y
CONFIG_ARCH_HAS_DEVMEM_IS_ALLOWED=y
CONFIG_STRICT_DEVMEM=y
# CONFIG_IO_STRICT_DEVMEM is not set

@@ -83,10 +83,8 @@ dpll_xa_ref_pin_add(struct xarray *xa_pins, struct dpll_pin *pin,
		if (ref->pin != pin)
			continue;
		reg = dpll_pin_registration_find(ref, ops, priv, cookie);
		if (reg) {
			refcount_inc(&ref->refcount);
			return 0;
		}
		if (reg)
			return -EEXIST;
		ref_exists = true;
		break;
	}
@@ -164,10 +162,8 @@ dpll_xa_ref_dpll_add(struct xarray *xa_dplls, struct dpll_device *dpll,
		if (ref->dpll != dpll)
			continue;
		reg = dpll_pin_registration_find(ref, ops, priv, cookie);
		if (reg) {
			refcount_inc(&ref->refcount);
			return 0;
		}
		if (reg)
			return -EEXIST;
		ref_exists = true;
		break;
	}
@@ -506,6 +502,7 @@ dpll_pin_alloc(u64 clock_id, u32 pin_idx, struct module *module,
	refcount_set(&pin->refcount, 1);
	xa_init_flags(&pin->dpll_refs, XA_FLAGS_ALLOC);
	xa_init_flags(&pin->parent_refs, XA_FLAGS_ALLOC);
	xa_init_flags(&pin->ref_sync_pins, XA_FLAGS_ALLOC);
	ret = xa_alloc_cyclic(&dpll_pin_xa, &pin->id, pin, xa_limit_32b,
			      &dpll_pin_xa_id, GFP_KERNEL);
	if (ret < 0)
@@ -514,6 +511,7 @@ dpll_pin_alloc(u64 clock_id, u32 pin_idx, struct module *module,
err_xa_alloc:
	xa_destroy(&pin->dpll_refs);
	xa_destroy(&pin->parent_refs);
	xa_destroy(&pin->ref_sync_pins);
	dpll_pin_prop_free(&pin->prop);
err_pin_prop:
	kfree(pin);
@@ -595,6 +593,7 @@ void dpll_pin_put(struct dpll_pin *pin)
	xa_erase(&dpll_pin_xa, pin->id);
	xa_destroy(&pin->dpll_refs);
	xa_destroy(&pin->parent_refs);
	xa_destroy(&pin->ref_sync_pins);
	dpll_pin_prop_free(&pin->prop);
	kfree_rcu(pin, rcu);
}
@@ -659,11 +658,26 @@ dpll_pin_register(struct dpll_device *dpll, struct dpll_pin *pin,
}
EXPORT_SYMBOL_GPL(dpll_pin_register);

static void dpll_pin_ref_sync_pair_del(u32 ref_sync_pin_id)
{
	struct dpll_pin *pin, *ref_sync_pin;
	unsigned long i;

	xa_for_each(&dpll_pin_xa, i, pin) {
		ref_sync_pin = xa_load(&pin->ref_sync_pins, ref_sync_pin_id);
		if (ref_sync_pin) {
			xa_erase(&pin->ref_sync_pins, ref_sync_pin_id);
			__dpll_pin_change_ntf(pin);
		}
	}
}

static void
__dpll_pin_unregister(struct dpll_device *dpll, struct dpll_pin *pin,
		      const struct dpll_pin_ops *ops, void *priv, void *cookie)
{
	ASSERT_DPLL_PIN_REGISTERED(pin);
	dpll_pin_ref_sync_pair_del(pin->id);
	dpll_xa_ref_pin_del(&dpll->pin_refs, pin, ops, priv, cookie);
	dpll_xa_ref_dpll_del(&pin->dpll_refs, dpll, ops, priv, cookie);
	if (xa_empty(&pin->dpll_refs))
@@ -783,6 +797,33 @@ void dpll_pin_on_pin_unregister(struct dpll_pin *parent, struct dpll_pin *pin,
}
EXPORT_SYMBOL_GPL(dpll_pin_on_pin_unregister);

/**
 * dpll_pin_ref_sync_pair_add - create a reference sync signal pin pair
 * @pin: pin which produces the base frequency
 * @ref_sync_pin: pin which produces the sync signal
 *
 * Once pins are paired, the user-space configuration of reference sync pair
 * is possible.
 * Context: Acquires a lock (dpll_lock)
 * Return:
 * * 0 on success
 * * negative - error value
 */
int dpll_pin_ref_sync_pair_add(struct dpll_pin *pin,
			       struct dpll_pin *ref_sync_pin)
{
	int ret;

	mutex_lock(&dpll_lock);
	ret = xa_insert(&pin->ref_sync_pins, ref_sync_pin->id,
			ref_sync_pin, GFP_KERNEL);
	__dpll_pin_change_ntf(pin);
	mutex_unlock(&dpll_lock);

	return ret;
}
EXPORT_SYMBOL_GPL(dpll_pin_ref_sync_pair_add);

static struct dpll_device_registration *
dpll_device_registration_first(struct dpll_device *dpll)
{

@@ -49,8 +49,8 @@ struct dpll_device {
 * @module: module of creator
 * @dpll_refs: hold references to dplls pin was registered with
 * @parent_refs: hold references to parent pins pin was registered with
 * @ref_sync_pins: hold references to pins for Reference SYNC feature
 * @prop: pin properties copied from the registerer
 * @rclk_dev_name: holds name of device when pin can recover clock from it
 * @refcount: refcount
 * @rcu: rcu_head for kfree_rcu()
 *
@@ -69,6 +69,7 @@ struct dpll_pin {
	struct dpll_pin_properties prop;
	refcount_t refcount;
	struct rcu_head rcu;
	RH_KABI_EXTEND(struct xarray ref_sync_pins)
};

/**

@@ -48,6 +48,24 @@ dpll_msg_add_dev_parent_handle(struct sk_buff *msg, u32 id)
	return 0;
}

static bool dpll_pin_available(struct dpll_pin *pin)
{
	struct dpll_pin_ref *par_ref;
	unsigned long i;

	if (!xa_get_mark(&dpll_pin_xa, pin->id, DPLL_REGISTERED))
		return false;
	xa_for_each(&pin->parent_refs, i, par_ref)
		if (xa_get_mark(&dpll_pin_xa, par_ref->pin->id,
				DPLL_REGISTERED))
			return true;
	xa_for_each(&pin->dpll_refs, i, par_ref)
		if (xa_get_mark(&dpll_device_xa, par_ref->dpll->id,
				DPLL_REGISTERED))
			return true;
	return false;
}

/**
 * dpll_msg_add_pin_handle - attach pin handle attribute to a given message
 * @msg: pointer to sk_buff message to attach a pin handle
@@ -146,6 +164,27 @@ dpll_msg_add_phase_offset_monitor(struct sk_buff *msg, struct dpll_device *dpll,
	return 0;
}

static int
dpll_msg_add_phase_offset_avg_factor(struct sk_buff *msg,
				     struct dpll_device *dpll,
				     struct netlink_ext_ack *extack)
{
	const struct dpll_device_ops *ops = dpll_device_ops(dpll);
	u32 factor;
	int ret;

	if (ops->phase_offset_avg_factor_get) {
		ret = ops->phase_offset_avg_factor_get(dpll, dpll_priv(dpll),
						       &factor, extack);
		if (ret)
			return ret;
		if (nla_put_u32(msg, DPLL_A_PHASE_OFFSET_AVG_FACTOR, factor))
			return -EMSGSIZE;
	}

	return 0;
}

static int
dpll_msg_add_lock_status(struct sk_buff *msg, struct dpll_device *dpll,
			 struct netlink_ext_ack *extack)
@@ -193,8 +232,8 @@ static int
dpll_msg_add_clock_quality_level(struct sk_buff *msg, struct dpll_device *dpll,
				 struct netlink_ext_ack *extack)
{
	DECLARE_BITMAP(qls, DPLL_CLOCK_QUALITY_LEVEL_MAX + 1) = { 0 };
	const struct dpll_device_ops *ops = dpll_device_ops(dpll);
	DECLARE_BITMAP(qls, DPLL_CLOCK_QUALITY_LEVEL_MAX) = { 0 };
	enum dpll_clock_quality_level ql;
	int ret;

@@ -203,7 +242,7 @@ dpll_msg_add_clock_quality_level(struct sk_buff *msg, struct dpll_device *dpll,
	ret = ops->clock_quality_level_get(dpll, dpll_priv(dpll), qls, extack);
	if (ret)
		return ret;
	for_each_set_bit(ql, qls, DPLL_CLOCK_QUALITY_LEVEL_MAX)
	for_each_set_bit(ql, qls, DPLL_CLOCK_QUALITY_LEVEL_MAX + 1)
		if (nla_put_u32(msg, DPLL_A_CLOCK_QUALITY_LEVEL, ql))
			return -EMSGSIZE;

@@ -428,6 +467,47 @@ nest_cancel:
	return -EMSGSIZE;
}

static int
dpll_msg_add_pin_ref_sync(struct sk_buff *msg, struct dpll_pin *pin,
			  struct dpll_pin_ref *ref,
			  struct netlink_ext_ack *extack)
{
	const struct dpll_pin_ops *ops = dpll_pin_ops(ref);
	struct dpll_device *dpll = ref->dpll;
	void *pin_priv, *ref_sync_pin_priv;
	struct dpll_pin *ref_sync_pin;
	enum dpll_pin_state state;
	struct nlattr *nest;
	unsigned long index;
	int ret;

	pin_priv = dpll_pin_on_dpll_priv(dpll, pin);
	xa_for_each(&pin->ref_sync_pins, index, ref_sync_pin) {
		if (!dpll_pin_available(ref_sync_pin))
			continue;
		ref_sync_pin_priv = dpll_pin_on_dpll_priv(dpll, ref_sync_pin);
		if (WARN_ON(!ops->ref_sync_get))
			return -EOPNOTSUPP;
		ret = ops->ref_sync_get(pin, pin_priv, ref_sync_pin,
					ref_sync_pin_priv, &state, extack);
		if (ret)
			return ret;
		nest = nla_nest_start(msg, DPLL_A_PIN_REFERENCE_SYNC);
		if (!nest)
			return -EMSGSIZE;
		if (nla_put_s32(msg, DPLL_A_PIN_ID, ref_sync_pin->id))
			goto nest_cancel;
		if (nla_put_s32(msg, DPLL_A_PIN_STATE, state))
			goto nest_cancel;
		nla_nest_end(msg, nest);
	}
	return 0;

nest_cancel:
	nla_nest_cancel(msg, nest);
	return -EMSGSIZE;
}

static bool dpll_pin_is_freq_supported(struct dpll_pin *pin, u32 freq)
{
	int fs;
@@ -557,6 +637,10 @@ dpll_cmd_pin_get_one(struct sk_buff *msg, struct dpll_pin *pin,
	ret = dpll_msg_add_pin_freq(msg, pin, ref, extack);
	if (ret)
		return ret;
	if (prop->phase_gran &&
	    nla_put_u32(msg, DPLL_A_PIN_PHASE_ADJUST_GRAN,
			prop->phase_gran))
		return -EMSGSIZE;
	if (nla_put_s32(msg, DPLL_A_PIN_PHASE_ADJUST_MIN,
			prop->phase_range.min))
		return -EMSGSIZE;
@@ -570,6 +654,10 @@ dpll_cmd_pin_get_one(struct sk_buff *msg, struct dpll_pin *pin,
	if (ret)
		return ret;
	ret = dpll_msg_add_pin_esync(msg, pin, ref, extack);
	if (ret)
		return ret;
	if (!xa_empty(&pin->ref_sync_pins))
		ret = dpll_msg_add_pin_ref_sync(msg, pin, ref, extack);
	if (ret)
		return ret;
	if (xa_empty(&pin->parent_refs))
@@ -612,6 +700,9 @@ dpll_device_get_one(struct dpll_device *dpll, struct sk_buff *msg,
	if (nla_put_u32(msg, DPLL_A_TYPE, dpll->type))
		return -EMSGSIZE;
	ret = dpll_msg_add_phase_offset_monitor(msg, dpll, extack);
	if (ret)
		return ret;
	ret = dpll_msg_add_phase_offset_avg_factor(msg, dpll, extack);
	if (ret)
		return ret;

@@ -665,24 +756,6 @@ __dpll_device_change_ntf(struct dpll_device *dpll)
	return dpll_device_event_send(DPLL_CMD_DEVICE_CHANGE_NTF, dpll);
}

static bool dpll_pin_available(struct dpll_pin *pin)
{
	struct dpll_pin_ref *par_ref;
	unsigned long i;

	if (!xa_get_mark(&dpll_pin_xa, pin->id, DPLL_REGISTERED))
		return false;
	xa_for_each(&pin->parent_refs, i, par_ref)
		if (xa_get_mark(&dpll_pin_xa, par_ref->pin->id,
				DPLL_REGISTERED))
			return true;
	xa_for_each(&pin->dpll_refs, i, par_ref)
		if (xa_get_mark(&dpll_device_xa, par_ref->dpll->id,
				DPLL_REGISTERED))
			return true;
	return false;
}

/**
 * dpll_device_change_ntf - notify that the dpll device has been changed
 * @dpll: registered dpll pointer
@@ -745,7 +818,7 @@ int dpll_pin_delete_ntf(struct dpll_pin *pin)
	return dpll_pin_event_send(DPLL_CMD_PIN_DELETE_NTF, pin);
}

static int __dpll_pin_change_ntf(struct dpll_pin *pin)
int __dpll_pin_change_ntf(struct dpll_pin *pin)
{
	return dpll_pin_event_send(DPLL_CMD_PIN_CHANGE_NTF, pin);
}
@@ -794,6 +867,23 @@ dpll_phase_offset_monitor_set(struct dpll_device *dpll, struct nlattr *a,
			      extack);
}

static int
dpll_phase_offset_avg_factor_set(struct dpll_device *dpll, struct nlattr *a,
				 struct netlink_ext_ack *extack)
{
	const struct dpll_device_ops *ops = dpll_device_ops(dpll);
	u32 factor = nla_get_u32(a);

	if (!ops->phase_offset_avg_factor_set) {
		NL_SET_ERR_MSG_ATTR(extack, a,
				    "device not capable of changing phase offset average factor");
		return -EOPNOTSUPP;
	}

	return ops->phase_offset_avg_factor_set(dpll, dpll_priv(dpll), factor,
						extack);
}

static int
dpll_pin_freq_set(struct dpll_pin *pin, struct nlattr *a,
		  struct netlink_ext_ack *extack)
@@ -935,6 +1025,108 @@ rollback:
	return ret;
}

static int
dpll_pin_ref_sync_state_set(struct dpll_pin *pin,
			    unsigned long ref_sync_pin_idx,
			    const enum dpll_pin_state state,
			    struct netlink_ext_ack *extack)
{
	struct dpll_pin_ref *ref, *failed;
	const struct dpll_pin_ops *ops;
	enum dpll_pin_state old_state;
	struct dpll_pin *ref_sync_pin;
	struct dpll_device *dpll;
	unsigned long i;
	int ret;

	ref_sync_pin = xa_find(&pin->ref_sync_pins, &ref_sync_pin_idx,
			       ULONG_MAX, XA_PRESENT);
	if (!ref_sync_pin) {
		NL_SET_ERR_MSG(extack, "reference sync pin not found");
		return -EINVAL;
	}
	if (!dpll_pin_available(ref_sync_pin)) {
		NL_SET_ERR_MSG(extack, "reference sync pin not available");
		return -EINVAL;
	}
	ref = dpll_xa_ref_dpll_first(&pin->dpll_refs);
	ASSERT_NOT_NULL(ref);
	ops = dpll_pin_ops(ref);
	if (!ops->ref_sync_set || !ops->ref_sync_get) {
		NL_SET_ERR_MSG(extack, "reference sync not supported by this pin");
		return -EOPNOTSUPP;
	}
	dpll = ref->dpll;
	ret = ops->ref_sync_get(pin, dpll_pin_on_dpll_priv(dpll, pin),
				ref_sync_pin,
				dpll_pin_on_dpll_priv(dpll, ref_sync_pin),
				&old_state, extack);
	if (ret) {
		NL_SET_ERR_MSG(extack, "unable to get old reference sync state");
		return ret;
	}
	if (state == old_state)
		return 0;
	xa_for_each(&pin->dpll_refs, i, ref) {
		ops = dpll_pin_ops(ref);
		dpll = ref->dpll;
		ret = ops->ref_sync_set(pin, dpll_pin_on_dpll_priv(dpll, pin),
					ref_sync_pin,
					dpll_pin_on_dpll_priv(dpll,
							      ref_sync_pin),
					state, extack);
		if (ret) {
			failed = ref;
			NL_SET_ERR_MSG_FMT(extack, "reference sync set failed for dpll_id:%u",
					   dpll->id);
			goto rollback;
		}
	}
	__dpll_pin_change_ntf(pin);

	return 0;

rollback:
	xa_for_each(&pin->dpll_refs, i, ref) {
		if (ref == failed)
			break;
		ops = dpll_pin_ops(ref);
		dpll = ref->dpll;
		if (ops->ref_sync_set(pin, dpll_pin_on_dpll_priv(dpll, pin),
				      ref_sync_pin,
				      dpll_pin_on_dpll_priv(dpll, ref_sync_pin),
				      old_state, extack))
			NL_SET_ERR_MSG(extack, "set reference sync rollback failed");
	}
	return ret;
}

static int
dpll_pin_ref_sync_set(struct dpll_pin *pin, struct nlattr *nest,
		      struct netlink_ext_ack *extack)
{
	struct nlattr *tb[DPLL_A_PIN_MAX + 1];
	enum dpll_pin_state state;
	u32 sync_pin_id;

	nla_parse_nested(tb, DPLL_A_PIN_MAX, nest,
			 dpll_reference_sync_nl_policy, extack);
	if (!tb[DPLL_A_PIN_ID]) {
		NL_SET_ERR_MSG(extack, "sync pin id expected");
		return -EINVAL;
	}
	sync_pin_id = nla_get_u32(tb[DPLL_A_PIN_ID]);

	if (!tb[DPLL_A_PIN_STATE]) {
		NL_SET_ERR_MSG(extack, "sync pin state expected");
		return -EINVAL;
	}
	state = nla_get_u32(tb[DPLL_A_PIN_STATE]);

	return dpll_pin_ref_sync_state_set(pin, sync_pin_id, state, extack);
}

static int
dpll_pin_on_pin_state_set(struct dpll_pin *pin, u32 parent_idx,
			  enum dpll_pin_state state,
@@ -1073,7 +1265,13 @@ dpll_pin_phase_adj_set(struct dpll_pin *pin, struct nlattr *phase_adj_attr,
	if (phase_adj > pin->prop.phase_range.max ||
	    phase_adj < pin->prop.phase_range.min) {
		NL_SET_ERR_MSG_ATTR(extack, phase_adj_attr,
				    "phase adjust value not supported");
				    "phase adjust value out of range");
		return -EINVAL;
	}
	if (pin->prop.phase_gran && phase_adj % (s32)pin->prop.phase_gran) {
		NL_SET_ERR_MSG_ATTR_FMT(extack, phase_adj_attr,
					"phase adjust value not multiple of %u",
					pin->prop.phase_gran);
		return -EINVAL;
	}

@@ -1241,6 +1439,11 @@ dpll_pin_set_from_nlattr(struct dpll_pin *pin, struct genl_info *info)
			if (ret)
				return ret;
			break;
		case DPLL_A_PIN_REFERENCE_SYNC:
			ret = dpll_pin_ref_sync_set(pin, a, info->extack);
			if (ret)
				return ret;
			break;
		}
	}

@@ -1366,16 +1569,18 @@ int dpll_nl_pin_id_get_doit(struct sk_buff *skb, struct genl_info *info)
		return -EMSGSIZE;
	}
	pin = dpll_pin_find_from_nlattr(info);
	if (!IS_ERR(pin)) {
		if (!dpll_pin_available(pin)) {
			nlmsg_free(msg);
			return -ENODEV;
		}
		ret = dpll_msg_add_pin_handle(msg, pin);
		if (ret) {
			nlmsg_free(msg);
			return ret;
		}
	if (IS_ERR(pin)) {
		nlmsg_free(msg);
		return PTR_ERR(pin);
	}
	if (!dpll_pin_available(pin)) {
		nlmsg_free(msg);
		return -ENODEV;
	}
	ret = dpll_msg_add_pin_handle(msg, pin);
	if (ret) {
		nlmsg_free(msg);
		return ret;
	}
	genlmsg_end(msg, hdr);

@@ -1542,12 +1747,14 @@ int dpll_nl_device_id_get_doit(struct sk_buff *skb, struct genl_info *info)
	}

	dpll = dpll_device_find_from_nlattr(info);
	if (!IS_ERR(dpll)) {
		ret = dpll_msg_add_dev_handle(msg, dpll);
		if (ret) {
			nlmsg_free(msg);
			return ret;
		}
	if (IS_ERR(dpll)) {
		nlmsg_free(msg);
		return PTR_ERR(dpll);
	}
	ret = dpll_msg_add_dev_handle(msg, dpll);
	if (ret) {
		nlmsg_free(msg);
		return ret;
	}
	genlmsg_end(msg, hdr);

@@ -1584,14 +1791,25 @@ int dpll_nl_device_get_doit(struct sk_buff *skb, struct genl_info *info)
static int
dpll_set_from_nlattr(struct dpll_device *dpll, struct genl_info *info)
{
	int ret;
	struct nlattr *a;
	int rem, ret;

	if (info->attrs[DPLL_A_PHASE_OFFSET_MONITOR]) {
		struct nlattr *a = info->attrs[DPLL_A_PHASE_OFFSET_MONITOR];

		ret = dpll_phase_offset_monitor_set(dpll, a, info->extack);
		if (ret)
			return ret;
	nla_for_each_attr(a, genlmsg_data(info->genlhdr),
			  genlmsg_len(info->genlhdr), rem) {
		switch (nla_type(a)) {
		case DPLL_A_PHASE_OFFSET_MONITOR:
			ret = dpll_phase_offset_monitor_set(dpll, a,
							    info->extack);
			if (ret)
				return ret;
			break;
		case DPLL_A_PHASE_OFFSET_AVG_FACTOR:
			ret = dpll_phase_offset_avg_factor_set(dpll, a,
							       info->extack);
			if (ret)
				return ret;
			break;
		}
	}

	return 0;

@@ -11,3 +11,5 @@ int dpll_device_delete_ntf(struct dpll_device *dpll);
int dpll_pin_create_ntf(struct dpll_pin *pin);

int dpll_pin_delete_ntf(struct dpll_pin *pin);

int __dpll_pin_change_ntf(struct dpll_pin *pin);

@@ -24,6 +24,11 @@ const struct nla_policy dpll_pin_parent_pin_nl_policy[DPLL_A_PIN_STATE + 1] = {
 	[DPLL_A_PIN_STATE] = NLA_POLICY_RANGE(NLA_U32, 1, 3),
 };

+const struct nla_policy dpll_reference_sync_nl_policy[DPLL_A_PIN_STATE + 1] = {
+	[DPLL_A_PIN_ID] = { .type = NLA_U32, },
+	[DPLL_A_PIN_STATE] = NLA_POLICY_RANGE(NLA_U32, 1, 3),
+};
+
 /* DPLL_CMD_DEVICE_ID_GET - do */
 static const struct nla_policy dpll_device_id_get_nl_policy[DPLL_A_TYPE + 1] = {
 	[DPLL_A_MODULE_NAME] = { .type = NLA_NUL_STRING, },
@@ -37,9 +42,10 @@ static const struct nla_policy dpll_device_get_nl_policy[DPLL_A_ID + 1] = {
 };

 /* DPLL_CMD_DEVICE_SET - do */
-static const struct nla_policy dpll_device_set_nl_policy[DPLL_A_PHASE_OFFSET_MONITOR + 1] = {
+static const struct nla_policy dpll_device_set_nl_policy[DPLL_A_PHASE_OFFSET_AVG_FACTOR + 1] = {
 	[DPLL_A_ID] = { .type = NLA_U32, },
 	[DPLL_A_PHASE_OFFSET_MONITOR] = NLA_POLICY_MAX(NLA_U32, 1),
+	[DPLL_A_PHASE_OFFSET_AVG_FACTOR] = { .type = NLA_U32, },
 };

 /* DPLL_CMD_PIN_ID_GET - do */
@@ -63,7 +69,7 @@ static const struct nla_policy dpll_pin_get_dump_nl_policy[DPLL_A_PIN_ID + 1] =
 };

 /* DPLL_CMD_PIN_SET - do */
-static const struct nla_policy dpll_pin_set_nl_policy[DPLL_A_PIN_ESYNC_FREQUENCY + 1] = {
+static const struct nla_policy dpll_pin_set_nl_policy[DPLL_A_PIN_REFERENCE_SYNC + 1] = {
 	[DPLL_A_PIN_ID] = { .type = NLA_U32, },
 	[DPLL_A_PIN_FREQUENCY] = { .type = NLA_U64, },
 	[DPLL_A_PIN_DIRECTION] = NLA_POLICY_RANGE(NLA_U32, 1, 2),
@@ -73,6 +79,7 @@ static const struct nla_policy dpll_pin_set_nl_policy[DPLL_A_PIN_ESYNC_FREQUENCY
 	[DPLL_A_PIN_PARENT_PIN] = NLA_POLICY_NESTED(dpll_pin_parent_pin_nl_policy),
 	[DPLL_A_PIN_PHASE_ADJUST] = { .type = NLA_S32, },
 	[DPLL_A_PIN_ESYNC_FREQUENCY] = { .type = NLA_U64, },
+	[DPLL_A_PIN_REFERENCE_SYNC] = NLA_POLICY_NESTED(dpll_reference_sync_nl_policy),
 };

 /* Ops table for dpll */
@@ -106,7 +113,7 @@ static const struct genl_split_ops dpll_nl_ops[] = {
 		.doit = dpll_nl_device_set_doit,
 		.post_doit = dpll_post_doit,
 		.policy = dpll_device_set_nl_policy,
-		.maxattr = DPLL_A_PHASE_OFFSET_MONITOR,
+		.maxattr = DPLL_A_PHASE_OFFSET_AVG_FACTOR,
 		.flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
 	},
 	{
@@ -140,7 +147,7 @@ static const struct genl_split_ops dpll_nl_ops[] = {
 		.doit = dpll_nl_pin_set_doit,
 		.post_doit = dpll_pin_post_doit,
 		.policy = dpll_pin_set_nl_policy,
-		.maxattr = DPLL_A_PIN_ESYNC_FREQUENCY,
+		.maxattr = DPLL_A_PIN_REFERENCE_SYNC,
 		.flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
 	},
 };
@@ -14,6 +14,7 @@
 /* Common nested types */
 extern const struct nla_policy dpll_pin_parent_device_nl_policy[DPLL_A_PIN_PHASE_OFFSET + 1];
 extern const struct nla_policy dpll_pin_parent_pin_nl_policy[DPLL_A_PIN_STATE + 1];
+extern const struct nla_policy dpll_reference_sync_nl_policy[DPLL_A_PIN_STATE + 1];

 int dpll_lock_doit(const struct genl_split_ops *ops, struct sk_buff *skb,
 		   struct genl_info *info);
@@ -1,7 +1,8 @@
 # SPDX-License-Identifier: GPL-2.0

 obj-$(CONFIG_ZL3073X) += zl3073x.o
-zl3073x-objs := core.o devlink.o dpll.o flash.o fw.o prop.o
+zl3073x-objs := core.o devlink.o dpll.o flash.o fw.o \
+		out.o prop.o ref.o synth.o

 obj-$(CONFIG_ZL3073X_I2C) += zl3073x_i2c.o
 zl3073x_i2c-objs := i2c.o
@@ -129,47 +129,6 @@ const struct regmap_config zl3073x_regmap_config = {
 };
 EXPORT_SYMBOL_NS_GPL(zl3073x_regmap_config, "ZL3073X");

-/**
- * zl3073x_ref_freq_factorize - factorize given frequency
- * @freq: input frequency
- * @base: base frequency
- * @mult: multiplier
- *
- * Checks if the given frequency can be factorized using one of the
- * supported base frequencies. If so the base frequency and multiplier
- * are stored into appropriate parameters if they are not NULL.
- *
- * Return: 0 on success, -EINVAL if the frequency cannot be factorized
- */
-int
-zl3073x_ref_freq_factorize(u32 freq, u16 *base, u16 *mult)
-{
-	static const u16 base_freqs[] = {
-		1, 2, 4, 5, 8, 10, 16, 20, 25, 32, 40, 50, 64, 80, 100, 125,
-		128, 160, 200, 250, 256, 320, 400, 500, 625, 640, 800, 1000,
-		1250, 1280, 1600, 2000, 2500, 3125, 3200, 4000, 5000, 6250,
-		6400, 8000, 10000, 12500, 15625, 16000, 20000, 25000, 31250,
-		32000, 40000, 50000, 62500,
-	};
-	u32 div;
-	int i;
-
-	for (i = 0; i < ARRAY_SIZE(base_freqs); i++) {
-		div = freq / base_freqs[i];
-
-		if (div <= U16_MAX && (freq % base_freqs[i]) == 0) {
-			if (base)
-				*base = base_freqs[i];
-			if (mult)
-				*mult = div;
-
-			return 0;
-		}
-	}
-
-	return -EINVAL;
-}
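The factorization removed above (it moves into the new ref.c in this commit) is self-contained enough to sketch outside the kernel. The following user-space C translation is not part of the commit; the function name is hypothetical, but the base-frequency table and the first-fit search mirror the driver code:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical user-space re-implementation of the logic in
 * zl3073x_ref_freq_factorize(): pick the first supported base frequency
 * that divides freq evenly with a multiplier fitting into 16 bits. */
static const uint16_t base_freqs[] = {
	1, 2, 4, 5, 8, 10, 16, 20, 25, 32, 40, 50, 64, 80, 100, 125,
	128, 160, 200, 250, 256, 320, 400, 500, 625, 640, 800, 1000,
	1250, 1280, 1600, 2000, 2500, 3125, 3200, 4000, 5000, 6250,
	6400, 8000, 10000, 12500, 15625, 16000, 20000, 25000, 31250,
	32000, 40000, 50000, 62500,
};

static int ref_freq_factorize(uint32_t freq, uint16_t *base, uint16_t *mult)
{
	size_t i;

	for (i = 0; i < sizeof(base_freqs) / sizeof(base_freqs[0]); i++) {
		uint32_t div = freq / base_freqs[i];

		if (div <= UINT16_MAX && freq % base_freqs[i] == 0) {
			/* Store the factorization only if requested */
			if (base)
				*base = base_freqs[i];
			if (mult)
				*mult = (uint16_t)div;
			return 0;
		}
	}
	return -1;	/* the driver returns -EINVAL here */
}
```

For example, 10 MHz factorizes as base 160 with multiplier 62500: every smaller base in the table either does not divide 10000000 evenly or would need a multiplier above 65535.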

 static bool
 zl3073x_check_reg(struct zl3073x_dev *zldev, unsigned int reg, size_t size)
 {
@@ -593,190 +552,6 @@ int zl3073x_write_hwreg_seq(struct zl3073x_dev *zldev,
 	return rc;
 }

-/**
- * zl3073x_ref_state_fetch - get input reference state
- * @zldev: pointer to zl3073x_dev structure
- * @index: input reference index to fetch state for
- *
- * Function fetches information for the given input reference that are
- * invariant and stores them for later use.
- *
- * Return: 0 on success, <0 on error
- */
-static int
-zl3073x_ref_state_fetch(struct zl3073x_dev *zldev, u8 index)
-{
-	struct zl3073x_ref *input = &zldev->ref[index];
-	u8 ref_config;
-	int rc;
-
-	/* If the input is differential then the configuration for N-pin
-	 * reference is ignored and P-pin config is used for both.
-	 */
-	if (zl3073x_is_n_pin(index) &&
-	    zl3073x_ref_is_diff(zldev, index - 1)) {
-		input->enabled = zl3073x_ref_is_enabled(zldev, index - 1);
-		input->diff = true;
-
-		return 0;
-	}
-
-	guard(mutex)(&zldev->multiop_lock);
-
-	/* Read reference configuration */
-	rc = zl3073x_mb_op(zldev, ZL_REG_REF_MB_SEM, ZL_REF_MB_SEM_RD,
-			   ZL_REG_REF_MB_MASK, BIT(index));
-	if (rc)
-		return rc;
-
-	/* Read ref_config register */
-	rc = zl3073x_read_u8(zldev, ZL_REG_REF_CONFIG, &ref_config);
-	if (rc)
-		return rc;
-
-	input->enabled = FIELD_GET(ZL_REF_CONFIG_ENABLE, ref_config);
-	input->diff = FIELD_GET(ZL_REF_CONFIG_DIFF_EN, ref_config);
-
-	dev_dbg(zldev->dev, "REF%u is %s and configured as %s\n", index,
-		str_enabled_disabled(input->enabled),
-		input->diff ? "differential" : "single-ended");
-
-	return rc;
-}
-
-/**
- * zl3073x_out_state_fetch - get output state
- * @zldev: pointer to zl3073x_dev structure
- * @index: output index to fetch state for
- *
- * Function fetches information for the given output (not output pin)
- * that are invariant and stores them for later use.
- *
- * Return: 0 on success, <0 on error
- */
-static int
-zl3073x_out_state_fetch(struct zl3073x_dev *zldev, u8 index)
-{
-	struct zl3073x_out *out = &zldev->out[index];
-	u8 output_ctrl, output_mode;
-	int rc;
-
-	/* Read output configuration */
-	rc = zl3073x_read_u8(zldev, ZL_REG_OUTPUT_CTRL(index), &output_ctrl);
-	if (rc)
-		return rc;
-
-	/* Store info about output enablement and synthesizer the output
-	 * is connected to.
-	 */
-	out->enabled = FIELD_GET(ZL_OUTPUT_CTRL_EN, output_ctrl);
-	out->synth = FIELD_GET(ZL_OUTPUT_CTRL_SYNTH_SEL, output_ctrl);
-
-	dev_dbg(zldev->dev, "OUT%u is %s and connected to SYNTH%u\n", index,
-		str_enabled_disabled(out->enabled), out->synth);
-
-	guard(mutex)(&zldev->multiop_lock);
-
-	/* Read output configuration */
-	rc = zl3073x_mb_op(zldev, ZL_REG_OUTPUT_MB_SEM, ZL_OUTPUT_MB_SEM_RD,
-			   ZL_REG_OUTPUT_MB_MASK, BIT(index));
-	if (rc)
-		return rc;
-
-	/* Read output_mode */
-	rc = zl3073x_read_u8(zldev, ZL_REG_OUTPUT_MODE, &output_mode);
-	if (rc)
-		return rc;
-
-	/* Extract and store output signal format */
-	out->signal_format = FIELD_GET(ZL_OUTPUT_MODE_SIGNAL_FORMAT,
-				       output_mode);
-
-	dev_dbg(zldev->dev, "OUT%u has signal format 0x%02x\n", index,
-		out->signal_format);
-
-	return rc;
-}
-
-/**
- * zl3073x_synth_state_fetch - get synth state
- * @zldev: pointer to zl3073x_dev structure
- * @index: synth index to fetch state for
- *
- * Function fetches information for the given synthesizer that are
- * invariant and stores them for later use.
- *
- * Return: 0 on success, <0 on error
- */
-static int
-zl3073x_synth_state_fetch(struct zl3073x_dev *zldev, u8 index)
-{
-	struct zl3073x_synth *synth = &zldev->synth[index];
-	u16 base, m, n;
-	u8 synth_ctrl;
-	u32 mult;
-	int rc;
-
-	/* Read synth control register */
-	rc = zl3073x_read_u8(zldev, ZL_REG_SYNTH_CTRL(index), &synth_ctrl);
-	if (rc)
-		return rc;
-
-	/* Store info about synth enablement and DPLL channel the synth is
-	 * driven by.
-	 */
-	synth->enabled = FIELD_GET(ZL_SYNTH_CTRL_EN, synth_ctrl);
-	synth->dpll = FIELD_GET(ZL_SYNTH_CTRL_DPLL_SEL, synth_ctrl);
-
-	dev_dbg(zldev->dev, "SYNTH%u is %s and driven by DPLL%u\n", index,
-		str_enabled_disabled(synth->enabled), synth->dpll);
-
-	guard(mutex)(&zldev->multiop_lock);
-
-	/* Read synth configuration */
-	rc = zl3073x_mb_op(zldev, ZL_REG_SYNTH_MB_SEM, ZL_SYNTH_MB_SEM_RD,
-			   ZL_REG_SYNTH_MB_MASK, BIT(index));
-	if (rc)
-		return rc;
-
-	/* The output frequency is determined by the following formula:
-	 * base * multiplier * numerator / denominator
-	 *
-	 * Read registers with these values
-	 */
-	rc = zl3073x_read_u16(zldev, ZL_REG_SYNTH_FREQ_BASE, &base);
-	if (rc)
-		return rc;
-
-	rc = zl3073x_read_u32(zldev, ZL_REG_SYNTH_FREQ_MULT, &mult);
-	if (rc)
-		return rc;
-
-	rc = zl3073x_read_u16(zldev, ZL_REG_SYNTH_FREQ_M, &m);
-	if (rc)
-		return rc;
-
-	rc = zl3073x_read_u16(zldev, ZL_REG_SYNTH_FREQ_N, &n);
-	if (rc)
-		return rc;
-
-	/* Check denominator for zero to avoid div by 0 */
-	if (!n) {
-		dev_err(zldev->dev,
-			"Zero divisor for SYNTH%u retrieved from device\n",
-			index);
-		return -EINVAL;
-	}
-
-	/* Compute and store synth frequency */
-	zldev->synth[index].freq = div_u64(mul_u32_u32(base * m, mult), n);
-
-	dev_dbg(zldev->dev, "SYNTH%u frequency: %u Hz\n", index,
-		zldev->synth[index].freq);
-
-	return rc;
-}
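The synth frequency computation removed above (this commit keeps it, moved into the new synth.c) follows the formula in its comment: output frequency = base * multiplier * numerator / denominator, rejecting a zero denominator read from the device. A minimal stand-alone sketch, with a hypothetical function name and plain 64-bit arithmetic in place of the kernel's mul_u32_u32()/div_u64() helpers:

```c
#include <stdint.h>

/* Hypothetical user-space version of the synth frequency computation:
 * freq = base * m * mult / n, rejecting n == 0 as the driver does. */
static int synth_freq_compute(uint16_t base, uint32_t mult, uint16_t m,
			      uint16_t n, uint32_t *freq)
{
	/* Guard against division by zero, mirroring the -EINVAL path */
	if (!n)
		return -1;

	/* base * m fits in 32 bits (both operands are 16-bit); widen
	 * before multiplying by mult, as mul_u32_u32() does. */
	*freq = (uint32_t)(((uint64_t)base * m * mult) / n);
	return 0;
}
```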

 static int
 zl3073x_dev_state_fetch(struct zl3073x_dev *zldev)
 {
@@ -816,6 +591,21 @@ zl3073x_dev_state_fetch(struct zl3073x_dev *zldev)
 	return rc;
 }

+static void
+zl3073x_dev_ref_status_update(struct zl3073x_dev *zldev)
+{
+	int i, rc;
+
+	for (i = 0; i < ZL3073X_NUM_REFS; i++) {
+		rc = zl3073x_read_u8(zldev, ZL_REG_REF_MON_STATUS(i),
+				     &zldev->ref[i].mon_status);
+		if (rc)
+			dev_warn(zldev->dev,
+				 "Failed to get REF%u status: %pe\n", i,
+				 ERR_PTR(rc));
+	}
+}
+
 /**
  * zl3073x_ref_phase_offsets_update - update reference phase offsets
  * @zldev: pointer to zl3073x_dev structure
@@ -935,6 +725,9 @@ zl3073x_dev_periodic_work(struct kthread_work *work)
 	struct zl3073x_dpll *zldpll;
 	int rc;

+	/* Update input references status */
+	zl3073x_dev_ref_status_update(zldev);
+
 	/* Update DPLL-to-connected-ref phase offsets registers */
 	rc = zl3073x_ref_phase_offsets_update(zldev, -1);
 	if (rc)
@@ -956,6 +749,32 @@ zl3073x_dev_periodic_work(struct kthread_work *work)
 		msecs_to_jiffies(500));
 }

+int zl3073x_dev_phase_avg_factor_set(struct zl3073x_dev *zldev, u8 factor)
+{
+	u8 dpll_meas_ctrl, value;
+	int rc;
+
+	/* Read DPLL phase measurement control register */
+	rc = zl3073x_read_u8(zldev, ZL_REG_DPLL_MEAS_CTRL, &dpll_meas_ctrl);
+	if (rc)
+		return rc;
+
+	/* Convert requested factor to register value */
+	value = (factor + 1) & 0x0f;
+
+	/* Update phase measurement control register */
+	dpll_meas_ctrl &= ~ZL_DPLL_MEAS_CTRL_AVG_FACTOR;
+	dpll_meas_ctrl |= FIELD_PREP(ZL_DPLL_MEAS_CTRL_AVG_FACTOR, value);
+	rc = zl3073x_write_u8(zldev, ZL_REG_DPLL_MEAS_CTRL, dpll_meas_ctrl);
+	if (rc)
+		return rc;
+
+	/* Save the new factor */
+	zldev->phase_avg_factor = factor;
+
+	return 0;
+}
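The documentation update in this patch defines the reported phase offset as an exponential average, curr_avg = prev_avg * (2^N - 1)/2^N + new_val / 2^N, and the function added above maps the requested factor into the 4-bit AVG_FACTOR field as (factor + 1) & 0x0f. A small model of both steps, outside the kernel; the function names are hypothetical and the shifts approximate the divisions for illustration (exact for the non-negative values tested here):

```c
#include <stdint.h>

/* Model of the averaging described in the DPLL docs: an exponential
 * moving average giving the new sample a weight of 1/2^n,
 * curr = prev - prev/2^n + new/2^n. */
static int64_t phase_avg_update(int64_t prev, int64_t new_val, unsigned int n)
{
	return prev - (prev >> n) + (new_val >> n);
}

/* Mirror of the conversion in zl3073x_dev_phase_avg_factor_set():
 * the 4-bit AVG_FACTOR register field stores factor + 1. */
static uint8_t avg_factor_to_reg(uint8_t factor)
{
	return (factor + 1) & 0x0f;
}
```

With factor N = 2 the new measurement contributes one quarter of the reported value, so a steady input leaves the average unchanged while a step change is absorbed gradually.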

 /**
  * zl3073x_dev_phase_meas_setup - setup phase offset measurement
  * @zldev: pointer to zl3073x_dev structure
@@ -972,15 +791,16 @@ zl3073x_dev_phase_meas_setup(struct zl3073x_dev *zldev)
 	u8 dpll_meas_ctrl, mask = 0;
 	int rc;

+	/* Setup phase measurement averaging factor */
+	rc = zl3073x_dev_phase_avg_factor_set(zldev, zldev->phase_avg_factor);
+	if (rc)
+		return rc;
+
 	/* Read DPLL phase measurement control register */
 	rc = zl3073x_read_u8(zldev, ZL_REG_DPLL_MEAS_CTRL, &dpll_meas_ctrl);
 	if (rc)
 		return rc;

-	/* Setup phase measurement averaging factor */
-	dpll_meas_ctrl &= ~ZL_DPLL_MEAS_CTRL_AVG_FACTOR;
-	dpll_meas_ctrl |= FIELD_PREP(ZL_DPLL_MEAS_CTRL_AVG_FACTOR, 3);
-
 	/* Enable DPLL measurement block */
 	dpll_meas_ctrl |= ZL_DPLL_MEAS_CTRL_EN;

@@ -1229,6 +1049,9 @@ int zl3073x_dev_probe(struct zl3073x_dev *zldev,
 	 */
 	zldev->clock_id = get_random_u64();

+	/* Default phase offset averaging factor */
+	zldev->phase_avg_factor = 2;
+
 	/* Initialize mutex for operations where multiple reads, writes
 	 * and/or polls are required to be done atomically.
 	 */
@@ -9,7 +9,10 @@
 #include <linux/mutex.h>
 #include <linux/types.h>

+#include "out.h"
+#include "ref.h"
 #include "regs.h"
+#include "synth.h"

 struct device;
 struct regmap;
@@ -27,60 +30,24 @@ struct zl3073x_dpll;
 #define ZL3073X_NUM_PINS	(ZL3073X_NUM_INPUT_PINS + \
 				 ZL3073X_NUM_OUTPUT_PINS)

-/**
- * struct zl3073x_ref - input reference invariant info
- * @enabled: input reference is enabled or disabled
- * @diff: true if input reference is differential
- * @ffo: current fractional frequency offset
- */
-struct zl3073x_ref {
-	bool enabled;
-	bool diff;
-	s64 ffo;
-};
-
-/**
- * struct zl3073x_out - output invariant info
- * @enabled: out is enabled or disabled
- * @synth: synthesizer the out is connected to
- * @signal_format: out signal format
- */
-struct zl3073x_out {
-	bool enabled;
-	u8 synth;
-	u8 signal_format;
-};
-
-/**
- * struct zl3073x_synth - synthesizer invariant info
- * @freq: synthesizer frequency
- * @dpll: ID of DPLL the synthesizer is driven by
- * @enabled: synth is enabled or disabled
- */
-struct zl3073x_synth {
-	u32 freq;
-	u8 dpll;
-	bool enabled;
-};
-
 /**
  * struct zl3073x_dev - zl3073x device
  * @dev: pointer to device
  * @regmap: regmap to access device registers
  * @multiop_lock: to serialize multiple register operations
- * @clock_id: clock id of the device
  * @ref: array of input references' invariants
  * @out: array of outs' invariants
  * @synth: array of synths' invariants
  * @dplls: list of DPLLs
  * @kworker: thread for periodic work
  * @work: periodic work
+ * @clock_id: clock id of the device
+ * @phase_avg_factor: phase offset measurement averaging factor
  */
 struct zl3073x_dev {
 	struct device *dev;
 	struct regmap *regmap;
 	struct mutex multiop_lock;
-	u64 clock_id;

 	/* Invariants */
 	struct zl3073x_ref ref[ZL3073X_NUM_REFS];
@@ -93,6 +60,10 @@ struct zl3073x_dev {
 	/* Monitor */
 	struct kthread_worker *kworker;
 	struct kthread_delayed_work work;
+
+	/* Devlink parameters */
+	u64 clock_id;
+	u8 phase_avg_factor;
 };

 struct zl3073x_chip_info {
@@ -115,6 +86,13 @@ int zl3073x_dev_probe(struct zl3073x_dev *zldev,
 int zl3073x_dev_start(struct zl3073x_dev *zldev, bool full);
 void zl3073x_dev_stop(struct zl3073x_dev *zldev);

+static inline u8 zl3073x_dev_phase_avg_factor_get(struct zl3073x_dev *zldev)
+{
+	return zldev->phase_avg_factor;
+}
+
+int zl3073x_dev_phase_avg_factor_set(struct zl3073x_dev *zldev, u8 factor);
+
 /**********************
  * Registers operations
  **********************/
@@ -164,7 +142,6 @@ int zl3073x_write_hwreg_seq(struct zl3073x_dev *zldev,
  * Misc operations
  *****************/

-int zl3073x_ref_freq_factorize(u32 freq, u16 *base, u16 *mult);
 int zl3073x_ref_phase_offsets_update(struct zl3073x_dev *zldev, int channel);

 static inline bool
@@ -206,172 +183,141 @@ zl3073x_output_pin_out_get(u8 id)
 }

 /**
- * zl3073x_ref_ffo_get - get current fractional frequency offset
+ * zl3073x_dev_ref_freq_get - get input reference frequency
  * @zldev: pointer to zl3073x device
  * @index: input reference index
  *
- * Return: the latest measured fractional frequency offset
+ * Return: frequency of given input reference
  */
-static inline s64
-zl3073x_ref_ffo_get(struct zl3073x_dev *zldev, u8 index)
+static inline u32
+zl3073x_dev_ref_freq_get(struct zl3073x_dev *zldev, u8 index)
 {
-	return zldev->ref[index].ffo;
+	const struct zl3073x_ref *ref = zl3073x_ref_state_get(zldev, index);
+
+	return zl3073x_ref_freq_get(ref);
 }

 /**
- * zl3073x_ref_is_diff - check if the given input reference is differential
+ * zl3073x_dev_ref_is_diff - check if the given input reference is differential
  * @zldev: pointer to zl3073x device
  * @index: input reference index
  *
  * Return: true if reference is differential, false if reference is single-ended
  */
 static inline bool
-zl3073x_ref_is_diff(struct zl3073x_dev *zldev, u8 index)
+zl3073x_dev_ref_is_diff(struct zl3073x_dev *zldev, u8 index)
 {
-	return zldev->ref[index].diff;
+	const struct zl3073x_ref *ref = zl3073x_ref_state_get(zldev, index);
+
+	return zl3073x_ref_is_diff(ref);
 }

-/**
- * zl3073x_ref_is_enabled - check if the given input reference is enabled
+/*
+ * zl3073x_dev_ref_is_status_ok - check the given input reference status
  * @zldev: pointer to zl3073x device
  * @index: input reference index
  *
- * Return: true if input refernce is enabled, false otherwise
+ * Return: true if the status is ok, false otherwise
  */
 static inline bool
-zl3073x_ref_is_enabled(struct zl3073x_dev *zldev, u8 index)
+zl3073x_dev_ref_is_status_ok(struct zl3073x_dev *zldev, u8 index)
 {
-	return zldev->ref[index].enabled;
+	const struct zl3073x_ref *ref = zl3073x_ref_state_get(zldev, index);
+
+	return zl3073x_ref_is_status_ok(ref);
 }

-/**
- * zl3073x_synth_dpll_get - get DPLL ID the synth is driven by
- * @zldev: pointer to zl3073x device
- * @index: synth index
- *
- * Return: ID of DPLL the given synthetizer is driven by
- */
-static inline u8
-zl3073x_synth_dpll_get(struct zl3073x_dev *zldev, u8 index)
-{
-	return zldev->synth[index].dpll;
-}
-
 /**
- * zl3073x_synth_freq_get - get synth current freq
+ * zl3073x_dev_synth_freq_get - get synth current freq
  * @zldev: pointer to zl3073x device
  * @index: synth index
  *
  * Return: frequency of given synthetizer
  */
 static inline u32
-zl3073x_synth_freq_get(struct zl3073x_dev *zldev, u8 index)
+zl3073x_dev_synth_freq_get(struct zl3073x_dev *zldev, u8 index)
 {
-	return zldev->synth[index].freq;
+	const struct zl3073x_synth *synth;
+
+	synth = zl3073x_synth_state_get(zldev, index);
+	return zl3073x_synth_freq_get(synth);
 }

-/**
- * zl3073x_synth_is_enabled - check if the given synth is enabled
- * @zldev: pointer to zl3073x device
- * @index: synth index
- *
- * Return: true if synth is enabled, false otherwise
- */
-static inline bool
-zl3073x_synth_is_enabled(struct zl3073x_dev *zldev, u8 index)
-{
-	return zldev->synth[index].enabled;
-}
-
 /**
- * zl3073x_out_synth_get - get synth connected to given output
+ * zl3073x_dev_out_synth_get - get synth connected to given output
  * @zldev: pointer to zl3073x device
  * @index: output index
  *
  * Return: index of synth connected to given output.
  */
 static inline u8
-zl3073x_out_synth_get(struct zl3073x_dev *zldev, u8 index)
+zl3073x_dev_out_synth_get(struct zl3073x_dev *zldev, u8 index)
 {
-	return zldev->out[index].synth;
+	const struct zl3073x_out *out = zl3073x_out_state_get(zldev, index);
+
+	return zl3073x_out_synth_get(out);
 }

 /**
- * zl3073x_out_is_enabled - check if the given output is enabled
+ * zl3073x_dev_out_is_enabled - check if the given output is enabled
  * @zldev: pointer to zl3073x device
  * @index: output index
  *
  * Return: true if the output is enabled, false otherwise
  */
 static inline bool
-zl3073x_out_is_enabled(struct zl3073x_dev *zldev, u8 index)
+zl3073x_dev_out_is_enabled(struct zl3073x_dev *zldev, u8 index)
 {
-	u8 synth;
+	const struct zl3073x_out *out = zl3073x_out_state_get(zldev, index);
+	const struct zl3073x_synth *synth;
+	u8 synth_id;

 	/* Output is enabled only if associated synth is enabled */
-	synth = zl3073x_out_synth_get(zldev, index);
-	if (zl3073x_synth_is_enabled(zldev, synth))
-		return zldev->out[index].enabled;
+	synth_id = zl3073x_out_synth_get(out);
+	synth = zl3073x_synth_state_get(zldev, synth_id);

-	return false;
+	return zl3073x_synth_is_enabled(synth) && zl3073x_out_is_enabled(out);
 }

-/**
- * zl3073x_out_signal_format_get - get output signal format
- * @zldev: pointer to zl3073x device
- * @index: output index
- *
- * Return: signal format of given output
- */
-static inline u8
-zl3073x_out_signal_format_get(struct zl3073x_dev *zldev, u8 index)
-{
-	return zldev->out[index].signal_format;
-}
-
 /**
- * zl3073x_out_dpll_get - get DPLL ID the output is driven by
+ * zl3073x_dev_out_dpll_get - get DPLL ID the output is driven by
  * @zldev: pointer to zl3073x device
  * @index: output index
  *
  * Return: ID of DPLL the given output is driven by
  */
 static inline
-u8 zl3073x_out_dpll_get(struct zl3073x_dev *zldev, u8 index)
+u8 zl3073x_dev_out_dpll_get(struct zl3073x_dev *zldev, u8 index)
 {
-	u8 synth;
+	const struct zl3073x_out *out = zl3073x_out_state_get(zldev, index);
+	const struct zl3073x_synth *synth;
+	u8 synth_id;

 	/* Get synthesizer connected to given output */
-	synth = zl3073x_out_synth_get(zldev, index);
+	synth_id = zl3073x_out_synth_get(out);
+	synth = zl3073x_synth_state_get(zldev, synth_id);

 	/* Return DPLL that drives the synth */
-	return zl3073x_synth_dpll_get(zldev, synth);
+	return zl3073x_synth_dpll_get(synth);
 }

 /**
- * zl3073x_out_is_diff - check if the given output is differential
+ * zl3073x_dev_out_is_diff - check if the given output is differential
  * @zldev: pointer to zl3073x device
  * @index: output index
  *
  * Return: true if output is differential, false if output is single-ended
  */
 static inline bool
-zl3073x_out_is_diff(struct zl3073x_dev *zldev, u8 index)
+zl3073x_dev_out_is_diff(struct zl3073x_dev *zldev, u8 index)
 {
-	switch (zl3073x_out_signal_format_get(zldev, index)) {
-	case ZL_OUTPUT_MODE_SIGNAL_FORMAT_LVDS:
-	case ZL_OUTPUT_MODE_SIGNAL_FORMAT_DIFF:
-	case ZL_OUTPUT_MODE_SIGNAL_FORMAT_LOWVCM:
-		return true;
-	default:
-		break;
-	}
+	const struct zl3073x_out *out = zl3073x_out_state_get(zldev, index);

-	return false;
+	return zl3073x_out_is_diff(out);
 }

 /**
- * zl3073x_output_pin_is_enabled - check if the given output pin is enabled
+ * zl3073x_dev_output_pin_is_enabled - check if the given output pin is enabled
  * @zldev: pointer to zl3073x device
  * @id: output pin id
  *
@@ -381,16 +327,21 @@ zl3073x_out_is_diff(struct zl3073x_dev *zldev, u8 index)
  * Return: true if output pin is enabled, false if output pin is disabled
  */
 static inline bool
-zl3073x_output_pin_is_enabled(struct zl3073x_dev *zldev, u8 id)
+zl3073x_dev_output_pin_is_enabled(struct zl3073x_dev *zldev, u8 id)
 {
-	u8 output = zl3073x_output_pin_out_get(id);
+	u8 out_id = zl3073x_output_pin_out_get(id);
+	const struct zl3073x_out *out;

-	/* Check if the whole output is enabled */
-	if (!zl3073x_out_is_enabled(zldev, output))
+	out = zl3073x_out_state_get(zldev, out_id);
+
+	/* Check if the output is enabled - call _dev_ helper that
+	 * additionally checks for attached synth enablement.
+	 */
+	if (!zl3073x_dev_out_is_enabled(zldev, out_id))
 		return false;

 	/* Check signal format */
-	switch (zl3073x_out_signal_format_get(zldev, output)) {
+	switch (zl3073x_out_signal_format_get(out)) {
 	case ZL_OUTPUT_MODE_SIGNAL_FORMAT_DISABLED:
 		/* Both output pins are disabled by signal format */
 		return false;
File diff suppressed because it is too large
@@ -20,6 +20,7 @@
  * @dpll_dev: pointer to registered DPLL device
  * @lock_status: last saved DPLL lock status
  * @pins: list of pins
+ * @change_work: device change notification work
  */
 struct zl3073x_dpll {
 	struct list_head list;
@@ -32,6 +33,7 @@ struct zl3073x_dpll {
 	struct dpll_device *dpll_dev;
 	enum dpll_lock_status lock_status;
 	struct list_head pins;
+	struct work_struct change_work;
 };

 struct zl3073x_dpll *zl3073x_dpll_alloc(struct zl3073x_dev *zldev, u8 ch);

@@ -352,12 +352,12 @@ struct zl3073x_fw *zl3073x_fw_load(struct zl3073x_dev *zldev, const char *data,
 }

 /**
- * zl3073x_flash_bundle_flash - Flash all components
+ * zl3073x_fw_component_flash - Flash all components
  * @zldev: zl3073x device structure
- * @components: pointer to components array
+ * @comp: pointer to components array
  * @extack: netlink extack pointer to report errors
  *
- * Returns 0 in case of success or negative number otherwise.
+ * Return: 0 in case of success or negative number otherwise.
  */
 static int
 zl3073x_fw_component_flash(struct zl3073x_dev *zldev,
157	drivers/dpll/zl3073x/out.c	(new file)
@@ -0,0 +1,157 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include <linux/bitfield.h>
+#include <linux/cleanup.h>
+#include <linux/dev_printk.h>
+#include <linux/string.h>
+#include <linux/string_choices.h>
+#include <linux/types.h>
+
+#include "core.h"
+#include "out.h"
+
+/**
+ * zl3073x_out_state_fetch - fetch output state from hardware
+ * @zldev: pointer to zl3073x_dev structure
+ * @index: output index to fetch state for
+ *
+ * Function fetches state of the given output from hardware and stores it
+ * for later use.
+ *
+ * Return: 0 on success, <0 on error
+ */
+int zl3073x_out_state_fetch(struct zl3073x_dev *zldev, u8 index)
+{
+	struct zl3073x_out *out = &zldev->out[index];
+	int rc;
+
+	/* Read output configuration */
+	rc = zl3073x_read_u8(zldev, ZL_REG_OUTPUT_CTRL(index), &out->ctrl);
+	if (rc)
+		return rc;
+
+	dev_dbg(zldev->dev, "OUT%u is %s and connected to SYNTH%u\n", index,
+		str_enabled_disabled(zl3073x_out_is_enabled(out)),
+		zl3073x_out_synth_get(out));
+
+	guard(mutex)(&zldev->multiop_lock);
+
+	/* Read output configuration */
+	rc = zl3073x_mb_op(zldev, ZL_REG_OUTPUT_MB_SEM, ZL_OUTPUT_MB_SEM_RD,
+			   ZL_REG_OUTPUT_MB_MASK, BIT(index));
+	if (rc)
+		return rc;
+
+	/* Read output mode */
+	rc = zl3073x_read_u8(zldev, ZL_REG_OUTPUT_MODE, &out->mode);
+	if (rc)
+		return rc;
+
+	dev_dbg(zldev->dev, "OUT%u has signal format 0x%02x\n", index,
+		zl3073x_out_signal_format_get(out));
+
+	/* Read output divisor */
+	rc = zl3073x_read_u32(zldev, ZL_REG_OUTPUT_DIV, &out->div);
+	if (rc)
+		return rc;
+
+	if (!out->div) {
+		dev_err(zldev->dev, "Zero divisor for OUT%u got from device\n",
+			index);
+		return -EINVAL;
+	}
+
+	dev_dbg(zldev->dev, "OUT%u divisor: %u\n", index, out->div);
+
+	/* Read output width */
+	rc = zl3073x_read_u32(zldev, ZL_REG_OUTPUT_WIDTH, &out->width);
+	if (rc)
+		return rc;
+
+	rc = zl3073x_read_u32(zldev, ZL_REG_OUTPUT_ESYNC_PERIOD,
+			      &out->esync_n_period);
+	if (rc)
+		return rc;
+
+	if (!out->esync_n_period) {
+		dev_err(zldev->dev,
+			"Zero esync divisor for OUT%u got from device\n",
+			index);
+		return -EINVAL;
+	}
+
+	rc = zl3073x_read_u32(zldev, ZL_REG_OUTPUT_ESYNC_WIDTH,
+			      &out->esync_n_width);
+	if (rc)
+		return rc;
+
+	rc = zl3073x_read_u32(zldev, ZL_REG_OUTPUT_PHASE_COMP,
+			      &out->phase_comp);
+	if (rc)
+		return rc;
+
+	return rc;
+}
+
+/**
+ * zl3073x_out_state_get - get current output state
+ * @zldev: pointer to zl3073x_dev structure
+ * @index: output index to get state for
+ *
+ * Return: pointer to given output state
+ */
+const struct zl3073x_out *zl3073x_out_state_get(struct zl3073x_dev *zldev,
+						u8 index)
+{
+	return &zldev->out[index];
+}
+
+int zl3073x_out_state_set(struct zl3073x_dev *zldev, u8 index,
+			  const struct zl3073x_out *out)
+{
+	struct zl3073x_out *dout = &zldev->out[index];
+	int rc;
+
+	guard(mutex)(&zldev->multiop_lock);
+
+	/* Read output configuration into mailbox */
+	rc = zl3073x_mb_op(zldev, ZL_REG_OUTPUT_MB_SEM, ZL_OUTPUT_MB_SEM_RD,
+			   ZL_REG_OUTPUT_MB_MASK, BIT(index));
+	if (rc)
+		return rc;
+
+	/* Update mailbox with changed values */
+	if (dout->div != out->div)
+		rc = zl3073x_write_u32(zldev, ZL_REG_OUTPUT_DIV, out->div);
+	if (!rc && dout->width != out->width)
+		rc = zl3073x_write_u32(zldev, ZL_REG_OUTPUT_WIDTH, out->width);
+	if (!rc && dout->esync_n_period != out->esync_n_period)
+		rc = zl3073x_write_u32(zldev, ZL_REG_OUTPUT_ESYNC_PERIOD,
+				       out->esync_n_period);
+	if (!rc && dout->esync_n_width != out->esync_n_width)
+		rc = zl3073x_write_u32(zldev, ZL_REG_OUTPUT_ESYNC_WIDTH,
+				       out->esync_n_width);
+	if (!rc && dout->mode != out->mode)
+		rc = zl3073x_write_u8(zldev, ZL_REG_OUTPUT_MODE, out->mode);
+	if (!rc && dout->phase_comp != out->phase_comp)
+		rc = zl3073x_write_u32(zldev, ZL_REG_OUTPUT_PHASE_COMP,
+				       out->phase_comp);
+	if (rc)
+		return rc;
+
+	/* Commit output configuration */
+	rc = zl3073x_mb_op(zldev, ZL_REG_OUTPUT_MB_SEM, ZL_OUTPUT_MB_SEM_WR,
+			   ZL_REG_OUTPUT_MB_MASK, BIT(index));
+	if (rc)
+		return rc;
+
+	/* After successful commit store new state */
+	dout->div = out->div;
+	dout->width = out->width;
+	dout->esync_n_period = out->esync_n_period;
+	dout->esync_n_width = out->esync_n_width;
+	dout->mode = out->mode;
+	dout->phase_comp = out->phase_comp;
+
+	return 0;
+}
drivers/dpll/zl3073x/out.h (new file, 93 lines)
@@ -0,0 +1,93 @@
/* SPDX-License-Identifier: GPL-2.0-only */

#ifndef _ZL3073X_OUT_H
#define _ZL3073X_OUT_H

#include <linux/bitfield.h>
#include <linux/types.h>

#include "regs.h"

struct zl3073x_dev;

/**
 * struct zl3073x_out - output state
 * @div: output divisor
 * @width: output pulse width
 * @esync_n_period: embedded sync or n-pin period (for n-div formats)
 * @esync_n_width: embedded sync or n-pin pulse width
 * @phase_comp: phase compensation
 * @ctrl: output control
 * @mode: output mode
 */
struct zl3073x_out {
	u32 div;
	u32 width;
	u32 esync_n_period;
	u32 esync_n_width;
	s32 phase_comp;
	u8 ctrl;
	u8 mode;
};

int zl3073x_out_state_fetch(struct zl3073x_dev *zldev, u8 index);
const struct zl3073x_out *zl3073x_out_state_get(struct zl3073x_dev *zldev,
						u8 index);

int zl3073x_out_state_set(struct zl3073x_dev *zldev, u8 index,
			  const struct zl3073x_out *out);

/**
 * zl3073x_out_signal_format_get - get output signal format
 * @out: pointer to out state
 *
 * Return: signal format of given output
 */
static inline u8 zl3073x_out_signal_format_get(const struct zl3073x_out *out)
{
	return FIELD_GET(ZL_OUTPUT_MODE_SIGNAL_FORMAT, out->mode);
}

/**
 * zl3073x_out_is_diff - check if the given output is differential
 * @out: pointer to out state
 *
 * Return: true if output is differential, false if output is single-ended
 */
static inline bool zl3073x_out_is_diff(const struct zl3073x_out *out)
{
	switch (zl3073x_out_signal_format_get(out)) {
	case ZL_OUTPUT_MODE_SIGNAL_FORMAT_LVDS:
	case ZL_OUTPUT_MODE_SIGNAL_FORMAT_DIFF:
	case ZL_OUTPUT_MODE_SIGNAL_FORMAT_LOWVCM:
		return true;
	default:
		break;
	}

	return false;
}

/**
 * zl3073x_out_is_enabled - check if the given output is enabled
 * @out: pointer to out state
 *
 * Return: true if output is enabled, false if output is disabled
 */
static inline bool zl3073x_out_is_enabled(const struct zl3073x_out *out)
{
	return !!FIELD_GET(ZL_OUTPUT_CTRL_EN, out->ctrl);
}

/**
 * zl3073x_out_synth_get - get synth connected to given output
 * @out: pointer to out state
 *
 * Return: index of synth connected to given output.
 */
static inline u8 zl3073x_out_synth_get(const struct zl3073x_out *out)
{
	return FIELD_GET(ZL_OUTPUT_CTRL_SYNTH_SEL, out->ctrl);
}

#endif /* _ZL3073X_OUT_H */
@@ -46,10 +46,10 @@ zl3073x_pin_check_freq(struct zl3073x_dev *zldev, enum dpll_pin_direction dir,
 
 	/* Get output pin synthesizer */
 	out = zl3073x_output_pin_out_get(id);
-	synth = zl3073x_out_synth_get(zldev, out);
+	synth = zl3073x_dev_out_synth_get(zldev, out);
 
 	/* Get synth frequency */
-	synth_freq = zl3073x_synth_freq_get(zldev, synth);
+	synth_freq = zl3073x_dev_synth_freq_get(zldev, synth);
 
 	/* Check the frequency divides synth frequency */
 	if (synth_freq % (u32)freq)
@@ -93,13 +93,13 @@ zl3073x_prop_pin_package_label_set(struct zl3073x_dev *zldev,
 
 		prefix = "REF";
 		ref = zl3073x_input_pin_ref_get(id);
-		is_diff = zl3073x_ref_is_diff(zldev, ref);
+		is_diff = zl3073x_dev_ref_is_diff(zldev, ref);
	} else {
		u8 out;
 
		prefix = "OUT";
		out = zl3073x_output_pin_out_get(id);
-		is_diff = zl3073x_out_is_diff(zldev, out);
+		is_diff = zl3073x_dev_out_is_diff(zldev, out);
	}
 
	if (!is_diff)
@@ -208,7 +208,18 @@ struct zl3073x_pin_props *zl3073x_pin_props_get(struct zl3073x_dev *zldev,
			DPLL_PIN_CAPABILITIES_PRIORITY_CAN_CHANGE |
			DPLL_PIN_CAPABILITIES_STATE_CAN_CHANGE;
	} else {
+		u8 out, synth;
+		u32 f;
+
		props->dpll_props.type = DPLL_PIN_TYPE_GNSS;
+
+		/* The output pin phase adjustment granularity equals half of
+		 * the synth frequency count.
+		 */
+		out = zl3073x_output_pin_out_get(index);
+		synth = zl3073x_dev_out_synth_get(zldev, out);
+		f = 2 * zl3073x_dev_synth_freq_get(zldev, synth);
+		props->dpll_props.phase_gran = f ? div_u64(PSEC_PER_SEC, f) : 1;
	}
 
	props->dpll_props.phase_range.min = S32_MIN;
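The last hunk above derives an output pin's phase-adjust granularity from the synthesizer frequency: half a synth clock period in picoseconds, with a fallback of 1 when the frequency is unknown. A standalone user-space sketch of that arithmetic; the name `phase_gran_ps` is illustrative, not the driver's:

```c
#include <assert.h>
#include <stdint.h>

#define PSEC_PER_SEC 1000000000000ULL

/* Phase-adjust granularity in picoseconds: half of the synth clock
 * period, i.e. PSEC_PER_SEC / (2 * freq), or 1 when the frequency is
 * zero, mirroring the "f ? div_u64(PSEC_PER_SEC, f) : 1" fallback. */
static uint64_t phase_gran_ps(uint32_t synth_freq_hz)
{
	uint64_t f = 2ULL * synth_freq_hz;

	return f ? PSEC_PER_SEC / f : 1;
}
```

For example, a 10 MHz synth has a 100000 ps clock period, so the granularity comes out as 50000 ps.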
drivers/dpll/zl3073x/ref.c (new file, 204 lines)
@@ -0,0 +1,204 @@
// SPDX-License-Identifier: GPL-2.0-only

#include <linux/bitfield.h>
#include <linux/cleanup.h>
#include <linux/dev_printk.h>
#include <linux/string.h>
#include <linux/string_choices.h>
#include <linux/types.h>

#include "core.h"
#include "ref.h"

/**
 * zl3073x_ref_freq_factorize - factorize given frequency
 * @freq: input frequency
 * @base: base frequency
 * @mult: multiplier
 *
 * Checks if the given frequency can be factorized using one of the
 * supported base frequencies. If so, the base frequency and multiplier
 * are stored into the appropriate parameters if they are not NULL.
 *
 * Return: 0 on success, -EINVAL if the frequency cannot be factorized
 */
int
zl3073x_ref_freq_factorize(u32 freq, u16 *base, u16 *mult)
{
	static const u16 base_freqs[] = {
		1, 2, 4, 5, 8, 10, 16, 20, 25, 32, 40, 50, 64, 80, 100, 125,
		128, 160, 200, 250, 256, 320, 400, 500, 625, 640, 800, 1000,
		1250, 1280, 1600, 2000, 2500, 3125, 3200, 4000, 5000, 6250,
		6400, 8000, 10000, 12500, 15625, 16000, 20000, 25000, 31250,
		32000, 40000, 50000, 62500,
	};
	u32 div;
	int i;

	for (i = 0; i < ARRAY_SIZE(base_freqs); i++) {
		div = freq / base_freqs[i];

		if (div <= U16_MAX && (freq % base_freqs[i]) == 0) {
			if (base)
				*base = base_freqs[i];
			if (mult)
				*mult = div;

			return 0;
		}
	}

	return -EINVAL;
}
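The factorizer above splits a requested frequency into a supported base frequency times a multiplier that fits in 16 bits, taking the first base that divides the frequency evenly. A user-space sketch of the same search, using a truncated copy of the base table (the driver's table continues up to 62500):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* First entries of the supported base-frequency table; illustrative
 * subset only, the driver's full table is larger. */
static const uint16_t base_freqs[] = {
	1, 2, 4, 5, 8, 10, 16, 20, 25, 32, 40, 50, 64, 80, 100, 125,
};

/* Find base/mult such that base * mult == freq with mult <= U16_MAX.
 * Returns 0 on success, -1 when no factorization exists. */
static int freq_factorize(uint32_t freq, uint16_t *base, uint16_t *mult)
{
	size_t i;

	for (i = 0; i < sizeof(base_freqs) / sizeof(base_freqs[0]); i++) {
		uint32_t div = freq / base_freqs[i];

		if (div <= UINT16_MAX && freq % base_freqs[i] == 0) {
			*base = base_freqs[i];
			*mult = (uint16_t)div;
			return 0;
		}
	}
	return -1;
}
```

Note the failure mode: a frequency whose only divisors in the table leave a quotient above 65535 (e.g. twice a large prime) cannot be represented and is rejected, which is what maps to -EINVAL in the driver.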

/**
 * zl3073x_ref_state_fetch - fetch input reference state from hardware
 * @zldev: pointer to zl3073x_dev structure
 * @index: input reference index to fetch state for
 *
 * Function fetches state for the given input reference from hardware and
 * stores it for later use.
 *
 * Return: 0 on success, <0 on error
 */
int zl3073x_ref_state_fetch(struct zl3073x_dev *zldev, u8 index)
{
	struct zl3073x_ref *ref = &zldev->ref[index];
	int rc;

	/* For differential type inputs the N-pin reference shares
	 * part of the configuration with the P-pin counterpart.
	 */
	if (zl3073x_is_n_pin(index) && zl3073x_ref_is_diff(ref - 1)) {
		struct zl3073x_ref *p_ref = ref - 1; /* P-pin counterpart */

		/* Copy the shared items from the P-pin */
		ref->config = p_ref->config;
		ref->esync_n_div = p_ref->esync_n_div;
		ref->freq_base = p_ref->freq_base;
		ref->freq_mult = p_ref->freq_mult;
		ref->freq_ratio_m = p_ref->freq_ratio_m;
		ref->freq_ratio_n = p_ref->freq_ratio_n;
		ref->phase_comp = p_ref->phase_comp;
		ref->sync_ctrl = p_ref->sync_ctrl;

		return 0; /* Finish - no non-shared items for now */
	}

	guard(mutex)(&zldev->multiop_lock);

	/* Read reference configuration */
	rc = zl3073x_mb_op(zldev, ZL_REG_REF_MB_SEM, ZL_REF_MB_SEM_RD,
			   ZL_REG_REF_MB_MASK, BIT(index));
	if (rc)
		return rc;

	/* Read ref_config register */
	rc = zl3073x_read_u8(zldev, ZL_REG_REF_CONFIG, &ref->config);
	if (rc)
		return rc;

	/* Read frequency related registers */
	rc = zl3073x_read_u16(zldev, ZL_REG_REF_FREQ_BASE, &ref->freq_base);
	if (rc)
		return rc;
	rc = zl3073x_read_u16(zldev, ZL_REG_REF_FREQ_MULT, &ref->freq_mult);
	if (rc)
		return rc;
	rc = zl3073x_read_u16(zldev, ZL_REG_REF_RATIO_M, &ref->freq_ratio_m);
	if (rc)
		return rc;
	rc = zl3073x_read_u16(zldev, ZL_REG_REF_RATIO_N, &ref->freq_ratio_n);
	if (rc)
		return rc;

	/* Read eSync and N-div related registers */
	rc = zl3073x_read_u32(zldev, ZL_REG_REF_ESYNC_DIV, &ref->esync_n_div);
	if (rc)
		return rc;
	rc = zl3073x_read_u8(zldev, ZL_REG_REF_SYNC_CTRL, &ref->sync_ctrl);
	if (rc)
		return rc;

	/* Read phase compensation register */
	rc = zl3073x_read_u48(zldev, ZL_REG_REF_PHASE_OFFSET_COMP,
			      &ref->phase_comp);
	if (rc)
		return rc;

	dev_dbg(zldev->dev, "REF%u is %s and configured as %s\n", index,
		str_enabled_disabled(zl3073x_ref_is_enabled(ref)),
		zl3073x_ref_is_diff(ref) ? "differential" : "single-ended");

	return rc;
}

/**
 * zl3073x_ref_state_get - get current input reference state
 * @zldev: pointer to zl3073x_dev structure
 * @index: input reference index to get state for
 *
 * Return: pointer to given input reference state
 */
const struct zl3073x_ref *
zl3073x_ref_state_get(struct zl3073x_dev *zldev, u8 index)
{
	return &zldev->ref[index];
}

int zl3073x_ref_state_set(struct zl3073x_dev *zldev, u8 index,
			  const struct zl3073x_ref *ref)
{
	struct zl3073x_ref *dref = &zldev->ref[index];
	int rc;

	guard(mutex)(&zldev->multiop_lock);

	/* Read reference configuration into mailbox */
	rc = zl3073x_mb_op(zldev, ZL_REG_REF_MB_SEM, ZL_REF_MB_SEM_RD,
			   ZL_REG_REF_MB_MASK, BIT(index));
	if (rc)
		return rc;

	/* Update mailbox with changed values */
	if (dref->freq_base != ref->freq_base)
		rc = zl3073x_write_u16(zldev, ZL_REG_REF_FREQ_BASE,
				       ref->freq_base);
	if (!rc && dref->freq_mult != ref->freq_mult)
		rc = zl3073x_write_u16(zldev, ZL_REG_REF_FREQ_MULT,
				       ref->freq_mult);
	if (!rc && dref->freq_ratio_m != ref->freq_ratio_m)
		rc = zl3073x_write_u16(zldev, ZL_REG_REF_RATIO_M,
				       ref->freq_ratio_m);
	if (!rc && dref->freq_ratio_n != ref->freq_ratio_n)
		rc = zl3073x_write_u16(zldev, ZL_REG_REF_RATIO_N,
				       ref->freq_ratio_n);
	if (!rc && dref->esync_n_div != ref->esync_n_div)
		rc = zl3073x_write_u32(zldev, ZL_REG_REF_ESYNC_DIV,
				       ref->esync_n_div);
	if (!rc && dref->sync_ctrl != ref->sync_ctrl)
		rc = zl3073x_write_u8(zldev, ZL_REG_REF_SYNC_CTRL,
				      ref->sync_ctrl);
	if (!rc && dref->phase_comp != ref->phase_comp)
		rc = zl3073x_write_u48(zldev, ZL_REG_REF_PHASE_OFFSET_COMP,
				       ref->phase_comp);
	if (rc)
		return rc;

	/* Commit reference configuration */
	rc = zl3073x_mb_op(zldev, ZL_REG_REF_MB_SEM, ZL_REF_MB_SEM_WR,
			   ZL_REG_REF_MB_MASK, BIT(index));
	if (rc)
		return rc;

	/* After successful commit store new state */
	dref->freq_base = ref->freq_base;
	dref->freq_mult = ref->freq_mult;
	dref->freq_ratio_m = ref->freq_ratio_m;
	dref->freq_ratio_n = ref->freq_ratio_n;
	dref->esync_n_div = ref->esync_n_div;
	dref->sync_ctrl = ref->sync_ctrl;
	dref->phase_comp = ref->phase_comp;

	return 0;
}
drivers/dpll/zl3073x/ref.h (new file, 136 lines)
@@ -0,0 +1,136 @@
/* SPDX-License-Identifier: GPL-2.0-only */

#ifndef _ZL3073X_REF_H
#define _ZL3073X_REF_H

#include <linux/bitfield.h>
#include <linux/math64.h>
#include <linux/types.h>

#include "regs.h"

struct zl3073x_dev;

/**
 * struct zl3073x_ref - input reference state
 * @ffo: current fractional frequency offset
 * @phase_comp: phase compensation
 * @esync_n_div: divisor for embedded sync or n-divided signal formats
 * @freq_base: frequency base
 * @freq_mult: frequency multiplier
 * @freq_ratio_m: FEC mode multiplier
 * @freq_ratio_n: FEC mode divisor
 * @config: reference config
 * @sync_ctrl: reference sync control
 * @mon_status: reference monitor status
 */
struct zl3073x_ref {
	s64 ffo;
	u64 phase_comp;
	u32 esync_n_div;
	u16 freq_base;
	u16 freq_mult;
	u16 freq_ratio_m;
	u16 freq_ratio_n;
	u8 config;
	u8 sync_ctrl;
	u8 mon_status;
};

int zl3073x_ref_state_fetch(struct zl3073x_dev *zldev, u8 index);

const struct zl3073x_ref *zl3073x_ref_state_get(struct zl3073x_dev *zldev,
						u8 index);

int zl3073x_ref_state_set(struct zl3073x_dev *zldev, u8 index,
			  const struct zl3073x_ref *ref);

int zl3073x_ref_freq_factorize(u32 freq, u16 *base, u16 *mult);

/**
 * zl3073x_ref_ffo_get - get current fractional frequency offset
 * @ref: pointer to ref state
 *
 * Return: the latest measured fractional frequency offset
 */
static inline s64
zl3073x_ref_ffo_get(const struct zl3073x_ref *ref)
{
	return ref->ffo;
}

/**
 * zl3073x_ref_freq_get - get given input reference frequency
 * @ref: pointer to ref state
 *
 * Return: frequency of the given input reference
 */
static inline u32
zl3073x_ref_freq_get(const struct zl3073x_ref *ref)
{
	return mul_u64_u32_div(ref->freq_base * ref->freq_mult,
			       ref->freq_ratio_m, ref->freq_ratio_n);
}

/**
 * zl3073x_ref_freq_set - set given input reference frequency
 * @ref: pointer to ref state
 * @freq: frequency to be set
 *
 * Return: 0 on success, <0 when frequency cannot be factorized
 */
static inline int
zl3073x_ref_freq_set(struct zl3073x_ref *ref, u32 freq)
{
	u16 base, mult;
	int rc;

	rc = zl3073x_ref_freq_factorize(freq, &base, &mult);
	if (rc)
		return rc;

	ref->freq_base = base;
	ref->freq_mult = mult;
	ref->freq_ratio_m = 1;
	ref->freq_ratio_n = 1;

	return 0;
}

/**
 * zl3073x_ref_is_diff - check if the given input reference is differential
 * @ref: pointer to ref state
 *
 * Return: true if reference is differential, false if reference is single-ended
 */
static inline bool
zl3073x_ref_is_diff(const struct zl3073x_ref *ref)
{
	return !!FIELD_GET(ZL_REF_CONFIG_DIFF_EN, ref->config);
}

/**
 * zl3073x_ref_is_enabled - check if the given input reference is enabled
 * @ref: pointer to ref state
 *
 * Return: true if input reference is enabled, false otherwise
 */
static inline bool
zl3073x_ref_is_enabled(const struct zl3073x_ref *ref)
{
	return !!FIELD_GET(ZL_REF_CONFIG_ENABLE, ref->config);
}

/**
 * zl3073x_ref_is_status_ok - check the given input reference status
 * @ref: pointer to ref state
 *
 * Return: true if the status is ok, false otherwise
 */
static inline bool
zl3073x_ref_is_status_ok(const struct zl3073x_ref *ref)
{
	return ref->mon_status == ZL_REF_MON_STATUS_OK;
}

#endif /* _ZL3073X_REF_H */
drivers/dpll/zl3073x/synth.c (new file, 87 lines)
@@ -0,0 +1,87 @@
// SPDX-License-Identifier: GPL-2.0-only

#include <linux/bitfield.h>
#include <linux/cleanup.h>
#include <linux/dev_printk.h>
#include <linux/string.h>
#include <linux/string_choices.h>
#include <linux/types.h>

#include "core.h"
#include "synth.h"

/**
 * zl3073x_synth_state_fetch - fetch synth state from hardware
 * @zldev: pointer to zl3073x_dev structure
 * @index: synth index to fetch state for
 *
 * Function fetches state of the given synthesizer from the hardware and
 * stores it for later use.
 *
 * Return: 0 on success, <0 on error
 */
int zl3073x_synth_state_fetch(struct zl3073x_dev *zldev, u8 index)
{
	struct zl3073x_synth *synth = &zldev->synth[index];
	int rc;

	/* Read synth control register */
	rc = zl3073x_read_u8(zldev, ZL_REG_SYNTH_CTRL(index), &synth->ctrl);
	if (rc)
		return rc;

	guard(mutex)(&zldev->multiop_lock);

	/* Read synth configuration */
	rc = zl3073x_mb_op(zldev, ZL_REG_SYNTH_MB_SEM, ZL_SYNTH_MB_SEM_RD,
			   ZL_REG_SYNTH_MB_MASK, BIT(index));
	if (rc)
		return rc;

	/* The output frequency is determined by the following formula:
	 * base * multiplier * numerator / denominator
	 *
	 * Read registers with these values
	 */
	rc = zl3073x_read_u16(zldev, ZL_REG_SYNTH_FREQ_BASE, &synth->freq_base);
	if (rc)
		return rc;

	rc = zl3073x_read_u32(zldev, ZL_REG_SYNTH_FREQ_MULT, &synth->freq_mult);
	if (rc)
		return rc;

	rc = zl3073x_read_u16(zldev, ZL_REG_SYNTH_FREQ_M, &synth->freq_m);
	if (rc)
		return rc;

	rc = zl3073x_read_u16(zldev, ZL_REG_SYNTH_FREQ_N, &synth->freq_n);
	if (rc)
		return rc;

	/* Check denominator for zero to avoid div by 0 */
	if (!synth->freq_n) {
		dev_err(zldev->dev,
			"Zero divisor for SYNTH%u retrieved from device\n",
			index);
		return -EINVAL;
	}

	dev_dbg(zldev->dev, "SYNTH%u frequency: %u Hz\n", index,
		zl3073x_synth_freq_get(synth));

	return rc;
}

/**
 * zl3073x_synth_state_get - get current synth state
 * @zldev: pointer to zl3073x_dev structure
 * @index: synth index to get state for
 *
 * Return: pointer to given synth state
 */
const struct zl3073x_synth *zl3073x_synth_state_get(struct zl3073x_dev *zldev,
						    u8 index)
{
	return &zldev->synth[index];
}
drivers/dpll/zl3073x/synth.h (new file, 72 lines)
@@ -0,0 +1,72 @@
/* SPDX-License-Identifier: GPL-2.0-only */

#ifndef _ZL3073X_SYNTH_H
#define _ZL3073X_SYNTH_H

#include <linux/bitfield.h>
#include <linux/math64.h>
#include <linux/types.h>

#include "regs.h"

struct zl3073x_dev;

/**
 * struct zl3073x_synth - synthesizer state
 * @freq_mult: frequency multiplier
 * @freq_base: frequency base
 * @freq_m: frequency numerator
 * @freq_n: frequency denominator
 * @ctrl: synth control
 */
struct zl3073x_synth {
	u32 freq_mult;
	u16 freq_base;
	u16 freq_m;
	u16 freq_n;
	u8 ctrl;
};

int zl3073x_synth_state_fetch(struct zl3073x_dev *zldev, u8 synth_id);

const struct zl3073x_synth *zl3073x_synth_state_get(struct zl3073x_dev *zldev,
						    u8 synth_id);

int zl3073x_synth_state_set(struct zl3073x_dev *zldev, u8 synth_id,
			    const struct zl3073x_synth *synth);

/**
 * zl3073x_synth_dpll_get - get DPLL ID the synth is driven by
 * @synth: pointer to synth state
 *
 * Return: ID of DPLL the given synthesizer is driven by
 */
static inline u8 zl3073x_synth_dpll_get(const struct zl3073x_synth *synth)
{
	return FIELD_GET(ZL_SYNTH_CTRL_DPLL_SEL, synth->ctrl);
}

/**
 * zl3073x_synth_freq_get - get synth current freq
 * @synth: pointer to synth state
 *
 * Return: frequency of given synthesizer
 */
static inline u32 zl3073x_synth_freq_get(const struct zl3073x_synth *synth)
{
	return mul_u64_u32_div(synth->freq_base * synth->freq_m,
			       synth->freq_mult, synth->freq_n);
}
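zl3073x_synth_freq_get() evaluates base * numerator * multiplier / denominator with a 64-bit intermediate via mul_u64_u32_div(). A plain-C sketch of the same computation; the caller must guarantee a non-zero denominator, which zl3073x_synth_state_fetch() enforces when the registers are read:

```c
#include <assert.h>
#include <stdint.h>

/* Synth output frequency in Hz: base * m * mult / n, using a 64-bit
 * intermediate so the product cannot overflow 32 bits. n must be
 * non-zero (the driver rejects a zero divisor at fetch time). */
static uint32_t synth_freq_hz(uint16_t base, uint32_t mult,
			      uint16_t m, uint16_t n)
{
	return (uint32_t)((uint64_t)base * m * mult / n);
}
```

For instance, base 10000 with multiplier 2500 and a 1/1 ratio yields 25 MHz.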

/**
 * zl3073x_synth_is_enabled - check if the given synth is enabled
 * @synth: pointer to synth state
 *
 * Return: true if synth is enabled, false otherwise
 */
static inline bool zl3073x_synth_is_enabled(const struct zl3073x_synth *synth)
{
	return FIELD_GET(ZL_SYNTH_CTRL_EN, synth->ctrl);
}

#endif /* _ZL3073X_SYNTH_H */
@@ -2548,10 +2548,17 @@ static int lineinfo_changed_notify(struct notifier_block *nb,
 		container_of(nb, struct gpio_chardev_data, lineinfo_changed_nb);
 	struct lineinfo_changed_ctx *ctx;
 	struct gpio_desc *desc = data;
+	struct file *fp;
 
 	if (!test_bit(gpio_chip_hwgpio(desc), cdev->watched_lines))
 		return NOTIFY_DONE;
 
+	/* Keep the file descriptor alive for the duration of the notification. */
+	fp = get_file_active(&cdev->fp);
+	if (!fp)
+		/* Chardev file descriptor was or is being released. */
+		return NOTIFY_DONE;
+
 	/*
 	 * If this is called from atomic context (for instance: with a spinlock
 	 * taken by the atomic notifier chain), any sleeping calls must be done
@@ -2566,6 +2573,7 @@ static int lineinfo_changed_notify(struct notifier_block *nb,
 	ctx = kzalloc(sizeof(*ctx), GFP_ATOMIC);
 	if (!ctx) {
 		pr_err("Failed to allocate memory for line info notification\n");
+		fput(fp);
 		return NOTIFY_DONE;
 	}
 
@@ -2575,8 +2583,6 @@ static int lineinfo_changed_notify(struct notifier_block *nb,
 	/* Keep the GPIO device alive until we emit the event. */
 	ctx->gdev = gpio_device_get(desc->gdev);
 	ctx->cdev = cdev;
-	/* Keep the file descriptor alive too. */
-	get_file(ctx->cdev->fp);
 
 	INIT_WORK(&ctx->work, lineinfo_changed_func);
 	queue_work(ctx->gdev->line_state_wq, &ctx->work);
 
@@ -20,6 +20,8 @@ struct xe_exec_queue;
 struct xe_guc_exec_queue {
 	/** @q: Backpointer to parent xe_exec_queue */
 	struct xe_exec_queue *q;
+	/** @rcu: For safe freeing of exported dma fences */
+	struct rcu_head rcu;
 	/** @sched: GPU scheduler for this xe_exec_queue */
 	struct xe_gpu_scheduler sched;
 	/** @entity: Scheduler entity for this xe_exec_queue */

@@ -1282,7 +1282,11 @@ static void __guc_exec_queue_fini_async(struct work_struct *w)
 	xe_sched_entity_fini(&ge->entity);
 	xe_sched_fini(&ge->sched);
 
-	kfree(ge);
+	/*
+	 * RCU free due sched being exported via DRM scheduler fences
+	 * (timeline name).
+	 */
+	kfree_rcu(ge, rcu);
 	xe_exec_queue_fini(q);
 	xe_pm_runtime_put(guc_to_xe(guc));
 }
@@ -1465,6 +1469,7 @@ static int guc_exec_queue_init(struct xe_exec_queue *q)
 
 	q->guc = ge;
 	ge->q = q;
+	init_rcu_head(&ge->rcu);
 	init_waitqueue_head(&ge->suspend_wait);
 
 	for (i = 0; i < MAX_STATIC_MSG_TYPE; ++i)

@@ -100,6 +100,9 @@ void xe_hw_fence_irq_finish(struct xe_hw_fence_irq *irq)
 		spin_unlock_irqrestore(&irq->lock, flags);
 		dma_fence_end_signalling(tmp);
 	}
+
+	/* Safe release of the irq->lock used in dma_fence_init. */
+	synchronize_rcu();
 }
 
 void xe_hw_fence_irq_run(struct xe_hw_fence_irq *irq)
@@ -1539,7 +1539,7 @@ int thc_i2c_subip_regs_save(struct thc_device *dev)
 
 	for (int i = 0; i < ARRAY_SIZE(i2c_subip_regs); i++) {
 		ret = thc_i2c_subip_pio_read(dev, i2c_subip_regs[i],
-					     &read_size, (u32 *)&dev->i2c_subip_regs + i);
+					     &read_size, &dev->i2c_subip_regs[i]);
 		if (ret < 0)
 			return ret;
 	}
@@ -1562,7 +1562,7 @@ int thc_i2c_subip_regs_restore(struct thc_device *dev)
 
 	for (int i = 0; i < ARRAY_SIZE(i2c_subip_regs); i++) {
 		ret = thc_i2c_subip_pio_write(dev, i2c_subip_regs[i],
-					      write_size, (u32 *)&dev->i2c_subip_regs + i);
+					      write_size, &dev->i2c_subip_regs[i]);
 		if (ret < 0)
 			return ret;
 	}
 
@@ -141,6 +141,7 @@ struct mapped_device {
 #ifdef CONFIG_BLK_DEV_ZONED
 	unsigned int nr_zones;
 	void *zone_revalidate_map;
+	struct task_struct *revalidate_map_task;
 #endif
 
 #ifdef CONFIG_IMA
@@ -56,24 +56,34 @@ int dm_blk_report_zones(struct gendisk *disk, sector_t sector,
 {
 	struct mapped_device *md = disk->private_data;
 	struct dm_table *map;
-	int srcu_idx, ret;
-
-	if (!md->zone_revalidate_map) {
-		/* Regular user context */
-		if (dm_suspended_md(md))
-			return -EAGAIN;
+	struct dm_table *zone_revalidate_map = READ_ONCE(md->zone_revalidate_map);
+	int srcu_idx, ret = -EIO;
+	bool put_table = false;
+
+	if (!zone_revalidate_map || md->revalidate_map_task != current) {
+		/*
+		 * Regular user context or
+		 * Zone revalidation during __bind() is in progress, but this
+		 * call is from a different process
+		 */
 		map = dm_get_live_table(md, &srcu_idx);
 		if (!map)
 			return -EIO;
+		put_table = true;
+
+		if (dm_suspended_md(md)) {
+			ret = -EAGAIN;
+			goto do_put_table;
+		}
 	} else {
 		/* Zone revalidation during __bind() */
-		map = md->zone_revalidate_map;
+		map = zone_revalidate_map;
 	}
 
-	ret = dm_blk_do_report_zones(md, map, sector, nr_zones, cb, data);
+	if (map)
+		ret = dm_blk_do_report_zones(md, map, sector, nr_zones, cb,
+					     data);
 
-	if (!md->zone_revalidate_map)
+do_put_table:
+	if (put_table)
 		dm_put_live_table(md, srcu_idx);
 
 	return ret;
@@ -175,7 +185,9 @@ int dm_revalidate_zones(struct dm_table *t, struct request_queue *q)
 	 * our table for dm_blk_report_zones() to use directly.
 	 */
 	md->zone_revalidate_map = t;
+	md->revalidate_map_task = current;
 	ret = blk_revalidate_disk_zones(disk);
+	md->revalidate_map_task = NULL;
 	md->zone_revalidate_map = NULL;
 
 	if (ret) {
@@ -58,7 +58,7 @@ struct macvlan_port {
 
 struct macvlan_source_entry {
 	struct hlist_node hlist;
-	struct macvlan_dev *vlan;
+	struct macvlan_dev __rcu *vlan;
 	unsigned char addr[6+2] __aligned(sizeof(u16));
 	struct rcu_head rcu;
 };
@@ -145,7 +145,7 @@ static struct macvlan_source_entry *macvlan_hash_lookup_source(
 
 	hlist_for_each_entry_rcu(entry, h, hlist, lockdep_rtnl_is_held()) {
 		if (ether_addr_equal_64bits(entry->addr, addr) &&
-		    entry->vlan == vlan)
+		    rcu_access_pointer(entry->vlan) == vlan)
 			return entry;
 	}
 	return NULL;
@@ -167,7 +167,7 @@ static int macvlan_hash_add_source(struct macvlan_dev *vlan,
 		return -ENOMEM;
 
 	ether_addr_copy(entry->addr, addr);
-	entry->vlan = vlan;
+	RCU_INIT_POINTER(entry->vlan, vlan);
 	h = &port->vlan_source_hash[macvlan_eth_hash(addr)];
 	hlist_add_head_rcu(&entry->hlist, h);
 	vlan->macaddr_count++;
@@ -186,6 +186,7 @@ static void macvlan_hash_add(struct macvlan_dev *vlan)
 
 static void macvlan_hash_del_source(struct macvlan_source_entry *entry)
 {
+	RCU_INIT_POINTER(entry->vlan, NULL);
 	hlist_del_rcu(&entry->hlist);
 	kfree_rcu(entry, rcu);
 }
@@ -389,7 +390,7 @@ static void macvlan_flush_sources(struct macvlan_port *port,
 	int i;
 
 	hash_for_each_safe(port->vlan_source_hash, i, next, entry, hlist)
-		if (entry->vlan == vlan)
+		if (rcu_access_pointer(entry->vlan) == vlan)
 			macvlan_hash_del_source(entry);
 
 	vlan->macaddr_count = 0;
@@ -432,9 +433,14 @@ static bool macvlan_forward_source(struct sk_buff *skb,
 
 	hlist_for_each_entry_rcu(entry, h, hlist) {
 		if (ether_addr_equal_64bits(entry->addr, addr)) {
-			if (entry->vlan->flags & MACVLAN_FLAG_NODST)
+			struct macvlan_dev *vlan = rcu_dereference(entry->vlan);
+
+			if (!vlan)
+				continue;
+
+			if (vlan->flags & MACVLAN_FLAG_NODST)
 				consume = true;
-			macvlan_forward_source_one(skb, entry->vlan);
+			macvlan_forward_source_one(skb, vlan);
 		}
 	}
 
@@ -1676,7 +1682,7 @@ static int macvlan_fill_info_macaddr(struct sk_buff *skb,
 	struct macvlan_source_entry *entry;
 
 	hlist_for_each_entry_rcu(entry, h, hlist, lockdep_rtnl_is_held()) {
-		if (entry->vlan != vlan)
+		if (rcu_access_pointer(entry->vlan) != vlan)
 			continue;
 		if (nla_put(skb, IFLA_MACVLAN_MACADDR, ETH_ALEN, entry->addr))
 			return 1;
@@ -107,8 +107,14 @@ static void usb_parse_ss_endpoint_companion(struct device *ddev, int cfgno,
 	 */
 	desc = (struct usb_ss_ep_comp_descriptor *) buffer;
 
-	if (desc->bDescriptorType != USB_DT_SS_ENDPOINT_COMP ||
-	    size < USB_DT_SS_EP_COMP_SIZE) {
+	if (size < USB_DT_SS_EP_COMP_SIZE) {
 		dev_notice(ddev,
 			   "invalid SuperSpeed endpoint companion descriptor "
 			   "of length %d, skipping\n", size);
 		return;
 	}
+
+	if (desc->bDescriptorType != USB_DT_SS_ENDPOINT_COMP) {
 		dev_notice(ddev, "No SuperSpeed endpoint companion for config %d "
 				" interface %d altsetting %d ep %d: "
 				"using minimum values\n",
@@ -609,7 +609,7 @@ int efivar_entry_get(struct efivar_entry *entry, u32 *attributes,
 	err = __efivar_entry_get(entry, attributes, size, data);
 	efivar_unlock();
 
-	return 0;
+	return err;
 }
 
 /**
@@ -1889,8 +1889,11 @@ static const char *pick_link(struct nameidata *nd, struct path *link,
 	if (!res) {
 		const char * (*get)(struct dentry *, struct inode *,
 				    struct delayed_call *);
-		get = inode->i_op->get_link;
+		get = READ_ONCE(inode->i_op->get_link);
 		if (nd->flags & LOOKUP_RCU) {
+			/* Does the inode still match the associated dentry? */
+			if (unlikely(read_seqcount_retry(&link->dentry->d_seq, last->seq)))
+				return ERR_PTR(-ECHILD);
 			res = get(NULL, inode, &last->done);
 			if (res == ERR_PTR(-ECHILD) && try_to_unlazy(nd))
 				res = get(link->dentry, inode, &last->done);
@ -41,8 +41,12 @@ struct dpll_device_ops {
enum dpll_feature_state *state,
struct netlink_ext_ack *extack);

RH_KABI_RESERVE(1)
RH_KABI_RESERVE(2)
RH_KABI_USE(1, int (*phase_offset_avg_factor_set)(const struct dpll_device *dpll,
void *dpll_priv, u32 factor,
struct netlink_ext_ack *extack))
RH_KABI_USE(2, int (*phase_offset_avg_factor_get)(const struct dpll_device *dpll,
void *dpll_priv, u32 *factor,
struct netlink_ext_ack *extack))
RH_KABI_RESERVE(3)
RH_KABI_RESERVE(4)
RH_KABI_RESERVE(5)
@ -117,8 +121,16 @@ struct dpll_pin_ops {
struct dpll_pin_esync *esync,
struct netlink_ext_ack *extack);

RH_KABI_RESERVE(1)
RH_KABI_RESERVE(2)
RH_KABI_USE(1, int (*ref_sync_set)(const struct dpll_pin *pin, void *pin_priv,
const struct dpll_pin *ref_sync_pin,
void *ref_sync_pin_priv,
const enum dpll_pin_state state,
struct netlink_ext_ack *extack))
RH_KABI_USE(2, int (*ref_sync_get)(const struct dpll_pin *pin, void *pin_priv,
const struct dpll_pin *ref_sync_pin,
void *ref_sync_pin_priv,
enum dpll_pin_state *state,
struct netlink_ext_ack *extack))
RH_KABI_RESERVE(3)
RH_KABI_RESERVE(4)
RH_KABI_RESERVE(5)
@ -173,6 +185,7 @@ struct dpll_pin_properties {
const char *panel_label;
const char *package_label;
enum dpll_pin_type type;
RH_KABI_FILL_HOLE(u32 phase_gran)
unsigned long capabilities;
u32 freq_supported_num;
struct dpll_pin_frequency *freq_supported;
@ -232,6 +245,9 @@ int dpll_pin_on_pin_register(struct dpll_pin *parent, struct dpll_pin *pin,
void dpll_pin_on_pin_unregister(struct dpll_pin *parent, struct dpll_pin *pin,
const struct dpll_pin_ops *ops, void *priv);

int dpll_pin_ref_sync_pair_add(struct dpll_pin *pin,
struct dpll_pin *ref_sync_pin);

int dpll_device_change_ntf(struct dpll_device *dpll);

int dpll_pin_change_ntf(struct dpll_pin *pin);

@ -216,6 +216,7 @@ enum dpll_a {
DPLL_A_LOCK_STATUS_ERROR,
DPLL_A_CLOCK_QUALITY_LEVEL,
DPLL_A_PHASE_OFFSET_MONITOR,
DPLL_A_PHASE_OFFSET_AVG_FACTOR,

__DPLL_A_MAX,
DPLL_A_MAX = (__DPLL_A_MAX - 1)
@ -249,6 +250,8 @@ enum dpll_a_pin {
DPLL_A_PIN_ESYNC_FREQUENCY,
DPLL_A_PIN_ESYNC_FREQUENCY_SUPPORTED,
DPLL_A_PIN_ESYNC_PULSE,
DPLL_A_PIN_REFERENCE_SYNC,
DPLL_A_PIN_PHASE_ADJUST_GRAN,

__DPLL_A_PIN_MAX,
DPLL_A_PIN_MAX = (__DPLL_A_PIN_MAX - 1)

@ -146,18 +146,26 @@ __cold void io_uring_show_fdinfo(struct seq_file *m, struct file *file)

if (has_lock && (ctx->flags & IORING_SETUP_SQPOLL)) {
struct io_sq_data *sq = ctx->sq_data;
struct task_struct *tsk;

rcu_read_lock();
tsk = rcu_dereference(sq->thread);
/*
* sq->thread might be NULL if we raced with the sqpoll
* thread termination.
*/
if (sq->thread) {
if (tsk) {
get_task_struct(tsk);
rcu_read_unlock();
getrusage(tsk, RUSAGE_SELF, &sq_usage);
put_task_struct(tsk);
sq_pid = sq->task_pid;
sq_cpu = sq->sq_cpu;
getrusage(sq->thread, RUSAGE_SELF, &sq_usage);
sq_total_time = (sq_usage.ru_stime.tv_sec * 1000000
+ sq_usage.ru_stime.tv_usec);
sq_work_time = sq->work_time;
} else {
rcu_read_unlock();
}
}

@ -2929,7 +2929,7 @@ static __cold void io_ring_exit_work(struct work_struct *work)
struct task_struct *tsk;

io_sq_thread_park(sqd);
tsk = sqd->thread;
tsk = sqpoll_task_locked(sqd);
if (tsk && tsk->io_uring && tsk->io_uring->io_wq)
io_wq_cancel_cb(tsk->io_uring->io_wq,
io_cancel_ctx_cb, ctx, true);
@ -3166,7 +3166,7 @@ __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
s64 inflight;
DEFINE_WAIT(wait);

WARN_ON_ONCE(sqd && sqd->thread != current);
WARN_ON_ONCE(sqd && sqpoll_task_locked(sqd) != current);

if (!current->io_uring)
return;

@ -268,6 +268,8 @@ static __cold int io_register_iowq_max_workers(struct io_ring_ctx *ctx,
if (ctx->flags & IORING_SETUP_SQPOLL) {
sqd = ctx->sq_data;
if (sqd) {
struct task_struct *tsk;

/*
* Observe the correct sqd->lock -> ctx->uring_lock
* ordering. Fine to drop uring_lock here, we hold
@ -277,8 +279,9 @@ static __cold int io_register_iowq_max_workers(struct io_ring_ctx *ctx,
mutex_unlock(&ctx->uring_lock);
mutex_lock(&sqd->lock);
mutex_lock(&ctx->uring_lock);
if (sqd->thread)
tctx = sqd->thread->io_uring;
tsk = sqpoll_task_locked(sqd);
if (tsk)
tctx = tsk->io_uring;
}
} else {
tctx = current->io_uring;

@ -30,7 +30,7 @@ enum {
void io_sq_thread_unpark(struct io_sq_data *sqd)
__releases(&sqd->lock)
{
WARN_ON_ONCE(sqd->thread == current);
WARN_ON_ONCE(sqpoll_task_locked(sqd) == current);

/*
* Do the dance but not conditional clear_bit() because it'd race with
@ -45,24 +45,32 @@ void io_sq_thread_unpark(struct io_sq_data *sqd)
void io_sq_thread_park(struct io_sq_data *sqd)
__acquires(&sqd->lock)
{
WARN_ON_ONCE(data_race(sqd->thread) == current);
struct task_struct *tsk;

atomic_inc(&sqd->park_pending);
set_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state);
mutex_lock(&sqd->lock);
if (sqd->thread)
wake_up_process(sqd->thread);

tsk = sqpoll_task_locked(sqd);
if (tsk) {
WARN_ON_ONCE(tsk == current);
wake_up_process(tsk);
}
}

void io_sq_thread_stop(struct io_sq_data *sqd)
{
WARN_ON_ONCE(sqd->thread == current);
struct task_struct *tsk;

WARN_ON_ONCE(test_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state));

set_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state);
mutex_lock(&sqd->lock);
if (sqd->thread)
wake_up_process(sqd->thread);
tsk = sqpoll_task_locked(sqd);
if (tsk) {
WARN_ON_ONCE(tsk == current);
wake_up_process(tsk);
}
mutex_unlock(&sqd->lock);
wait_for_completion(&sqd->exited);
}
@ -277,7 +285,8 @@ static int io_sq_thread(void *data)
/* offload context creation failed, just exit */
if (!current->io_uring) {
mutex_lock(&sqd->lock);
sqd->thread = NULL;
rcu_assign_pointer(sqd->thread, NULL);
put_task_struct(current);
mutex_unlock(&sqd->lock);
goto err_out;
}
@ -386,7 +395,8 @@ static int io_sq_thread(void *data)
io_sq_tw(&retry_list, UINT_MAX);

io_uring_cancel_generic(true, sqd);
sqd->thread = NULL;
rcu_assign_pointer(sqd->thread, NULL);
put_task_struct(current);
list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
atomic_or(IORING_SQ_NEED_WAKEUP, &ctx->rings->sq_flags);
io_run_task_work();
@ -495,7 +505,11 @@ __cold int io_sq_offload_create(struct io_ring_ctx *ctx,
goto err_sqpoll;
}

sqd->thread = tsk;
mutex_lock(&sqd->lock);
rcu_assign_pointer(sqd->thread, tsk);
mutex_unlock(&sqd->lock);

get_task_struct(tsk);
ret = io_uring_alloc_task_context(tsk, ctx);
wake_up_new_task(tsk);
if (ret)
@ -505,7 +519,6 @@ __cold int io_sq_offload_create(struct io_ring_ctx *ctx,
ret = -EINVAL;
goto err;
}

return 0;
err_sqpoll:
complete(&ctx->sq_data->exited);
@ -521,10 +534,13 @@ __cold int io_sqpoll_wq_cpu_affinity(struct io_ring_ctx *ctx,
int ret = -EINVAL;

if (sqd) {
struct task_struct *tsk;

io_sq_thread_park(sqd);
/* Don't set affinity for a dying thread */
if (sqd->thread)
ret = io_wq_cpu_affinity(sqd->thread->io_uring, mask);
tsk = sqpoll_task_locked(sqd);
if (tsk)
ret = io_wq_cpu_affinity(tsk->io_uring, mask);
io_sq_thread_unpark(sqd);
}

@ -8,7 +8,7 @@ struct io_sq_data {
/* ctx's that are using this sqd */
struct list_head ctx_list;

struct task_struct *thread;
struct task_struct __rcu *thread;
struct wait_queue_head wait;

unsigned sq_thread_idle;
@ -29,3 +29,9 @@ void io_sq_thread_unpark(struct io_sq_data *sqd);
void io_put_sq_data(struct io_sq_data *sqd);
void io_sqpoll_wait_sq(struct io_ring_ctx *ctx);
int io_sqpoll_wq_cpu_affinity(struct io_ring_ctx *ctx, cpumask_var_t mask);

static inline struct task_struct *sqpoll_task_locked(struct io_sq_data *sqd)
{
return rcu_dereference_protected(sqd->thread,
lockdep_is_held(&sqd->lock));
}

@ -225,6 +225,8 @@ struct console_flush_type {
bool legacy_offload;
};

extern bool console_irqwork_blocked;

/*
* Identify which console flushing methods should be used in the context of
* the caller.
@ -236,7 +238,7 @@ static inline void printk_get_console_flush_type(struct console_flush_type *ft)
switch (nbcon_get_default_prio()) {
case NBCON_PRIO_NORMAL:
if (have_nbcon_console && !have_boot_console) {
if (printk_kthreads_running)
if (printk_kthreads_running && !console_irqwork_blocked)
ft->nbcon_offload = true;
else
ft->nbcon_atomic = true;
@ -246,7 +248,7 @@ static inline void printk_get_console_flush_type(struct console_flush_type *ft)
if (have_legacy_console || have_boot_console) {
if (!is_printk_legacy_deferred())
ft->legacy_direct = true;
else
else if (!console_irqwork_blocked)
ft->legacy_offload = true;
}
break;
@ -259,7 +261,7 @@ static inline void printk_get_console_flush_type(struct console_flush_type *ft)
if (have_legacy_console || have_boot_console) {
if (!is_printk_legacy_deferred())
ft->legacy_direct = true;
else
else if (!console_irqwork_blocked)
ft->legacy_offload = true;
}
break;

@ -214,8 +214,9 @@ static void nbcon_seq_try_update(struct nbcon_context *ctxt, u64 new_seq)

/**
* nbcon_context_try_acquire_direct - Try to acquire directly
* @ctxt: The context of the caller
* @cur: The current console state
* @ctxt: The context of the caller
* @cur: The current console state
* @is_reacquire: This acquire is a reacquire
*
* Acquire the console when it is released. Also acquire the console when
* the current owner has a lower priority and the console is in a safe state.
@ -225,17 +226,17 @@ static void nbcon_seq_try_update(struct nbcon_context *ctxt, u64 new_seq)
*
* Errors:
*
* -EPERM: A panic is in progress and this is not the panic CPU.
* Or the current owner or waiter has the same or higher
* priority. No acquire method can be successful in
* this case.
* -EPERM: A panic is in progress and this is neither the panic
* CPU nor is this a reacquire. Or the current owner or
* waiter has the same or higher priority. No acquire
* method can be successful in these cases.
*
* -EBUSY: The current owner has a lower priority but the console
* in an unsafe state. The caller should try using
* the handover acquire method.
*/
static int nbcon_context_try_acquire_direct(struct nbcon_context *ctxt,
struct nbcon_state *cur)
struct nbcon_state *cur, bool is_reacquire)
{
unsigned int cpu = smp_processor_id();
struct console *con = ctxt->console;
@ -243,14 +244,20 @@ static int nbcon_context_try_acquire_direct(struct nbcon_context *ctxt,

do {
/*
* Panic does not imply that the console is owned. However, it
* is critical that non-panic CPUs during panic are unable to
* acquire ownership in order to satisfy the assumptions of
* nbcon_waiter_matches(). In particular, the assumption that
* lower priorities are ignored during panic.
* Panic does not imply that the console is owned. However,
* since all non-panic CPUs are stopped during panic(), it
* is safer to have them avoid gaining console ownership.
*
* If this acquire is a reacquire (and an unsafe takeover
* has not previously occurred) then it is allowed to attempt
* a direct acquire in panic. This gives console drivers an
* opportunity to perform any necessary cleanup if they were
* interrupted by the panic CPU while printing.
*/
if (other_cpu_in_panic())
if (other_cpu_in_panic() &&
(!is_reacquire || cur->unsafe_takeover)) {
return -EPERM;
}

if (ctxt->prio <= cur->prio || ctxt->prio <= cur->req_prio)
return -EPERM;
@ -301,8 +308,9 @@ static bool nbcon_waiter_matches(struct nbcon_state *cur, int expected_prio)
* Event #1 implies this context is EMERGENCY.
* Event #2 implies the new context is PANIC.
* Event #3 occurs when panic() has flushed the console.
* Events #4 and #5 are not possible due to the other_cpu_in_panic()
* check in nbcon_context_try_acquire_direct().
* Event #4 occurs when a non-panic CPU reacquires.
* Event #5 is not possible due to the other_cpu_in_panic() check
* in nbcon_context_try_acquire_handover().
*/

return (cur->req_prio == expected_prio);
@ -431,6 +439,16 @@ static int nbcon_context_try_acquire_handover(struct nbcon_context *ctxt,
WARN_ON_ONCE(ctxt->prio <= cur->prio || ctxt->prio <= cur->req_prio);
WARN_ON_ONCE(!cur->unsafe);

/*
* Panic does not imply that the console is owned. However, it
* is critical that non-panic CPUs during panic are unable to
* wait for a handover in order to satisfy the assumptions of
* nbcon_waiter_matches(). In particular, the assumption that
* lower priorities are ignored during panic.
*/
if (other_cpu_in_panic())
return -EPERM;

/* Handover is not possible on the same CPU. */
if (cur->cpu == cpu)
return -EBUSY;
@ -558,7 +576,8 @@ static struct printk_buffers panic_nbcon_pbufs;

/**
* nbcon_context_try_acquire - Try to acquire nbcon console
* @ctxt: The context of the caller
* @ctxt: The context of the caller
* @is_reacquire: This acquire is a reacquire
*
* Context: Under @ctxt->con->device_lock() or local_irq_save().
* Return: True if the console was acquired. False otherwise.
@ -568,7 +587,7 @@ static struct printk_buffers panic_nbcon_pbufs;
* in an unsafe state. Otherwise, on success the caller may assume
* the console is not in an unsafe state.
*/
static bool nbcon_context_try_acquire(struct nbcon_context *ctxt)
static bool nbcon_context_try_acquire(struct nbcon_context *ctxt, bool is_reacquire)
{
unsigned int cpu = smp_processor_id();
struct console *con = ctxt->console;
@ -577,7 +596,7 @@ static bool nbcon_context_try_acquire(struct nbcon_context *ctxt)

nbcon_state_read(con, &cur);
try_again:
err = nbcon_context_try_acquire_direct(ctxt, &cur);
err = nbcon_context_try_acquire_direct(ctxt, &cur, is_reacquire);
if (err != -EBUSY)
goto out;

@ -913,7 +932,7 @@ void nbcon_reacquire_nobuf(struct nbcon_write_context *wctxt)
{
struct nbcon_context *ctxt = &ACCESS_PRIVATE(wctxt, ctxt);

while (!nbcon_context_try_acquire(ctxt))
while (!nbcon_context_try_acquire(ctxt, true))
cpu_relax();

nbcon_write_context_set_buf(wctxt, NULL, 0);
@ -1101,7 +1120,7 @@ static bool nbcon_emit_one(struct nbcon_write_context *wctxt, bool use_atomic)
cant_migrate();
}

if (!nbcon_context_try_acquire(ctxt))
if (!nbcon_context_try_acquire(ctxt, false))
goto out;

/*
@ -1257,6 +1276,13 @@ void nbcon_kthreads_wake(void)
if (!printk_kthreads_running)
return;

/*
* It is not allowed to call this function when console irq_work
* is blocked.
*/
if (WARN_ON_ONCE(console_irqwork_blocked))
return;

cookie = console_srcu_read_lock();
for_each_console_srcu(con) {
if (!(console_srcu_read_flags(con) & CON_NBCON))
@ -1486,7 +1512,7 @@ static int __nbcon_atomic_flush_pending_con(struct console *con, u64 stop_seq,
ctxt->prio = nbcon_get_default_prio();
ctxt->allow_unsafe_takeover = allow_unsafe_takeover;

if (!nbcon_context_try_acquire(ctxt))
if (!nbcon_context_try_acquire(ctxt, false))
return -EPERM;

while (nbcon_seq_read(con) < stop_seq) {
@ -1762,7 +1788,7 @@ bool nbcon_device_try_acquire(struct console *con)
ctxt->console = con;
ctxt->prio = NBCON_PRIO_NORMAL;

if (!nbcon_context_try_acquire(ctxt))
if (!nbcon_context_try_acquire(ctxt, false))
return false;

if (!nbcon_context_enter_unsafe(ctxt))
@ -1808,7 +1834,7 @@ void nbcon_device_release(struct console *con)
if (console_trylock())
console_unlock();
} else if (ft.legacy_offload) {
printk_trigger_flush();
defer_console_output();
}
}
console_srcu_read_unlock(cookie);

@ -489,6 +489,9 @@ bool have_boot_console;
/* See printk_legacy_allow_panic_sync() for details. */
bool legacy_allow_panic_sync;

/* Avoid using irq_work when suspending. */
bool console_irqwork_blocked;

#ifdef CONFIG_PRINTK
DECLARE_WAIT_QUEUE_HEAD(log_wait);
static DECLARE_WAIT_QUEUE_HEAD(legacy_wait);
@ -2374,7 +2377,7 @@ asmlinkage int vprintk_emit(int facility, int level,
/* If called from the scheduler, we can not call up(). */
if (level == LOGLEVEL_SCHED) {
level = LOGLEVEL_DEFAULT;
ft.legacy_offload |= ft.legacy_direct;
ft.legacy_offload |= ft.legacy_direct && !console_irqwork_blocked;
ft.legacy_direct = false;
}

@ -2410,7 +2413,7 @@ asmlinkage int vprintk_emit(int facility, int level,

if (ft.legacy_offload)
defer_console_output();
else
else if (!console_irqwork_blocked)
wake_up_klogd();

return printed_len;
@ -2714,10 +2717,20 @@ void suspend_console(void)
{
struct console *con;

if (console_suspend_enabled)
pr_info("Suspending console(s) (use no_console_suspend to debug)\n");

/*
* Flush any console backlog and then avoid queueing irq_work until
* console_resume_all(). Until then deferred printing is no longer
* triggered, NBCON consoles transition to atomic flushing, and
* any klogd waiters are not triggered.
*/
pr_flush(1000, true);
console_irqwork_blocked = true;

if (!console_suspend_enabled)
return;
pr_info("Suspending console(s) (use no_console_suspend to debug)\n");
pr_flush(1000, true);

console_list_lock();
for_each_console(con)
@ -2738,26 +2751,34 @@ void resume_console(void)
struct console_flush_type ft;
struct console *con;

if (!console_suspend_enabled)
return;

console_list_lock();
for_each_console(con)
console_srcu_write_flags(con, con->flags & ~CON_SUSPENDED);
console_list_unlock();

/*
* Ensure that all SRCU list walks have completed. All printing
* contexts must be able to see they are no longer suspended so
* that they are guaranteed to wake up and resume printing.
* Allow queueing irq_work. After restoring console state, deferred
* printing and any klogd waiters need to be triggered in case there
* is now a console backlog.
*/
synchronize_srcu(&console_srcu);
console_irqwork_blocked = false;

if (console_suspend_enabled) {
console_list_lock();
for_each_console(con)
console_srcu_write_flags(con, con->flags & ~CON_SUSPENDED);
console_list_unlock();

/*
* Ensure that all SRCU list walks have completed. All printing
* contexts must be able to see they are no longer suspended so
* that they are guaranteed to wake up and resume printing.
*/
synchronize_srcu(&console_srcu);
}

printk_get_console_flush_type(&ft);
if (ft.nbcon_offload)
nbcon_kthreads_wake();
if (ft.legacy_offload)
defer_console_output();
else
wake_up_klogd();

pr_flush(1000, true);
}
@ -3310,7 +3331,10 @@ void console_unblank(void)
*/
cookie = console_srcu_read_lock();
for_each_console_srcu(c) {
if ((console_srcu_read_flags(c) & CON_ENABLED) && c->unblank) {
if (!console_is_usable(c, console_srcu_read_flags(c), true))
continue;

if (c->unblank) {
found_unblank = true;
break;
}
@ -3347,7 +3371,10 @@ void console_unblank(void)

cookie = console_srcu_read_lock();
for_each_console_srcu(c) {
if ((console_srcu_read_flags(c) & CON_ENABLED) && c->unblank)
if (!console_is_usable(c, console_srcu_read_flags(c), true))
continue;

if (c->unblank)
c->unblank();
}
console_srcu_read_unlock(cookie);
@ -4473,6 +4500,13 @@ static void __wake_up_klogd(int val)
if (!printk_percpu_data_ready())
return;

/*
* It is not allowed to call this function when console irq_work
* is blocked.
*/
if (WARN_ON_ONCE(console_irqwork_blocked))
return;

preempt_disable();
/*
* Guarantee any new records can be seen by tasks preparing to wait
@ -4529,9 +4563,30 @@ void defer_console_output(void)
__wake_up_klogd(PRINTK_PENDING_WAKEUP | PRINTK_PENDING_OUTPUT);
}

/**
* printk_trigger_flush - Attempt to flush printk buffer to consoles.
*
* If possible, flush the printk buffer to all consoles in the caller's
* context. If offloading is available, trigger deferred printing.
*
* This is best effort. Depending on the system state, console states,
* and caller context, no actual flushing may result from this call.
*/
void printk_trigger_flush(void)
{
defer_console_output();
struct console_flush_type ft;

printk_get_console_flush_type(&ft);
if (ft.nbcon_atomic)
nbcon_atomic_flush_pending();
if (ft.nbcon_offload)
nbcon_kthreads_wake();
if (ft.legacy_direct) {
if (console_trylock())
console_unlock();
}
if (ft.legacy_offload)
defer_console_output();
}

int vprintk_deferred(const char *fmt, va_list args)

@ -953,8 +953,19 @@ static struct {

#endif /* CONFIG_SMP */

/* for %SCX_KICK_WAIT */
static unsigned long __percpu *scx_kick_cpus_pnt_seqs;
/*
* For %SCX_KICK_WAIT: Each CPU has a pointer to an array of pick_task sequence
* numbers. The arrays are allocated with kvzalloc() as size can exceed percpu
* allocator limits on large machines. O(nr_cpu_ids^2) allocation, allocated
* lazily when enabling and freed when disabling to avoid waste when sched_ext
* isn't active.
*/
struct scx_kick_pseqs {
struct rcu_head rcu;
unsigned long seqs[];
};

static DEFINE_PER_CPU(struct scx_kick_pseqs __rcu *, scx_kick_pseqs);

/*
* Direct dispatch marker.
@ -4992,6 +5003,27 @@ static const char *scx_exit_reason(enum scx_exit_kind kind)
}
}

static void free_kick_pseqs_rcu(struct rcu_head *rcu)
{
struct scx_kick_pseqs *pseqs = container_of(rcu, struct scx_kick_pseqs, rcu);

kvfree(pseqs);
}

static void free_kick_pseqs(void)
{
int cpu;

for_each_possible_cpu(cpu) {
struct scx_kick_pseqs **pseqs = per_cpu_ptr(&scx_kick_pseqs, cpu);
struct scx_kick_pseqs *to_free;

to_free = rcu_replace_pointer(*pseqs, NULL, true);
if (to_free)
call_rcu(&to_free->rcu, free_kick_pseqs_rcu);
}
}

static void scx_ops_disable_workfn(struct kthread_work *work)
{
struct scx_exit_info *ei = scx_exit_info;
@ -5147,6 +5179,7 @@ static void scx_ops_disable_workfn(struct kthread_work *work)
free_percpu(scx_dsp_ctx);
scx_dsp_ctx = NULL;
scx_dsp_max_batch = 0;
free_kick_pseqs();

free_exit_info(scx_exit_info);
scx_exit_info = NULL;
@ -5476,6 +5509,33 @@ static void scx_ops_error_irq_workfn(struct irq_work *irq_work)

static DEFINE_IRQ_WORK(scx_ops_error_irq_work, scx_ops_error_irq_workfn);

static int alloc_kick_pseqs(void)
{
int cpu;

/*
* Allocate per-CPU arrays sized by nr_cpu_ids. Use kvzalloc as size
* can exceed percpu allocator limits on large machines.
*/
for_each_possible_cpu(cpu) {
struct scx_kick_pseqs **pseqs = per_cpu_ptr(&scx_kick_pseqs, cpu);
struct scx_kick_pseqs *new_pseqs;

WARN_ON_ONCE(rcu_access_pointer(*pseqs));

new_pseqs = kvzalloc_node(struct_size(new_pseqs, seqs, nr_cpu_ids),
GFP_KERNEL, cpu_to_node(cpu));
if (!new_pseqs) {
free_kick_pseqs();
return -ENOMEM;
}

rcu_assign_pointer(*pseqs, new_pseqs);
}

return 0;
}

static __printf(3, 4) void scx_ops_exit_kind(enum scx_exit_kind kind,
s64 exit_code,
const char *fmt, ...)
@ -5606,10 +5666,14 @@ static int scx_ops_enable(struct sched_ext_ops *ops, struct bpf_link *link)
goto err_unlock;
}

ret = alloc_kick_pseqs();
if (ret)
goto err_unlock;

scx_root_kobj = kzalloc(sizeof(*scx_root_kobj), GFP_KERNEL);
if (!scx_root_kobj) {
ret = -ENOMEM;
goto err_unlock;
goto err_free_pseqs;
}

scx_root_kobj->kset = scx_kset;
@ -5841,6 +5905,8 @@ err:
free_exit_info(scx_exit_info);
scx_exit_info = NULL;
}
err_free_pseqs:
free_kick_pseqs();
err_unlock:
mutex_unlock(&scx_ops_enable_mutex);
return ret;
@ -6232,10 +6298,18 @@ static void kick_cpus_irq_workfn(struct irq_work *irq_work)
{
struct rq *this_rq = this_rq();
struct scx_rq *this_scx = &this_rq->scx;
unsigned long *pseqs = this_cpu_ptr(scx_kick_cpus_pnt_seqs);
struct scx_kick_pseqs __rcu *pseqs_pcpu = __this_cpu_read(scx_kick_pseqs);
bool should_wait = false;
unsigned long *pseqs;
s32 cpu;

if (unlikely(!pseqs_pcpu)) {
pr_warn_once("kick_cpus_irq_workfn() called with NULL scx_kick_pseqs");
return;
}

pseqs = rcu_dereference_bh(pseqs_pcpu)->seqs;

for_each_cpu(cpu, this_scx->cpus_to_kick) {
should_wait |= kick_one_cpu(cpu, this_rq, pseqs);
cpumask_clear_cpu(cpu, this_scx->cpus_to_kick);
@ -6360,10 +6434,6 @@ void __init init_sched_ext_class(void)
BUG_ON(!alloc_cpumask_var(&idle_masks.cpu, GFP_KERNEL));
BUG_ON(!alloc_cpumask_var(&idle_masks.smt, GFP_KERNEL));
#endif
scx_kick_cpus_pnt_seqs =
__alloc_percpu(sizeof(scx_kick_cpus_pnt_seqs[0]) * nr_cpu_ids,
__alignof__(scx_kick_cpus_pnt_seqs[0]));
BUG_ON(!scx_kick_cpus_pnt_seqs);

for_each_possible_cpu(cpu) {
struct rq *rq = cpu_rq(cpu);

mm/migrate.c
@ -1454,6 +1454,7 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
int page_was_mapped = 0;
struct anon_vma *anon_vma = NULL;
struct address_space *mapping = NULL;
enum ttu_flags ttu = 0;

if (folio_ref_count(src) == 1) {
/* page was freed from under us. So we are done. */
@ -1494,8 +1495,6 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
goto put_anon;

if (folio_mapped(src)) {
enum ttu_flags ttu = 0;

if (!folio_test_anon(src)) {
/*
* In shared mappings, try_to_unmap could potentially
@ -1512,9 +1511,6 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,

try_to_migrate(src, ttu);
page_was_mapped = 1;

if (ttu & TTU_RMAP_LOCKED)
i_mmap_unlock_write(mapping);
}

if (!folio_mapped(src))
@ -1522,7 +1518,10 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,

if (page_was_mapped)
remove_migration_ptes(src,
rc == MIGRATEPAGE_SUCCESS ? dst : src, 0);
rc == MIGRATEPAGE_SUCCESS ? dst : src, ttu ? RMP_LOCKED : 0);

if (ttu & TTU_RMAP_LOCKED)
i_mmap_unlock_write(mapping);

unlock_put_anon:
folio_unlock(dst);

@ -1334,7 +1334,8 @@ static int calipso_skbuff_setattr(struct sk_buff *skb,
/* At this point new_end aligns to 4n, so (new_end & 4) pads to 8n */
pad = ((new_end & 4) + (end & 7)) & 7;
len_delta = new_end - (int)end + pad;
ret_val = skb_cow(skb, skb_headroom(skb) + len_delta);
ret_val = skb_cow(skb,
skb_headroom(skb) + (len_delta > 0 ? len_delta : 0));
if (ret_val < 0)
return ret_val;

@@ -1,3 +1,58 @@
+* Tue Mar 03 2026 CKI KWF Bot <cki-ci-bot+kwf-gitlab-com@redhat.com> [6.12.0-124.43.1.el10_1]
+- HID: intel-thc-hid: intel-thc: Fix incorrect pointer arithmetic in I2C regs save (CKI Backport Bot) [RHEL-142253] {CVE-2025-39818}
+- drm/xe: Make dma-fences compliant with the safe access rules (Mika Penttilä) [RHEL-122272] {CVE-2025-38703}
+Resolves: RHEL-122272, RHEL-142253
+
+* Sat Feb 28 2026 CKI KWF Bot <cki-ci-bot+kwf-gitlab-com@redhat.com> [6.12.0-124.42.1.el10_1]
+- dpll: zl3073x: Fix ref frequency setting (Ivan Vecera) [RHEL-139828]
+- dpll: Prevent duplicate registrations (Ivan Vecera) [RHEL-139828]
+- dpll: zl3073x: Remove unused dev wrappers (Ivan Vecera) [RHEL-139828]
+- dpll: zl3073x: Cache all output properties in zl3073x_out (Ivan Vecera) [RHEL-139828]
+- dpll: zl3073x: Cache all reference properties in zl3073x_ref (Ivan Vecera) [RHEL-139828]
+- dpll: zl3073x: Cache reference monitor status (Ivan Vecera) [RHEL-139828]
+- dpll: zl3073x: Split ref, out, and synth logic from core (Ivan Vecera) [RHEL-139828]
+- dpll: zl3073x: Store raw register values instead of parsed state (Ivan Vecera) [RHEL-139828]
+- dpll: zl3073x: fix kernel-doc name and missing parameter in fw.c (Ivan Vecera) [RHEL-139828]
+- dpll: zl3073x: Specify phase adjustment granularity for pins (Ivan Vecera) [RHEL-139828]
+- dpll: add phase-adjust-gran pin attribute (Ivan Vecera) [RHEL-139828]
+- dpll: fix device-id-get and pin-id-get to return errors properly (Ivan Vecera) [RHEL-139828]
+- dpll: spec: add missing module-name and clock-id to pin-get reply (Ivan Vecera) [RHEL-139828]
+- dpll: zl3073x: Allow to configure phase offset averaging factor (Ivan Vecera) [RHEL-139828]
+- dpll: add phase_offset_avg_factor_get/set callback ops (Ivan Vecera) [RHEL-139828]
+- dpll: add phase-offset-avg-factor device attribute to netlink spec (Ivan Vecera) [RHEL-139828]
+- dpll: fix clock quality level reporting (Ivan Vecera) [RHEL-139828]
+- dpll: add reference sync get/set (Ivan Vecera) [RHEL-139828]
+- dpll: add reference-sync netlink attribute (Ivan Vecera) [RHEL-139828]
+- dpll: remove documentation of rclk_dev_name (Ivan Vecera) [RHEL-139828]
+- ipv6: BUG() in pskb_expand_head() as part of calipso_skbuff_setattr() (CKI Backport Bot) [RHEL-143548] {CVE-2025-71085}
+- usb: core: config: Prevent OOB read in SS endpoint companion parsing (CKI Backport Bot) [RHEL-137370] {CVE-2025-39760}
+- sched_ext: Fix scx_kick_pseqs corruption on concurrent scheduler loads (Phil Auld) [RHEL-124637]
+- sched_ext: Allocate scx_kick_cpus_pnt_seqs lazily using kvzalloc() (Phil Auld) [RHEL-124637]
+Resolves: RHEL-124637, RHEL-137370, RHEL-139828, RHEL-143548
+
+* Thu Feb 26 2026 CKI KWF Bot <cki-ci-bot+kwf-gitlab-com@redhat.com> [6.12.0-124.41.1.el10_1]
+- vfs: check dentry is still valid in get_link() (Ian Kent) [RHEL-134853]
+- efivarfs: fix error propagation in efivar_entry_get() (CKI Backport Bot) [RHEL-150116] {CVE-2026-23156}
+- KVM: x86: Apply runtime updates to current CPUID during KVM_SET_CPUID{,2} (Igor Mammedov) [RHEL-148458]
+- printk: Use console_is_usable on console_unblank (CKI Backport Bot) [RHEL-148303]
+- printk: Avoid irq_work for printk_deferred() on suspend (CKI Backport Bot) [RHEL-148303]
+- printk: Avoid scheduling irq_work on suspend (CKI Backport Bot) [RHEL-148303]
+- printk: Allow printk_trigger_flush() to flush all types (CKI Backport Bot) [RHEL-148303]
+- printk: Check CON_SUSPEND when unblanking a console (CKI Backport Bot) [RHEL-148303]
+- printk: nbcon: Allow reacquire during panic (CKI Backport Bot) [RHEL-148303]
+- migrate: correct lock ordering for hugetlb file folios (Luiz Capitulino) [RHEL-147270] {CVE-2026-23097}
+- io_uring/sqpoll: don't put task_struct on tctx setup failure (Jeff Moyer) [RHEL-137992]
+- io_uring: consistently use rcu semantics with sqpoll thread (Jeff Moyer) [RHEL-137992]
+- io_uring: fix use-after-free of sq->thread in __io_uring_show_fdinfo() (Jeff Moyer) [RHEL-137992] {CVE-2025-38106}
+- io_uring/sqpoll: fix sqpoll error handling races (Jeff Moyer) [RHEL-137992]
+- gpio: cdev: Fix resource leaks on errors in lineinfo_changed_notify() (CKI Backport Bot) [RHEL-145597] {CVE-2025-40249}
+- gpio: cdev: make sure the cdev fd is still active before emitting events (CKI Backport Bot) [RHEL-145597] {CVE-2025-40249}
+- macvlan: fix possible UAF in macvlan_forward_source() (CKI Backport Bot) [RHEL-144128] {CVE-2026-23001}
+- dm: use READ_ONCE in dm_blk_report_zones (Benjamin Marzinski) [RHEL-137953]
+- dm: fix unlocked test for dm_suspended_md (Benjamin Marzinski) [RHEL-137953]
+- dm: fix dm_blk_report_zones (CKI Backport Bot) [RHEL-137953] {CVE-2025-38141}
+Resolves: RHEL-134853, RHEL-137953, RHEL-137992, RHEL-144128, RHEL-145597, RHEL-147270, RHEL-148303, RHEL-148458, RHEL-150116
+
 * Thu Feb 19 2026 CKI KWF Bot <cki-ci-bot+kwf-gitlab-com@redhat.com> [6.12.0-124.40.1.el10_1]
 - s390/mm: Fix __ptep_rdp() inline assembly (Mete Durlu) [RHEL-143715]
 - ice: PTP: fix missing timestamps on E825 hardware (CKI Backport Bot) [RHEL-148168]
@@ -1,3 +1,3 @@
 sbat,1,SBAT Version,sbat,1,https://github.com/rhboot/shim/blob/main/SBAT.md
-kernel-uki-virt-addons.centos,1,Red Hat,kernel-uki-virt-addons,6.12.0-124.40.1.el10.x86_64,mailto:secalert@redhat.com
-kernel-uki-virt-addons.almalinux,1,AlmaLinux,kernel-uki-virt-addons,6.12.0-124.40.1.el10.x86_64,mailto:security@almalinux.org
+kernel-uki-virt-addons.centos,1,Red Hat,kernel-uki-virt-addons,6.12.0-124.43.1.el10.x86_64,mailto:secalert@redhat.com
+kernel-uki-virt-addons.almalinux,1,AlmaLinux,kernel-uki-virt-addons,6.12.0-124.43.1.el10.x86_64,mailto:security@almalinux.org
uki.sbat
@@ -1,3 +1,3 @@
 sbat,1,SBAT Version,sbat,1,https://github.com/rhboot/shim/blob/main/SBAT.md
-kernel-uki-virt.centos,1,Red Hat,kernel-uki-virt,6.12.0-124.40.1.el10.x86_64,mailto:secalert@redhat.com
-kernel-uki-virt.almalinux,1,AlmaLinux,kernel-uki-virt,6.12.0-124.40.1.el10.x86_64,mailto:security@almalinux.org
+kernel-uki-virt.centos,1,Red Hat,kernel-uki-virt,6.12.0-124.43.1.el10.x86_64,mailto:secalert@redhat.com
+kernel-uki-virt.almalinux,1,AlmaLinux,kernel-uki-virt,6.12.0-124.43.1.el10.x86_64,mailto:security@almalinux.org