hpsa: bring back deprecated PCI ids #CFHack #CFHack2024
mptsas: bring back deprecated PCI ids #CFHack #CFHack2024
megaraid_sas: bring back deprecated PCI ids #CFHack #CFHack2024
qla2xxx: bring back deprecated PCI ids #CFHack #CFHack2024
qla4xxx: bring back deprecated PCI ids
lpfc: bring back deprecated PCI ids
be2iscsi: bring back deprecated PCI ids
kernel/rh_messages.h: enable all disabled pci devices by moving to unmaintained
Use AlmaLinux OS secure boot cert
Debrand for AlmaLinux OS

This commit is contained in commit 492fc0900c.

.gitignore (vendored, 2 lines changed)
@@ -2,7 +2,7 @@ SOURCES/centossecureboot201.cer
 SOURCES/centossecurebootca2.cer
 SOURCES/kernel-abi-stablelists-4.18.0-553.tar.bz2
 SOURCES/kernel-kabi-dw-4.18.0-553.tar.bz2
-SOURCES/linux-4.18.0-553.121.1.el8_10.tar.xz
+SOURCES/linux-4.18.0-553.123.1.el8_10.tar.xz
 SOURCES/redhatsecureboot302.cer
 SOURCES/redhatsecureboot303.cer
 SOURCES/redhatsecureboot501.cer
@@ -1,8 +1,8 @@
 2ba40bf9138b48311e5aa1b737b7f0a8ad66066f SOURCES/centossecureboot201.cer
 bfdb3d7cffc43f579655af5155d50c08671d95e5 SOURCES/centossecurebootca2.cer
-a08cceeed86752cd9fcf5ae3c393706b01aebb0d SOURCES/kernel-abi-stablelists-4.18.0-553.tar.bz2
-51af9f65ba46f3af01601440512581d8a1ae7c3f SOURCES/kernel-kabi-dw-4.18.0-553.tar.bz2
-5e9a517613ef33401919cd0d1998c524299f2725 SOURCES/linux-4.18.0-553.121.1.el8_10.tar.xz
+16beeec466f9755c7ff70f7393c88320af46e2ed SOURCES/kernel-abi-stablelists-4.18.0-553.tar.bz2
+2318474e4033305aa0461e29d5962ca0a5dc24cb SOURCES/kernel-kabi-dw-4.18.0-553.tar.bz2
+5a7ddf54de0b2233bda2448815fd1bbc324db233 SOURCES/linux-4.18.0-553.123.1.el8_10.tar.xz
 13e5cd3f856b472fde80a4deb75f4c18dfb5b255 SOURCES/redhatsecureboot302.cer
 e89890ca0ded2f9058651cc5fa838b78db2e6cc2 SOURCES/redhatsecureboot303.cer
 ba0b760e594ff668ee72ae348adf3e49b97f75fb SOURCES/redhatsecureboot501.cer
@@ -1,978 +0,0 @@
From: AlmaLinux Backport <packager@almalinux.org>
Subject: [PATCH] CVE-2026-31431 ("Copy Fail"): crypto AEAD/algif fixes for EL8

Backport addressing CVE-2026-31431 ("Copy Fail"), reported by Taeyang Lee
<0wn@theori.io>. The EL8 kernel is based on 4.18.0; the closest stable
branch carrying these fixes is linux-5.10.y. Nine of the ten stable-5.10.y
commits are included, plus one prerequisite (committed 2026-01-30); the
tenth, a cosmetic kernel-doc fix, is intentionally omitted (see below).

df22c9a65e9a crypto: authencesn - reject too-short AAD (assoclen<8) [prereq, committed 2026-01-30]
534b7f208c60 crypto: scatterwalk - Backport memcpy_sglist()
488f9c3ab90e crypto: algif_aead - use memcpy_sglist() instead of null skcipher
893d22e0135f crypto: algif_aead - Revert to operating out-of-place
08ea39a556ec crypto: algif_aead - snapshot IV for async AEAD requests
274857bb1fbe crypto: authenc - use memcpy_sglist() instead of null skcipher
8c62f6185765 crypto: authencesn - Do not place hiseq at end of dst for out-of-place decryption
88881da57e60 crypto: authencesn - Fix src offset when decrypting in-place
fa48d3ea9cdb crypto: af_alg - Fix page reassignment overflow in af_alg_pull_tsgl
74a66fdb5282 crypto: algif_aead - Fix minimum RX size check for decryption

Manual adjustments where 4.18 diverges from 5.10:
- 488f9c3ab90e: el8 uses crypto_skcipher (not crypto_sync_skcipher) and
  has a different aead_tfm/null_tfm structure layout; the refactor was
  adapted to remove struct aead_tfm and the crypto_aead_copy_sgl helper.
- 893d22e0135f: af_alg.c docstring/signature updates adapted to el8's
  older docstring style.
- 274857bb1fbe: el8 has crypto_authenc_esn_copy() and crypto_authenc_copy_assoc()
  helpers (not memcpy_sglist); the refactor was adapted to remove these helpers.
- 8c62f6185765: hunk 4 was manually re-applied to fit el8's pre-refactor
  code shape, equivalent to the upstream post-refactor result.

The omitted "crypto: doc - fix kernel-doc notation" patch (8b3843b1e3bc)
is purely cosmetic.

Signed-off-by: Andrew Lukoshko <alukoshko@almalinux.org>
---
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -524,15 +524,13 @@
/**
* aead_count_tsgl - Count number of TX SG entries
*
- * The counting starts from the beginning of the SGL to @bytes. If
- * an offset is provided, the counting of the SG entries starts at the offset.
+ * The counting starts from the beginning of the SGL to @bytes.
*
* @sk socket of connection to user space
* @bytes Count the number of SG entries holding given number of bytes.
- * @offset Start the counting of SG entries from the given offset.
* @return Number of TX SG entries found given the constraints
*/
-unsigned int af_alg_count_tsgl(struct sock *sk, size_t bytes, size_t offset)
+unsigned int af_alg_count_tsgl(struct sock *sk, size_t bytes)
{
struct alg_sock *ask = alg_sk(sk);
struct af_alg_ctx *ctx = ask->private;
@@ -547,25 +545,11 @@
struct scatterlist *sg = sgl->sg;

for (i = 0; i < sgl->cur; i++) {
- size_t bytes_count;
-
- /* Skip offset */
- if (offset >= sg[i].length) {
- offset -= sg[i].length;
- bytes -= sg[i].length;
- continue;
- }
-
- bytes_count = sg[i].length - offset;
-
- offset = 0;
sgl_count++;
-
- /* If we have seen requested number of bytes, stop */
- if (bytes_count >= bytes)
+ if (sg[i].length >= bytes)
return sgl_count;

- bytes -= bytes_count;
+ bytes -= sg[i].length;
}
}

@@ -577,19 +561,14 @@
* aead_pull_tsgl - Release the specified buffers from TX SGL
*
* If @dst is non-null, reassign the pages to dst. The caller must release
- * the pages. If @dst_offset is given only reassign the pages to @dst starting
- * at the @dst_offset (byte). The caller must ensure that @dst is large
- * enough (e.g. by using af_alg_count_tsgl with the same offset).
+ * the pages.
*
* @sk socket of connection to user space
* @used Number of bytes to pull from TX SGL
* @dst If non-NULL, buffer is reassigned to dst SGL instead of releasing. The
* caller must release the buffers in dst.
- * @dst_offset Reassign the TX SGL from given offset. All buffers before
- * reaching the offset is released.
*/
-void af_alg_pull_tsgl(struct sock *sk, size_t used, struct scatterlist *dst,
- size_t dst_offset)
+void af_alg_pull_tsgl(struct sock *sk, size_t used, struct scatterlist *dst)
{
struct alg_sock *ask = alg_sk(sk);
struct af_alg_ctx *ctx = ask->private;
@@ -613,19 +592,11 @@
* Assumption: caller created af_alg_count_tsgl(len)
* SG entries in dst.
*/
- if (dst) {
- if (dst_offset >= plen) {
- /* discard page before offset */
- dst_offset -= plen;
- } else {
- /* reassign page to dst after offset */
- get_page(page);
- sg_set_page(dst + j, page,
- plen - dst_offset,
- sg[i].offset + dst_offset);
- dst_offset = 0;
- j++;
- }
+ if (dst && plen) {
+ /* reassign page to dst */
+ get_page(page);
+ sg_set_page(dst + j, page, plen, sg[i].offset);
+ j++;
}

sg[i].length -= plen;
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -30,8 +30,6 @@
#include <crypto/internal/aead.h>
#include <crypto/scatterwalk.h>
#include <crypto/if_alg.h>
-#include <crypto/skcipher.h>
-#include <crypto/null.h>
#include <linux/init.h>
#include <linux/list.h>
#include <linux/kernel.h>
@@ -40,19 +38,13 @@
#include <linux/net.h>
#include <net/sock.h>

-struct aead_tfm {
- struct crypto_aead *aead;
- struct crypto_skcipher *null_tfm;
-};
-
static inline bool aead_sufficient_data(struct sock *sk)
{
struct alg_sock *ask = alg_sk(sk);
struct sock *psk = ask->parent;
struct alg_sock *pask = alg_sk(psk);
struct af_alg_ctx *ctx = ask->private;
- struct aead_tfm *aeadc = pask->private;
- struct crypto_aead *tfm = aeadc->aead;
+ struct crypto_aead *tfm = pask->private;
unsigned int as = crypto_aead_authsize(tfm);

/*
@@ -68,27 +60,12 @@
struct alg_sock *ask = alg_sk(sk);
struct sock *psk = ask->parent;
struct alg_sock *pask = alg_sk(psk);
- struct aead_tfm *aeadc = pask->private;
- struct crypto_aead *tfm = aeadc->aead;
+ struct crypto_aead *tfm = pask->private;
unsigned int ivsize = crypto_aead_ivsize(tfm);

return af_alg_sendmsg(sock, msg, size, ivsize);
}

-static int crypto_aead_copy_sgl(struct crypto_skcipher *null_tfm,
- struct scatterlist *src,
- struct scatterlist *dst, unsigned int len)
-{
- SKCIPHER_REQUEST_ON_STACK(skreq, null_tfm);
-
- skcipher_request_set_tfm(skreq, null_tfm);
- skcipher_request_set_callback(skreq, CRYPTO_TFM_REQ_MAY_BACKLOG,
- NULL, NULL);
- skcipher_request_set_crypt(skreq, src, dst, len, NULL);
-
- return crypto_skcipher_encrypt(skreq);
-}
-
static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
size_t ignored, int flags)
{
@@ -97,13 +74,12 @@
struct sock *psk = ask->parent;
struct alg_sock *pask = alg_sk(psk);
struct af_alg_ctx *ctx = ask->private;
- struct aead_tfm *aeadc = pask->private;
- struct crypto_aead *tfm = aeadc->aead;
- struct crypto_skcipher *null_tfm = aeadc->null_tfm;
- unsigned int i, as = crypto_aead_authsize(tfm);
+ struct crypto_aead *tfm = pask->private;
+ unsigned int as = crypto_aead_authsize(tfm);
+ unsigned int ivsize = crypto_aead_ivsize(tfm);
struct af_alg_async_req *areq;
- struct af_alg_tsgl *tsgl, *tmp;
struct scatterlist *rsgl_src, *tsgl_src = NULL;
+ void *iv;
int err = 0;
size_t used = 0; /* [in] TX bufs to be en/decrypted */
size_t outlen = 0; /* [out] RX bufs produced by kernel */
@@ -155,10 +131,14 @@

/* Allocate cipher request for current operation. */
areq = af_alg_alloc_areq(sk, sizeof(struct af_alg_async_req) +
- crypto_aead_reqsize(tfm));
+ crypto_aead_reqsize(tfm) + ivsize);
if (IS_ERR(areq))
return PTR_ERR(areq);

+ iv = (u8 *)aead_request_ctx(&areq->cra_u.aead_req) +
+ crypto_aead_reqsize(tfm);
+ memcpy(iv, ctx->iv, ivsize);
+
/* convert iovecs of output buffers into RX SGL */
err = af_alg_get_rsgl(sk, msg, flags, areq, outlen, &usedpages);
if (err)
@@ -174,7 +154,7 @@
if (usedpages < outlen) {
size_t less = outlen - usedpages;

- if (used < less) {
+ if (used < less + (ctx->enc ? 0 : as)) {
err = -EINVAL;
goto free;
}
@@ -182,23 +162,24 @@
outlen -= less;
}

+ /*
+ * Create a per request TX SGL for this request which tracks the
+ * SG entries from the global TX SGL.
+ */
processed = used + ctx->aead_assoclen;
- list_for_each_entry_safe(tsgl, tmp, &ctx->tsgl_list, list) {
- for (i = 0; i < tsgl->cur; i++) {
- struct scatterlist *process_sg = tsgl->sg + i;
-
- if (!(process_sg->length) || !sg_page(process_sg))
- continue;
- tsgl_src = process_sg;
- break;
- }
- if (tsgl_src)
- break;
- }
- if (processed && !tsgl_src) {
- err = -EFAULT;
+ areq->tsgl_entries = af_alg_count_tsgl(sk, processed);
+ if (!areq->tsgl_entries)
+ areq->tsgl_entries = 1;
+ areq->tsgl = sock_kmalloc(sk, array_size(sizeof(*areq->tsgl),
+ areq->tsgl_entries),
+ GFP_KERNEL);
+ if (!areq->tsgl) {
+ err = -ENOMEM;
goto free;
}
+ sg_init_table(areq->tsgl, areq->tsgl_entries);
+ af_alg_pull_tsgl(sk, processed, areq->tsgl);
+ tsgl_src = areq->tsgl;

/*
* Copy of AAD from source to destination
@@ -207,82 +188,16 @@
* when user space uses an in-place cipher operation, the kernel
* will copy the data as it does not see whether such in-place operation
* is initiated.
- *
- * To ensure efficiency, the following implementation ensure that the
- * ciphers are invoked to perform a crypto operation in-place. This
- * is achieved by memory management specified as follows.
*/

/* Use the RX SGL as source (and destination) for crypto op. */
rsgl_src = areq->first_rsgl.sgl.sg;

- if (ctx->enc) {
- /*
- * Encryption operation - The in-place cipher operation is
- * achieved by the following operation:
- *
- * TX SGL: AAD || PT
- * | |
- * | copy |
- * v v
- * RX SGL: AAD || PT || Tag
- */
- err = crypto_aead_copy_sgl(null_tfm, tsgl_src,
- areq->first_rsgl.sgl.sg, processed);
- if (err)
- goto free;
- af_alg_pull_tsgl(sk, processed, NULL, 0);
- } else {
- /*
- * Decryption operation - To achieve an in-place cipher
- * operation, the following SGL structure is used:
- *
- * TX SGL: AAD || CT || Tag
- * | | ^
- * | copy | | Create SGL link.
- * v v |
- * RX SGL: AAD || CT ----+
- */
-
- /* Copy AAD || CT to RX SGL buffer for in-place operation. */
- err = crypto_aead_copy_sgl(null_tfm, tsgl_src,
- areq->first_rsgl.sgl.sg, outlen);
- if (err)
- goto free;
-
- /* Create TX SGL for tag and chain it to RX SGL. */
- areq->tsgl_entries = af_alg_count_tsgl(sk, processed,
- processed - as);
- if (!areq->tsgl_entries)
- areq->tsgl_entries = 1;
- areq->tsgl = sock_kmalloc(sk, array_size(sizeof(*areq->tsgl),
- areq->tsgl_entries),
- GFP_KERNEL);
- if (!areq->tsgl) {
- err = -ENOMEM;
- goto free;
- }
- sg_init_table(areq->tsgl, areq->tsgl_entries);
-
- /* Release TX SGL, except for tag data and reassign tag data. */
- af_alg_pull_tsgl(sk, processed, areq->tsgl, processed - as);
-
- /* chain the areq TX SGL holding the tag with RX SGL */
- if (usedpages) {
- /* RX SGL present */
- struct af_alg_sgl *sgl_prev = &areq->last_rsgl->sgl;
-
- sg_unmark_end(sgl_prev->sg + sgl_prev->npages - 1);
- sg_chain(sgl_prev->sg, sgl_prev->npages + 1,
- areq->tsgl);
- } else
- /* no RX SGL present (e.g. authentication only) */
- rsgl_src = areq->tsgl;
- }
+ memcpy_sglist(rsgl_src, tsgl_src, ctx->aead_assoclen);

/* Initialize the crypto operation */
- aead_request_set_crypt(&areq->cra_u.aead_req, rsgl_src,
- areq->first_rsgl.sgl.sg, used, ctx->iv);
+ aead_request_set_crypt(&areq->cra_u.aead_req, tsgl_src,
+ areq->first_rsgl.sgl.sg, used, iv);
aead_request_set_ad(&areq->cra_u.aead_req, ctx->aead_assoclen);
aead_request_set_tfm(&areq->cra_u.aead_req, tfm);

@@ -383,7 +298,7 @@
int err = 0;
struct sock *psk;
struct alg_sock *pask;
- struct aead_tfm *tfm;
+ struct crypto_aead *tfm;
struct sock *sk = sock->sk;
struct alg_sock *ask = alg_sk(sk);

@@ -397,7 +312,7 @@

err = -ENOKEY;
lock_sock_nested(psk, SINGLE_DEPTH_NESTING);
- if (crypto_aead_get_flags(tfm->aead) & CRYPTO_TFM_NEED_KEY)
+ if (crypto_aead_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
goto unlock;

if (!pask->refcnt++)
@@ -476,54 +391,22 @@

static void *aead_bind(const char *name, u32 type, u32 mask)
{
- struct aead_tfm *tfm;
- struct crypto_aead *aead;
- struct crypto_skcipher *null_tfm;
-
- tfm = kzalloc(sizeof(*tfm), GFP_KERNEL);
- if (!tfm)
- return ERR_PTR(-ENOMEM);
-
- aead = crypto_alloc_aead(name, type, mask);
- if (IS_ERR(aead)) {
- kfree(tfm);
- return ERR_CAST(aead);
- }
-
- null_tfm = crypto_get_default_null_skcipher();
- if (IS_ERR(null_tfm)) {
- crypto_free_aead(aead);
- kfree(tfm);
- return ERR_CAST(null_tfm);
- }
-
- tfm->aead = aead;
- tfm->null_tfm = null_tfm;
-
- return tfm;
+ return crypto_alloc_aead(name, type, mask);
}

static void aead_release(void *private)
{
- struct aead_tfm *tfm = private;
-
- crypto_free_aead(tfm->aead);
- crypto_put_default_null_skcipher();
- kfree(tfm);
+ crypto_free_aead(private);
}

static int aead_setauthsize(void *private, unsigned int authsize)
{
- struct aead_tfm *tfm = private;
-
- return crypto_aead_setauthsize(tfm->aead, authsize);
+ return crypto_aead_setauthsize(private, authsize);
}

static int aead_setkey(void *private, const u8 *key, unsigned int keylen)
{
- struct aead_tfm *tfm = private;
-
- return crypto_aead_setkey(tfm->aead, key, keylen);
+ return crypto_aead_setkey(private, key, keylen);
}

static void aead_sock_destruct(struct sock *sk)
@@ -532,11 +415,10 @@
struct af_alg_ctx *ctx = ask->private;
struct sock *psk = ask->parent;
struct alg_sock *pask = alg_sk(psk);
- struct aead_tfm *aeadc = pask->private;
- struct crypto_aead *tfm = aeadc->aead;
+ struct crypto_aead *tfm = pask->private;
unsigned int ivlen = crypto_aead_ivsize(tfm);

- af_alg_pull_tsgl(sk, ctx->used, NULL, 0);
+ af_alg_pull_tsgl(sk, ctx->used, NULL);
sock_kzfree_s(sk, ctx->iv, ivlen);
sock_kfree_s(sk, ctx, ctx->len);
af_alg_release_parent(sk);
@@ -546,10 +428,9 @@
{
struct af_alg_ctx *ctx;
struct alg_sock *ask = alg_sk(sk);
- struct aead_tfm *tfm = private;
- struct crypto_aead *aead = tfm->aead;
+ struct crypto_aead *tfm = private;
unsigned int len = sizeof(*ctx);
- unsigned int ivlen = crypto_aead_ivsize(aead);
+ unsigned int ivlen = crypto_aead_ivsize(tfm);

ctx = sock_kmalloc(sk, len, GFP_KERNEL);
if (!ctx)
@@ -582,9 +463,9 @@

static int aead_accept_parent(void *private, struct sock *sk)
{
- struct aead_tfm *tfm = private;
+ struct crypto_aead *tfm = private;

- if (crypto_aead_get_flags(tfm->aead) & CRYPTO_TFM_NEED_KEY)
+ if (crypto_aead_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
return -ENOKEY;

return aead_accept_parent_nokey(private, sk);
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -97,7 +97,7 @@
* Create a per request TX SGL for this request which tracks the
* SG entries from the global TX SGL.
*/
- areq->tsgl_entries = af_alg_count_tsgl(sk, len, 0);
+ areq->tsgl_entries = af_alg_count_tsgl(sk, len);
if (!areq->tsgl_entries)
areq->tsgl_entries = 1;
areq->tsgl = sock_kmalloc(sk, array_size(sizeof(*areq->tsgl),
@@ -108,7 +108,7 @@
goto free;
}
sg_init_table(areq->tsgl, areq->tsgl_entries);
- af_alg_pull_tsgl(sk, len, areq->tsgl, 0);
+ af_alg_pull_tsgl(sk, len, areq->tsgl);

/* Initialize the crypto operation */
skcipher_request_set_tfm(&areq->cra_u.skcipher_req, tfm);
@@ -328,7 +328,7 @@
struct alg_sock *pask = alg_sk(psk);
struct crypto_skcipher *tfm = pask->private;

- af_alg_pull_tsgl(sk, ctx->used, NULL, 0);
+ af_alg_pull_tsgl(sk, ctx->used, NULL);
sock_kzfree_s(sk, ctx->iv, crypto_skcipher_ivsize(tfm));
sock_kfree_s(sk, ctx, ctx->len);
af_alg_release_parent(sk);
--- a/crypto/authenc.c
+++ b/crypto/authenc.c
@@ -14,7 +14,6 @@
#include <crypto/internal/hash.h>
#include <crypto/internal/skcipher.h>
#include <crypto/authenc.h>
-#include <crypto/null.h>
#include <crypto/scatterwalk.h>
#include <linux/err.h>
#include <linux/init.h>
@@ -33,7 +32,6 @@
struct crypto_authenc_ctx {
struct crypto_ahash *auth;
struct crypto_skcipher *enc;
- struct crypto_skcipher *null;
};

struct authenc_request_ctx {
@@ -189,21 +187,6 @@
authenc_request_complete(areq, err);
}

-static int crypto_authenc_copy_assoc(struct aead_request *req)
-{
- struct crypto_aead *authenc = crypto_aead_reqtfm(req);
- struct crypto_authenc_ctx *ctx = crypto_aead_ctx(authenc);
- SKCIPHER_REQUEST_ON_STACK(skreq, ctx->null);
-
- skcipher_request_set_tfm(skreq, ctx->null);
- skcipher_request_set_callback(skreq, aead_request_flags(req),
- NULL, NULL);
- skcipher_request_set_crypt(skreq, req->src, req->dst, req->assoclen,
- NULL);
-
- return crypto_skcipher_encrypt(skreq);
-}
-
static int crypto_authenc_encrypt(struct aead_request *req)
{
struct crypto_aead *authenc = crypto_aead_reqtfm(req);
@@ -222,10 +205,7 @@
dst = src;

if (req->src != req->dst) {
- err = crypto_authenc_copy_assoc(req);
- if (err)
- return err;
-
+ memcpy_sglist(req->dst, req->src, req->assoclen);
dst = scatterwalk_ffwd(areq_ctx->dst, req->dst, req->assoclen);
}

@@ -326,7 +306,6 @@
struct crypto_authenc_ctx *ctx = crypto_aead_ctx(tfm);
struct crypto_ahash *auth;
struct crypto_skcipher *enc;
- struct crypto_skcipher *null;
int err;

auth = crypto_spawn_ahash(&ictx->auth);
@@ -338,14 +317,8 @@
if (IS_ERR(enc))
goto err_free_ahash;

- null = crypto_get_default_null_skcipher();
- err = PTR_ERR(null);
- if (IS_ERR(null))
- goto err_free_skcipher;
-
ctx->auth = auth;
ctx->enc = enc;
- ctx->null = null;

crypto_aead_set_reqsize(
tfm,
@@ -359,8 +332,6 @@

return 0;

-err_free_skcipher:
- crypto_free_skcipher(enc);
err_free_ahash:
crypto_free_ahash(auth);
return err;
@@ -372,7 +343,6 @@

crypto_free_ahash(ctx->auth);
crypto_free_skcipher(ctx->enc);
- crypto_put_default_null_skcipher();
}

static void crypto_authenc_free(struct aead_instance *inst)
--- a/crypto/authencesn.c
+++ b/crypto/authencesn.c
@@ -17,7 +17,6 @@
#include <crypto/internal/hash.h>
#include <crypto/internal/skcipher.h>
#include <crypto/authenc.h>
-#include <crypto/null.h>
#include <crypto/scatterwalk.h>
#include <linux/err.h>
#include <linux/init.h>
@@ -36,7 +35,6 @@
unsigned int reqoff;
struct crypto_ahash *auth;
struct crypto_skcipher *enc;
- struct crypto_skcipher *null;
};

struct authenc_esn_request_ctx {
@@ -179,20 +177,6 @@
authenc_esn_request_complete(areq, err);
}

-static int crypto_authenc_esn_copy(struct aead_request *req, unsigned int len)
-{
- struct crypto_aead *authenc_esn = crypto_aead_reqtfm(req);
- struct crypto_authenc_esn_ctx *ctx = crypto_aead_ctx(authenc_esn);
- SKCIPHER_REQUEST_ON_STACK(skreq, ctx->null);
-
- skcipher_request_set_tfm(skreq, ctx->null);
- skcipher_request_set_callback(skreq, aead_request_flags(req),
- NULL, NULL);
- skcipher_request_set_crypt(skreq, req->src, req->dst, len, NULL);
-
- return crypto_skcipher_encrypt(skreq);
-}
-
static int crypto_authenc_esn_encrypt(struct aead_request *req)
{
struct crypto_aead *authenc_esn = crypto_aead_reqtfm(req);
@@ -206,15 +190,15 @@
struct scatterlist *src, *dst;
int err;

+ if (assoclen < 8)
+ return -EINVAL;
+
sg_init_table(areq_ctx->src, 2);
src = scatterwalk_ffwd(areq_ctx->src, req->src, assoclen);
dst = src;

if (req->src != req->dst) {
- err = crypto_authenc_esn_copy(req, assoclen);
- if (err)
- return err;
-
+ memcpy_sglist(req->dst, req->src, assoclen);
sg_init_table(areq_ctx->dst, 2);
dst = scatterwalk_ffwd(areq_ctx->dst, req->dst, assoclen);
}
@@ -245,6 +229,7 @@
crypto_ahash_alignmask(auth) + 1);
unsigned int cryptlen = req->cryptlen - authsize;
unsigned int assoclen = req->assoclen;
+ struct scatterlist *src = req->src;
struct scatterlist *dst = req->dst;
u8 *ihash = ohash + crypto_ahash_digestsize(auth);
u32 tmp[2];
@@ -252,23 +237,29 @@
if (!authsize)
goto decrypt;

- /* Move high-order bits of sequence number back. */
- scatterwalk_map_and_copy(tmp, dst, 4, 4, 0);
- scatterwalk_map_and_copy(tmp + 1, dst, assoclen + cryptlen, 4, 0);
- scatterwalk_map_and_copy(tmp, dst, 0, 8, 1);
+ if (src == dst) {
+ /* Move high-order bits of sequence number back. */
+ scatterwalk_map_and_copy(tmp, dst, 4, 4, 0);
+ scatterwalk_map_and_copy(tmp + 1, dst, assoclen + cryptlen, 4, 0);
+ scatterwalk_map_and_copy(tmp, dst, 0, 8, 1);
+ } else
+ memcpy_sglist(dst, src, assoclen);

if (crypto_memneq(ihash, ohash, authsize))
return -EBADMSG;

decrypt:

- sg_init_table(areq_ctx->dst, 2);
dst = scatterwalk_ffwd(areq_ctx->dst, dst, assoclen);
+ if (req->src == req->dst)
+ src = dst;
+ else
+ src = scatterwalk_ffwd(areq_ctx->src, src, assoclen);

skcipher_request_set_tfm(skreq, ctx->enc);
skcipher_request_set_callback(skreq, flags,
req->base.complete, req->base.data);
- skcipher_request_set_crypt(skreq, dst, dst, cryptlen, req->iv);
+ skcipher_request_set_crypt(skreq, src, dst, cryptlen, req->iv);

return crypto_skcipher_decrypt(skreq);
}
@@ -295,31 +286,36 @@
unsigned int assoclen = req->assoclen;
unsigned int cryptlen = req->cryptlen;
u8 *ihash = ohash + crypto_ahash_digestsize(auth);
+ struct scatterlist *src = req->src;
struct scatterlist *dst = req->dst;
u32 tmp[2];
int err;

- cryptlen -= authsize;
+ if (assoclen < 8)
+ return -EINVAL;

- if (req->src != dst) {
- err = crypto_authenc_esn_copy(req, assoclen + cryptlen);
- if (err)
- return err;
- }
+ if (!authsize)
+ goto tail;

+ cryptlen -= authsize;
scatterwalk_map_and_copy(ihash, req->src, assoclen + cryptlen,
authsize, 0);

- if (!authsize)
- goto tail;
-
/* Move high-order bits of sequence number to the end. */
- scatterwalk_map_and_copy(tmp, dst, 0, 8, 0);
- scatterwalk_map_and_copy(tmp, dst, 4, 4, 1);
- scatterwalk_map_and_copy(tmp + 1, dst, assoclen + cryptlen, 4, 1);
-
- sg_init_table(areq_ctx->dst, 2);
- dst = scatterwalk_ffwd(areq_ctx->dst, dst, 4);
+ scatterwalk_map_and_copy(tmp, src, 0, 8, 0);
+ if (src == dst) {
+ scatterwalk_map_and_copy(tmp, dst, 4, 4, 1);
+ scatterwalk_map_and_copy(tmp + 1, dst, assoclen + cryptlen, 4, 1);
+ dst = scatterwalk_ffwd(areq_ctx->dst, dst, 4);
+ } else {
+ scatterwalk_map_and_copy(tmp, dst, 0, 4, 1);
+ scatterwalk_map_and_copy(tmp + 1, dst, assoclen + cryptlen - 4, 4, 1);
+
+ src = scatterwalk_ffwd(areq_ctx->src, src, 8);
+ dst = scatterwalk_ffwd(areq_ctx->dst, dst, 4);
+ memcpy_sglist(dst, src, assoclen + cryptlen - 8);
+ dst = req->dst;
+ }

ahash_request_set_tfm(ahreq, auth);
ahash_request_set_crypt(ahreq, dst, ohash, assoclen + cryptlen);
@@ -341,7 +337,6 @@
struct crypto_authenc_esn_ctx *ctx = crypto_aead_ctx(tfm);
struct crypto_ahash *auth;
struct crypto_skcipher *enc;
- struct crypto_skcipher *null;
int err;

auth = crypto_spawn_ahash(&ictx->auth);
@@ -353,14 +348,8 @@
if (IS_ERR(enc))
goto err_free_ahash;

- null = crypto_get_default_null_skcipher();
- err = PTR_ERR(null);
- if (IS_ERR(null))
- goto err_free_skcipher;
-
ctx->auth = auth;
ctx->enc = enc;
- ctx->null = null;

ctx->reqoff = ALIGN(2 * crypto_ahash_digestsize(auth),
crypto_ahash_alignmask(auth) + 1);
@@ -377,8 +366,6 @@

return 0;

-err_free_skcipher:
- crypto_free_skcipher(enc);
err_free_ahash:
crypto_free_ahash(auth);
return err;
@@ -390,7 +377,6 @@

crypto_free_ahash(ctx->auth);
crypto_free_skcipher(ctx->enc);
- crypto_put_default_null_skcipher();
}

static void crypto_authenc_esn_free(struct aead_instance *inst)
--- a/crypto/scatterwalk.c
+++ b/crypto/scatterwalk.c
@@ -74,6 +74,104 @@
}
EXPORT_SYMBOL_GPL(scatterwalk_map_and_copy);

+/**
+ * memcpy_sglist() - Copy data from one scatterlist to another
+ * @dst: The destination scatterlist. Can be NULL if @nbytes == 0.
+ * @src: The source scatterlist. Can be NULL if @nbytes == 0.
+ * @nbytes: Number of bytes to copy
+ *
+ * The scatterlists can describe exactly the same memory, in which case this
+ * function is a no-op. No other overlaps are supported.
+ *
+ * Context: Any context
+ */
+void memcpy_sglist(struct scatterlist *dst, struct scatterlist *src,
+ unsigned int nbytes)
+{
+ unsigned int src_offset, dst_offset;
+
+ if (unlikely(nbytes == 0)) /* in case src and/or dst is NULL */
+ return;
+
+ src_offset = src->offset;
+ dst_offset = dst->offset;
+ for (;;) {
+ /* Compute the length to copy this step. */
+ unsigned int len = min3(src->offset + src->length - src_offset,
+ dst->offset + dst->length - dst_offset,
+ nbytes);
+ struct page *src_page = sg_page(src);
+ struct page *dst_page = sg_page(dst);
+ const void *src_virt;
+ void *dst_virt;
+
+ if (IS_ENABLED(CONFIG_HIGHMEM)) {
+ /* HIGHMEM: we may have to actually map the pages. */
+ const unsigned int src_oip = offset_in_page(src_offset);
+ const unsigned int dst_oip = offset_in_page(dst_offset);
+ const unsigned int limit = PAGE_SIZE;
+
+ /* Further limit len to not cross a page boundary. */
+ len = min3(len, limit - src_oip, limit - dst_oip);
+
+ /* Compute the source and destination pages. */
+ src_page += src_offset / PAGE_SIZE;
+ dst_page += dst_offset / PAGE_SIZE;
+
+ if (src_page != dst_page) {
+ /* Copy between different pages. */
+ dst_virt = kmap_atomic(dst_page);
+ src_virt = kmap_atomic(src_page);
+ memcpy(dst_virt + dst_oip, src_virt + src_oip,
+ len);
+ kunmap_atomic((void *)src_virt);
+ kunmap_atomic(dst_virt);
+ flush_dcache_page(dst_page);
+ } else if (src_oip != dst_oip) {
+ /* Copy between different parts of same page. */
+ dst_virt = kmap_atomic(dst_page);
+ memcpy(dst_virt + dst_oip, dst_virt + src_oip,
+ len);
+ kunmap_atomic(dst_virt);
+ flush_dcache_page(dst_page);
+ } /* Else, it's the same memory. No action needed. */
+ } else {
+ /*
+ * !HIGHMEM: no mapping needed. Just work in the linear
+ * buffer of each sg entry. Note that we can cross page
+ * boundaries, as they are not significant in this case.
+ */
+ src_virt = page_address(src_page) + src_offset;
+ dst_virt = page_address(dst_page) + dst_offset;
+ if (src_virt != dst_virt) {
+ memcpy(dst_virt, src_virt, len);
+ if (ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE)
+ __scatterwalk_flush_dcache_pages(
+ dst_page, dst_offset, len);
+ } /* Else, it's the same memory. No action needed. */
+ }
+ nbytes -= len;
+ if (nbytes == 0) /* No more to copy? */
+ break;
+
+ /*
+ * There's more to copy. Advance the offsets by the length
+ * copied this step, and advance the sg entries as needed.
+ */
+ src_offset += len;
+ if (src_offset >= src->offset + src->length) {
+ src = sg_next(src);
+ src_offset = src->offset;
+ }
+ dst_offset += len;
+ if (dst_offset >= dst->offset + dst->length) {
||||
+ dst = sg_next(dst);
|
||||
+ dst_offset = dst->offset;
|
||||
+ }
|
||||
+ }
|
||||
+}
|
||||
+EXPORT_SYMBOL_GPL(memcpy_sglist);
|
||||
+
|
||||
struct scatterlist *scatterwalk_ffwd(struct scatterlist dst[2],
|
||||
struct scatterlist *src,
|
||||
unsigned int len)
|
||||
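As a rough illustration of the walk that memcpy_sglist() performs, here is a user-space sketch of its !HIGHMEM path. The `struct seg` type and `model_memcpy_sglist()` are illustrative stand-ins (a plain byte buffer replaces `page_address(sg_page(sg))`, and advancing a pointer replaces `sg_next()`); this is not kernel API, just a model of the min3-stepped dual-list loop.

```c
/*
 * User-space model of the memcpy_sglist() loop above (the !HIGHMEM path).
 * Each "sg entry" is a buffer plus offset/length; each step copies the
 * minimum of (bytes left in src entry, bytes left in dst entry, nbytes),
 * then advances whichever list(s) ran out.
 */
#include <assert.h>
#include <string.h>

struct seg {
	unsigned char *buf;	/* stands in for page_address(sg_page(sg)) */
	unsigned int offset;
	unsigned int length;
};

static unsigned int min3u(unsigned int a, unsigned int b, unsigned int c)
{
	unsigned int m = a < b ? a : b;

	return m < c ? m : c;
}

static void model_memcpy_sglist(struct seg *dst, struct seg *src,
				unsigned int nbytes)
{
	unsigned int src_off, dst_off;

	if (nbytes == 0)	/* in case src and/or dst is NULL */
		return;

	src_off = src->offset;
	dst_off = dst->offset;
	for (;;) {
		/* Length to copy this step. */
		unsigned int len = min3u(src->offset + src->length - src_off,
					 dst->offset + dst->length - dst_off,
					 nbytes);

		memcpy(dst->buf + dst_off, src->buf + src_off, len);

		nbytes -= len;
		if (nbytes == 0)
			break;

		/* Advance offsets; step to the next entry when exhausted. */
		src_off += len;
		if (src_off >= src->offset + src->length) {
			src++;		/* models sg_next(src) */
			src_off = src->offset;
		}
		dst_off += len;
		if (dst_off >= dst->offset + dst->length) {
			dst++;		/* models sg_next(dst) */
			dst_off = dst->offset;
		}
	}
}
```

Note how a single destination entry can absorb several source entries (or vice versa): the step length is recomputed each iteration, so differently fragmented lists still copy correctly.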
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -240,7 +240,6 @@
 	select CRYPTO_BLKCIPHER
 	select CRYPTO_MANAGER
 	select CRYPTO_HASH
-	select CRYPTO_NULL
 	help
 	  Authenc: Combined mode wrapper for IPsec.
 	  This is required for IPSec.
@@ -1863,7 +1862,6 @@
 	depends on NET
 	select CRYPTO_AEAD
 	select CRYPTO_BLKCIPHER
-	select CRYPTO_NULL
 	select CRYPTO_USER_API
 	help
 	  This option enables the user-spaces interface for AEAD
--- a/include/crypto/if_alg.h
+++ b/include/crypto/if_alg.h
@@ -231,9 +231,8 @@
 }
 
 int af_alg_alloc_tsgl(struct sock *sk);
-unsigned int af_alg_count_tsgl(struct sock *sk, size_t bytes, size_t offset);
-void af_alg_pull_tsgl(struct sock *sk, size_t used, struct scatterlist *dst,
-		      size_t dst_offset);
+unsigned int af_alg_count_tsgl(struct sock *sk, size_t bytes);
+void af_alg_pull_tsgl(struct sock *sk, size_t used, struct scatterlist *dst);
 void af_alg_free_areq_sgls(struct af_alg_async_req *areq);
 int af_alg_wait_for_wmem(struct sock *sk, unsigned int flags);
 void af_alg_wmem_wakeup(struct sock *sk);
--- a/include/crypto/scatterwalk.h
+++ b/include/crypto/scatterwalk.h
@@ -111,6 +111,35 @@
 	scatterwalk_start(walk, sg_next(walk->sg));
 }
 
+/*
+ * Flush the dcache of any pages that overlap the region
+ * [offset, offset + nbytes) relative to base_page.
+ *
+ * This should be called only when ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, to ensure
+ * that all relevant code (including the call to sg_page() in the caller, if
+ * applicable) gets fully optimized out when !ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE.
+ */
+static inline void __scatterwalk_flush_dcache_pages(struct page *base_page,
+						    unsigned int offset,
+						    unsigned int nbytes)
+{
+	unsigned int num_pages;
+	unsigned int i;
+
+	base_page += offset / PAGE_SIZE;
+	offset %= PAGE_SIZE;
+
+	/*
+	 * This is an overflow-safe version of
+	 * num_pages = DIV_ROUND_UP(offset + nbytes, PAGE_SIZE).
+	 */
+	num_pages = nbytes / PAGE_SIZE;
+	num_pages += DIV_ROUND_UP(offset + (nbytes % PAGE_SIZE), PAGE_SIZE);
+
+	for (i = 0; i < num_pages; i++)
+		flush_dcache_page(base_page + i);
+}
+
 static inline void scatterwalk_done(struct scatter_walk *walk, int out,
 				    int more)
 {
@@ -123,6 +152,9 @@
 			  size_t nbytes, int out);
 void *scatterwalk_map(struct scatter_walk *walk);
 
+void memcpy_sglist(struct scatterlist *dst, struct scatterlist *src,
+		   unsigned int nbytes);
+
 void scatterwalk_map_and_copy(void *buf, struct scatterlist *sg,
 			      unsigned int start, unsigned int nbytes, int out);
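The page-count computation in __scatterwalk_flush_dcache_pages() above is worth spelling out: the naive `DIV_ROUND_UP(offset + nbytes, PAGE_SIZE)` can wrap when `nbytes` is close to `UINT_MAX`, so the helper splits out the whole-page part first. Here is a user-space check of that identity; `num_pages_safe()` and the `MODEL_*` names are illustrative, with `PAGE_SIZE` assumed to be 4096, and `offset` assumed already reduced modulo `PAGE_SIZE` as in the caller.

```c
/*
 * User-space model of the overflow-safe page count used by
 * __scatterwalk_flush_dcache_pages().  The intermediate sum
 * offset + (nbytes % PAGE_SIZE) is at most 2 * PAGE_SIZE - 2,
 * so it cannot wrap even when nbytes is near UINT_MAX.
 */
#include <assert.h>
#include <limits.h>

#define MODEL_PAGE_SIZE		4096u
#define MODEL_DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

static unsigned int num_pages_safe(unsigned int offset, unsigned int nbytes)
{
	/* Whole pages covered by nbytes alone... */
	unsigned int num_pages = nbytes / MODEL_PAGE_SIZE;

	/* ...plus pages touched by the in-page offset and the remainder. */
	num_pages += MODEL_DIV_ROUND_UP(offset + (nbytes % MODEL_PAGE_SIZE),
					MODEL_PAGE_SIZE);
	return num_pages;
}
```

For all `offset < PAGE_SIZE` this returns the same value as the naive formula would without overflow: writing `nbytes = q * PAGE_SIZE + r`, both expressions reduce to `q + DIV_ROUND_UP(offset + r, PAGE_SIZE)`.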
@@ -38,10 +38,10 @@
 # define buildid .local
 
 %define specversion 4.18.0
-%define pkgrelease 553.121.1.el8_10
+%define pkgrelease 553.123.1.el8_10
 
 # allow pkg_release to have configurable %%{?dist} tag
-%define specrelease 553.121.1%{?dist}
+%define specrelease 553.123.1%{?dist}
 
 %define pkg_release %{specrelease}%{?buildid}
 
@@ -530,7 +530,6 @@ Patch999999: linux-kernel-test.patch
 # AlmaLinux Patch
 Patch1000: debrand-single-cpu.patch
 Patch1002: debrand-rh-i686-cpu.patch
-Patch1100: 1100-CVE-2026-31431-crypto-Copy-Fail-fixes.patch
 Patch2001: 0001-Enable-all-disabled-pci-devices-by-moving-to-unmaint.patch
 Patch2002: 0002-Bring-back-deprecated-pci-ids-to-megaraid_sas-driver.patch
 Patch2003: 0003-Bring-back-deprecated-pci-ids-to-mptsas-mptspi-drive.patch
@@ -1108,7 +1107,6 @@ ApplyOptionalPatch linux-kernel-test.patch
 # Applying AlmaLinux Patch
 ApplyPatch debrand-single-cpu.patch
 ApplyPatch debrand-rh-i686-cpu.patch
-ApplyPatch 1100-CVE-2026-31431-crypto-Copy-Fail-fixes.patch
 ApplyPatch 0001-Enable-all-disabled-pci-devices-by-moving-to-unmaint.patch
 ApplyPatch 0002-Bring-back-deprecated-pci-ids-to-megaraid_sas-driver.patch
 ApplyPatch 0003-Bring-back-deprecated-pci-ids-to-mptsas-mptspi-drive.patch
@@ -2715,17 +2713,7 @@ fi
 #
 #
 %changelog
-* Thu Apr 30 2026 Andrew Lukoshko <alukoshko@almalinux.org> - 4.18.0-553.121.1
-- crypto: authencesn - reject too-short AAD (assoclen<8)
-- crypto: scatterwalk - Backport memcpy_sglist()
-- crypto: algif_aead - use memcpy_sglist() instead of null skcipher
-- crypto: algif_aead - Revert to operating out-of-place
-- crypto: algif_aead - snapshot IV for async AEAD requests
-- crypto: authenc - use memcpy_sglist() instead of null skcipher
-- crypto: authencesn - Do not place hiseq at end of dst for out-of-place decryption
-- crypto: authencesn - Fix src offset when decrypting in-place
-- crypto: af_alg - Fix page reassignment overflow in af_alg_pull_tsgl
-- crypto: algif_aead - Fix minimum RX size check for decryption
+* Tue May 05 2026 Andrew Lukoshko <alukoshko@almalinux.org> - 4.18.0-553.123.1
 - hpsa: bring back deprecated PCI ids #CFHack #CFHack2024
 - mptsas: bring back deprecated PCI ids #CFHack #CFHack2024
 - megaraid_sas: bring back deprecated PCI ids #CFHack #CFHack2024
@@ -2736,10 +2724,29 @@ fi
 - kernel/rh_messages.h: enable all disabled pci devices by moving to
   unmaintained
 
-* Wed Apr 29 2026 Eduard Abdullin <eabdullin@almalinux.org> - 4.18.0-553.121.1
+* Tue May 05 2026 Eduard Abdullin <eabdullin@almalinux.org> - 4.18.0-553.123.1
 - Use AlmaLinux OS secure boot cert
 - Debrand for AlmaLinux OS
 
+* Mon May 04 2026 Denys Vlasenko <dvlasenk@redhat.com> [4.18.0-553.123.1.el8_10]
+- crypto: algif_aead - snapshot IV for async AEAD requests (Herbert Xu) [RHEL-172187]
+- crypto: algif_aead - Fix minimum RX size check for decryption (Herbert Xu) [RHEL-172187]
+- crypto: authencesn - reject short ahash digests during instance creation (Herbert Xu) [RHEL-172187]
+- crypto: authencesn - Fix src offset when decrypting in-place (Herbert Xu) [RHEL-172187]
+- crypto: authencesn - Do not place hiseq at end of dst for out-of-place decryption (Herbert Xu) [RHEL-172187] {CVE-2026-31431}
+- crypto: authencesn - reject too-short AAD (assoclen<8) to match ESP/ESN spec (Herbert Xu) [RHEL-172187] {CVE-2026-23060}
+- crypto: af_alg - Fix page reassignment overflow in af_alg_pull_tsgl (Herbert Xu) [RHEL-172187]
+- crypto: af_alg - limit RX SG extraction by receive buffer budget (Herbert Xu) [RHEL-172187] {CVE-2026-31677}
+- crypto: algif_aead - Revert to operating out-of-place (Herbert Xu) [RHEL-172187] {CVE-2026-31431}
+- crypto: af-alg - fix NULL pointer dereference in scatterwalk (Herbert Xu) [RHEL-172187]
+- KVM: x86/mmu: Drop/zap existing present SPTE even when creating an MMIO SPTE (Paolo Bonzini) [RHEL-153727] {CVE-2026-23401}
+
+* Fri Apr 24 2026 CKI KWF Bot <cki-ci-bot+kwf-gitlab-com@redhat.com> [4.18.0-553.122.1.el8_10]
+- nvme: avoid double free special payload (Maurizio Lombardi) [RHEL-51303] {CVE-2024-41073}
+- crypto: asymmetric_keys - prevent overflow in asymmetric_key_generate_id (CKI Backport Bot) [RHEL-166921] {CVE-2025-68724}
+- net: qlogic/qede: fix potential out-of-bounds read in qede_tpa_cont() and qede_tpa_end() (Jay Shin) [RHEL-166155] {CVE-2025-40252}
+- kernel.h: Move ARRAY_SIZE() to a separate header (Jay Shin) [RHEL-166155] {CVE-2025-40252}
+
 * Wed Apr 15 2026 CKI KWF Bot <cki-ci-bot+kwf-gitlab-com@redhat.com> [4.18.0-553.121.1.el8_10]
 - nfsd: fix heap overflow in NFSv4.0 LOCK replay cache (Scott Mayhew) [RHEL-167011] {CVE-2026-31402}
 