From 06eaaacb3182d3f8840b3cbe21a62193421e8470 Mon Sep 17 00:00:00 2001 From: eabdullin Date: Fri, 22 Mar 2024 07:44:27 +0000 Subject: [PATCH] import EuroLinux squid-4.15-7.module+el8.9.0+21530+59b09a5b.10 --- ..._client-call-chains-with-async-calls.patch | 599 ----- ...t-lowestOffset-target_offset-asserti.patch | 119 - ...-mem_hdr-freeDataUpto-assertion-1562.patch | 67 - ...ure-as-a-replacement-for-problematic.patch | 210 -- ...t-additional-functions-for-SquidMath.patch | 200 -- SOURCES/0007-Adapt-to-older-gcc-cleanup.patch | 763 ------ SOURCES/perl-requires-squid.sh | 0 SOURCES/squid-4.15-CVE-2023-46724.patch | 16 +- SOURCES/squid-4.15-CVE-2023-46728.patch | 1732 +------------ SOURCES/squid-4.15-CVE-2023-49285.patch | 14 +- SOURCES/squid-4.15-CVE-2023-49286.patch | 28 +- SOURCES/squid-4.15-CVE-2023-50269.patch | 50 + ...y.patch => squid-4.15-CVE-2023-5824.patch} | 2264 ++++++++++++----- SOURCES/squid-4.15-CVE-2024-25111.patch | 193 ++ SOURCES/squid-4.15-CVE-2024-25617.patch | 105 + SOURCES/squid-4.15.tar.xz.asc | 25 + SOURCES/squid.nm | 0 SPECS/squid.spec | 91 +- 18 files changed, 2059 insertions(+), 4417 deletions(-) delete mode 100644 SOURCES/0001-Break-long-store_client-call-chains-with-async-calls.patch delete mode 100644 SOURCES/0003-Bug-5309-frequent-lowestOffset-target_offset-asserti.patch delete mode 100644 SOURCES/0004-Remove-mem_hdr-freeDataUpto-assertion-1562.patch delete mode 100644 SOURCES/0005-Backport-Add-Assure-as-a-replacement-for-problematic.patch delete mode 100644 SOURCES/0006-Backport-additional-functions-for-SquidMath.patch delete mode 100644 SOURCES/0007-Adapt-to-older-gcc-cleanup.patch mode change 100644 => 100755 SOURCES/perl-requires-squid.sh create mode 100644 SOURCES/squid-4.15-CVE-2023-50269.patch rename SOURCES/{0002-Remove-serialized-HTTP-headers-from-storeClientCopy.patch => squid-4.15-CVE-2023-5824.patch} (64%) create mode 100644 SOURCES/squid-4.15-CVE-2024-25111.patch create mode 100644 SOURCES/squid-4.15-CVE-2024-25617.patch mode change 100644 => 100755 SOURCES/squid.nm diff --git a/SOURCES/0001-Break-long-store_client-call-chains-with-async-calls.patch b/SOURCES/0001-Break-long-store_client-call-chains-with-async-calls.patch deleted file mode 100644 index 0b63af6..0000000 --- a/SOURCES/0001-Break-long-store_client-call-chains-with-async-calls.patch +++ /dev/null @@ -1,599 +0,0 @@ -From 4896d07bf753683a3dbba4210384b0d862ff2d11 Mon Sep 17 00:00:00 2001 -From: Eduard Bagdasaryan -Date: Thu, 7 Dec 2023 16:47:08 +0000 -Subject: [PATCH 1/7] Break long store_client call chains with async calls - (#1056) - -The store_client class design created very long call chains spanning -Squid-client and Squid-server processing and multiple transactions. -These call chains also create ideal conditions for dangerous recursive -relationships between communicating classes (a.k.a. "reentrancy" among -Squid developers). For example, storeClientCopy() enters store_client -and triggers disk I/O that triggers invokeHandlers() that re-enters the -same store_client object and starts competing with the original -storeClientCopy() processing state. - -The official code prevented the worst recursion cases with three(!) -boolean flags and time-based events abused to break some of the call -chains, but that approach did not solve all of the problems while also -losing transaction context information across time-based events. - -This change effectively makes STCB storeClientCopy() callbacks -asynchronous, eliminating the need for time-based events and one of the -flags. 
It shortens many call chains and preserves transaction context. -The remaining problems can and should be eliminated by converting -store_client into AsyncJob, but those changes deserve a dedicated PR. - -store_client orchestrates cooperation of multiple asynchronous players: - -* Sink: A Store client requests a STCB callback via a - storeClientCopy()/copy() call. A set _callback.callback_handler - implies that the client is waiting for this callback. - -* Source1: A Store disk reading subsystem activated by the storeRead() - call "spontaneously" delivers response bytes via storeClientRead*() - callbacks. The disk_io_pending flag implies waiting for them. - -* Source2: Store memory subsystem activated by storeClientListAdd() - "spontaneously" delivers response bytes via invokeHandlers(). - -* Source3: Store disk subsystem activated by storeSwapInStart() - "spontaneously" notifies of EOF/error by calling noteSwapInDone(). - -* Source4: A store_client object owner may delete the object by - "spontaneously" calling storeUnregister(). The official code was - converting this event into an error-notifying callback. - -We continue to answer each storeClientCopy() request with the first -available information even though several SourceN calls are possible -while we are waiting to complete the STCB callback. The StoreIOBuffer -API and STCB recipients do not support data+error/eof combinations, and -future code will move this wait to the main event loop anyway. This -first-available approach means that the creation of the notifier call -effectively ends answer processing -- store_client just waits for that -call to fire so that it can relay the answer to still-synchronous STCB. -When STCB itself becomes asynchronous, this logic will continue to work. - -Also stopped calling STCB from storeUnregister(). Conceptually, the -storeUnregister() and storeClientCopy() callers ought to represent the -same get-content-from-Store task; there should be no need to notify that -task about what it is doing. Technically, analysis of STCB callbacks -showed that many such notifications would be dangerous (if they are or -become reachable). At the time of the storeUnregister() call, the STCB -callbacks are usually unset (e.g., when storeUnregister() is called from -the destructor, after that object has finished copying -- a very common -case) or do not do anything (useful). - -Also removed callback_data from the Callback::pending() condition. It is -conceptually wrong to require non-nil callback parameter, and it is -never cleared separately from the callback_handler data member anyway. - -Also hid copyInto into the private store_client section to make sure it -is not modified while we are waiting to complete the STCB callback. This -move required adding a couple of read-only wrapper methods like -bytesWanted() and noteSwapInDone(). - -Also simplified error/EOF/bytes handling on copy()-STCB path using -dedicated methods (e.g., store_client::callback() API is no longer -mixing EOF and error signals). 
- -Modified-by: Alex Burmashev -Signed-off-by: Alex Burmashev ---- - src/MemObject.cc | 6 +- - src/StoreClient.h | 64 ++++++++++-- - src/store_client.cc | 177 ++++++++++++++++++++++----------- - src/store_swapin.cc | 2 +- - src/tests/stub_store_client.cc | 5 +- - 5 files changed, 186 insertions(+), 68 deletions(-) - -diff --git a/src/MemObject.cc b/src/MemObject.cc -index df7791f..4ba63cc 100644 ---- a/src/MemObject.cc -+++ b/src/MemObject.cc -@@ -196,8 +196,8 @@ struct LowestMemReader : public unary_function { - LowestMemReader(int64_t seed):current(seed) {} - - void operator() (store_client const &x) { -- if (x.memReaderHasLowerOffset(current)) -- current = x.copyInto.offset; -+ if (x.getType() == STORE_MEM_CLIENT) -+ current = std::min(current, x.readOffset()); - } - - int64_t current; -@@ -492,7 +492,7 @@ MemObject::mostBytesAllowed() const - - #endif - -- j = sc->delayId.bytesWanted(0, sc->copyInto.length); -+ j = sc->bytesWanted(); - - if (j > jmax) { - jmax = j; -diff --git a/src/StoreClient.h b/src/StoreClient.h -index 65472d8..457844a 100644 ---- a/src/StoreClient.h -+++ b/src/StoreClient.h -@@ -12,6 +12,7 @@ - #include "dlink.h" - #include "StoreIOBuffer.h" - #include "StoreIOState.h" -+#include "base/AsyncCall.h" - - typedef void STCB(void *, StoreIOBuffer); /* store callback */ - -@@ -39,14 +40,32 @@ class store_client - public: - store_client(StoreEntry *); - ~store_client(); -- bool memReaderHasLowerOffset(int64_t) const; -+ -+ /// An offset into the stored response bytes, including the HTTP response -+ /// headers (if any). Note that this offset does not include Store entry -+ /// metadata, because it is not a part of the stored response. -+ /// \retval 0 means the client wants to read HTTP response headers. -+ /// \retval +N the response byte that the client wants to read next. -+ /// \retval -N should not occur. -+ // TODO: Callers do not expect negative offset. Verify that the return -+ // value cannot be negative and convert to unsigned in this case. -+ int64_t readOffset() const { return copyInto.offset; } -+ - int getType() const; -- void fail(); -- void callback(ssize_t len, bool error = false); -+ -+ /// React to the end of reading the response from disk. There will be no -+ /// more readHeader() and readBody() callbacks for the current storeRead() -+ /// swapin after this notification. -+ void noteSwapInDone(bool error); -+ - void doCopy (StoreEntry *e); - void readHeader(const char *buf, ssize_t len); - void readBody(const char *buf, ssize_t len); -+ -+ /// Request StoreIOBuffer-described response data via an asynchronous STCB -+ /// callback. At most one outstanding request is allowed per store_client. - void copy(StoreEntry *, StoreIOBuffer, STCB *, void *); -+ - void dumpStats(MemBuf * output, int clientNumber) const; - - int64_t cmp_offset; -@@ -59,19 +78,29 @@ public: - StoreIOState::Pointer swapin_sio; - - struct { -+ /// whether we are expecting a response to be swapped in from disk -+ /// (i.e. whether async storeRead() is currently in progress) -+ // TODO: a better name reflecting the 'in' scope of the flag - bool disk_io_pending; -+ -+ /// whether the store_client::doCopy()-initiated STCB sequence is -+ /// currently in progress - bool store_copying; -- bool copy_event_pending; - } flags; - - #if USE_DELAY_POOLS - DelayId delayId; -+ -+ /// The maximum number of bytes the Store client can read/copy next without -+ /// overflowing its buffer and without violating delay pool limits. 
Store -+ /// I/O is not rate-limited, but we assume that the same number of bytes may -+ /// be read from the Squid-to-server connection that may be rate-limited. -+ int bytesWanted() const; -+ - void setDelayId(DelayId delay_id); - #endif - - dlink_node node; -- /* Below here is private - do no alter outside storeClient calls */ -- StoreIOBuffer copyInto; - - private: - bool moreToSend() const; -@@ -83,9 +112,25 @@ private: - bool startSwapin(); - bool unpackHeader(char const *buf, ssize_t len); - -+ void fail(); -+ void callback(ssize_t); -+ void noteCopiedBytes(size_t); -+ void noteEof(); -+ void noteNews(); -+ void finishCallback(); -+ static void FinishCallback(store_client *); -+ - int type; - bool object_ok; - -+ /// Storage and metadata associated with the current copy() request. Ought -+ /// to be ignored when not answering a copy() request. -+ StoreIOBuffer copyInto; -+ -+ /// The number of bytes loaded from Store into copyInto while answering the -+ /// current copy() request. Ought to be ignored when not answering. -+ size_t copiedSize; -+ - /* Until we finish stuffing code into store_client */ - - public: -@@ -94,9 +139,16 @@ public: - Callback ():callback_handler(NULL), callback_data(NULL) {} - - Callback (STCB *, void *); -+ -+ /// Whether the copy() answer is needed/expected (by the client) and has -+ /// not been computed (by us). False during (asynchronous) answer -+ /// delivery to the STCB callback_handler. - bool pending() const; - STCB *callback_handler; - void *callback_data; -+ -+ /// a scheduled asynchronous finishCallback() call (or nil) -+ AsyncCall::Pointer notifier; - } _callback; - }; - -diff --git a/src/store_client.cc b/src/store_client.cc -index 1b54f04..207c96b 100644 ---- a/src/store_client.cc -+++ b/src/store_client.cc -@@ -9,6 +9,7 @@ - /* DEBUG: section 90 Storage Manager Client-Side Interface */ - - #include "squid.h" -+#include "base/AsyncCbdataCalls.h" - #include "event.h" - #include "globals.h" - #include "HttpReply.h" -@@ -39,17 +40,10 @@ - static StoreIOState::STRCB storeClientReadBody; - static StoreIOState::STRCB storeClientReadHeader; - static void storeClientCopy2(StoreEntry * e, store_client * sc); --static EVH storeClientCopyEvent; - static bool CheckQuickAbortIsReasonable(StoreEntry * entry); - - CBDATA_CLASS_INIT(store_client); - --bool --store_client::memReaderHasLowerOffset(int64_t anOffset) const --{ -- return getType() == STORE_MEM_CLIENT && copyInto.offset < anOffset; --} -- - int - store_client::getType() const - { -@@ -104,22 +98,41 @@ storeClientListAdd(StoreEntry * e, void *data) - return sc; - } - -+/// schedules asynchronous STCB call to relay disk or memory read results -+/// \param outcome an error signal (if negative), an EOF signal (if zero), or the number of bytes read -+void -+store_client::callback(const ssize_t outcome) -+{ -+ if (outcome > 0) -+ return noteCopiedBytes(outcome); -+ -+ if (outcome < 0) -+ return fail(); -+ -+ noteEof(); -+} -+/// finishCallback() wrapper; TODO: Add NullaryMemFunT for non-jobs. 
- void --store_client::callback(ssize_t sz, bool error) -+store_client::FinishCallback(store_client * const sc) - { -- size_t bSz = 0; -+ sc->finishCallback(); -+} - -- if (sz >= 0 && !error) -- bSz = sz; -+/// finishes a copy()-STCB sequence by synchronously calling STCB -+void -+store_client::finishCallback() -+{ -+ Assure(_callback.callback_handler); -+ Assure(_callback.notifier); - -- StoreIOBuffer result(bSz, 0 ,copyInto.data); -+ // callers are not ready to handle a content+error combination -+ Assure(object_ok || !copiedSize); - -- if (sz < 0 || error) -- result.flags.error = 1; -+ StoreIOBuffer result(copiedSize, copyInto.offset, copyInto.data); -+ result.flags.error = object_ok ? 0 : 1; -+ copiedSize = 0; - -- result.offset = cmp_offset; -- assert(_callback.pending()); -- cmp_offset = copyInto.offset + bSz; -+ cmp_offset = result.offset + result.length; - STCB *temphandler = _callback.callback_handler; - void *cbdata = _callback.callback_data; - _callback = Callback(NULL, NULL); -@@ -131,18 +144,24 @@ store_client::callback(ssize_t sz, bool error) - cbdataReferenceDone(cbdata); - } - --static void --storeClientCopyEvent(void *data) -+/// schedules asynchronous STCB call to relay a successful disk or memory read -+/// \param bytesCopied the number of response bytes copied into copyInto -+void -+store_client::noteCopiedBytes(const size_t bytesCopied) - { -- store_client *sc = (store_client *)data; -- debugs(90, 3, "storeClientCopyEvent: Running"); -- assert (sc->flags.copy_event_pending); -- sc->flags.copy_event_pending = false; -- -- if (!sc->_callback.pending()) -- return; -+ debugs(90, 5, bytesCopied); -+ Assure(bytesCopied > 0); -+ Assure(!copiedSize); -+ copiedSize = bytesCopied; -+ noteNews(); -+} - -- storeClientCopy2(sc->entry, sc); -+void -+store_client::noteEof() -+{ -+ debugs(90, 5, copiedSize); -+ Assure(!copiedSize); -+ noteNews(); - } - - store_client::store_client(StoreEntry *e) : -@@ -152,11 +171,11 @@ store_client::store_client(StoreEntry *e) : - #endif - entry(e), - type(e->storeClientType()), -- object_ok(true) -+ object_ok(true), -+ copiedSize(0) - { - flags.disk_io_pending = false; - flags.store_copying = false; -- flags.copy_event_pending = false; - ++ entry->refcount; - - if (getType() == STORE_DISK_CLIENT) { -@@ -272,17 +291,11 @@ static void - storeClientCopy2(StoreEntry * e, store_client * sc) - { - /* reentrancy not allowed - note this could lead to -- * dropped events -+ * dropped notifications about response data availability - */ - -- if (sc->flags.copy_event_pending) { -- return; -- } -- - if (sc->flags.store_copying) { -- sc->flags.copy_event_pending = true; -- debugs(90, 3, "storeClientCopy2: Queueing storeClientCopyEvent()"); -- eventAdd("storeClientCopyEvent", storeClientCopyEvent, sc, 0.0, 0); -+ debugs(90, 3, "prevented recursive copying for " << *e); - return; - } - -@@ -295,21 +308,16 @@ storeClientCopy2(StoreEntry * e, store_client * sc) - * if the peer aborts, we want to give the client(s) - * everything we got before the abort condition occurred. - */ -- /* Warning: doCopy may indirectly free itself in callbacks, -- * hence the lock to keep it active for the duration of -- * this function -- * XXX: Locking does not prevent calling sc destructor (it only prevents -- * freeing sc memory) so sc may become invalid from C++ p.o.v. 
-- */ -- CbcPointer tmpLock = sc; -- assert (!sc->flags.store_copying); - sc->doCopy(e); -- assert(!sc->flags.store_copying); - } - - void - store_client::doCopy(StoreEntry *anEntry) - { -+ Assure(_callback.pending()); -+ Assure(!flags.disk_io_pending); -+ Assure(!flags.store_copying); -+ - assert (anEntry == entry); - flags.store_copying = true; - MemObject *mem = entry->mem_obj; -@@ -321,7 +329,7 @@ store_client::doCopy(StoreEntry *anEntry) - if (!moreToSend()) { - /* There is no more to send! */ - debugs(33, 3, HERE << "There is no more to send!"); -- callback(0); -+ noteEof(); - flags.store_copying = false; - return; - } -@@ -382,6 +390,16 @@ store_client::startSwapin() - } - } - -+void -+store_client::noteSwapInDone(const bool error) -+{ -+ Assure(_callback.pending()); -+ if (error) -+ fail(); -+ else -+ noteEof(); -+} -+ - void - store_client::scheduleRead() - { -@@ -421,7 +439,7 @@ store_client::scheduleMemRead() - /* What the client wants is in memory */ - /* Old style */ - debugs(90, 3, "store_client::doCopy: Copying normal from memory"); -- size_t sz = entry->mem_obj->data_hdr.copy(copyInto); -+ const auto sz = entry->mem_obj->data_hdr.copy(copyInto); // may be <= 0 per copy() API - callback(sz); - flags.store_copying = false; - } -@@ -493,7 +511,19 @@ store_client::readBody(const char *, ssize_t len) - void - store_client::fail() - { -+ debugs(90, 3, (object_ok ? "once" : "again")); -+ if (!object_ok) -+ return; // we failed earlier; nothing to do now -+ - object_ok = false; -+ -+ noteNews(); -+} -+ -+/// if necessary and possible, informs the Store reader about copy() result -+void -+store_client::noteNews() -+{ - /* synchronous open failures callback from the store, - * before startSwapin detects the failure. - * TODO: fix this inconsistent behaviour - probably by -@@ -501,8 +531,20 @@ store_client::fail() - * not synchronous - */ - -- if (_callback.pending()) -- callback(0, true); -+ if (!_callback.callback_handler) { -+ debugs(90, 5, "client lost interest"); -+ return; -+ } -+ -+ if (_callback.notifier) { -+ debugs(90, 5, "earlier news is being delivered by " << _callback.notifier); -+ return; -+ } -+ -+ _callback.notifier = asyncCall(90, 4, "store_client::FinishCallback", cbdataDialer(store_client::FinishCallback, this)); -+ ScheduleCallHere(_callback.notifier); -+ -+ Assure(!_callback.pending()); - } - - static void -@@ -673,10 +715,12 @@ storeUnregister(store_client * sc, StoreEntry * e, void *data) - ++statCounter.swap.ins; - } - -- if (sc->_callback.pending()) { -- /* callback with ssize = -1 to indicate unexpected termination */ -- debugs(90, 3, "store_client for " << *e << " has a callback"); -- sc->fail(); -+ if (sc->_callback.callback_handler || sc->_callback.notifier) { -+ debugs(90, 3, "forgetting store_client callback for " << *e); -+ // Do not notify: Callers want to stop copying and forget about this -+ // pending copy request. Some would mishandle a notification from here. -+ if (sc->_callback.notifier) -+ sc->_callback.notifier->cancel("storeUnregister"); - } - - #if STORE_CLIENT_LIST_DEBUG -@@ -684,6 +728,8 @@ storeUnregister(store_client * sc, StoreEntry * e, void *data) - - #endif - -+ // XXX: We might be inside sc store_client method somewhere up the call -+ // stack. TODO: Convert store_client to AsyncJob to make destruction async. 
- delete sc; - - assert(e->locked()); -@@ -740,6 +786,16 @@ StoreEntry::invokeHandlers() - - if (sc->flags.disk_io_pending) - continue; -+ if (sc->flags.store_copying) -+ continue; -+ -+ // XXX: If invokeHandlers() is (indirectly) called from a store_client -+ // method, then the above three conditions may not be sufficient to -+ // prevent us from reentering the same store_client object! This -+ // probably does not happen in the current code, but no observed -+ // invariant prevents this from (accidentally) happening in the future. -+ -+ // TODO: Convert store_client into AsyncJob; make this call asynchronous - - storeClientCopy2(this, sc); - } -@@ -864,8 +920,8 @@ store_client::dumpStats(MemBuf * output, int clientNumber) const - if (flags.store_copying) - output->append(" store_copying", 14); - -- if (flags.copy_event_pending) -- output->append(" copy_event_pending", 19); -+ if (_callback.notifier) -+ output->append(" notifying", 10); - - output->append("\n",1); - } -@@ -873,12 +929,19 @@ store_client::dumpStats(MemBuf * output, int clientNumber) const - bool - store_client::Callback::pending() const - { -- return callback_handler && callback_data; -+ return callback_handler && !notifier; - } - - store_client::Callback::Callback(STCB *function, void *data) : callback_handler(function), callback_data (data) {} - - #if USE_DELAY_POOLS -+int -+store_client::bytesWanted() const -+{ -+ // TODO: To avoid using stale copyInto, return zero if !_callback.pending()? -+ return delayId.bytesWanted(0, copyInto.length); -+} -+ - void - store_client::setDelayId(DelayId delay_id) - { -diff --git a/src/store_swapin.cc b/src/store_swapin.cc -index a05d7e3..cd32e94 100644 ---- a/src/store_swapin.cc -+++ b/src/store_swapin.cc -@@ -56,7 +56,7 @@ storeSwapInFileClosed(void *data, int errflag, StoreIOState::Pointer) - - if (sc->_callback.pending()) { - assert (errflag <= 0); -- sc->callback(0, errflag ? true : false); -+ sc->noteSwapInDone(errflag); - } - - ++statCounter.swap.ins; -diff --git a/src/tests/stub_store_client.cc b/src/tests/stub_store_client.cc -index 2a13874..4a73863 100644 ---- a/src/tests/stub_store_client.cc -+++ b/src/tests/stub_store_client.cc -@@ -34,7 +34,10 @@ void storeLogOpen(void) STUB - void storeDigestInit(void) STUB - void storeRebuildStart(void) STUB - void storeReplSetup(void) STUB --bool store_client::memReaderHasLowerOffset(int64_t anOffset) const STUB_RETVAL(false) -+void store_client::noteSwapInDone(bool) STUB -+#if USE_DELAY_POOLS -+int store_client::bytesWanted() const STUB_RETVAL(0) -+#endif - void store_client::dumpStats(MemBuf * output, int clientNumber) const STUB - int store_client::getType() const STUB_RETVAL(0) - --- -2.39.3 - diff --git a/SOURCES/0003-Bug-5309-frequent-lowestOffset-target_offset-asserti.patch b/SOURCES/0003-Bug-5309-frequent-lowestOffset-target_offset-asserti.patch deleted file mode 100644 index 8ba5bd3..0000000 --- a/SOURCES/0003-Bug-5309-frequent-lowestOffset-target_offset-asserti.patch +++ /dev/null @@ -1,119 +0,0 @@ -From af18cb04f07555f49daef982c8c21459bfbe388c Mon Sep 17 00:00:00 2001 -From: Alex Rousskov -Date: Thu, 23 Nov 2023 18:27:24 +0000 -Subject: [PATCH 3/7] Bug 5309: frequent "lowestOffset () <= target_offset" - assertion (#1561) - - Recent commit 122a6e3 left store_client::readOffset() unchanged but - should have adjusted it to match changed copyInto.offset semantics: - Starting with that commit, storeClientCopy() callers supply HTTP - response _body_ offset rather than HTTP response offset. -.... 
- This bug decreased readOffset() values (by the size of stored HTTP - response headers), effectively telling Store that we are not yet done - with some of the MemObject/mem_hdr bytes. This bug could cause slightly - higher transaction memory usage because the same response bytes are - trimmed later. This bug should not have caused any assertions. -.... - However, the old mem_hdr::freeDataUpto() code that uses readOffset() is - also broken -- the assertion in that method only "works" when - readOffset() returns values matching a memory node boundary. The smaller - values returned by buggy readOffset() triggered buggy assertions. -.... - This minimal fix removes the recent store_client::readOffset() bug - described above. We will address old mem_hdr problems separately. - -Modified-by: Alex Burmashev -Signed-off-by: Alex Burmashev ---- - src/MemObject.cc | 2 +- - src/StoreClient.h | 19 ++++++++++--------- - src/store_client.cc | 13 +++++++++++++ - 3 files changed, 24 insertions(+), 10 deletions(-) - -diff --git a/src/MemObject.cc b/src/MemObject.cc -index d7aaf5e..650d3fd 100644 ---- a/src/MemObject.cc -+++ b/src/MemObject.cc -@@ -197,7 +197,7 @@ struct LowestMemReader : public unary_function { - - void operator() (store_client const &x) { - if (x.getType() == STORE_MEM_CLIENT) -- current = std::min(current, x.readOffset()); -+ current = std::min(current, x.discardableHttpEnd()); - } - - int64_t current; -diff --git a/src/StoreClient.h b/src/StoreClient.h -index 1d90e5a..0524776 100644 ---- a/src/StoreClient.h -+++ b/src/StoreClient.h -@@ -54,15 +54,8 @@ public: - store_client(StoreEntry *); - ~store_client(); - -- /// An offset into the stored response bytes, including the HTTP response -- /// headers (if any). Note that this offset does not include Store entry -- /// metadata, because it is not a part of the stored response. -- /// \retval 0 means the client wants to read HTTP response headers. -- /// \retval +N the response byte that the client wants to read next. -- /// \retval -N should not occur. -- // TODO: Callers do not expect negative offset. Verify that the return -- // value cannot be negative and convert to unsigned in this case. -- int64_t readOffset() const { return copyInto.offset; } -+ /// the client will not use HTTP response bytes with lower offsets (if any) -+ auto discardableHttpEnd() const { return discardableHttpEnd_; } - - int getType() const; - -@@ -156,8 +149,16 @@ private: - - /// Storage and metadata associated with the current copy() request. Ought - /// to be ignored when not answering a copy() request. -+ /// * copyInto.offset is the requested HTTP response body offset; -+ /// * copyInto.data is the client-owned, client-provided result buffer; -+ /// * copyInto.length is the size of the .data result buffer; -+ /// * copyInto.flags are unused by this class. - StoreIOBuffer copyInto; - -+ // TODO: Convert to uint64_t after fixing mem_hdr::endOffset() and friends. -+ /// \copydoc discardableHttpEnd() -+ int64_t discardableHttpEnd_ = 0; -+ - /// the total number of finishCallback() calls - uint64_t answers; - -diff --git a/src/store_client.cc b/src/store_client.cc -index 1731c4c..383aac8 100644 ---- a/src/store_client.cc -+++ b/src/store_client.cc -@@ -122,6 +122,16 @@ store_client::finishCallback() - result = parsingBuffer->packBack(); - result.flags.error = object_ok ? 0 : 1; - -+ // TODO: Move object_ok handling above into this `if` statement. 
-+ if (object_ok) { -+ // works for zero hdr_sz cases as well; see also: nextHttpReadOffset() -+ discardableHttpEnd_ = NaturalSum(entry->mem().baseReply().hdr_sz, result.offset, result.length).value(); -+ } else { -+ // object_ok is sticky, so we will not be able to use any response bytes -+ discardableHttpEnd_ = entry->mem().endOffset(); -+ } -+ debugs(90, 7, "with " << result << "; discardableHttpEnd_=" << discardableHttpEnd_); -+ - // no HTTP headers and no body bytes (but not because there was no space) - atEof_ = !sendingHttpHeaders() && !result.length && copyInto.length; - -@@ -220,6 +230,9 @@ store_client::copy(StoreEntry * anEntry, - - parsingBuffer.emplace(copyInto); - -+ discardableHttpEnd_ = nextHttpReadOffset(); -+ debugs(90, 7, "discardableHttpEnd_=" << discardableHttpEnd_); -+ - static bool copying (false); - assert (!copying); - copying = true; --- -2.39.3 - diff --git a/SOURCES/0004-Remove-mem_hdr-freeDataUpto-assertion-1562.patch b/SOURCES/0004-Remove-mem_hdr-freeDataUpto-assertion-1562.patch deleted file mode 100644 index 202823b..0000000 --- a/SOURCES/0004-Remove-mem_hdr-freeDataUpto-assertion-1562.patch +++ /dev/null @@ -1,67 +0,0 @@ -From 422272d78399d5fb2fc340281611961fc7c528e7 Mon Sep 17 00:00:00 2001 -From: Alex Rousskov -Date: Thu, 23 Nov 2023 18:27:45 +0000 -Subject: [PATCH 4/7] Remove mem_hdr::freeDataUpto() assertion (#1562) - - stmem.cc:98: "lowestOffset () <= target_offset" -.... - The assertion is conceptually wrong: The given target_offset parameter - may have any value; that value does not have to correlate with mem_hdr - state in any way. It is freeDataUpto() job to preserve nodes at or above - the given offset and (arguably optionally) remove nodes below it, but - the assertion does not actually validate that freeDataUpdo() did that. -.... - The old mem_hdr::freeDataUpto() assertion incorrectly assumed that, - after zero or more unneeded memory nodes were freed, the remaining - memory area never starts after the given target_offset parameter. That - assumption fails in at least two use cases, both using target_offset - values that do not belong to any existing or future mem_hdr node: -.... - 1. target_offset is points to the left of the first node. freeDataUpto() - correctly keeps all memory nodes in such calls, but then asserts. For - example, calling freeDataUpto(0) when mem_hdr has bytes [100,199) - triggers this incorrect assertion. -.... - 2. target_offset is in the gap between two nodes. For example, calling - freeDataUpto(2000) when mem_hdr contains two nodes: [0,1000) and - [3000,3003) will trigger this assertion (as happened in Bug 5309). - Such gaps are very common for HTTP 206 responses with a Content-Range - header because such responses often specify a range that does not - start with zero and create a gap after the node(s) with HTTP headers. -.... - Bugs notwithstanding, it is unlikely that relevant calls exist today, - but they certainly could be added, especially when freeDataUpto() stops - preserving the last unused node. The current "avoid change to [some - unidentified] part of code" hoarding excuse should not last forever. -.... - Prior to commit 122a6e3, Squid did not (frequently) assert in gap cases: - Callers first give target_offset 0 (which results in freeDataUpto() - doing nothing, keeping the header node(s)) and then they give - target_offset matching the beginning of the first body node (which - results in freeDataUpto() freeing the header nodes(s) and increasing - lowerOffset() from zero to target_offset). 
A bug in commit 122a6e3 - lowered target_offset a bit, placing target_offset in the gap and - triggering frequent (and incorrect) assertions (Bug 5309). - -Modified-by: Alex Burmashev -Signed-off-by: Alex Burmashev ---- - src/stmem.cc | 2 -- - 1 file changed, 2 deletions(-) - -diff --git a/src/stmem.cc b/src/stmem.cc -index d117c15..b627005 100644 ---- a/src/stmem.cc -+++ b/src/stmem.cc -@@ -95,8 +95,6 @@ mem_hdr::freeDataUpto(int64_t target_offset) - break; - } - -- assert (lowestOffset () <= target_offset); -- - return lowestOffset (); - } - --- -2.39.3 - diff --git a/SOURCES/0005-Backport-Add-Assure-as-a-replacement-for-problematic.patch b/SOURCES/0005-Backport-Add-Assure-as-a-replacement-for-problematic.patch deleted file mode 100644 index 1953209..0000000 --- a/SOURCES/0005-Backport-Add-Assure-as-a-replacement-for-problematic.patch +++ /dev/null @@ -1,210 +0,0 @@ -From 5df95b5923de244eaf2ddccf980d5f28d7114b1f Mon Sep 17 00:00:00 2001 -From: Alex Burmashev -Date: Thu, 7 Dec 2023 18:01:47 +0000 -Subject: [PATCH 5/7] Backport Add Assure() as a replacement for problematic - Must() - -This is a partial backport of -b9a1bbfbc531359a87647271a282edff9ccdd206 -b8ae064d94784934b3402e5db015246d1b1ca658 - -Needed for CVE CVE-2023-5824 fix - -Signed-off-by: Alex Burmashev ---- - src/HttpReply.cc | 1 + - src/acl/Asn.cc | 1 + - src/base/Assure.cc | 23 ++++++++++++++++++ - src/base/Assure.h | 51 ++++++++++++++++++++++++++++++++++++++++ - src/base/Makefile.am | 2 ++ - src/base/Makefile.in | 8 +++++-- - src/client_side_reply.cc | 1 + - 7 files changed, 85 insertions(+), 2 deletions(-) - create mode 100644 src/base/Assure.cc - create mode 100644 src/base/Assure.h - -diff --git a/src/HttpReply.cc b/src/HttpReply.cc -index af2bd4d..df5bcef 100644 ---- a/src/HttpReply.cc -+++ b/src/HttpReply.cc -@@ -10,6 +10,7 @@ - - #include "squid.h" - #include "acl/AclSizeLimit.h" -+#include "base/Assure.h" - #include "acl/FilledChecklist.h" - #include "base/EnumIterator.h" - #include "globals.h" -diff --git a/src/acl/Asn.cc b/src/acl/Asn.cc -index ad450c0..bcedc82 100644 ---- a/src/acl/Asn.cc -+++ b/src/acl/Asn.cc -@@ -17,6 +17,7 @@ - #include "acl/SourceAsn.h" - #include "acl/Strategised.h" - #include "base/CharacterSet.h" -+#include "base/Assure.h" - #include "FwdState.h" - #include "HttpReply.h" - #include "HttpRequest.h" -diff --git a/src/base/Assure.cc b/src/base/Assure.cc -new file mode 100644 -index 0000000..b09b848 ---- /dev/null -+++ b/src/base/Assure.cc -@@ -0,0 +1,23 @@ -+/* -+ * Copyright (C) 1996-2023 The Squid Software Foundation and contributors -+ * -+ * Squid software is distributed under GPLv2+ license and includes -+ * contributions from numerous individuals and organizations. -+ * Please see the COPYING and CONTRIBUTORS files for details. -+ */ -+ -+#include "squid.h" -+#include "base/Assure.h" -+#include "base/TextException.h" -+#include "sbuf/Stream.h" -+ -+[[ noreturn ]] void -+ReportAndThrow_(const int debugLevel, const char *description, const SourceLocation &location) -+{ -+ const TextException ex(description, location); -+ const auto label = debugLevel <= DBG_IMPORTANT ? "ERROR: Squid BUG: " : ""; -+ // TODO: Consider also printing the number of BUGs reported so far. It would -+ // require GC, but we could even print the number of same-location reports. 
-+ debugs(0, debugLevel, label << ex); -+ throw ex; -+} -diff --git a/src/base/Assure.h b/src/base/Assure.h -new file mode 100644 -index 0000000..650c204 ---- /dev/null -+++ b/src/base/Assure.h -@@ -0,0 +1,51 @@ -+/* -+ * Copyright (C) 1996-2023 The Squid Software Foundation and contributors -+ * -+ * Squid software is distributed under GPLv2+ license and includes -+ * contributions from numerous individuals and organizations. -+ * Please see the COPYING and CONTRIBUTORS files for details. -+ */ -+ -+#ifndef SQUID_SRC_BASE_ASSURE_H -+#define SQUID_SRC_BASE_ASSURE_H -+ -+#include "base/Here.h" -+ -+/// Reports the description (at the given debugging level) and throws -+/// the corresponding exception. Reduces compiled code size of Assure() and -+/// Must() callers. Do not call directly; use Assure() instead. -+/// \param description explains the condition (i.e. what MUST happen) -+[[ noreturn ]] void ReportAndThrow_(int debugLevel, const char *description, const SourceLocation &); -+ -+/// Calls ReportAndThrow() if needed. Reduces caller code duplication. -+/// Do not call directly; use Assure() instead. -+/// \param description c-string explaining the condition (i.e. what MUST happen) -+#define Assure_(debugLevel, condition, description, location) \ -+ while (!(condition)) \ -+ ReportAndThrow_((debugLevel), (description), (location)) -+ -+#if !defined(NDEBUG) -+ -+/// Like assert() but throws an exception instead of aborting the process. Use -+/// this macro to detect code logic mistakes (i.e. bugs) where aborting the -+/// current AsyncJob or a similar task is unlikely to jeopardize Squid service -+/// integrity. For example, this macro is _not_ appropriate for detecting bugs -+/// that indicate a dangerous global state corruption which may go unnoticed by -+/// other jobs after the current job or task is aborted. -+#define Assure(condition) \ -+ Assure2((condition), #condition) -+ -+/// Like Assure() but allows the caller to customize the exception message. -+/// \param description string literal describing the condition (i.e. 
what MUST happen) -+#define Assure2(condition, description) \ -+ Assure_(0, (condition), ("assurance failed: " description), Here()) -+ -+#else -+ -+/* do-nothing implementations for NDEBUG builds */ -+#define Assure(condition) ((void)0) -+#define Assure2(condition, description) ((void)0) -+ -+#endif /* NDEBUG */ -+ -+#endif /* SQUID_SRC_BASE_ASSURE_H */ -diff --git a/src/base/Makefile.am b/src/base/Makefile.am -index 9b0f4cf..c22dd0e 100644 ---- a/src/base/Makefile.am -+++ b/src/base/Makefile.am -@@ -19,6 +19,8 @@ libbase_la_SOURCES = \ - AsyncJob.cc \ - AsyncJob.h \ - AsyncJobCalls.h \ -+ Assure.cc \ -+ Assure.h \ - ByteCounter.h \ - CbcPointer.h \ - CbDataList.h \ -diff --git a/src/base/Makefile.in b/src/base/Makefile.in -index 90a4f5b..f43e098 100644 ---- a/src/base/Makefile.in -+++ b/src/base/Makefile.in -@@ -163,7 +163,7 @@ CONFIG_CLEAN_FILES = - CONFIG_CLEAN_VPATH_FILES = - LTLIBRARIES = $(noinst_LTLIBRARIES) - libbase_la_LIBADD = --am_libbase_la_OBJECTS = AsyncCall.lo AsyncCallQueue.lo AsyncJob.lo \ -+am_libbase_la_OBJECTS = AsyncCall.lo AsyncCallQueue.lo AsyncJob.lo Assure.lo \ - CharacterSet.lo File.lo Here.lo RegexPattern.lo \ - RunnersRegistry.lo TextException.lo - libbase_la_OBJECTS = $(am_libbase_la_OBJECTS) -@@ -187,7 +187,7 @@ DEFAULT_INCLUDES = - depcomp = $(SHELL) $(top_srcdir)/cfgaux/depcomp - am__maybe_remake_depfiles = depfiles - am__depfiles_remade = ./$(DEPDIR)/AsyncCall.Plo \ -- ./$(DEPDIR)/AsyncCallQueue.Plo ./$(DEPDIR)/AsyncJob.Plo \ -+ ./$(DEPDIR)/AsyncCallQueue.Plo ./$(DEPDIR)/AsyncJob.Plo ./$(DEPDIR)/Assure.Plo \ - ./$(DEPDIR)/CharacterSet.Plo ./$(DEPDIR)/File.Plo \ - ./$(DEPDIR)/Here.Plo ./$(DEPDIR)/RegexPattern.Plo \ - ./$(DEPDIR)/RunnersRegistry.Plo ./$(DEPDIR)/TextException.Plo -@@ -737,6 +737,8 @@ libbase_la_SOURCES = \ - AsyncJob.cc \ - AsyncJob.h \ - AsyncJobCalls.h \ -+ Assure.cc \ -+ Assure.h \ - ByteCounter.h \ - CbcPointer.h \ - CbDataList.h \ -@@ -830,6 +832,7 @@ distclean-compile: - @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/AsyncCall.Plo@am__quote@ # am--include-marker - @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/AsyncCallQueue.Plo@am__quote@ # am--include-marker - @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/AsyncJob.Plo@am__quote@ # am--include-marker -+@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/Assure.Plo@am__quote@ # am--include-marker - @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/CharacterSet.Plo@am__quote@ # am--include-marker - @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/File.Plo@am__quote@ # am--include-marker - @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/Here.Plo@am__quote@ # am--include-marker -@@ -1224,6 +1227,7 @@ maintainer-clean: maintainer-clean-am - -rm -f ./$(DEPDIR)/AsyncCall.Plo - -rm -f ./$(DEPDIR)/AsyncCallQueue.Plo - -rm -f ./$(DEPDIR)/AsyncJob.Plo -+ -rm -f ./$(DEPDIR)/Assure.Plo - -rm -f ./$(DEPDIR)/CharacterSet.Plo - -rm -f ./$(DEPDIR)/File.Plo - -rm -f ./$(DEPDIR)/Here.Plo -diff --git a/src/client_side_reply.cc b/src/client_side_reply.cc -index 861f4b4..470f4bc 100644 ---- a/src/client_side_reply.cc -+++ b/src/client_side_reply.cc -@@ -12,6 +12,7 @@ - #include "acl/FilledChecklist.h" - #include "acl/Gadgets.h" - #include "anyp/PortCfg.h" -+#include "base/Assure.h" - #include "client_side_reply.h" - #include "errorpage.h" - #include "ETag.h" --- -2.39.3 - diff --git a/SOURCES/0006-Backport-additional-functions-for-SquidMath.patch b/SOURCES/0006-Backport-additional-functions-for-SquidMath.patch deleted file mode 100644 index 54143bc..0000000 --- 
a/SOURCES/0006-Backport-additional-functions-for-SquidMath.patch +++ /dev/null @@ -1,200 +0,0 @@ -From c24b9507e35fa43ddb40211a50fae9d58a0381bc Mon Sep 17 00:00:00 2001 -From: Alex Burmashev -Date: Mon, 27 Nov 2023 11:47:40 +0000 -Subject: [PATCH 6/7] Backport additional functions for SquidMath - -This includes some cherry-picks from -b308d7e2ad02ae6622f380d94d2303446f5831a9 and later commits. - -This is needed for CVE-2023-5824 fix - -Signed-off-by: Alex Burmashev ---- - src/SquidMath.h | 164 ++++++++++++++++++++++++++++++++++++++++++++++++ - 1 file changed, 164 insertions(+) - -diff --git a/src/SquidMath.h b/src/SquidMath.h -index c70acd1..e5b6e58 100644 ---- a/src/SquidMath.h -+++ b/src/SquidMath.h -@@ -8,7 +8,11 @@ - - #ifndef _SQUID_SRC_SQUIDMATH_H - #define _SQUID_SRC_SQUIDMATH_H -+#include "base/forward.h" -+#include "base/TypeTraits.h" - -+#include -+#include - /* Math functions we define locally for Squid */ - namespace Math - { -@@ -21,5 +25,165 @@ double doubleAverage(const double, const double, int, const int); - - } // namespace Math - -+// If Sum() performance becomes important, consider using GCC and clang -+// built-ins like __builtin_add_overflow() instead of manual overflow checks. -+ -+/// detects a pair of unsigned types -+/// reduces code duplication in declarations further below -+template -+using AllUnsigned = typename std::conditional< -+ std::is_unsigned::value && std::is_unsigned::value, -+ std::true_type, -+ std::false_type -+ >::type; -+ -+// TODO: Replace with std::cmp_less() after migrating to C++20. -+/// whether integer a is less than integer b, with correct overflow handling -+template -+constexpr bool -+Less(const A a, const B b) { -+ // The casts below make standard C++ integer conversions explicit. They -+ // quell compiler warnings about signed/unsigned comparison. The first two -+ // lines exclude different-sign a and b, making the casts/comparison safe. -+ using AB = typename std::common_type::type; -+ return -+ (a >= 0 && b < 0) ? false : -+ (a < 0 && b >= 0) ? true : -+ /* (a >= 0) == (b >= 0) */ static_cast(a) < static_cast(b); -+} -+ -+/// ensure that T is supported by NaturalSum() and friends -+template -+constexpr void -+AssertNaturalType() -+{ -+ static_assert(std::numeric_limits::is_bounded, "std::numeric_limits::max() is meaningful"); -+ static_assert(std::numeric_limits::is_exact, "no silent loss of precision"); -+ static_assert(!std::is_enum::value, "no silent creation of non-enumerated values"); -+} -+ -+// TODO: Investigate whether this optimization can be expanded to [signed] types -+// A and B when std::numeric_limits::is_modulo is true. -+/// This IncreaseSumInternal() overload is optimized for speed. 
-+/// \returns a non-overflowing sum of the two unsigned arguments (or nothing) -+/// \prec both argument types are unsigned -+template ::value, int> = 0> -+std::optional -+IncreaseSumInternal(const A a, const B b) { -+ // paranoid: AllUnsigned precondition established that already -+ static_assert(std::is_unsigned::value, "AllUnsigned dispatch worked for A"); -+ static_assert(std::is_unsigned::value, "AllUnsigned dispatch worked for B"); -+ -+ AssertNaturalType(); -+ AssertNaturalType(); -+ AssertNaturalType(); -+ -+ // we should only be called by IncreaseSum(); it forces integer promotion -+ static_assert(std::is_same::value, "a will not be promoted"); -+ static_assert(std::is_same::value, "b will not be promoted"); -+ // and without integer promotions, a sum of unsigned integers is unsigned -+ static_assert(std::is_unsigned::value, "a+b is unsigned"); -+ -+ // with integer promotions ruled out, a or b can only undergo integer -+ // conversion to the higher rank type (A or B, we do not know which) -+ using AB = typename std::common_type::type; -+ static_assert(std::is_same::value || std::is_same::value, "no unexpected conversions"); -+ static_assert(std::is_same::value, "lossless assignment"); -+ const AB sum = a + b; -+ -+ static_assert(std::numeric_limits::is_modulo, "we can detect overflows"); -+ // 1. modulo math: overflowed sum is smaller than any of its operands -+ // 2. the sum may overflow S (i.e. the return base type) -+ // We do not need Less() here because we compare promoted unsigned types. -+ return (sum >= a && sum <= std::numeric_limits::max()) ? -+ std::optional(sum) : std::optional(); -+} -+ -+/// This IncreaseSumInternal() overload supports a larger variety of types. -+/// \returns a non-overflowing sum of the two arguments (or nothing) -+/// \returns nothing if at least one of the arguments is negative -+/// \prec at least one of the argument types is signed -+template ::value, int> = 0> -+std::optional constexpr -+IncreaseSumInternal(const A a, const B b) { -+ AssertNaturalType(); -+ AssertNaturalType(); -+ AssertNaturalType(); -+ -+ // we should only be called by IncreaseSum() that does integer promotion -+ static_assert(std::is_same::value, "a will not be promoted"); -+ static_assert(std::is_same::value, "b will not be promoted"); -+ -+ return -+ // We could support a non-under/overflowing sum of negative numbers, but -+ // our callers use negative values specially (e.g., for do-not-use or -+ // do-not-limit settings) and are not supposed to do math with them. -+ (a < 0 || b < 0) ? std::optional() : -+ // To avoid undefined behavior of signed overflow, we must not compute -+ // the raw a+b sum if it may overflow. When A is not B, a or b undergoes -+ // (safe for non-negatives) integer conversion in these expressions, so -+ // we do not know the resulting a+b type AB and its maximum. We must -+ // also detect subsequent casting-to-S overflows. -+ // Overflow condition: (a + b > maxAB) or (a + b > maxS). -+ // A is an integer promotion of S, so maxS <= maxA <= maxAB. -+ // Since maxS <= maxAB, it is sufficient to just check: a + b > maxS, -+ // which is the same as the overflow-safe condition here: maxS - a < b. -+ // Finally, (maxS - a) cannot overflow because a is not negative and -+ // cannot underflow because a is a promotion of s: 0 <= a <= maxS. -+ Less(std::numeric_limits::max() - a, b) ? 
std::optional() : -+ std::optional(a + b); -+} -+ -+/// argument pack expansion termination for IncreaseSum() -+template -+std::optional -+IncreaseSum(const S s, const T t) -+{ -+ // Force (always safe) integer promotions now, to give std::enable_if_t<> -+ // promoted types instead of entering IncreaseSumInternal(s,t) -+ // but getting a _signed_ promoted value of s or t in s + t. -+ return IncreaseSumInternal(+s, +t); -+} -+ -+/// \returns a non-overflowing sum of the arguments (or nothing) -+template -+std::optional -+IncreaseSum(const S sum, const T t, const Args... args) { -+ if (const auto head = IncreaseSum(sum, t)) { -+ return IncreaseSum(head.value(), args...); -+ } else { -+ // std::optional() triggers bogus -Wmaybe-uninitialized warnings in GCC v10.3 -+ return std::nullopt; -+ } -+} -+ -+/// \returns an exact, non-overflowing sum of the arguments (or nothing) -+template -+std::optional -+NaturalSum(const Args... args) { -+ return IncreaseSum(0, args...); -+} -+ -+/// Safely resets the given variable to NaturalSum() of the given arguments. -+/// If the sum overflows, resets to variable's maximum possible value. -+/// \returns the new variable value (like an assignment operator would) -+template -+S -+SetToNaturalSumOrMax(S &var, const Args... args) -+{ -+ var = NaturalSum(args...).value_or(std::numeric_limits::max()); -+ return var; -+} -+ -+/// converts a given non-negative integer into an integer of a given type -+/// without loss of information or undefined behavior -+template -+Result -+NaturalCast(const Source s) -+{ -+ return NaturalSum(s).value(); -+} -+ -+ - #endif /* _SQUID_SRC_SQUIDMATH_H */ - --- -2.39.3 - diff --git a/SOURCES/0007-Adapt-to-older-gcc-cleanup.patch b/SOURCES/0007-Adapt-to-older-gcc-cleanup.patch deleted file mode 100644 index 126a8fd..0000000 --- a/SOURCES/0007-Adapt-to-older-gcc-cleanup.patch +++ /dev/null @@ -1,763 +0,0 @@ -From 37de4ce82f7f8906606d0625774d856ffd3a9453 Mon Sep 17 00:00:00 2001 -From: Alex Burmashev -Date: Thu, 7 Dec 2023 20:51:39 +0000 -Subject: [PATCH] Adapt to older gcc, cleanup - -Fix code that is not applicable to older codebase of squidv4. -On top do some work to adapt code to older gcc. 
-most of that is std::optional to std::pair conversion - -Signed-off-by: Alex Burmashev ---- - src/HttpReply.cc | 4 +- - src/MemObject.h | 3 ++ - src/MemStore.cc | 6 +-- - src/SquidMath.h | 27 ++++++------ - src/Store.h | 3 ++ - src/StoreClient.h | 2 +- - src/acl/Asn.cc | 14 +------ - src/base/Assure.cc | 8 ++++ - src/client_side_reply.cc | 64 ++++++++++++----------------- - src/peer_digest.cc | 1 + - src/store/ParsingBuffer.cc | 47 ++++++++++----------- - src/store/ParsingBuffer.h | 2 +- - src/store_client.cc | 84 +++++++++++++++++--------------------- - src/urn.cc | 2 +- - 14 files changed, 123 insertions(+), 144 deletions(-) - -diff --git a/src/HttpReply.cc b/src/HttpReply.cc -index df5bcef..21c62c2 100644 ---- a/src/HttpReply.cc -+++ b/src/HttpReply.cc -@@ -534,13 +534,13 @@ HttpReply::parseTerminatedPrefix(const char * const terminatedBuf, const size_t - const bool eof = false; // TODO: Remove after removing atEnd from HttpHeader::parse() - if (parse(terminatedBuf, bufSize, eof, &error)) { - debugs(58, 7, "success after accumulating " << bufSize << " bytes and parsing " << hdr_sz); -- Assure(pstate == Http::Message::psParsed); -+ Assure(pstate == psParsed); - Assure(hdr_sz > 0); - Assure(!Less(bufSize, hdr_sz)); // cannot parse more bytes than we have - return hdr_sz; // success - } - -- Assure(pstate != Http::Message::psParsed); -+ Assure(pstate != psParsed); - hdr_sz = 0; - - if (error) { -diff --git a/src/MemObject.h b/src/MemObject.h -index ba6646f..5a7590a 100644 ---- a/src/MemObject.h -+++ b/src/MemObject.h -@@ -56,6 +56,9 @@ public: - - void write(const StoreIOBuffer &buf); - void unlinkRequest(); -+ -+ HttpReply &baseReply() const { return *_reply; } -+ - HttpReply const *getReply() const; - void replaceHttpReply(HttpReply *newrep); - void stat (MemBuf * mb) const; -diff --git a/src/MemStore.cc b/src/MemStore.cc -index fe7af2f..6762c4f 100644 ---- a/src/MemStore.cc -+++ b/src/MemStore.cc -@@ -511,8 +511,8 @@ MemStore::copyFromShm(StoreEntry &e, const sfileno index, const Ipc::StoreMapAnc - " from " << extra.page << '+' << prefixSize); - - // parse headers if needed; they might span multiple slices! 
-- auto &reply = e.mem().adjustableBaseReply(); -- if (reply.pstate != Http::Message::psParsed) { -+ auto &reply = e.mem().baseReply(); -+ if (reply.pstate != psParsed) { - httpHeaderParsingBuffer.append(sliceBuf.data, sliceBuf.length); - if (reply.parseTerminatedPrefix(httpHeaderParsingBuffer.c_str(), httpHeaderParsingBuffer.length())) - httpHeaderParsingBuffer = SBuf(); // we do not need these bytes anymore -@@ -542,7 +542,7 @@ MemStore::copyFromShm(StoreEntry &e, const sfileno index, const Ipc::StoreMapAnc - debugs(20, 5, "mem-loaded all " << e.mem_obj->endOffset() << '/' << - anchor.basics.swap_file_sz << " bytes of " << e); - -- if (e.mem().adjustableBaseReply().pstate != Http::Message::psParsed) -+ if (e.mem().baseReply().pstate != psParsed) - throw TextException(ToSBuf("truncated mem-cached headers; accumulated: ", httpHeaderParsingBuffer.length()), Here()); - - // from StoreEntry::complete() -diff --git a/src/SquidMath.h b/src/SquidMath.h -index e5b6e58..538833b 100644 ---- a/src/SquidMath.h -+++ b/src/SquidMath.h -@@ -8,8 +8,6 @@ - - #ifndef _SQUID_SRC_SQUIDMATH_H - #define _SQUID_SRC_SQUIDMATH_H --#include "base/forward.h" --#include "base/TypeTraits.h" - - #include - #include -@@ -68,7 +66,7 @@ AssertNaturalType() - /// \returns a non-overflowing sum of the two unsigned arguments (or nothing) - /// \prec both argument types are unsigned - template ::value, int> = 0> --std::optional -+std::pair - IncreaseSumInternal(const A a, const B b) { - // paranoid: AllUnsigned precondition established that already - static_assert(std::is_unsigned::value, "AllUnsigned dispatch worked for A"); -@@ -96,7 +94,7 @@ IncreaseSumInternal(const A a, const B b) { - // 2. the sum may overflow S (i.e. the return base type) - // We do not need Less() here because we compare promoted unsigned types. - return (sum >= a && sum <= std::numeric_limits::max()) ? -- std::optional(sum) : std::optional(); -+ std::make_pair(sum, true) : std::make_pair(S(), false); - } - - /// This IncreaseSumInternal() overload supports a larger variety of types. -@@ -104,7 +102,7 @@ IncreaseSumInternal(const A a, const B b) { - /// \returns nothing if at least one of the arguments is negative - /// \prec at least one of the argument types is signed - template ::value, int> = 0> --std::optional constexpr -+std::pair - IncreaseSumInternal(const A a, const B b) { - AssertNaturalType(); - AssertNaturalType(); -@@ -118,7 +116,7 @@ IncreaseSumInternal(const A a, const B b) { - // We could support a non-under/overflowing sum of negative numbers, but - // our callers use negative values specially (e.g., for do-not-use or - // do-not-limit settings) and are not supposed to do math with them. -- (a < 0 || b < 0) ? std::optional() : -+ (a < 0 || b < 0) ? std::make_pair(S(), false) : - // To avoid undefined behavior of signed overflow, we must not compute - // the raw a+b sum if it may overflow. When A is not B, a or b undergoes - // (safe for non-negatives) integer conversion in these expressions, so -@@ -130,13 +128,13 @@ IncreaseSumInternal(const A a, const B b) { - // which is the same as the overflow-safe condition here: maxS - a < b. - // Finally, (maxS - a) cannot overflow because a is not negative and - // cannot underflow because a is a promotion of s: 0 <= a <= maxS. -- Less(std::numeric_limits::max() - a, b) ? std::optional() : -- std::optional(a + b); -+ Less(std::numeric_limits::max() - a, b) ? 
std::make_pair(S(), false) : -+ std::make_pair(S(a + b), true); - } - - /// argument pack expansion termination for IncreaseSum() - template --std::optional -+std::pair - IncreaseSum(const S s, const T t) - { - // Force (always safe) integer promotions now, to give std::enable_if_t<> -@@ -147,19 +145,20 @@ IncreaseSum(const S s, const T t) - - /// \returns a non-overflowing sum of the arguments (or nothing) - template --std::optional -+std::pair - IncreaseSum(const S sum, const T t, const Args... args) { -- if (const auto head = IncreaseSum(sum, t)) { -- return IncreaseSum(head.value(), args...); -+ const auto head = IncreaseSum(sum, t); -+ if (head.second) { -+ return IncreaseSum(head.first, args...); - } else { - // std::optional() triggers bogus -Wmaybe-uninitialized warnings in GCC v10.3 -- return std::nullopt; -+ return std::make_pair(S(), false); - } - } - - /// \returns an exact, non-overflowing sum of the arguments (or nothing) - template --std::optional -+std::pair - NaturalSum(const Args... args) { - return IncreaseSum(0, args...); - } -diff --git a/src/Store.h b/src/Store.h -index 3eb6b84..2475fe0 100644 ---- a/src/Store.h -+++ b/src/Store.h -@@ -49,6 +49,9 @@ public: - StoreEntry(); - virtual ~StoreEntry(); - -+ MemObject &mem() { assert(mem_obj); return *mem_obj; } -+ const MemObject &mem() const { assert(mem_obj); return *mem_obj; } -+ - virtual HttpReply const *getReply() const; - virtual void write (StoreIOBuffer); - -diff --git a/src/StoreClient.h b/src/StoreClient.h -index 0524776..ba5e669 100644 ---- a/src/StoreClient.h -+++ b/src/StoreClient.h -@@ -166,7 +166,7 @@ private: - /// request. Buffer contents depends on the source and parsing stage; it may - /// hold (parts of) swap metadata, HTTP response headers, and/or HTTP - /// response body bytes. 
-- std::optional parsingBuffer; -+ std::pair parsingBuffer = std::make_pair(Store::ParsingBuffer(), false); - - StoreIOBuffer lastDiskRead; ///< buffer used for the last storeRead() call - -diff --git a/src/acl/Asn.cc b/src/acl/Asn.cc -index bcedc82..67e453f 100644 ---- a/src/acl/Asn.cc -+++ b/src/acl/Asn.cc -@@ -73,7 +73,7 @@ class ASState - CBDATA_CLASS(ASState); - - public: -- ASState(); -+ ASState() = default; - ~ASState(); - - StoreEntry *entry; -@@ -87,18 +87,6 @@ public: - - CBDATA_CLASS_INIT(ASState); - --ASState::ASState() : -- entry(NULL), -- sc(NULL), -- request(NULL), -- as_number(0), -- offset(0), -- reqofs(0), -- dataRead(false) --{ -- memset(reqbuf, 0, AS_REQBUF_SZ); --} -- - ASState::~ASState() - { - debugs(53, 3, entry->url()); -diff --git a/src/base/Assure.cc b/src/base/Assure.cc -index b09b848..b4cf3e5 100644 ---- a/src/base/Assure.cc -+++ b/src/base/Assure.cc -@@ -11,6 +11,14 @@ - #include "base/TextException.h" - #include "sbuf/Stream.h" - -+std::ostream & -+operator <<(std::ostream &os, const TextException &ex) -+{ -+ ex.print(os); -+ return os; -+} -+ -+ - [[ noreturn ]] void - ReportAndThrow_(const int debugLevel, const char *description, const SourceLocation &location) - { -diff --git a/src/client_side_reply.cc b/src/client_side_reply.cc -index 470f4bc..64fd489 100644 ---- a/src/client_side_reply.cc -+++ b/src/client_side_reply.cc -@@ -1142,8 +1142,8 @@ clientReplyContext::storeNotOKTransferDone() const - MemObject *mem = http->storeEntry()->mem_obj; - assert(mem != NULL); - assert(http->request != NULL); -- -- if (mem->baseReply().pstate != Http::Message::psParsed) -+ const auto expectedBodySize = mem->baseReply().content_length; -+ if (mem->baseReply().pstate != psParsed) - return 0; - - /* -@@ -1808,32 +1808,6 @@ clientReplyContext::SendMoreData(void *data, StoreIOBuffer result) - context->sendMoreData (result); - } - --/// Whether the given body area describes the start of our Client Stream buffer. --/// An empty area does. --bool --clientReplyContext::matchesStreamBodyBuffer(const StoreIOBuffer &their) const --{ -- // the answer is undefined for errors; they are not really "body buffers" -- Assure(!their.flags.error); -- -- if (!their.length) -- return true; // an empty body area always matches our body area -- -- if (their.data != next()->readBuffer.data) { -- debugs(88, 7, "no: " << their << " vs. " << next()->readBuffer); -- return false; -- } -- -- return true; --} -- --void --clientReplyContext::noteStreamBufferredBytes(const StoreIOBuffer &result) --{ -- Assure(matchesStreamBodyBuffer(result)); -- lastStreamBufferedBytes = result; // may be unchanged and/or zero-length --} -- - void - clientReplyContext::makeThisHead() - { -@@ -2180,21 +2154,33 @@ clientReplyContext::sendMoreData (StoreIOBuffer result) - sc->setDelayId(DelayId::DelayClient(http,reply)); - #endif - -- /* handle headers */ -+ holdingBuffer = result; -+ processReplyAccess(); -+ return; -+} - -- if (Config.onoff.log_mime_hdrs) { -- size_t k; -+/// Whether the given body area describes the start of our Client Stream buffer. -+/// An empty area does. -+bool -+clientReplyContext::matchesStreamBodyBuffer(const StoreIOBuffer &their) const -+{ -+ // the answer is undefined for errors; they are not really "body buffers" -+ Assure(!their.flags.error); -+ if (!their.length) -+ return true; // an empty body area always matches our body area -+ if (their.data != next()->readBuffer.data) { -+ debugs(88, 7, "no: " << their << " vs. 
" << next()->readBuffer); -+ return false; - -- if ((k = headersEnd(buf, reqofs))) { -- safe_free(http->al->headers.reply); -- http->al->headers.reply = (char *)xcalloc(k + 1, 1); -- xstrncpy(http->al->headers.reply, buf, k); -- } - } -+ return true; -+} - -- holdingBuffer = result; -- processReplyAccess(); -- return; -+void -+clientReplyContext::noteStreamBufferredBytes(const StoreIOBuffer &result) -+{ -+ Assure(matchesStreamBodyBuffer(result)); -+ lastStreamBufferedBytes = result; // may be unchanged and/or zero-length - } - - /* Using this breaks the client layering just a little! -diff --git a/src/peer_digest.cc b/src/peer_digest.cc -index abfea4a..89ea73e 100644 ---- a/src/peer_digest.cc -+++ b/src/peer_digest.cc -@@ -588,6 +588,7 @@ peerDigestFetchReply(void *data, char *buf, ssize_t size) - - return 0; // we consumed/used no buffered bytes - } -+} - - int - peerDigestSwapInCBlock(void *data, char *buf, ssize_t size) -diff --git a/src/store/ParsingBuffer.cc b/src/store/ParsingBuffer.cc -index e948fe2..affbe9e 100644 ---- a/src/store/ParsingBuffer.cc -+++ b/src/store/ParsingBuffer.cc -@@ -28,19 +28,19 @@ Store::ParsingBuffer::ParsingBuffer(StoreIOBuffer &initialSpace): - const char * - Store::ParsingBuffer::memory() const - { -- return extraMemory_ ? extraMemory_->rawContent() : readerSuppliedMemory_.data; -+ return extraMemory_.second ? extraMemory_.first.rawContent() : readerSuppliedMemory_.data; - } - - size_t - Store::ParsingBuffer::capacity() const - { -- return extraMemory_ ? (extraMemory_->length() + extraMemory_->spaceSize()) : readerSuppliedMemory_.length; -+ return extraMemory_.second ? (extraMemory_.first.length() + extraMemory_.first.spaceSize()) : readerSuppliedMemory_.length; - } - - size_t - Store::ParsingBuffer::contentSize() const - { -- return extraMemory_ ? extraMemory_->length() : readerSuppliedMemoryContentSize_; -+ return extraMemory_.second ? extraMemory_.first.length() : readerSuppliedMemoryContentSize_; - } - - void -@@ -56,10 +56,10 @@ Store::ParsingBuffer::appended(const char * const newBytes, const size_t newByte - assert(memory() + contentSize() == newBytes); // the new bytes start in our space - // and now we know that newBytes is not nil either - -- if (extraMemory_) -- extraMemory_->rawAppendFinish(newBytes, newByteCount); -+ if (extraMemory_.second) -+ extraMemory_.first.rawAppendFinish(newBytes, newByteCount); - else -- readerSuppliedMemoryContentSize_ = *IncreaseSum(readerSuppliedMemoryContentSize_, newByteCount); -+ readerSuppliedMemoryContentSize_ = IncreaseSum(readerSuppliedMemoryContentSize_, newByteCount).first; - - assert(contentSize() <= capacity()); // paranoid - } -@@ -68,8 +68,8 @@ void - Store::ParsingBuffer::consume(const size_t parsedBytes) - { - Assure(contentSize() >= parsedBytes); // more conservative than extraMemory_->consume() -- if (extraMemory_) { -- extraMemory_->consume(parsedBytes); -+ if (extraMemory_.second) { -+ extraMemory_.first.consume(parsedBytes); - } else { - readerSuppliedMemoryContentSize_ -= parsedBytes; - if (parsedBytes && readerSuppliedMemoryContentSize_) -@@ -81,8 +81,8 @@ StoreIOBuffer - Store::ParsingBuffer::space() - { - const auto size = spaceSize(); -- const auto start = extraMemory_ ? -- extraMemory_->rawAppendStart(size) : -+ const auto start = extraMemory_.second ? 
-+ extraMemory_.first.rawAppendStart(size) : - (readerSuppliedMemory_.data + readerSuppliedMemoryContentSize_); - return StoreIOBuffer(spaceSize(), 0, start); - } -@@ -110,22 +110,23 @@ void - Store::ParsingBuffer::growSpace(const size_t minimumSpaceSize) - { - const auto capacityIncreaseAttempt = IncreaseSum(contentSize(), minimumSpaceSize); -- if (!capacityIncreaseAttempt) -+ if (!capacityIncreaseAttempt.second) - throw TextException(ToSBuf("no support for a single memory block of ", contentSize(), '+', minimumSpaceSize, " bytes"), Here()); -- const auto newCapacity = *capacityIncreaseAttempt; -+ const auto newCapacity = capacityIncreaseAttempt.first; - - if (newCapacity <= capacity()) - return; // already have enough space; no reallocation is needed - - debugs(90, 7, "growing to provide " << minimumSpaceSize << " in " << *this); - -- if (extraMemory_) { -- extraMemory_->reserveCapacity(newCapacity); -+ if (extraMemory_.second) { -+ extraMemory_.first.reserveCapacity(newCapacity); - } else { - SBuf newStorage; - newStorage.reserveCapacity(newCapacity); - newStorage.append(readerSuppliedMemory_.data, readerSuppliedMemoryContentSize_); -- extraMemory_ = std::move(newStorage); -+ extraMemory_.first = std::move(newStorage); -+ extraMemory_.second = true; - } - Assure(spaceSize() >= minimumSpaceSize); - } -@@ -133,14 +134,14 @@ Store::ParsingBuffer::growSpace(const size_t minimumSpaceSize) - SBuf - Store::ParsingBuffer::toSBuf() const - { -- return extraMemory_ ? *extraMemory_ : SBuf(content().data, content().length); -+ return extraMemory_.second ? extraMemory_.first : SBuf(content().data, content().length); - } - - size_t - Store::ParsingBuffer::spaceSize() const - { -- if (extraMemory_) -- return extraMemory_->spaceSize(); -+ if (extraMemory_.second) -+ return extraMemory_.first.spaceSize(); - - assert(readerSuppliedMemoryContentSize_ <= readerSuppliedMemory_.length); - return readerSuppliedMemory_.length - readerSuppliedMemoryContentSize_; -@@ -169,12 +170,12 @@ Store::ParsingBuffer::packBack() - result.length = bytesToPack; - Assure(result.data); - -- if (!extraMemory_) { -+ if (!extraMemory_.second) { - // no accumulated bytes copying because they are in readerSuppliedMemory_ - debugs(90, 7, "quickly exporting " << result.length << " bytes via " << readerSuppliedMemory_); - } else { -- debugs(90, 7, "slowly exporting " << result.length << " bytes from " << extraMemory_->id << " back into " << readerSuppliedMemory_); -- memmove(result.data, extraMemory_->rawContent(), result.length); -+ debugs(90, 7, "slowly exporting " << result.length << " bytes from " << extraMemory_.first.id << " back into " << readerSuppliedMemory_); -+ memmove(result.data, extraMemory_.first.rawContent(), result.length); - } - - return result; -@@ -185,9 +186,9 @@ Store::ParsingBuffer::print(std::ostream &os) const - { - os << "size=" << contentSize(); - -- if (extraMemory_) { -+ if (extraMemory_.second) { - os << " capacity=" << capacity(); -- os << " extra=" << extraMemory_->id; -+ os << " extra=" << extraMemory_.first.id; - } - - // report readerSuppliedMemory_ (if any) even if we are no longer using it -diff --git a/src/store/ParsingBuffer.h b/src/store/ParsingBuffer.h -index b8aa957..b473ac6 100644 ---- a/src/store/ParsingBuffer.h -+++ b/src/store/ParsingBuffer.h -@@ -112,7 +112,7 @@ private: - - /// our internal buffer that takes over readerSuppliedMemory_ when the - /// latter becomes full and more memory is needed -- std::optional extraMemory_; -+ std::pair extraMemory_ = std::make_pair(SBuf(), false); - }; 
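(Context note: the pattern applied throughout this adaptation replaces std::optional<T> with std::pair<T, bool>, where the bool member stands in for has_value(). A minimal, self-contained sketch of that pattern, using a hypothetical checkedSum() in place of IncreaseSum() and assuming non-negative addends as in NaturalSum():

    #include <iostream>
    #include <limits>
    #include <utility>

    // Hypothetical stand-in for IncreaseSum(): the bool member reports whether
    // the sum is valid, mirroring what std::optional::has_value() signalled
    // before this adaptation. Assumes non-negative addends.
    static std::pair<long, bool>
    checkedSum(const long a, const long b)
    {
        if (a > std::numeric_limits<long>::max() - b)
            return std::make_pair(long(), false); // the sum would overflow
        return std::make_pair(a + b, true);
    }

    int main()
    {
        const auto result = checkedSum(40, 2);
        if (result.second)
            std::cout << "sum=" << result.first << "\n";
        else
            std::cout << "overflow\n";
        return 0;
    }

Callers test .second where the upstream code tested the optional, and read .first where the upstream code called value(), which is exactly the substitution visible in the SquidMath, StoreClient, and ParsingBuffer hunks above.)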
- - inline std::ostream & -diff --git a/src/store_client.cc b/src/store_client.cc -index 383aac8..0236274 100644 ---- a/src/store_client.cc -+++ b/src/store_client.cc -@@ -10,6 +10,7 @@ - - #include "squid.h" - #include "base/AsyncCbdataCalls.h" -+#include "base/Assure.h" - #include "event.h" - #include "globals.h" - #include "HttpReply.h" -@@ -118,24 +119,14 @@ store_client::finishCallback() - // pointers. Some other legacy code expects "correct" result.offset even - // when there is no body to return. Accommodate all those expectations. - auto result = StoreIOBuffer(0, copyInto.offset, nullptr); -- if (object_ok && parsingBuffer && parsingBuffer->contentSize()) -- result = parsingBuffer->packBack(); -+ if (object_ok && parsingBuffer.second && parsingBuffer.first.contentSize()) -+ result = parsingBuffer.first.packBack(); - result.flags.error = object_ok ? 0 : 1; - -- // TODO: Move object_ok handling above into this `if` statement. -- if (object_ok) { -- // works for zero hdr_sz cases as well; see also: nextHttpReadOffset() -- discardableHttpEnd_ = NaturalSum(entry->mem().baseReply().hdr_sz, result.offset, result.length).value(); -- } else { -- // object_ok is sticky, so we will not be able to use any response bytes -- discardableHttpEnd_ = entry->mem().endOffset(); -- } -- debugs(90, 7, "with " << result << "; discardableHttpEnd_=" << discardableHttpEnd_); -- - // no HTTP headers and no body bytes (but not because there was no space) - atEof_ = !sendingHttpHeaders() && !result.length && copyInto.length; - -- parsingBuffer.reset(); -+ parsingBuffer.second = false; - ++answers; - - STCB *temphandler = _callback.callback_handler; -@@ -228,7 +219,9 @@ store_client::copy(StoreEntry * anEntry, - // when we already can respond with HTTP headers. - Assure(!copyInto.offset || answeredOnce()); - -- parsingBuffer.emplace(copyInto); -+ parsingBuffer.first = Store::ParsingBuffer(copyInto); -+ parsingBuffer.second = true; -+ - - discardableHttpEnd_ = nextHttpReadOffset(); - debugs(90, 7, "discardableHttpEnd_=" << discardableHttpEnd_); -@@ -454,14 +447,14 @@ store_client::canReadFromMemory() const - const auto &mem = entry->mem(); - const auto memReadOffset = nextHttpReadOffset(); - return mem.inmem_lo <= memReadOffset && memReadOffset < mem.endOffset() && -- parsingBuffer->spaceSize(); -+ parsingBuffer.first.spaceSize(); - } - - /// The offset of the next stored HTTP response byte wanted by the client. - int64_t - store_client::nextHttpReadOffset() const - { -- Assure(parsingBuffer); -+ Assure(parsingBuffer.second); - const auto &mem = entry->mem(); - const auto hdr_sz = mem.baseReply().hdr_sz; - // Certain SMP cache manager transactions do not store HTTP headers in -@@ -469,7 +462,7 @@ store_client::nextHttpReadOffset() const - // In such cases, hdr_sz ought to be zero. In all other (known) cases, - // mem_hdr contains HTTP response headers (positive hdr_sz if parsed) - // followed by HTTP response body. This code math accommodates all cases. 
-- return NaturalSum(hdr_sz, copyInto.offset, parsingBuffer->contentSize()).value(); -+ return NaturalSum(hdr_sz, copyInto.offset, parsingBuffer.first.contentSize()).first; - } - - /// Copies at least some of the requested body bytes from MemObject memory, -@@ -478,13 +471,13 @@ store_client::nextHttpReadOffset() const - void - store_client::readFromMemory() - { -- Assure(parsingBuffer); -- const auto readInto = parsingBuffer->space().positionAt(nextHttpReadOffset()); -+ Assure(parsingBuffer.second); -+ const auto readInto = parsingBuffer.first.space().positionAt(nextHttpReadOffset()); - - debugs(90, 3, "copying HTTP body bytes from memory into " << readInto); - const auto sz = entry->mem_obj->data_hdr.copy(readInto); - Assure(sz > 0); // our canReadFromMemory() precondition guarantees that -- parsingBuffer->appended(readInto.data, sz); -+ parsingBuffer.first.appended(readInto.data, sz); - } - - void -@@ -497,7 +490,7 @@ store_client::fileRead() - flags.disk_io_pending = true; - - // mem->swap_hdr_sz is zero here during initial read(s) -- const auto nextStoreReadOffset = NaturalSum(mem->swap_hdr_sz, nextHttpReadOffset()).value(); -+ const auto nextStoreReadOffset = NaturalSum(mem->swap_hdr_sz, nextHttpReadOffset()).first; - - // XXX: If fileRead() is called when we do not yet know mem->swap_hdr_sz, - // then we must start reading from disk offset zero to learn it: we cannot -@@ -522,10 +515,10 @@ store_client::fileRead() - // * performance effects of larger disk reads may be negative somewhere. - const decltype(StoreIOBuffer::length) maxReadSize = SM_PAGE_SIZE; - -- Assure(parsingBuffer); -+ Assure(parsingBuffer.second); - // also, do not read more than we can return (via a copyInto.length buffer) - const auto readSize = std::min(copyInto.length, maxReadSize); -- lastDiskRead = parsingBuffer->makeSpace(readSize).positionAt(nextStoreReadOffset); -+ lastDiskRead = parsingBuffer.first.makeSpace(readSize).positionAt(nextStoreReadOffset); - debugs(90, 5, "into " << lastDiskRead); - - storeRead(swapin_sio, -@@ -540,13 +533,12 @@ store_client::fileRead() - void - store_client::readBody(const char * const buf, const ssize_t lastIoResult) - { -- int parsed_header = 0; - - Assure(flags.disk_io_pending); - flags.disk_io_pending = false; - assert(_callback.pending()); -- Assure(parsingBuffer); -- debugs(90, 3, "got " << lastIoResult << " using " << *parsingBuffer); -+ Assure(parsingBuffer.second); -+ debugs(90, 3, "got " << lastIoResult << " using " << parsingBuffer.first); - if (lastIoResult < 0) - return fail(); - -@@ -560,7 +552,7 @@ store_client::readBody(const char * const buf, const ssize_t lastIoResult) - assert(lastDiskRead.data == buf); - lastDiskRead.length = lastIoResult; - -- parsingBuffer->appended(buf, lastIoResult); -+ parsingBuffer.first.appended(buf, lastIoResult); - - // we know swap_hdr_sz by now and were reading beyond swap metadata because - // readHead() would have been called otherwise (to read swap metadata) -@@ -589,13 +581,12 @@ store_client::handleBodyFromDisk() - if (!answeredOnce()) { - // All on-disk responses have HTTP headers. First disk body read(s) - // include HTTP headers that we must parse (if needed) and skip. 
-- const auto haveHttpHeaders = entry->mem_obj->baseReply().pstate == Http::Message::psParsed; -+ const auto haveHttpHeaders = entry->mem_obj->baseReply().pstate == psParsed; - if (!haveHttpHeaders && !parseHttpHeadersFromDisk()) - return; - skipHttpHeadersFromDisk(); - } - -- const HttpReply *rep = entry->getReply(); - noteNews(); - } - -@@ -626,8 +617,6 @@ store_client::maybeWriteFromDiskToMemory(const StoreIOBuffer &httpResponsePart) - } - } - --} -- - void - store_client::fail() - { -@@ -735,20 +724,20 @@ store_client::readHeader(char const *buf, ssize_t len) - if (!object_ok) - return; - -- Assure(parsingBuffer); -- debugs(90, 3, "got " << len << " using " << *parsingBuffer); -+ Assure(parsingBuffer.second); -+ debugs(90, 3, "got " << len << " using " << parsingBuffer.first); - - if (len < 0) - return fail(); - -- Assure(!parsingBuffer->contentSize()); -- parsingBuffer->appended(buf, len); -+ Assure(!parsingBuffer.first.contentSize()); -+ parsingBuffer.first.appended(buf, len); - if (!unpackHeader(buf, len)) { - fail(); - return; - } -- parsingBuffer->consume(mem->swap_hdr_sz); -- maybeWriteFromDiskToMemory(parsingBuffer->content()); -+ parsingBuffer.first.consume(mem->swap_hdr_sz); -+ maybeWriteFromDiskToMemory(parsingBuffer.first.content()); - handleBodyFromDisk(); - } - -@@ -1020,8 +1009,9 @@ store_client::parseHttpHeadersFromDisk() - // cache a header that we cannot parse and get here. Same for MemStore. - debugs(90, DBG_CRITICAL, "ERROR: Cannot parse on-disk HTTP headers" << - Debug::Extra << "exception: " << CurrentException << -- Debug::Extra << "raw input size: " << parsingBuffer->contentSize() << " bytes" << -- Debug::Extra << "current buffer capacity: " << parsingBuffer->capacity() << " bytes"); -+ Debug::Extra << "raw input size: " << parsingBuffer.first.contentSize() << " bytes" << -+ Debug::Extra << "current buffer capacity: " << parsingBuffer.first.capacity() << " bytes"); -+ - fail(); - return false; - } -@@ -1032,10 +1022,10 @@ store_client::parseHttpHeadersFromDisk() - bool - store_client::tryParsingHttpHeaders() - { -- Assure(parsingBuffer); -+ Assure(parsingBuffer.second); - Assure(!copyInto.offset); // otherwise, parsingBuffer cannot have HTTP response headers -- auto &adjustableReply = entry->mem().adjustableBaseReply(); -- if (adjustableReply.parseTerminatedPrefix(parsingBuffer->c_str(), parsingBuffer->contentSize())) -+ auto &adjustableReply = entry->mem().baseReply(); -+ if (adjustableReply.parseTerminatedPrefix(parsingBuffer.first.c_str(), parsingBuffer.first.contentSize())) - return true; - - // TODO: Optimize by checking memory as well. 
For simplicity sake, we -@@ -1052,12 +1042,12 @@ store_client::skipHttpHeadersFromDisk() - { - const auto hdr_sz = entry->mem_obj->baseReply().hdr_sz; - Assure(hdr_sz > 0); // all on-disk responses have HTTP headers -- if (Less(parsingBuffer->contentSize(), hdr_sz)) { -- debugs(90, 5, "discovered " << hdr_sz << "-byte HTTP headers in memory after reading some of them from disk: " << *parsingBuffer); -- parsingBuffer->consume(parsingBuffer->contentSize()); // skip loaded HTTP header prefix -+ if (Less(parsingBuffer.first.contentSize(), hdr_sz)) { -+ debugs(90, 5, "discovered " << hdr_sz << "-byte HTTP headers in memory after reading some of them from disk: " << parsingBuffer.first); -+ parsingBuffer.first.consume(parsingBuffer.first.contentSize()); // skip loaded HTTP header prefix - } else { -- parsingBuffer->consume(hdr_sz); // skip loaded HTTP headers -- const auto httpBodyBytesAfterHeader = parsingBuffer->contentSize(); // may be zero -+ parsingBuffer.first.consume(hdr_sz); // skip loaded HTTP headers -+ const auto httpBodyBytesAfterHeader = parsingBuffer.first.contentSize(); // may be zero - Assure(httpBodyBytesAfterHeader <= copyInto.length); - debugs(90, 5, "read HTTP body prefix: " << httpBodyBytesAfterHeader); - } -diff --git a/src/urn.cc b/src/urn.cc -index 9f5e89d..ad42b74 100644 ---- a/src/urn.cc -+++ b/src/urn.cc -@@ -238,7 +238,7 @@ urnHandleReply(void *data, StoreIOBuffer result) - return; - } - --+ urnState->parsingBuffer.appended(result.data, result.length); -+ urnState->parsingBuffer.appended(result.data, result.length); - - /* If we haven't received the entire object (urn), copy more */ - if (!urnState->sc->atEof()) { --- -2.39.3 - diff --git a/SOURCES/perl-requires-squid.sh b/SOURCES/perl-requires-squid.sh old mode 100644 new mode 100755 diff --git a/SOURCES/squid-4.15-CVE-2023-46724.patch b/SOURCES/squid-4.15-CVE-2023-46724.patch index 41c30aa..58b8651 100644 --- a/SOURCES/squid-4.15-CVE-2023-46724.patch +++ b/SOURCES/squid-4.15-CVE-2023-46724.patch @@ -1,17 +1,5 @@ -From 792ef23e6e1c05780fe17f733859eef6eb8c8be3 Mon Sep 17 00:00:00 2001 -From: Andreas Weigel -Date: Wed, 18 Oct 2023 04:14:31 +0000 -Subject: [PATCH] Fix validation of certificates with CN=* (#1523) - -The bug was discovered and detailed by Joshua Rogers at -https://megamansec.github.io/Squid-Security-Audit/ -where it was filed as "Buffer UnderRead in SSL CN Parsing". ---- - src/anyp/Uri.cc | 6 ++++++ - 1 file changed, 6 insertions(+) - diff --git a/src/anyp/Uri.cc b/src/anyp/Uri.cc -index 77b6f0c92..a6a5d5d9e 100644 +index 20b9bf1..81ebb18 100644 --- a/src/anyp/Uri.cc +++ b/src/anyp/Uri.cc @@ -173,6 +173,10 @@ urlInitialize(void) @@ -34,5 +22,3 @@ index 77b6f0c92..a6a5d5d9e 100644 /* * Start at the ends of the two strings and work towards the --- -2.25.1 diff --git a/SOURCES/squid-4.15-CVE-2023-46728.patch b/SOURCES/squid-4.15-CVE-2023-46728.patch index bb720b0..980f372 100644 --- a/SOURCES/squid-4.15-CVE-2023-46728.patch +++ b/SOURCES/squid-4.15-CVE-2023-46728.patch @@ -1,105 +1,8 @@ -From 6ea12e8fb590ac6959e9356a81aa3370576568c3 Mon Sep 17 00:00:00 2001 -From: Alex Rousskov -Date: Tue, 26 Jul 2022 15:05:54 +0000 -Subject: [PATCH] Remove support for Gopher protocol (#1092) +commit 0cf1b78cacfdb278107ae352022ced143635b528 +Author: Luboš Uhliarik +Date: Wed Dec 6 20:04:56 2023 +0100 -Gopher code quality remains too low for production use in most -environments. The code is a persistent source of vulnerabilities and -fixing it requires significant effort. 
We should not be spending scarce -Project resources on improving that code, especially given the lack of -strong demand for Gopher support. - -With this change, Gopher requests will be handled like any other request -with an unknown (to Squid) protocol. For example, HTTP requests with -Gopher URI scheme result in ERR_UNSUP_REQ. - -Default Squid configuration still considers TCP port 70 "safe". The -corresponding Safe_ports ACL rule has not been removed for consistency -sake: We consider WAIS port safe even though Squid refuses to forward -WAIS requests: - - acl Safe_ports port 70 # gopher - acl Safe_ports port 210 # wais - -Back port upstream patch -Signed-Off-By: Tianyue.lan@oracle.com ---- - doc/debug-sections.txt | 1 - - errors/af/ERR_UNSUP_REQ | 2 +- - errors/ar/ERR_UNSUP_REQ | 2 +- - errors/az/ERR_UNSUP_REQ | 2 +- - errors/bg/ERR_UNSUP_REQ | 2 +- - errors/ca/ERR_UNSUP_REQ | 2 +- - errors/cs/ERR_UNSUP_REQ | 2 +- - errors/da/ERR_UNSUP_REQ | 2 +- - errors/de/ERR_UNSUP_REQ | 2 +- - errors/el/ERR_UNSUP_REQ | 2 +- - errors/en/ERR_UNSUP_REQ | 2 +- - errors/errorpage.css | 2 +- - errors/es/ERR_UNSUP_REQ | 2 +- - errors/et/ERR_UNSUP_REQ | 2 +- - errors/fa/ERR_UNSUP_REQ | 2 +- - errors/fi/ERR_UNSUP_REQ | 2 +- - errors/fr/ERR_UNSUP_REQ | 2 +- - errors/he/ERR_UNSUP_REQ | 2 +- - errors/hu/ERR_UNSUP_REQ | 2 +- - errors/hy/ERR_UNSUP_REQ | 2 +- - errors/id/ERR_UNSUP_REQ | 2 +- - errors/it/ERR_UNSUP_REQ | 2 +- - errors/ja/ERR_UNSUP_REQ | 2 +- - errors/ka/ERR_UNSUP_REQ | 2 +- - errors/ko/ERR_UNSUP_REQ | 2 +- - errors/lt/ERR_UNSUP_REQ | 2 +- - errors/lv/ERR_UNSUP_REQ | 2 +- - errors/ms/ERR_UNSUP_REQ | 2 +- - errors/nl/ERR_UNSUP_REQ | 2 +- - errors/oc/ERR_UNSUP_REQ | 2 +- - errors/pl/ERR_UNSUP_REQ | 2 +- - errors/pt-br/ERR_UNSUP_REQ | 2 +- - errors/pt/ERR_UNSUP_REQ | 2 +- - errors/ro/ERR_UNSUP_REQ | 2 +- - errors/ru/ERR_UNSUP_REQ | 2 +- - errors/sk/ERR_UNSUP_REQ | 2 +- - errors/sl/ERR_UNSUP_REQ | 2 +- - errors/sr-cyrl/ERR_UNSUP_REQ | 2 +- - errors/sr-latn/ERR_UNSUP_REQ | 2 +- - errors/sv/ERR_UNSUP_REQ | 2 +- - errors/templates/ERR_UNSUP_REQ | 2 +- - errors/th/ERR_UNSUP_REQ | 2 +- - errors/tr/ERR_UNSUP_REQ | 2 +- - errors/uk/ERR_UNSUP_REQ | 2 +- - errors/uz/ERR_UNSUP_REQ | 2 +- - errors/vi/ERR_UNSUP_REQ | 2 +- - errors/zh-hans/ERR_UNSUP_REQ | 2 +- - errors/zh-hant/ERR_UNSUP_REQ | 2 +- - src/FwdState.cc | 5 - - src/HttpMsg.h | 1 - - src/HttpRequest.cc | 6 - - src/IoStats.h | 2 +- - src/Makefile.am | 14 - - src/Makefile.in | 53 +- - src/adaptation/ecap/Host.cc | 1 - - src/adaptation/ecap/MessageRep.cc | 2 - - src/anyp/ProtocolType.cc | 1 - - src/anyp/ProtocolType.h | 1 - - src/anyp/Uri.cc | 2 - - src/anyp/UriScheme.cc | 3 - - src/cf.data.pre | 6 +- - src/cf.data.pre.config | 6 +- - src/client_side_request.cc | 4 - - src/err_type.h | 2 +- - src/gopher.cc | 977 ---------------------- - src/gopher.cc.CVE-2021-46784 | 982 ----------------------- - src/gopher.h | 29 - - src/mgr/IoAction.cc | 3 - - src/mgr/IoAction.h | 2 - - src/squid.8.in | 2 +- - src/stat.cc | 17 - - test-suite/squidconf/regressions-3.4.0.1 | 1 - - 72 files changed, 73 insertions(+), 2144 deletions(-) - delete mode 100644 src/gopher.cc - delete mode 100644 src/gopher.cc.CVE-2021-46784 - delete mode 100644 src/gopher.h + Remove gopher support diff --git a/doc/debug-sections.txt b/doc/debug-sections.txt index 8b8b25f..50bd122 100644 @@ -113,123 +16,6 @@ index 8b8b25f..50bd122 100644 section 11 Hypertext Transfer Protocol (HTTP) section 12 Internet Cache Protocol (ICP) section 13 High Level Memory Pool Management -diff --git 
a/errors/af/ERR_UNSUP_REQ b/errors/af/ERR_UNSUP_REQ -index c8c3152..d0895e2 100644 ---- a/errors/af/ERR_UNSUP_REQ -+++ b/errors/af/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Niegesteunde versoekmetode en -protokol

- - --

Squid ondersteun nie alle navraagmetodes vir alle toegangsprotokolle nie. Mens kan by voorbeeld nie 'n Gopher-navraag POST nie.

-+

Squid ondersteun nie alle navraagmetodes vir alle toegangsprotokolle nie.

- -
-
-diff --git a/errors/ar/ERR_UNSUP_REQ b/errors/ar/ERR_UNSUP_REQ -index 909722f..dc8bceb 100644 ---- a/errors/ar/ERR_UNSUP_REQ -+++ b/errors/ar/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Unsupported Request Method and Protocol

- - --

Squid does not support all request methods for all access protocols. For example, you can not POST a Gopher request.

-+

Squid does not support all request methods for all access protocols.

- -

Your cache administrator is %w.

-
-diff --git a/errors/az/ERR_UNSUP_REQ b/errors/az/ERR_UNSUP_REQ -index 50207d8..a1fba06 100644 ---- a/errors/az/ERR_UNSUP_REQ -+++ b/errors/az/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Dəstəklənməyən sorğu metodu və protokol

- - --

Squid bütün sorğu metodları və bütün protokollardəstəkləmir. Məsələn, Gopher protokolu üzrə siz POST sorğu metodunu yerinə yetirə bilməzsiniz.

-+

Squid bütün sorğu metodları və bütün protokollardəstəkləmir.

- -

Your cache administrator is %w.

-
-diff --git a/errors/bg/ERR_UNSUP_REQ b/errors/bg/ERR_UNSUP_REQ -index e9130f9..6ff57a3 100644 ---- a/errors/bg/ERR_UNSUP_REQ -+++ b/errors/bg/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Сървърът не поддържа метода и/или протокола, посочен в заявката

- - --

Кеш сървърът не поддържа всички методи на заявка за всички протоколи. Например, не можете да заявите метод POST за протокол Gopher.

-+

Кеш сървърът не поддържа всички методи на заявка за всички протоколи.

- -

Вашият кеш администратор е %w.

-
-diff --git a/errors/ca/ERR_UNSUP_REQ b/errors/ca/ERR_UNSUP_REQ -index fe4433b..a62cf03 100644 ---- a/errors/ca/ERR_UNSUP_REQ -+++ b/errors/ca/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Mètode i protocol no admesos

- - --

Squid does not support all request methods for all access protocols. For example, you can not POST a Gopher request.

-+

Squid does not support all request methods for all access protocols.

- -

L'administrador d'aquesta cache és %w.

-
-diff --git a/errors/cs/ERR_UNSUP_REQ b/errors/cs/ERR_UNSUP_REQ -index cb955f9..42aeb7e 100644 ---- a/errors/cs/ERR_UNSUP_REQ -+++ b/errors/cs/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Unsupported Request Method and Protocol

- - --

Squid nepodporuje všechny typy metod u všech protokolů. Např. není možno použit metodu POST u služby GOPHER.

-+

Squid nepodporuje všechny typy metod u všech protokolů.

- -

Your cache administrator is %w.

-
-diff --git a/errors/da/ERR_UNSUP_REQ b/errors/da/ERR_UNSUP_REQ -index f41d696..0d5d09a 100644 ---- a/errors/da/ERR_UNSUP_REQ -+++ b/errors/da/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Uunderstøttet Forespørgsels Metode og Protokol

- - --

Proxy'en Squid understøtter ikke alle forespørgselsmetoder for alle adgangs protokoller. For eksempel kan du ikke POST en Gopher forespørgsel.

-+

Proxy'en Squid understøtter ikke alle forespørgselsmetoder for alle adgangs protokoller.

- -

Your cache administrator is %w.

-
-diff --git a/errors/de/ERR_UNSUP_REQ b/errors/de/ERR_UNSUP_REQ -index f106207..614e675 100644 ---- a/errors/de/ERR_UNSUP_REQ -+++ b/errors/de/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Anfragemethode und Protokoll nicht unterstützt

- - --

Squid unterstützt nicht alle Anfragemethoden für alle Protokolle. Sie können zum Beispiel keine POST Anfrage über das Gopher Protokoll senden.

-+

Squid unterstützt nicht alle Anfragemethoden für alle Protokolle.

- -

Ihr Cache Administrator ist %w.

-
-diff --git a/errors/el/ERR_UNSUP_REQ b/errors/el/ERR_UNSUP_REQ -index 0c232a5..5d092a7 100644 ---- a/errors/el/ERR_UNSUP_REQ -+++ b/errors/el/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Μη υποστηριζόμενη μέθοδος αίτησης και πρωτόκολλο

- - --

Το Squid δεν υποστηρίζει όλες τις μεθόδους αιτήσεων για όλα τα πρωτόκολλα πρόσβασης. Για παράδειγμα, το POST για Gopher δεν υποστηρίζεται.

-+

Το Squid δεν υποστηρίζει όλες τις μεθόδους αιτήσεων για όλα τα πρωτόκολλα πρόσβασης.

- -

Ο διαχειριστής του μεσολαβητή σας είναι ο %w.

-
diff --git a/errors/en/ERR_UNSUP_REQ b/errors/en/ERR_UNSUP_REQ index 352399d..e208043 100644 --- a/errors/en/ERR_UNSUP_REQ @@ -256,370 +42,6 @@ index 38ba434..facee93 100644 #dirmsg { font-family: courier, monospace; color: black; -diff --git a/errors/es/ERR_UNSUP_REQ b/errors/es/ERR_UNSUP_REQ -index eb1e86e..fc1a63f 100644 ---- a/errors/es/ERR_UNSUP_REQ -+++ b/errors/es/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Método de la petición y protocolo no soportados.

- - --

Squid no admite todos los métodos para todos los protocolos de acceso. Por ejemplo, no se puede hacer un POST a un servidor Gopher.

-+

Squid no admite todos los métodos para todos los protocolos de acceso.

- -

Su administrador del caché es %w.

-
-diff --git a/errors/et/ERR_UNSUP_REQ b/errors/et/ERR_UNSUP_REQ -index 5488e41..cf6ec2a 100644 ---- a/errors/et/ERR_UNSUP_REQ -+++ b/errors/et/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Tundmatu päringu meetod ja protokoll

- - --

Squid ei toeta kõiki päringu meetodeid kõikide protokollidega. Näiteks, te ei saa teha POST operatsiooni Gopher päringus.

-+

Squid ei toeta kõiki päringu meetodeid kõikide protokollidega.

- -

Teie teenusepakkuja aadress on %w.

-
-diff --git a/errors/fa/ERR_UNSUP_REQ b/errors/fa/ERR_UNSUP_REQ -index 065da44..9940bdc 100644 ---- a/errors/fa/ERR_UNSUP_REQ -+++ b/errors/fa/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

روش پشتیبانی‌نشده درخواست و قرارداد

- - --

Squid does not support all request methods for all access protocols. For example, you can not POST a Gopher request.

-+

Squid does not support all request methods for all access protocols.

- -

Your cache administrator is %w.

-
-diff --git a/errors/fi/ERR_UNSUP_REQ b/errors/fi/ERR_UNSUP_REQ -index 6a99e60..e06ec69 100644 ---- a/errors/fi/ERR_UNSUP_REQ -+++ b/errors/fi/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Hakupyynnon tyyppi ja yhteyskäytäntö ei tuettu

- - --

Squid ei tue kaikkia hakupyynnon tyyppejä kaikilla protokollilla. Et voi esimerkiksi käyttää POST-pyyntöä gopherilla.

-+

Squid ei tue kaikkia hakupyynnon tyyppejä kaikilla protokollilla.

- -

Your cache administrator is %w.

-
-diff --git a/errors/fr/ERR_UNSUP_REQ b/errors/fr/ERR_UNSUP_REQ -index 9bccd19..ddb6b85 100644 ---- a/errors/fr/ERR_UNSUP_REQ -+++ b/errors/fr/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

La méthode de requête et le protocole ne sont pas pris en charge.

- - --

Squid ne prend pas en charge tous les types de requêtes par rapport à tous les protocoles d'accès. Vous ne pouvez pas par exemple utiliser une requête POST avec le protocole Gopher.

-+

Squid ne prend pas en charge tous les types de requêtes par rapport à tous les protocoles d'accès.

- -

Votre administrateur proxy est %w.

-
-diff --git a/errors/he/ERR_UNSUP_REQ b/errors/he/ERR_UNSUP_REQ -index eaff6f3..8daee1a 100644 ---- a/errors/he/ERR_UNSUP_REQ -+++ b/errors/he/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

שיטת בקשה ופרוטוקול לא נתמכים

- - --

שרת ה Squid אינו תומך בכל שיטות הבקשה לכל הפרוטוקולים. לדוגמא אינך יכול לשלוח בקשת POST ב-Gopher.

-+

שרת ה Squid אינו תומך בכל שיטות הבקשה לכל הפרוטוקולים. לדוגמא אינך יכול לשלוח בקשת.

- -

מנהל השרת הוא %w.

-
-diff --git a/errors/hu/ERR_UNSUP_REQ b/errors/hu/ERR_UNSUP_REQ -index a7a6e43..d1602da 100644 ---- a/errors/hu/ERR_UNSUP_REQ -+++ b/errors/hu/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Nem támogatott kéréstípus vagy protokoll

- - --

A proxyszerver nem támogat minden létező kéréstípus és protokoll kombinációt, így pl. nem lehet POST kéréstípust használni egy Gopher kérésben.

-+

A proxyszerver nem támogat minden létező kéréstípus és protokoll kombinációt.

- -

A proxyszerver üzemeltetőjének e-mail címe: %w.

-
-diff --git a/errors/hy/ERR_UNSUP_REQ b/errors/hy/ERR_UNSUP_REQ -index 0a3cce7..db82035 100644 ---- a/errors/hy/ERR_UNSUP_REQ -+++ b/errors/hy/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Հարցում իրականացնելու մեթոդը և արձանագրությունը չեն աջակցվում

- - --

Squid-ը բոլոր արձանագրությունների համար բոլոր հարցման մեթոդները չի աջակցում. Օրինակ, Gopher արձանագրության համար չեք կարող POST հարցում կատարել.

-+

Squid-ը բոլոր արձանագրությունների համար բոլոր հարցման մեթոդները չի աջակցում.

- -

Ձեր քեշի կառավարիչը %w է.

-
-diff --git a/errors/id/ERR_UNSUP_REQ b/errors/id/ERR_UNSUP_REQ -index 352399d..e208043 100644 ---- a/errors/id/ERR_UNSUP_REQ -+++ b/errors/id/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Unsupported Request Method and Protocol

- - --

Squid does not support all request methods for all access protocols. For example, you can not POST a Gopher request.

-+

Squid does not support all request methods for all access protocols.

- -

Your cache administrator is %w.

-
-diff --git a/errors/it/ERR_UNSUP_REQ b/errors/it/ERR_UNSUP_REQ -index d6ebc13..4f770bb 100644 ---- a/errors/it/ERR_UNSUP_REQ -+++ b/errors/it/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Metodo e protocollo della richiesta non sono supportati.

- - --

Squid non consente di utilizzare qualsiasi tipo di richiesta per qualsiasi protocollo (a esempio non consente una richiesta POST su protocollo Gopher).

-+

Squid non consente di utilizzare qualsiasi tipo di richiesta per qualsiasi protocollo.

- -

L'amministratore del proxy è %w.

-
-diff --git a/errors/ja/ERR_UNSUP_REQ b/errors/ja/ERR_UNSUP_REQ -index 67b6cf2..a7b7950 100644 ---- a/errors/ja/ERR_UNSUP_REQ -+++ b/errors/ja/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

サポートしていないリクエストメソッドとプロトコルです。

- - --

Squidはすべてのアクセス・プロトコルに対して、すべてのリクエストメソッドをサポートしているわけではありません。例えば、POSTをGopherのリクエストで行うことはできません。

-+

Squidはすべてのアクセス・プロトコルに対して、すべてのリクエストメソッドをサポートしているわけではありません。

- -

Your cache administrator is %w.

-
-diff --git a/errors/ka/ERR_UNSUP_REQ b/errors/ka/ERR_UNSUP_REQ -index 1238302..8d2c62e 100644 ---- a/errors/ka/ERR_UNSUP_REQ -+++ b/errors/ka/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Unsupported Request Method and Protocol

- - --

Squid does not support all request methods for all access protocols. For example, you can not POST a Gopher request.

-+

Squid does not support all request methods for all access protocols.

- -

თქვენი კეშის ადმინისტრატორია %w.

-
-diff --git a/errors/ko/ERR_UNSUP_REQ b/errors/ko/ERR_UNSUP_REQ -index d19ce25..ca7c946 100644 ---- a/errors/ko/ERR_UNSUP_REQ -+++ b/errors/ko/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

지원되지 않는 Request Method와 프로토콜입니다.

- - --

Squid는 모든 접속 프로토콜에 대한 request method를 지원하지 않습니다. 한가지 예로, Gopher에서 POST request를 사용할 수 없습니다.

-+

Squid는 모든 접속 프로토콜에 대한 request method를 지원하지 않습니다.

- -

Your cache administrator is %w.

-
-diff --git a/errors/lt/ERR_UNSUP_REQ b/errors/lt/ERR_UNSUP_REQ -index 9e3949b..29af2de 100644 ---- a/errors/lt/ERR_UNSUP_REQ -+++ b/errors/lt/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Nepalaikomas užklausos metodas ir protokolas

- - --

Squid palaiko ne visus užklausos metodus daliai protokolų. Pavyzdžiui, jūs negalite vykdyti POST Gopher tipo užklausoje.

-+

Squid palaiko ne visus užklausos metodus daliai protokolų.

- -

Your cache administrator is %w.

-
-diff --git a/errors/lv/ERR_UNSUP_REQ b/errors/lv/ERR_UNSUP_REQ -index 85450e6..88bfc8b 100644 ---- a/errors/lv/ERR_UNSUP_REQ -+++ b/errors/lv/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Neatbalstīta pieprasījuma metode un protokols

- - --

Squid neatbalsta visas pieprasījuma metodes visiem protokoliem. Piemēram, Jūs nevarat veikt POST pieprasījumu izmantojot Gopher protokolu.

-+

Squid neatbalsta visas pieprasījuma metodes visiem protokoliem.

- -

Jūsu kešatmiņas administrators ir %w.

-
-diff --git a/errors/ms/ERR_UNSUP_REQ b/errors/ms/ERR_UNSUP_REQ -index 987fe76..20948f5 100644 ---- a/errors/ms/ERR_UNSUP_REQ -+++ b/errors/ms/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Unsupported Request Method and Protocol

- - --

Squid does not support all request methods for all access protocols. For example, you can not POST a Gopher request.

-+

Squid does not support all request methods for all access protocols.

- -

Pengurus Proxy anda ialah %w.

-
-diff --git a/errors/nl/ERR_UNSUP_REQ b/errors/nl/ERR_UNSUP_REQ -index a8cb984..c46c47a 100644 ---- a/errors/nl/ERR_UNSUP_REQ -+++ b/errors/nl/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Niet ondersteunde verzoekmethode of protocol

- - --

Squid ondersteunt niet alle verzoekmethoden voor alle toegangsprotocollen. U kunt bijvoorbeeld geen Gopher verzoek POSTen.

-+

Squid ondersteunt niet alle verzoekmethoden voor alle toegangsprotocollen.

- -

De beheerder van deze cache is %w.

-
-diff --git a/errors/oc/ERR_UNSUP_REQ b/errors/oc/ERR_UNSUP_REQ -index 617f4a9..4e2ea38 100644 ---- a/errors/oc/ERR_UNSUP_REQ -+++ b/errors/oc/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Unsupported Request Method and Protocol

- - --

Squid does not support all request methods for all access protocols. For example, you can not POST a Gopher request.

-+

Squid does not support all request methods for all access protocols.

- -

Vòstre administrator d'amagatal es %w.

-
-diff --git a/errors/pl/ERR_UNSUP_REQ b/errors/pl/ERR_UNSUP_REQ -index 44bc0de..64c594c 100644 ---- a/errors/pl/ERR_UNSUP_REQ -+++ b/errors/pl/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

użyta w żądaniu kombinacja metoda/protokół jest niewłaściwa

- - --

Squid nie wspiera wszystkich metod we wszystkich protokołach. Na przykład nie możesz użyć metody POST w żądaniu skierowanym do usługi Gopher.

-+

Squid nie wspiera wszystkich metod we wszystkich protokołach.

- -

Your cache administrator is %w.

-
-diff --git a/errors/pt-br/ERR_UNSUP_REQ b/errors/pt-br/ERR_UNSUP_REQ -index 60e08d3..5fbc882 100644 ---- a/errors/pt-br/ERR_UNSUP_REQ -+++ b/errors/pt-br/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Método e Protocolo de Requisição Não-Suportado

- - --

Squid não suporta todos os métodos de requisição para todos os protocolos de acesso. Por exemplo, você não pode emitir uma requisição POST ao protocolo Gopher.

-+

Squid não suporta todos os métodos de requisição para todos os protocolos de acesso.

- -

Seu administrador do cache é %w.

-
-diff --git a/errors/pt/ERR_UNSUP_REQ b/errors/pt/ERR_UNSUP_REQ -index ed3a68b..4b8bbbb 100644 ---- a/errors/pt/ERR_UNSUP_REQ -+++ b/errors/pt/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Método ou protocolo não suportado.

- - --

Squid does not support all request methods for all access protocols. For example, you can not POST a Gopher request.

-+

Squid does not support all request methods for all access protocols.

- -

Your cache administrator is %w.

-
-diff --git a/errors/ro/ERR_UNSUP_REQ b/errors/ro/ERR_UNSUP_REQ -index f97375f..a237af2 100644 ---- a/errors/ro/ERR_UNSUP_REQ -+++ b/errors/ro/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Metodă de cerere şi protocol nesuportată

- - --

Squid nu suportă toate metodele de cerere pentru toate protocoalele de acces. De exemplu, nu puteţi face o cerere de tip POST pentru Gopher.

-+

Squid nu suportă toate metodele de cerere pentru toate protocoalele de acces.

- -

Administratorul cache-ului este %w.

-
-diff --git a/errors/ru/ERR_UNSUP_REQ b/errors/ru/ERR_UNSUP_REQ -index 2a22302..b7fa536 100644 ---- a/errors/ru/ERR_UNSUP_REQ -+++ b/errors/ru/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Неподдерживаемый метод запроса или протокол

- - --

Squid не поддерживает все методы запросов для всех протоколов. К примеру, для протокола Gopher Вы не можете выполнить запрос POST.

-+

Squid не поддерживает все методы запросов для всех протоколов.

- -

Администратор Вашего кэша: %w.

-
-diff --git a/errors/sk/ERR_UNSUP_REQ b/errors/sk/ERR_UNSUP_REQ -index 4c37736..aecebc7 100644 ---- a/errors/sk/ERR_UNSUP_REQ -+++ b/errors/sk/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Nepodporovaná metóda a protokol požiadavky

- - --

Squid nepodporuje všetky typy metód pri všetkých protokoloch. Napríklad: nie je možné použiť metódu POST pri službe Gopher.

-+

Squid nepodporuje všetky typy metód pri všetkých protokoloch.

- -

Vaším správcom cache je %w.

-
-diff --git a/errors/sl/ERR_UNSUP_REQ b/errors/sl/ERR_UNSUP_REQ -index 3fff99a..7d421a5 100644 ---- a/errors/sl/ERR_UNSUP_REQ -+++ b/errors/sl/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Nepodprta metoda zahteve in protokol

- - --

Squid ne podpira vseh metod zahtev za vse protokole dostopa. Tako npr. metode POST ne morete uporabiti za zahtevo Gopher.

-+

Squid ne podpira vseh metod zahtev za vse protokole dostopa.

- -

Skrbnik vašega predpomnilnika je %w.

-
-diff --git a/errors/sr-cyrl/ERR_UNSUP_REQ b/errors/sr-cyrl/ERR_UNSUP_REQ -index 352399d..e208043 100644 ---- a/errors/sr-cyrl/ERR_UNSUP_REQ -+++ b/errors/sr-cyrl/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Unsupported Request Method and Protocol

- - --

Squid does not support all request methods for all access protocols. For example, you can not POST a Gopher request.

-+

Squid does not support all request methods for all access protocols.

- -

Your cache administrator is %w.

-
-diff --git a/errors/sr-latn/ERR_UNSUP_REQ b/errors/sr-latn/ERR_UNSUP_REQ -index 11ba17b..64ee787 100644 ---- a/errors/sr-latn/ERR_UNSUP_REQ -+++ b/errors/sr-latn/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Nepodržan metod ili protokol zahteva (Request)

- - --

Squid Proksi server ne podržava sve metode zahteva za sve moguæe pristupne protokole. Na primer ne možete da uradite POST na Gopher zahtev.

-+

Squid Proksi server ne podržava sve metode zahteva za sve moguæe pristupne protokole.

- -

Vaš keš/proksi administrator je: %w.

-
-diff --git a/errors/sv/ERR_UNSUP_REQ b/errors/sv/ERR_UNSUP_REQ -index 0fcb988..d7fdeef 100644 ---- a/errors/sv/ERR_UNSUP_REQ -+++ b/errors/sv/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Ej stöd för begärd Metod och Protokoll

- - --

Squid stödjer inte alla frågemetoder för alla protokoll. Till exempel, Ni kan inte POST'a en Gopher förfrågan.

-+

Squid stödjer inte alla frågemetoder för alla protokoll.

- -

Din cacheserver administratör är %w.

-
diff --git a/errors/templates/ERR_UNSUP_REQ b/errors/templates/ERR_UNSUP_REQ index e880392..196887d 100644 --- a/errors/templates/ERR_UNSUP_REQ @@ -631,97 +53,6 @@ index e880392..196887d 100644 -

Squid does not support all request methods for all access protocols. For example, you can not POST a Gopher request.

+

Squid does not support all request methods for all access protocols.

-

Your cache administrator is %w.

-
-diff --git a/errors/th/ERR_UNSUP_REQ b/errors/th/ERR_UNSUP_REQ -index d34fc2d..9586681 100644 ---- a/errors/th/ERR_UNSUP_REQ -+++ b/errors/th/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

ไม่รองรับโปรโตคอลและวิธีการหรือคำสั่งที่เรียกมา (request method)

- - --

Squid does not support all request methods for all access protocols. For example, you can not POST a Gopher request.

-+

Squid does not support all request methods for all access protocols.

- -

ผู้ดูแลระบบแคชของคุณคือ %w

-
-diff --git a/errors/tr/ERR_UNSUP_REQ b/errors/tr/ERR_UNSUP_REQ -index 9c00be4..90db4b7 100644 ---- a/errors/tr/ERR_UNSUP_REQ -+++ b/errors/tr/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Desteklenmeyen istek yöntemi ve protokol.

- - --

Squid, bazı erişim protokollerin, bazı istek yöntemlerini desteklemiyor. Örneğin Gopher isteğinizde POST yapamazsınız.

-+

Squid, bazı erişim protokollerin, bazı istek yöntemlerini desteklemiyor.

- -

Önbellk yöneticiniz %w.

-
-diff --git a/errors/uk/ERR_UNSUP_REQ b/errors/uk/ERR_UNSUP_REQ -index d92e9e5..4ffb93e 100644 ---- a/errors/uk/ERR_UNSUP_REQ -+++ b/errors/uk/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Метод запиту чи протокол не підтримуються

- - --

Squid не підтримує всі методи запитів для всіх наявних протоколів. Як приклад, Ви не можете виконати запит POST для протоколу Gopher.

-+

Squid не підтримує всі методи запитів для всіх наявних протоколів.

- -

Адміністратор даного кешу %w.

-
-diff --git a/errors/uz/ERR_UNSUP_REQ b/errors/uz/ERR_UNSUP_REQ -index 47f5fe9..7c4cfa7 100644 ---- a/errors/uz/ERR_UNSUP_REQ -+++ b/errors/uz/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Unsupported Request Method and Protocol

- - --

Squid does not support all request methods for all access protocols. For example, you can not POST a Gopher request.

-+

Squid does not support all request methods for all access protocols.

- -

Your cache administrator is %w.

-
-diff --git a/errors/vi/ERR_UNSUP_REQ b/errors/vi/ERR_UNSUP_REQ -index 807df9e..f84d447 100644 ---- a/errors/vi/ERR_UNSUP_REQ -+++ b/errors/vi/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

Unsupported Request Method and Protocol

- - --

Squid không hỗ trợ tất cả các phương pháp yêu cầu cho mỗi giao thức truy cập. Chẳng hạn, bạn không có khả năng POST một yêu cầu Gopher.

-+

Squid không hỗ trợ tất cả các phương pháp yêu cầu cho mỗi giao thức truy cập.

- -

Your cache administrator is %w.

-
-diff --git a/errors/zh-hans/ERR_UNSUP_REQ b/errors/zh-hans/ERR_UNSUP_REQ -index 056c22b..35b28a3 100644 ---- a/errors/zh-hans/ERR_UNSUP_REQ -+++ b/errors/zh-hans/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

不支持的请求方式和协议

- - --

Squid (缓存服务器)不能对所有的存取协议支持所有的请求方式。比如说,你不能对 GOPHER 进行一个 POST 请求。

-+

Squid (缓存服务器)不能对所有的存取协议支持所有的请求方式。

- -

缓存服务器的管理员 %w.

-
-diff --git a/errors/zh-hant/ERR_UNSUP_REQ b/errors/zh-hant/ERR_UNSUP_REQ -index eacf4c4..8023a8b 100644 ---- a/errors/zh-hant/ERR_UNSUP_REQ -+++ b/errors/zh-hant/ERR_UNSUP_REQ -@@ -24,7 +24,7 @@ body -

尚未支援的要求方式或通訊協定

- - --

因為 Squid (網路快取程式)並未支援所有的連結要求方式在各式通訊協定上。比如說,你不能要求一個 GOPHER 的 POST 連結要求。

-+

因為 Squid (網路快取程式)並未支援所有的連結要求方式在各式通訊協定上。

-

Your cache administrator is %w.


diff --git a/src/FwdState.cc b/src/FwdState.cc @@ -1120,18 +451,6 @@ index a2779e7..94595b6 100644 case AnyP::PROTO_WAIS: return libecap::protocolWais; case AnyP::PROTO_WHOIS: -diff --git a/src/anyp/ProtocolType.cc b/src/anyp/ProtocolType.cc -index 7b8c3ef..9b92c79 100644 ---- a/src/anyp/ProtocolType.cc -+++ b/src/anyp/ProtocolType.cc -@@ -13,7 +13,6 @@ const char * ProtocolType_str[] = { - "HTTPS", - "COAP", - "COAPS", -- "GOPHER", - "WAIS", - "CACHE_OBJECT", - "ICP", diff --git a/src/anyp/ProtocolType.h b/src/anyp/ProtocolType.h index 66f7bc2..ef3ab25 100644 --- a/src/anyp/ProtocolType.h @@ -1208,47 +527,9 @@ index b5519b2..bc2ddcd 100644 + Note that this is a global limit. It affects all HTTP, HTCP, and FTP connections from the client. For finer control use the ACL access controls. - Requires client_db to be enabled (the default). -diff --git a/src/cf.data.pre.config b/src/cf.data.pre.config -index 4aef432..c13ecae 100644 ---- a/src/cf.data.pre.config -+++ b/src/cf.data.pre.config -@@ -1513,7 +1513,6 @@ acl SSL_ports port 443 - acl Safe_ports port 80 # http - acl Safe_ports port 21 # ftp - acl Safe_ports port 443 # https --acl Safe_ports port 70 # gopher - acl Safe_ports port 210 # wais - acl Safe_ports port 1025-65535 # unregistered ports - acl Safe_ports port 280 # http-mgmt -@@ -4563,7 +4562,7 @@ DOC_START - [http::]serverConnection(), Config.Timeout.read, timeoutCall); -} - -diff --git a/src/gopher.cc.CVE-2021-46784 b/src/gopher.cc.CVE-2021-46784 -deleted file mode 100644 -index 169b0e1..0000000 ---- a/src/gopher.cc.CVE-2021-46784 -+++ /dev/null -@@ -1,982 +0,0 @@ --/* -- * Copyright (C) 1996-2021 The Squid Software Foundation and contributors -- * -- * Squid software is distributed under GPLv2+ license and includes -- * contributions from numerous individuals and organizations. -- * Please see the COPYING and CONTRIBUTORS files for details. -- */ -- --/* DEBUG: section 10 Gopher */ -- --#include "squid.h" --#include "comm.h" --#include "comm/Read.h" --#include "comm/Write.h" --#include "errorpage.h" --#include "fd.h" --#include "FwdState.h" --#include "globals.h" --#include "html_quote.h" --#include "HttpReply.h" --#include "HttpRequest.h" --#include "MemBuf.h" --#include "mime.h" --#include "parser/Tokenizer.h" --#include "rfc1738.h" --#include "SquidConfig.h" --#include "SquidTime.h" --#include "StatCounters.h" --#include "Store.h" --#include "tools.h" -- --#if USE_DELAY_POOLS --#include "DelayPools.h" --#include "MemObject.h" --#endif -- --/* gopher type code from rfc. Anawat. */ --#define GOPHER_FILE '0' --#define GOPHER_DIRECTORY '1' --#define GOPHER_CSO '2' --#define GOPHER_ERROR '3' --#define GOPHER_MACBINHEX '4' --#define GOPHER_DOSBIN '5' --#define GOPHER_UUENCODED '6' --#define GOPHER_INDEX '7' --#define GOPHER_TELNET '8' --#define GOPHER_BIN '9' --#define GOPHER_REDUNT '+' --#define GOPHER_3270 'T' --#define GOPHER_GIF 'g' --#define GOPHER_IMAGE 'I' -- --#define GOPHER_HTML 'h' --#define GOPHER_INFO 'i' -- --/// W3 address --#define GOPHER_WWW 'w' --#define GOPHER_SOUND 's' -- --#define GOPHER_PLUS_IMAGE ':' --#define GOPHER_PLUS_MOVIE ';' --#define GOPHER_PLUS_SOUND '<' -- --#define GOPHER_PORT 70 -- --#define TAB '\t' -- --// TODO CODE: should this be a protocol-specific thing? --#define TEMP_BUF_SIZE 4096 -- --#define MAX_CSO_RESULT 1024 -- --/** -- * Gopher Gateway Internals -- * -- * Gopher is somewhat complex and gross because it must convert from -- * the Gopher protocol to HTTP. 
-- */ --class GopherStateData --{ -- CBDATA_CLASS(GopherStateData); -- --public: -- GopherStateData(FwdState *aFwd) : -- entry(aFwd->entry), -- conversion(NORMAL), -- HTML_header_added(0), -- HTML_pre(0), -- type_id(GOPHER_FILE /* '0' */), -- cso_recno(0), -- len(0), -- buf(NULL), -- fwd(aFwd) -- { -- *request = 0; -- buf = (char *)memAllocate(MEM_4K_BUF); -- entry->lock("gopherState"); -- *replybuf = 0; -- } -- ~GopherStateData() {if(buf) swanSong();} -- -- /* AsyncJob API emulated */ -- void deleteThis(const char *aReason); -- void swanSong(); -- --public: -- StoreEntry *entry; -- enum { -- NORMAL, -- HTML_DIR, -- HTML_INDEX_RESULT, -- HTML_CSO_RESULT, -- HTML_INDEX_PAGE, -- HTML_CSO_PAGE -- } conversion; -- int HTML_header_added; -- int HTML_pre; -- char type_id; -- char request[MAX_URL]; -- int cso_recno; -- int len; -- char *buf; /* pts to a 4k page */ -- Comm::ConnectionPointer serverConn; -- FwdState::Pointer fwd; -- HttpReply::Pointer reply_; -- char replybuf[BUFSIZ]; --}; -- --CBDATA_CLASS_INIT(GopherStateData); -- --static CLCB gopherStateFree; --static void gopherMimeCreate(GopherStateData *); --static void gopher_request_parse(const HttpRequest * req, -- char *type_id, -- char *request); --static void gopherEndHTML(GopherStateData *); --static void gopherToHTML(GopherStateData *, char *inbuf, int len); --static CTCB gopherTimeout; --static IOCB gopherReadReply; --static IOCB gopherSendComplete; --static PF gopherSendRequest; -- --static char def_gopher_bin[] = "www/unknown"; -- --static char def_gopher_text[] = "text/plain"; -- --static void --gopherStateFree(const CommCloseCbParams ¶ms) --{ -- GopherStateData *gopherState = (GopherStateData *)params.data; -- -- if (gopherState == NULL) -- return; -- -- gopherState->deleteThis("gopherStateFree"); --} -- --void --GopherStateData::deleteThis(const char *) --{ -- swanSong(); -- delete this; --} -- --void --GopherStateData::swanSong() --{ -- if (entry) -- entry->unlock("gopherState"); -- -- if (buf) { -- memFree(buf, MEM_4K_BUF); -- buf = nullptr; -- } --} -- --/** -- * Create MIME Header for Gopher Data -- */ --static void --gopherMimeCreate(GopherStateData * gopherState) --{ -- StoreEntry *entry = gopherState->entry; -- const char *mime_type = NULL; -- const char *mime_enc = NULL; -- -- switch (gopherState->type_id) { -- -- case GOPHER_DIRECTORY: -- -- case GOPHER_INDEX: -- -- case GOPHER_HTML: -- -- case GOPHER_WWW: -- -- case GOPHER_CSO: -- mime_type = "text/html"; -- break; -- -- case GOPHER_GIF: -- -- case GOPHER_IMAGE: -- -- case GOPHER_PLUS_IMAGE: -- mime_type = "image/gif"; -- break; -- -- case GOPHER_SOUND: -- -- case GOPHER_PLUS_SOUND: -- mime_type = "audio/basic"; -- break; -- -- case GOPHER_PLUS_MOVIE: -- mime_type = "video/mpeg"; -- break; -- -- case GOPHER_MACBINHEX: -- -- case GOPHER_DOSBIN: -- -- case GOPHER_UUENCODED: -- -- case GOPHER_BIN: -- /* Rightnow We have no idea what it is. 
*/ -- mime_enc = mimeGetContentEncoding(gopherState->request); -- mime_type = mimeGetContentType(gopherState->request); -- if (!mime_type) -- mime_type = def_gopher_bin; -- break; -- -- case GOPHER_FILE: -- -- default: -- mime_enc = mimeGetContentEncoding(gopherState->request); -- mime_type = mimeGetContentType(gopherState->request); -- if (!mime_type) -- mime_type = def_gopher_text; -- break; -- } -- -- assert(entry->isEmpty()); -- -- HttpReply *reply = new HttpReply; -- entry->buffer(); -- reply->setHeaders(Http::scOkay, "Gatewaying", mime_type, -1, -1, -2); -- if (mime_enc) -- reply->header.putStr(Http::HdrType::CONTENT_ENCODING, mime_enc); -- -- entry->replaceHttpReply(reply); -- gopherState->reply_ = reply; --} -- --/** -- * Parse a gopher request into components. By Anawat. -- */ --static void --gopher_request_parse(const HttpRequest * req, char *type_id, char *request) --{ -- ::Parser::Tokenizer tok(req->url.path()); -- -- if (request) -- *request = 0; -- -- tok.skip('/'); // ignore failures? path could be ab-empty -- -- if (tok.atEnd()) { -- *type_id = GOPHER_DIRECTORY; -- return; -- } -- -- static const CharacterSet anyByte("UTF-8",0x00, 0xFF); -- -- SBuf typeId; -- (void)tok.prefix(typeId, anyByte, 1); // never fails since !atEnd() -- *type_id = typeId[0]; -- -- if (request) { -- SBufToCstring(request, tok.remaining().substr(0, MAX_URL-1)); -- /* convert %xx to char */ -- rfc1738_unescape(request); -- } --} -- --/** -- * Parse the request to determine whether it is cachable. -- * -- * \param req Request data. -- * \retval 0 Not cachable. -- * \retval 1 Cachable. -- */ --int --gopherCachable(const HttpRequest * req) --{ -- int cachable = 1; -- char type_id; -- /* parse to see type */ -- gopher_request_parse(req, -- &type_id, -- NULL); -- -- switch (type_id) { -- -- case GOPHER_INDEX: -- -- case GOPHER_CSO: -- -- case GOPHER_TELNET: -- -- case GOPHER_3270: -- cachable = 0; -- break; -- -- default: -- cachable = 1; -- } -- -- return cachable; --} -- --static void --gopherHTMLHeader(StoreEntry * e, const char *title, const char *substring) --{ -- storeAppendPrintf(e, "\n"); -- storeAppendPrintf(e, ""); -- storeAppendPrintf(e, title, substring); -- storeAppendPrintf(e, ""); -- storeAppendPrintf(e, "\n"); -- storeAppendPrintf(e, "\n

"); -- storeAppendPrintf(e, title, substring); -- storeAppendPrintf(e, "

\n"); --} -- --static void --gopherHTMLFooter(StoreEntry * e) --{ -- storeAppendPrintf(e, "
\n"); -- storeAppendPrintf(e, "
\n"); -- storeAppendPrintf(e, "Generated %s by %s (%s)\n", -- mkrfc1123(squid_curtime), -- getMyHostname(), -- visible_appname_string); -- storeAppendPrintf(e, "
\n"); --} -- --static void --gopherEndHTML(GopherStateData * gopherState) --{ -- StoreEntry *e = gopherState->entry; -- -- if (!gopherState->HTML_header_added) { -- gopherHTMLHeader(e, "Server Return Nothing", NULL); -- storeAppendPrintf(e, "

The Gopher query resulted in a blank response

"); -- } else if (gopherState->HTML_pre) { -- storeAppendPrintf(e, "\n"); -- } -- -- gopherHTMLFooter(e); --} -- --/** -- * Convert Gopher to HTML. -- * -- * Borrow part of code from libwww2 came with Mosaic distribution. -- */ --static void --gopherToHTML(GopherStateData * gopherState, char *inbuf, int len) --{ -- char *pos = inbuf; -- char *lpos = NULL; -- char *tline = NULL; -- LOCAL_ARRAY(char, line, TEMP_BUF_SIZE); -- LOCAL_ARRAY(char, tmpbuf, TEMP_BUF_SIZE); -- char *name = NULL; -- char *selector = NULL; -- char *host = NULL; -- char *port = NULL; -- char *escaped_selector = NULL; -- const char *icon_url = NULL; -- char gtype; -- StoreEntry *entry = NULL; -- -- memset(tmpbuf, '\0', TEMP_BUF_SIZE); -- memset(line, '\0', TEMP_BUF_SIZE); -- -- entry = gopherState->entry; -- -- if (gopherState->conversion == GopherStateData::HTML_INDEX_PAGE) { -- char *html_url = html_quote(entry->url()); -- gopherHTMLHeader(entry, "Gopher Index %s", html_url); -- storeAppendPrintf(entry, -- "

This is a searchable Gopher index. Use the search\n" -- "function of your browser to enter search terms.\n" -- "\n"); -- gopherHTMLFooter(entry); -- /* now let start sending stuff to client */ -- entry->flush(); -- gopherState->HTML_header_added = 1; -- -- return; -- } -- -- if (gopherState->conversion == GopherStateData::HTML_CSO_PAGE) { -- char *html_url = html_quote(entry->url()); -- gopherHTMLHeader(entry, "CSO Search of %s", html_url); -- storeAppendPrintf(entry, -- "

A CSO database usually contains a phonebook or\n" -- "directory. Use the search function of your browser to enter\n" -- "search terms.

\n"); -- gopherHTMLFooter(entry); -- /* now let start sending stuff to client */ -- entry->flush(); -- gopherState->HTML_header_added = 1; -- -- return; -- } -- -- String outbuf; -- -- if (!gopherState->HTML_header_added) { -- if (gopherState->conversion == GopherStateData::HTML_CSO_RESULT) -- gopherHTMLHeader(entry, "CSO Search Result", NULL); -- else -- gopherHTMLHeader(entry, "Gopher Menu", NULL); -- -- outbuf.append ("
");
--
--        gopherState->HTML_header_added = 1;
--
--        gopherState->HTML_pre = 1;
--    }
--
--    while (pos < inbuf + len) {
--        int llen;
--        int left = len - (pos - inbuf);
--        lpos = (char *)memchr(pos, '\n', left);
--        if (lpos) {
--            ++lpos;             /* Next line is after \n */
--            llen = lpos - pos;
--        } else {
--            llen = left;
--        }
--        if (gopherState->len + llen >= TEMP_BUF_SIZE) {
--            debugs(10, DBG_IMPORTANT, "GopherHTML: Buffer overflow. Lost some data on URL: " << entry->url()  );
--            llen = TEMP_BUF_SIZE - gopherState->len - 1;
--        }
--        if (!lpos) {
--            /* there is no complete line in inbuf */
--            /* copy it to temp buffer */
--            /* note: llen is adjusted above */
--            memcpy(gopherState->buf + gopherState->len, pos, llen);
--            gopherState->len += llen;
--            break;
--        }
--        if (gopherState->len != 0) {
--            /* there is something left from last tx. */
--            memcpy(line, gopherState->buf, gopherState->len);
--            memcpy(line + gopherState->len, pos, llen);
--            llen += gopherState->len;
--            gopherState->len = 0;
--        } else {
--            memcpy(line, pos, llen);
--        }
--        line[llen + 1] = '\0';
--        /* move input to next line */
--        pos = lpos;
--
--        /* at this point. We should have one line in buffer to process */
--
--        if (*line == '.') {
--            /* skip it */
--            memset(line, '\0', TEMP_BUF_SIZE);
--            continue;
--        }
--
--        switch (gopherState->conversion) {
--
--        case GopherStateData::HTML_INDEX_RESULT:
--
--        case GopherStateData::HTML_DIR: {
--            tline = line;
--            gtype = *tline;
--            ++tline;
--            name = tline;
--            selector = strchr(tline, TAB);
--
--            if (selector) {
--                *selector = '\0';
--                ++selector;
--                host = strchr(selector, TAB);
--
--                if (host) {
--                    *host = '\0';
--                    ++host;
--                    port = strchr(host, TAB);
--
--                    if (port) {
--                        char *junk;
--                        port[0] = ':';
--                        junk = strchr(host, TAB);
--
--                        if (junk)
--                            *junk++ = 0;    /* Chop port */
--                        else {
--                            junk = strchr(host, '\r');
--
--                            if (junk)
--                                *junk++ = 0;    /* Chop port */
--                            else {
--                                junk = strchr(host, '\n');
--
--                                if (junk)
--                                    *junk++ = 0;    /* Chop port */
--                            }
--                        }
--
--                        if ((port[1] == '0') && (!port[2]))
--                            port[0] = 0;    /* 0 means none */
--                    }
--
--                    /* escape a selector here */
--                    escaped_selector = xstrdup(rfc1738_escape_part(selector));
--
--                    switch (gtype) {
--
--                    case GOPHER_DIRECTORY:
--                        icon_url = mimeGetIconURL("internal-menu");
--                        break;
--
--                    case GOPHER_HTML:
--
--                    case GOPHER_FILE:
--                        icon_url = mimeGetIconURL("internal-text");
--                        break;
--
--                    case GOPHER_INDEX:
--
--                    case GOPHER_CSO:
--                        icon_url = mimeGetIconURL("internal-index");
--                        break;
--
--                    case GOPHER_IMAGE:
--
--                    case GOPHER_GIF:
--
--                    case GOPHER_PLUS_IMAGE:
--                        icon_url = mimeGetIconURL("internal-image");
--                        break;
--
--                    case GOPHER_SOUND:
--
--                    case GOPHER_PLUS_SOUND:
--                        icon_url = mimeGetIconURL("internal-sound");
--                        break;
--
--                    case GOPHER_PLUS_MOVIE:
--                        icon_url = mimeGetIconURL("internal-movie");
--                        break;
--
--                    case GOPHER_TELNET:
--
--                    case GOPHER_3270:
--                        icon_url = mimeGetIconURL("internal-telnet");
--                        break;
--
--                    case GOPHER_BIN:
--
--                    case GOPHER_MACBINHEX:
--
--                    case GOPHER_DOSBIN:
--
--                    case GOPHER_UUENCODED:
--                        icon_url = mimeGetIconURL("internal-binary");
--                        break;
--
--                    case GOPHER_INFO:
--                        icon_url = NULL;
--                        break;
--
--                    default:
--                        icon_url = mimeGetIconURL("internal-unknown");
--                        break;
--                    }
--
--                    memset(tmpbuf, '\0', TEMP_BUF_SIZE);
--
--                    if ((gtype == GOPHER_TELNET) || (gtype == GOPHER_3270)) {
--                        if (strlen(escaped_selector) != 0)
--                            snprintf(tmpbuf, TEMP_BUF_SIZE, " %s\n",
--                                     icon_url, escaped_selector, rfc1738_escape_part(host),
--                                     *port ? ":" : "", port, html_quote(name));
--                        else
--                            snprintf(tmpbuf, TEMP_BUF_SIZE, " %s\n",
--                                     icon_url, rfc1738_escape_part(host), *port ? ":" : "",
--                                     port, html_quote(name));
--
--                    } else if (gtype == GOPHER_INFO) {
--                        snprintf(tmpbuf, TEMP_BUF_SIZE, "\t%s\n", html_quote(name));
--                    } else {
--                        if (strncmp(selector, "GET /", 5) == 0) {
--                            /* WWW link */
--                            snprintf(tmpbuf, TEMP_BUF_SIZE, " %s\n",
--                                     icon_url, host, rfc1738_escape_unescaped(selector + 5), html_quote(name));
--                        } else {
--                            /* Standard link */
--                            snprintf(tmpbuf, TEMP_BUF_SIZE, " %s\n",
--                                     icon_url, host, gtype, escaped_selector, html_quote(name));
--                        }
--                    }
--
--                    safe_free(escaped_selector);
--                    outbuf.append(tmpbuf);
--                } else {
--                    memset(line, '\0', TEMP_BUF_SIZE);
--                    continue;
--                }
--            } else {
--                memset(line, '\0', TEMP_BUF_SIZE);
--                continue;
--            }
--
--            break;
--            }           /* HTML_DIR, HTML_INDEX_RESULT */
--
--        case GopherStateData::HTML_CSO_RESULT: {
--            if (line[0] == '-') {
--                int code, recno;
--                char *s_code, *s_recno, *result;
--
--                s_code = strtok(line + 1, ":\n");
--                s_recno = strtok(NULL, ":\n");
--                result = strtok(NULL, "\n");
--
--                if (!result)
--                    break;
--
--                code = atoi(s_code);
--
--                recno = atoi(s_recno);
--
--                if (code != 200)
--                    break;
--
--                if (gopherState->cso_recno != recno) {
--                    snprintf(tmpbuf, TEMP_BUF_SIZE, "

Record# %d
%s

\n
", recno, html_quote(result));
--                    gopherState->cso_recno = recno;
--                } else {
--                    snprintf(tmpbuf, TEMP_BUF_SIZE, "%s\n", html_quote(result));
--                }
--
--                outbuf.append(tmpbuf);
--                break;
--            } else {
--                int code;
--                char *s_code, *result;
--
--                s_code = strtok(line, ":");
--                result = strtok(NULL, "\n");
--
--                if (!result)
--                    break;
--
--                code = atoi(s_code);
--
--                switch (code) {
--
--                case 200: {
--                    /* OK */
--                    /* Do nothing here */
--                    break;
--                }
--
--                case 102:   /* Number of matches */
--
--                case 501:   /* No Match */
--
--                case 502: { /* Too Many Matches */
--                    /* Print the message the server returns */
--                    snprintf(tmpbuf, TEMP_BUF_SIZE, "

%s

\n
", html_quote(result));
--                    outbuf.append(tmpbuf);
--                    break;
--                }
--
--                }
--            }
--
--            }           /* HTML_CSO_RESULT */
--
--        default:
--            break;      /* do nothing */
--
--        }           /* switch */
--
--    }               /* while loop */
--
--    if (outbuf.size() > 0) {
--        entry->append(outbuf.rawBuf(), outbuf.size());
--        /* now let start sending stuff to client */
--        entry->flush();
--    }
--
--    outbuf.clean();
--    return;
--}
--
--static void
--gopherTimeout(const CommTimeoutCbParams &io)
--{
--    GopherStateData *gopherState = static_cast(io.data);
--    debugs(10, 4, HERE << io.conn << ": '" << gopherState->entry->url() << "'" );
--
--    gopherState->fwd->fail(new ErrorState(ERR_READ_TIMEOUT, Http::scGatewayTimeout, gopherState->fwd->request));
--
--    if (Comm::IsConnOpen(io.conn))
--        io.conn->close();
--}
--
--/**
-- * This will be called when data is ready to be read from fd.
-- * Read until error or connection closed.
-- */
--static void
--gopherReadReply(const Comm::ConnectionPointer &conn, char *buf, size_t len, Comm::Flag flag, int xerrno, void *data)
--{
--    GopherStateData *gopherState = (GopherStateData *)data;
--    StoreEntry *entry = gopherState->entry;
--    int clen;
--    int bin;
--    size_t read_sz = BUFSIZ;
--#if USE_DELAY_POOLS
--    DelayId delayId = entry->mem_obj->mostBytesAllowed();
--#endif
--
--    /* Bail out early on Comm::ERR_CLOSING - close handlers will tidy up for us */
--
--    if (flag == Comm::ERR_CLOSING) {
--        return;
--    }
--
--    assert(buf == gopherState->replybuf);
--
--    // XXX: Should update delayId, statCounter, etc. before bailing
--    if (!entry->isAccepting()) {
--        debugs(10, 3, "terminating due to bad " << *entry);
--        // TODO: Do not abuse connection for triggering cleanup.
--        gopherState->serverConn->close();
--        return;
--    }
--
--#if USE_DELAY_POOLS
--    read_sz = delayId.bytesWanted(1, read_sz);
--#endif
--
--    /* leave one space for \0 in gopherToHTML */
--
--    if (flag == Comm::OK && len > 0) {
--#if USE_DELAY_POOLS
--        delayId.bytesIn(len);
--#endif
--
--        statCounter.server.all.kbytes_in += len;
--        statCounter.server.other.kbytes_in += len;
--    }
--
--    debugs(10, 5, HERE << conn << " read len=" << len);
--
--    if (flag == Comm::OK && len > 0) {
--        AsyncCall::Pointer nil;
--        commSetConnTimeout(conn, Config.Timeout.read, nil);
--        ++IOStats.Gopher.reads;
--
--        for (clen = len - 1, bin = 0; clen; ++bin)
--            clen >>= 1;
--
--        ++IOStats.Gopher.read_hist[bin];
--
--        HttpRequest *req = gopherState->fwd->request;
--        if (req->hier.bodyBytesRead < 0) {
--            req->hier.bodyBytesRead = 0;
--            // first bytes read, update Reply flags:
--            gopherState->reply_->sources |= HttpMsg::srcGopher;
--        }
--
--        req->hier.bodyBytesRead += len;
--    }
--
--    if (flag != Comm::OK) {
--        debugs(50, DBG_IMPORTANT, MYNAME << "error reading: " << xstrerr(xerrno));
--
--        if (ignoreErrno(xerrno)) {
--            AsyncCall::Pointer call = commCbCall(5,4, "gopherReadReply",
--                                                 CommIoCbPtrFun(gopherReadReply, gopherState));
--            comm_read(conn, buf, read_sz, call);
--        } else {
--            ErrorState *err = new ErrorState(ERR_READ_ERROR, Http::scInternalServerError, gopherState->fwd->request);
--            err->xerrno = xerrno;
--            gopherState->fwd->fail(err);
--            gopherState->serverConn->close();
--        }
--    } else if (len == 0 && entry->isEmpty()) {
--        gopherState->fwd->fail(new ErrorState(ERR_ZERO_SIZE_OBJECT, Http::scServiceUnavailable, gopherState->fwd->request));
--        gopherState->serverConn->close();
--    } else if (len == 0) {
--        /* Connection closed; retrieval done. */
--        /* flush the rest of data in temp buf if there is one. */
--
--        if (gopherState->conversion != GopherStateData::NORMAL)
--            gopherEndHTML(gopherState);
--
--        entry->timestampsSet();
--        entry->flush();
--        gopherState->fwd->complete();
--        gopherState->serverConn->close();
--    } else {
--        if (gopherState->conversion != GopherStateData::NORMAL) {
--            gopherToHTML(gopherState, buf, len);
--        } else {
--            entry->append(buf, len);
--        }
--        AsyncCall::Pointer call = commCbCall(5,4, "gopherReadReply",
--                                             CommIoCbPtrFun(gopherReadReply, gopherState));
--        comm_read(conn, buf, read_sz, call);
--    }
--}
--
--/**
-- * This will be called when request write is complete. Schedule read of reply.
-- */
--static void
--gopherSendComplete(const Comm::ConnectionPointer &conn, char *, size_t size, Comm::Flag errflag, int xerrno, void *data)
--{
--    GopherStateData *gopherState = (GopherStateData *) data;
--    StoreEntry *entry = gopherState->entry;
--    debugs(10, 5, HERE << conn << " size: " << size << " errflag: " << errflag);
--
--    if (size > 0) {
--        fd_bytes(conn->fd, size, FD_WRITE);
--        statCounter.server.all.kbytes_out += size;
--        statCounter.server.other.kbytes_out += size;
--    }
--
--    if (!entry->isAccepting()) {
--        debugs(10, 3, "terminating due to bad " << *entry);
--        // TODO: Do not abuse connection for triggering cleanup.
--        gopherState->serverConn->close();
--        return;
--    }
--
--    if (errflag) {
--        ErrorState *err;
--        err = new ErrorState(ERR_WRITE_ERROR, Http::scServiceUnavailable, gopherState->fwd->request);
--        err->xerrno = xerrno;
--        err->port = gopherState->fwd->request->url.port();
--        err->url = xstrdup(entry->url());
--        gopherState->fwd->fail(err);
--        gopherState->serverConn->close();
--        return;
--    }
--
--    /*
--     * OK. We successfully reach remote site.  Start MIME typing
--     * stuff.  Do it anyway even though request is not HTML type.
--     */
--    entry->buffer();
--
--    gopherMimeCreate(gopherState);
--
--    switch (gopherState->type_id) {
--
--    case GOPHER_DIRECTORY:
--        /* we got to convert it first */
--        gopherState->conversion = GopherStateData::HTML_DIR;
--        gopherState->HTML_header_added = 0;
--        break;
--
--    case GOPHER_INDEX:
--        /* we got to convert it first */
--        gopherState->conversion = GopherStateData::HTML_INDEX_RESULT;
--        gopherState->HTML_header_added = 0;
--        break;
--
--    case GOPHER_CSO:
--        /* we got to convert it first */
--        gopherState->conversion = GopherStateData::HTML_CSO_RESULT;
--        gopherState->cso_recno = 0;
--        gopherState->HTML_header_added = 0;
--        break;
--
--    default:
--        gopherState->conversion = GopherStateData::NORMAL;
--        entry->flush();
--    }
--
--    /* Schedule read reply. */
--    AsyncCall::Pointer call =  commCbCall(5,5, "gopherReadReply",
--                                          CommIoCbPtrFun(gopherReadReply, gopherState));
--    entry->delayAwareRead(conn, gopherState->replybuf, BUFSIZ, call);
--}
--
--/**
-- * This will be called when connect completes. Write request.
-- */
--static void
--gopherSendRequest(int, void *data)
--{
--    GopherStateData *gopherState = (GopherStateData *)data;
--    MemBuf mb;
--    mb.init();
--
--    if (gopherState->type_id == GOPHER_CSO) {
--        const char *t = strchr(gopherState->request, '?');
--
--        if (t)
--            ++t;        /* skip the ? */
--        else
--            t = "";
--
--        mb.appendf("query %s\r\nquit", t);
--    } else {
--        if (gopherState->type_id == GOPHER_INDEX) {
--            if (char *t = strchr(gopherState->request, '?'))
--                *t = '\t';
--        }
--        mb.append(gopherState->request, strlen(gopherState->request));
--    }
--    mb.append("\r\n", 2);
--
--    debugs(10, 5, gopherState->serverConn);
--    AsyncCall::Pointer call = commCbCall(5,5, "gopherSendComplete",
--                                         CommIoCbPtrFun(gopherSendComplete, gopherState));
--    Comm::Write(gopherState->serverConn, &mb, call);
--
--    if (!gopherState->entry->makePublic())
--        gopherState->entry->makePrivate(true);
--}
--
--void
--gopherStart(FwdState * fwd)
--{
--    GopherStateData *gopherState = new GopherStateData(fwd);
--
--    debugs(10, 3, gopherState->entry->url());
--
--    ++ statCounter.server.all.requests;
--
--    ++ statCounter.server.other.requests;
--
--    /* Parse url. */
--    gopher_request_parse(fwd->request,
--                         &gopherState->type_id, gopherState->request);
--
--    comm_add_close_handler(fwd->serverConnection()->fd, gopherStateFree, gopherState);
--
--    if (((gopherState->type_id == GOPHER_INDEX) || (gopherState->type_id == GOPHER_CSO))
--            && (strchr(gopherState->request, '?') == NULL)) {
--        /* Index URL without query word */
--        /* We have to generate search page back to client. No need for connection */
--        gopherMimeCreate(gopherState);
--
--        if (gopherState->type_id == GOPHER_INDEX) {
--            gopherState->conversion = GopherStateData::HTML_INDEX_PAGE;
--        } else {
--            if (gopherState->type_id == GOPHER_CSO) {
--                gopherState->conversion = GopherStateData::HTML_CSO_PAGE;
--            } else {
--                gopherState->conversion = GopherStateData::HTML_INDEX_PAGE;
--            }
--        }
--
--        gopherToHTML(gopherState, (char *) NULL, 0);
--        fwd->complete();
--        return;
--    }
--
--    gopherState->serverConn = fwd->serverConnection();
--    gopherSendRequest(fwd->serverConnection()->fd, gopherState);
--    AsyncCall::Pointer timeoutCall = commCbCall(5, 4, "gopherTimeout",
--                                     CommTimeoutCbPtrFun(gopherTimeout, gopherState));
--    commSetConnTimeout(fwd->serverConnection(), Config.Timeout.read, timeoutCall);
--}
--
 diff --git a/src/gopher.h b/src/gopher.h
 deleted file mode 100644
 index 1d73bac..0000000
@@ -3331,13 +1624,14 @@ index 11135c3..bfffd91 100644
  Squid handles all requests in a single, non-blocking process.
  .PP
 diff --git a/src/stat.cc b/src/stat.cc
-index 8a59be4..9f2ac49 100644
+index 8a59be4..4ed2c57 100644
 --- a/src/stat.cc
 +++ b/src/stat.cc
-@@ -207,11 +207,6 @@ GetIoStats(Mgr::IoActionData& stats)
+@@ -206,12 +206,6 @@ GetIoStats(Mgr::IoActionData& stats)
+     for (i = 0; i < IoStats::histSize; ++i) {
          stats.ftp_read_hist[i] = IOStats.Ftp.read_hist[i];
      }
- 
+-
 -    stats.gopher_reads = IOStats.Gopher.reads;
 -
 -    for (i = 0; i < IoStats::histSize; ++i) {
@@ -3346,7 +1640,7 @@ index 8a59be4..9f2ac49 100644
  }
  
  void
-@@ -244,18 +239,6 @@ DumpIoStats(Mgr::IoActionData& stats, StoreEntry* sentry)
+@@ -244,19 +238,6 @@ DumpIoStats(Mgr::IoActionData& stats, StoreEntry* sentry)
                            Math::doublePercent(stats.ftp_read_hist[i], stats.ftp_reads));
      }
  
@@ -3362,9 +1656,10 @@ index 8a59be4..9f2ac49 100644
 -                          stats.gopher_read_hist[i],
 -                          Math::doublePercent(stats.gopher_read_hist[i], stats.gopher_reads));
 -    }
- 
+-
      storeAppendPrintf(sentry, "\n");
  }
+ 
 diff --git a/test-suite/squidconf/regressions-3.4.0.1 b/test-suite/squidconf/regressions-3.4.0.1
 index 41a441b..85f0a64 100644
 --- a/test-suite/squidconf/regressions-3.4.0.1
@@ -3376,6 +1671,3 @@ index 41a441b..85f0a64 100644
 -refresh_pattern ^gopher:        1440    0%      1440
  refresh_pattern -i (/cgi-bin/|\?)       0       0%      0
  refresh_pattern .       0       20%     4320
--- 
-2.39.3
-
diff --git a/SOURCES/squid-4.15-CVE-2023-49285.patch b/SOURCES/squid-4.15-CVE-2023-49285.patch
index 59ebd5a..f6351e4 100644
--- a/SOURCES/squid-4.15-CVE-2023-49285.patch
+++ b/SOURCES/squid-4.15-CVE-2023-49285.patch
@@ -1,22 +1,16 @@
-commit deee944f9a12c9fd399ce52f3e2526bb573a9470
+commit 77b3fb4df0f126784d5fd4967c28ed40eb8d521b
 Author: Alex Rousskov 
 Date:   Wed Oct 25 19:41:45 2023 +0000
 
     RFC 1123: Fix date parsing (#1538)
-
+    
     The bug was discovered and detailed by Joshua Rogers at
     https://megamansec.github.io/Squid-Security-Audit/datetime-overflow.html
     where it was filed as "1-Byte Buffer OverRead in RFC 1123 date/time
     Handling".
 
-Back port upstream patch
-Signed-Off-By: tianyue.lan@oracle.com
----
- lib/rfc1123.c | 6 ++++++
- 1 file changed, 6 insertions(+)
-
 diff --git a/lib/rfc1123.c b/lib/rfc1123.c
-index 2d889cc..add63f0 100644
+index e5bf9a4d7..cb484cc00 100644
 --- a/lib/rfc1123.c
 +++ b/lib/rfc1123.c
 @@ -50,7 +50,13 @@ make_month(const char *s)
@@ -33,6 +27,4 @@ index 2d889cc..add63f0 100644
      month[2] = xtolower(*(s + 2));
  
      for (i = 0; i < 12; i++)
--- 
-2.39.3
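The change above addresses a one-byte over-read when a month abbreviation shorter than three characters reaches the RFC 1123 date parser. A minimal, standalone C++ sketch of the same defensive pattern, using an illustrative parseMonth() helper with a caller-supplied length rather than Squid's actual make_month() signature:

#include <cctype>
#include <cstddef>
#include <cstring>
#include <iostream>

// Illustrative only: returns 0-11 for a valid three-letter month
// abbreviation, -1 otherwise; never touches s[1] or s[2] unless the
// caller-supplied length proves they exist.
static int
parseMonth(const char *s, const std::size_t len)
{
    static const char *months[12] = {
        "jan", "feb", "mar", "apr", "may", "jun",
        "jul", "aug", "sep", "oct", "nov", "dec"
    };

    if (!s || len < 3)
        return -1; // too short: bail out before any s[1]/s[2] access

    char m[4] = { 0, 0, 0, 0 };
    for (int i = 0; i < 3; ++i)
        m[i] = static_cast<char>(std::tolower(static_cast<unsigned char>(s[i])));

    for (int i = 0; i < 12; ++i)
        if (std::strncmp(m, months[i], 3) == 0)
            return i;

    return -1;
}

int main()
{
    std::cout << parseMonth("Oct", 3) << "\n"; // 9
    std::cout << parseMonth("O", 1) << "\n";   // -1, no over-read
    return 0;
}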
 
diff --git a/SOURCES/squid-4.15-CVE-2023-49286.patch b/SOURCES/squid-4.15-CVE-2023-49286.patch
index f151bc5..28f5beb 100644
--- a/SOURCES/squid-4.15-CVE-2023-49286.patch
+++ b/SOURCES/squid-4.15-CVE-2023-49286.patch
@@ -1,28 +1,5 @@
-commit 6014c6648a2a54a4ecb7f952ea1163e0798f9264
-Author: Alex Rousskov 
-Date:   Fri Oct 27 21:27:20 2023 +0000
-
-    Exit without asserting when helper process startup fails (#1543)
-
-    ... to dup() after fork() and before execvp().
-
-    Assertions are for handling program logic errors. Helper initialization
-    code already handled system call errors correctly (i.e. by exiting the
-    newly created helper process with an error), except for a couple of
-    assert()s that could be triggered by dup(2) failures.
-
-    This bug was discovered and detailed by Joshua Rogers at
-    https://megamansec.github.io/Squid-Security-Audit/ipc-assert.html
-    where it was filed as 'Assertion in Squid "Helper" Process Creator'.
-
-Back port upstream patch
-Signed-Off-By: tianyue.lan@oracle.com
----
- src/ipc.cc | 32 ++++++++++++++++++++++++++------
- 1 file changed, 26 insertions(+), 6 deletions(-)
-
 diff --git a/src/ipc.cc b/src/ipc.cc
-index e92a27f..3ddae70 100644
+index 42e11e6..a68e623 100644
 --- a/src/ipc.cc
 +++ b/src/ipc.cc
 @@ -19,6 +19,11 @@
@@ -83,6 +60,3 @@ index e92a27f..3ddae70 100644
  
      assert(t1 > 2 && t2 > 2 && t3 > 2);
  
--- 
-2.39.3
-
diff --git a/SOURCES/squid-4.15-CVE-2023-50269.patch b/SOURCES/squid-4.15-CVE-2023-50269.patch
new file mode 100644
index 0000000..06ea82c
--- /dev/null
+++ b/SOURCES/squid-4.15-CVE-2023-50269.patch
@@ -0,0 +1,50 @@
+diff --git a/src/ClientRequestContext.h b/src/ClientRequestContext.h
+index fe2edf6..47aa935 100644
+--- a/src/ClientRequestContext.h
++++ b/src/ClientRequestContext.h
+@@ -81,6 +81,10 @@ public:
+ #endif
+     ErrorState *error; ///< saved error page for centralized/delayed processing
+     bool readNextRequest; ///< whether Squid should read after error handling
++
++#if FOLLOW_X_FORWARDED_FOR
++    size_t currentXffHopNumber = 0; ///< number of X-Forwarded-For header values processed so far
++#endif
+ };
+ 
+ #endif /* SQUID_CLIENTREQUESTCONTEXT_H */
+diff --git a/src/client_side_request.cc b/src/client_side_request.cc
+index 1c6ff62..b758f6f 100644
+--- a/src/client_side_request.cc
++++ b/src/client_side_request.cc
+@@ -78,6 +78,11 @@
+ static const char *const crlf = "\r\n";
+ 
+ #if FOLLOW_X_FORWARDED_FOR
++
++#if !defined(SQUID_X_FORWARDED_FOR_HOP_MAX)
++#define SQUID_X_FORWARDED_FOR_HOP_MAX 64
++#endif
++
+ static void clientFollowXForwardedForCheck(allow_t answer, void *data);
+ #endif /* FOLLOW_X_FORWARDED_FOR */
+ 
+@@ -485,8 +490,16 @@ clientFollowXForwardedForCheck(allow_t answer, void *data)
+                 /* override the default src_addr tested if we have to go deeper than one level into XFF */
+                 Filled(calloutContext->acl_checklist)->src_addr = request->indirect_client_addr;
+             }
+-            calloutContext->acl_checklist->nonBlockingCheck(clientFollowXForwardedForCheck, data);
+-            return;
++            if (++calloutContext->currentXffHopNumber < SQUID_X_FORWARDED_FOR_HOP_MAX) {
++                calloutContext->acl_checklist->nonBlockingCheck(clientFollowXForwardedForCheck, data);
++                return;
++            }
++            const auto headerName = Http::HeaderLookupTable.lookup(Http::HdrType::X_FORWARDED_FOR).name;
++            debugs(28, DBG_CRITICAL, "ERROR: Ignoring trailing " << headerName << " addresses" <<
++                   Debug::Extra << "addresses allowed by follow_x_forwarded_for: " << calloutContext->currentXffHopNumber <<
++                   Debug::Extra << "last/accepted address: " << request->indirect_client_addr <<
++                   Debug::Extra << "ignored trailing addresses: " << request->x_forwarded_for_iterator);
++            // fall through to resume clientAccessCheck() processing
+         }
+     }
+ 
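The hunk above caps how many X-Forwarded-For hops clientFollowXForwardedForCheck() will traverse, logging and ignoring any trailing addresses once SQUID_X_FORWARDED_FOR_HOP_MAX is reached. A standalone sketch of that bounded right-to-left traversal, with illustrative names and a simplified acceptance rule (no ACL check), not the patched Squid code:

#include <iostream>
#include <sstream>
#include <string>
#include <vector>

static const std::size_t XffHopMax = 64; // analogous to SQUID_X_FORWARDED_FOR_HOP_MAX

// Splits a comma-separated X-Forwarded-For value into individual addresses.
static std::vector<std::string>
splitXff(const std::string &header)
{
    std::vector<std::string> out;
    std::istringstream in(header);
    std::string item;
    while (std::getline(in, item, ',')) {
        const auto b = item.find_first_not_of(" \t");
        const auto e = item.find_last_not_of(" \t");
        if (b != std::string::npos)
            out.push_back(item.substr(b, e - b + 1));
    }
    return out;
}

// Walks the header right-to-left (closest hop first), honoring at most
// XffHopMax entries; returns the last address that was still accepted.
static std::string
indirectClientAddr(const std::string &header, const std::string &clientAddr)
{
    const auto addrs = splitXff(header);
    std::string accepted = clientAddr;
    std::size_t hops = 0;
    for (auto it = addrs.rbegin(); it != addrs.rend(); ++it) {
        if (++hops > XffHopMax) {
            std::cerr << "ignoring " << (addrs.size() - XffHopMax)
                      << " trailing X-Forwarded-For addresses\n";
            break;
        }
        accepted = *it; // in Squid this step is also gated by follow_x_forwarded_for ACLs
    }
    return accepted;
}

int main()
{
    std::cout << indirectClientAddr("203.0.113.7, 198.51.100.9", "192.0.2.1") << "\n";
    return 0;
}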
diff --git a/SOURCES/0002-Remove-serialized-HTTP-headers-from-storeClientCopy.patch b/SOURCES/squid-4.15-CVE-2023-5824.patch
similarity index 64%
rename from SOURCES/0002-Remove-serialized-HTTP-headers-from-storeClientCopy.patch
rename to SOURCES/squid-4.15-CVE-2023-5824.patch
index 76eac17..4395c71 100644
--- a/SOURCES/0002-Remove-serialized-HTTP-headers-from-storeClientCopy.patch
+++ b/SOURCES/squid-4.15-CVE-2023-5824.patch
@@ -1,251 +1,150 @@
-From e4e1a48a6d53cad77c8bab561addb1ed48abba4f Mon Sep 17 00:00:00 2001
-From: Alex Rousskov 
-Date: Thu, 7 Dec 2023 17:58:49 +0000
-Subject: [PATCH 2/7] Remove serialized HTTP headers from storeClientCopy() 
- (#1335)
+commit bf9a9ec5329bde6acc26797d1fa7a7a165fec01f
+Author: Tomas Korbar 
+Date:   Tue Nov 21 13:21:43 2023 +0100
 
-Do not send serialized HTTP response header bytes in storeClientCopy()
-answers. Ignore serialized header size when calling storeClientCopy().
-
-This complex change adjusts storeClientCopy() API to addresses several
-related problems with storeClientCopy() and its callers. The sections
-below summarize storeClientCopy() changes and then move on to callers.
-
-Squid incorrectly assumed that serialized HTTP response headers are read
-from disk in a single storeRead() request. In reality, many situations
-lead to store_client::readBody() receiving partial HTTP headers,
-resulting in parseCharBuf() failure and a level-0 cache.log message:
-
-    Could not parse headers from on disk object
-
-Inadequate handling of this failure resulted in a variety of problems.
-Squid now accumulates storeRead() results to parse larger headers and
-also handles parsing failures better, but we could not just stop there.
-
-With the storeRead() accumulation in place, it is no longer possible to
-send parsed serialized HTTP headers to storeClientCopy() callers because
-those callers do not provide enough buffer space to fit larger headers.
-Increasing caller buffer capacity does not work well because the actual
-size of the serialized header is unknown in advance and may be quite
-large. Always allocating large buffers "just in case" is bad for
-performance. Finally, larger buffers may jeopardize hard-to-find code
-that uses hard-coded 4KB buffers without using HTTP_REQBUF_SZ macro.
-
-Fortunately, storeClientCopy() callers either do not care about
-serialized HTTP response headers or should not care about them! The API
-forced callers to deal with serialized headers, but callers could (and
-some did) just use the parsed headers available in the corresponding
-MemObject. With this API change, storeClientCopy() callers no longer
-receive serialized headers and do not need to parse or skip them.
-Consequently, callers also do not need to account for response headers
-size when computing offsets for subsequent storeClientCopy() requests.
-
-Restricting storeClientCopy() API to HTTP _body_ bytes removed a lot of
-problematic caller code. Caller changes are summarized further below.
-
-A similar HTTP response header parsing problem existed in shared memory
-cache code. That code was actually aware that headers may span multiple
-cache slices but incorrectly assumed that httpMsgParseStep() accumulates
-input as needed (to make another parsing "step"). It does not. Large
-response headers cached in shared memory triggered a level-1 message:
-
-    Corrupted mem-cached headers: e:...
-
-Fixed MemStore code now accumulates serialized HTTP response headers as
-needed to parse them, sharing high-level parsing code with store_client.
-
-Old clientReplyContext methods worked hard to skip received serialized
-HTTP headers. The code contained dangerous and often complex/unreadable
-manipulation of various raw offsets and buffer pointers, aggravated by
-the perceived need to save/restore those offsets across asynchronous
-checks (see below). That header skipping code is gone now. Several stale
-and misleading comments related to Store buffers management were also
-removed or updated.
-
-We replaced reqofs/reqsize with simpler/safer lastStreamBufferedBytes,
-while becoming more consistent with that "cached" info invalidation. We
-still need this info to resume HTTP body processing after asynchronous
-http_reply_access checks and cache hit validations, but we no longer
-save/restore this info for hit validation: No need to save/restore
-information about the buffer that hit validation does not use and must
-never touch!
-
-The API change also moved from-Store StoreIOBuffer usage closer to
-StoreIOBuffers manipulated by Clients Streams code. Buffers in both
-categories now contain just the body bytes, and both now treat zero
-length as EOF only _after_ processing the response headers.
-
-These changes improve overall code quality, but this code path and these
-changes still suffer from utterly unsafe legacy interfaces like
-StoreIOBuffer and clientStreamNode. We cannot rely on the compiler to
-check our work. The risk of these changes exposing/causing bugs is high.
-
-asHandleReply() expected WHOIS response body bytes where serialized HTTP
-headers were! The code also had multiple problems typical for manually
-written C parsers dealing with raw input buffers. Now replaced with a
-Tokenizer-based code.
-
-To skip received HTTP response headers, peerDigestHandleReply() helper
-functions called headersEnd() on the received buffer. Twice. We have now
-merged those two parsing helper functions into one (that just checks the
-already parsed headers). This merger preserved "304s must come with
-fetch->pd->cd" logic that was hidden/spread across those two functions.
-
-urnHandleReply() re-parsed received HTTP response headers. We left its
-HTTP body parsing code unchanged except for polishing NUL-termination.
-
-netdbExchangeHandleReply() re-parsed received HTTP response headers to
-find where they end (via headersEnd()). We improved handing of corner
-cases and replaced some "tricky bits" code, reusing the new
-Store::ParsingBuffer class. The net_db record parsing code is unchanged.
-
-Mgr::StoreToCommWriter::noteStoreCopied() is a very special case. It
-actually worked OK because, unlike all other storeClientCopy() callers,
-this code does not get serialized HTTP headers from Store: The code
-adding bytes to the corresponding StoreEntry does not write serialized
-HTTP headers at all. StoreToCommWriter is used to deliver kid-specific
-pieces of an HTTP body of an SMP cache manager response. The HTTP
-headers of that response are handled elsewhere. We left this code
-unchanged, but the existence of the special no-headers case does
-complicate storeClientCopy() API documentation, implementation, and
-understanding.
-
-Co-authored-by: Eduard Bagdasaryan 
-
-Modified-by: Alex Burmashev 
-Signed-off-by: Alex Burmashev 
----
- src/HttpReply.cc            |  34 +++
- src/HttpReply.h             |   7 +
- src/MemObject.cc            |   6 +
- src/MemObject.h             |   9 +
- src/MemStore.cc             |  75 ++++---
- src/MemStore.h              |   2 +-
- src/StoreClient.h           |  65 +++++-
- src/StoreIOBuffer.h         |   3 +
- src/acl/Asn.cc              | 163 +++++---------
- src/clientStream.cc         |   3 +-
- src/client_side_reply.cc    | 322 +++++++++++----------------
- src/client_side_reply.h     |  38 +++-
- src/enums.h                 |   1 -
- src/icmp/net_db.cc          | 144 ++++--------
- src/peer_digest.cc          |  96 ++------
- src/store.cc                |  11 +
- src/store/Makefile.am       |   2 +
- src/store/Makefile.in       |   9 +-
- src/store/ParsingBuffer.cc  | 198 +++++++++++++++++
- src/store/ParsingBuffer.h   | 128 +++++++++++
- src/store/forward.h         |   1 +
- src/store_client.cc         | 429 ++++++++++++++++++++++++------------
- src/tests/stub_HttpReply.cc |   1 +
- src/urn.cc                  |  89 +++-----
- 24 files changed, 1094 insertions(+), 742 deletions(-)
- create mode 100644 src/store/ParsingBuffer.cc
- create mode 100644 src/store/ParsingBuffer.h
+    Fix CVE-2023-5824 (#1335) (#1561) (#1562)
+    Supply ALE with HttpReply before checking http_reply_access (#398)
+    Replace adjustable base reply - downstream change necessary for
+    backport
 
+diff --git a/src/AccessLogEntry.cc b/src/AccessLogEntry.cc
+index 1956c9b..4f1e73e 100644
+--- a/src/AccessLogEntry.cc
++++ b/src/AccessLogEntry.cc
+@@ -10,6 +10,7 @@
+ #include "AccessLogEntry.h"
+ #include "HttpReply.h"
+ #include "HttpRequest.h"
++#include "MemBuf.h"
+ #include "SquidConfig.h"
+ 
+ #if USE_OPENSSL
+@@ -89,6 +90,8 @@ AccessLogEntry::getExtUser() const
+     return nullptr;
+ }
+ 
++AccessLogEntry::AccessLogEntry() {}
++
+ AccessLogEntry::~AccessLogEntry()
+ {
+     safe_free(headers.request);
+@@ -97,14 +100,11 @@ AccessLogEntry::~AccessLogEntry()
+     safe_free(adapt.last_meta);
+ #endif
+ 
+-    safe_free(headers.reply);
+-
+     safe_free(headers.adapted_request);
+     HTTPMSGUNLOCK(adapted_request);
+ 
+     safe_free(lastAclName);
+ 
+-    HTTPMSGUNLOCK(reply);
+     HTTPMSGUNLOCK(request);
+ #if ICAP_CLIENT
+     HTTPMSGUNLOCK(icap.reply);
+@@ -124,3 +124,10 @@ AccessLogEntry::effectiveVirginUrl() const
+     return nullptr;
+ }
+ 
++void
++AccessLogEntry::packReplyHeaders(MemBuf &mb) const
++{
++    if (reply)
++        reply->packHeadersUsingFastPacker(mb);
++}
++
+diff --git a/src/AccessLogEntry.h b/src/AccessLogEntry.h
+index 1f29e61..f1d2ecc 100644
+--- a/src/AccessLogEntry.h
++++ b/src/AccessLogEntry.h
+@@ -40,13 +40,7 @@ class AccessLogEntry: public RefCountable
+ public:
+     typedef RefCount Pointer;
+ 
+-    AccessLogEntry() :
+-        url(nullptr),
+-        lastAclName(nullptr),
+-        reply(nullptr),
+-        request(nullptr),
+-        adapted_request(nullptr)
+-    {}
++    AccessLogEntry();
+     ~AccessLogEntry();
+ 
+     /// Fetch the client IP log string into the given buffer.
+@@ -63,6 +57,9 @@ public:
+     /// Fetch the transaction method string (ICP opcode, HTCP opcode or HTTP method)
+     SBuf getLogMethod() const;
+ 
++    /// dump all reply headers (for sending or risky logging)
++    void packReplyHeaders(MemBuf &mb) const;
++
+     SBuf url;
+ 
+     /// TCP/IP level details about the client connection
+@@ -187,14 +184,12 @@ public:
+ 
+     public:
+         Headers() : request(NULL),
+-            adapted_request(NULL),
+-            reply(NULL) {}
++            adapted_request(NULL)
++            {}
+ 
+         char *request; //< virgin HTTP request headers
+ 
+         char *adapted_request; //< HTTP request headers after adaptation and redirection
+-
+-        char *reply;
+     } headers;
+ 
+ #if USE_ADAPTATION
+@@ -212,13 +207,13 @@ public:
+     } adapt;
+ #endif
+ 
+-    const char *lastAclName; ///< string for external_acl_type %ACL format code
++    const char *lastAclName = nullptr; ///< string for external_acl_type %ACL format code
+     SBuf lastAclData; ///< string for external_acl_type %DATA format code
+ 
+     HierarchyLogEntry hier;
+-    HttpReply *reply;
+-    HttpRequest *request; //< virgin HTTP request
+-    HttpRequest *adapted_request; //< HTTP request after adaptation and redirection
++    HttpReplyPointer reply;
++    HttpRequest *request = nullptr; //< virgin HTTP request
++    HttpRequest *adapted_request = nullptr; //< HTTP request after adaptation and redirection
+ 
+     /// key:value pairs set by squid.conf note directive and
+     /// key=value pairs returned from URL rewrite/redirect helper
+diff --git a/src/HttpHeader.cc b/src/HttpHeader.cc
+index 8dcc7e3..21206a9 100644
+--- a/src/HttpHeader.cc
++++ b/src/HttpHeader.cc
+@@ -9,6 +9,7 @@
+ /* DEBUG: section 55    HTTP Header */
+ 
+ #include "squid.h"
++#include "base/Assure.h"
+ #include "base/EnumIterator.h"
+ #include "base64.h"
+ #include "globals.h"
+diff --git a/src/HttpHeaderTools.cc b/src/HttpHeaderTools.cc
+index f1e45a4..1337b8d 100644
+--- a/src/HttpHeaderTools.cc
++++ b/src/HttpHeaderTools.cc
+@@ -479,7 +479,7 @@ httpHdrAdd(HttpHeader *heads, HttpRequest *request, const AccessLogEntryPointer
+ 
+     checklist.al = al;
+     if (al && al->reply) {
+-        checklist.reply = al->reply;
++        checklist.reply = al->reply.getRaw();
+         HTTPMSGLOCK(checklist.reply);
+     }
+ 
 diff --git a/src/HttpReply.cc b/src/HttpReply.cc
-index 6feb262..af2bd4d 100644
+index 6feb262..e74960b 100644
 --- a/src/HttpReply.cc
 +++ b/src/HttpReply.cc
 @@ -20,7 +20,9 @@
@@ -269,13 +168,13 @@ index 6feb262..af2bd4d 100644
 +    const bool eof = false; // TODO: Remove after removing atEnd from HttpHeader::parse()
 +    if (parse(terminatedBuf, bufSize, eof, &error)) {
 +        debugs(58, 7, "success after accumulating " << bufSize << " bytes and parsing " << hdr_sz);
-+        Assure(pstate == Http::Message::psParsed);
++        Assure(pstate == psParsed);
 +        Assure(hdr_sz > 0);
 +        Assure(!Less(bufSize, hdr_sz)); // cannot parse more bytes than we have
 +        return hdr_sz; // success
 +    }
 +
-+    Assure(pstate != Http::Message::psParsed);
++    Assure(pstate != psParsed);
 +    hdr_sz = 0;
 +
 +    if (error) {
@@ -316,9 +215,20 @@ index 6c90e20..4301cfd 100644
      /** initialize */
      void init();
 diff --git a/src/MemObject.cc b/src/MemObject.cc
-index 4ba63cc..d7aaf5e 100644
+index df7791f..650d3fd 100644
 --- a/src/MemObject.cc
 +++ b/src/MemObject.cc
+@@ -196,8 +196,8 @@ struct LowestMemReader : public unary_function {
+     LowestMemReader(int64_t seed):current(seed) {}
+ 
+     void operator() (store_client const &x) {
+-        if (x.memReaderHasLowerOffset(current))
+-            current = x.copyInto.offset;
++        if (x.getType() == STORE_MEM_CLIENT)
++            current = std::min(current, x.discardableHttpEnd());
+     }
+ 
+     int64_t current;
 @@ -369,6 +369,12 @@ MemObject::policyLowestOffsetToKeep(bool swap) const
       */
      int64_t lowest_offset = lowestMemReaderOffset();
@@ -332,11 +242,28 @@ index 4ba63cc..d7aaf5e 100644
      if (endOffset() < lowest_offset ||
              endOffset() - inmem_lo > (int64_t)Config.Store.maxInMemObjSize ||
              (swap && !Config.onoff.memory_cache_first))
+@@ -492,7 +498,7 @@ MemObject::mostBytesAllowed() const
+ 
+ #endif
+ 
+-        j = sc->delayId.bytesWanted(0, sc->copyInto.length);
++        j = sc->bytesWanted();
+ 
+         if (j > jmax) {
+             jmax = j;
 diff --git a/src/MemObject.h b/src/MemObject.h
-index 711966d..ba6646f 100644
+index 711966d..9f4add0 100644
 --- a/src/MemObject.h
 +++ b/src/MemObject.h
-@@ -59,6 +59,15 @@ public:
+@@ -56,9 +56,23 @@ public:
+ 
+     void write(const StoreIOBuffer &buf);
+     void unlinkRequest();
++
++    /// HTTP response before 304 (Not Modified) updates
++    /// starts "empty"; modified via replaceBaseReply() or adjustableBaseReply()
++    HttpReply &baseReply() const { return *_reply; }
++
      HttpReply const *getReply() const;
      void replaceHttpReply(HttpReply *newrep);
      void stat (MemBuf * mb) const;
@@ -353,7 +280,7 @@ index 711966d..ba6646f 100644
      void markEndOfReplyHeaders(); ///< sets _reply->hdr_sz to endOffset()
      /// negative if unknown; otherwise, expected object_sz, expected endOffset
 diff --git a/src/MemStore.cc b/src/MemStore.cc
-index a4a6ab2..fe7af2f 100644
+index a4a6ab2..6762c4f 100644
 --- a/src/MemStore.cc
 +++ b/src/MemStore.cc
 @@ -17,6 +17,8 @@
@@ -424,8 +351,8 @@ index a4a6ab2..fe7af2f 100644
                     " from " << extra.page << '+' << prefixSize);
 +
 +            // parse headers if needed; they might span multiple slices!
-+            auto &reply = e.mem().adjustableBaseReply();
-+            if (reply.pstate != Http::Message::psParsed) {
++            auto &reply = e.mem().baseReply();
++            if (reply.pstate != psParsed) {
 +                httpHeaderParsingBuffer.append(sliceBuf.data, sliceBuf.length);
 +                if (reply.parseTerminatedPrefix(httpHeaderParsingBuffer.c_str(), httpHeaderParsingBuffer.length()))
 +                    httpHeaderParsingBuffer = SBuf(); // we do not need these bytes anymore
@@ -437,7 +364,7 @@ index a4a6ab2..fe7af2f 100644
      debugs(20, 5, "mem-loaded all " << e.mem_obj->endOffset() << '/' <<
             anchor.basics.swap_file_sz << " bytes of " << e);
  
-+    if (e.mem().adjustableBaseReply().pstate != Http::Message::psParsed)
++    if (e.mem().baseReply().pstate != psParsed)
 +        throw TextException(ToSBuf("truncated mem-cached headers; accumulated: ", httpHeaderParsingBuffer.length()), Here());
 +
      // from StoreEntry::complete()
@@ -499,38 +426,247 @@ index 516da3c..31a2015 100644
  
      void updateHeadersOrThrow(Ipc::StoreMapUpdate &update);
  
+diff --git a/src/SquidMath.h b/src/SquidMath.h
+index c70acd1..bfca0cc 100644
+--- a/src/SquidMath.h
++++ b/src/SquidMath.h
+@@ -9,6 +9,11 @@
+ #ifndef _SQUID_SRC_SQUIDMATH_H
+ #define _SQUID_SRC_SQUIDMATH_H
+ 
++#include 
++#include 
++
++// TODO: Move to src/base/Math.h and drop the Math namespace
++
+ /* Math functions we define locally for Squid */
+ namespace Math
+ {
+@@ -21,5 +26,165 @@ double doubleAverage(const double, const double, int, const int);
+ 
+ } // namespace Math
+ 
++// If Sum() performance becomes important, consider using GCC and clang
++// built-ins like __builtin_add_overflow() instead of manual overflow checks.
++
++/// detects a pair of unsigned types
++/// reduces code duplication in declarations further below
++template 
++using AllUnsigned = typename std::conditional<
++                    std::is_unsigned::value && std::is_unsigned::value,
++                    std::true_type,
++                    std::false_type
++                    >::type;
++
++// TODO: Replace with std::cmp_less() after migrating to C++20.
++/// whether integer a is less than integer b, with correct overflow handling
++template 
++constexpr bool
++Less(const A a, const B b) {
++    // The casts below make standard C++ integer conversions explicit. They
++    // quell compiler warnings about signed/unsigned comparison. The first two
++    // lines exclude different-sign a and b, making the casts/comparison safe.
++    using AB = typename std::common_type::type;
++    return
++        (a >= 0 && b < 0) ? false :
++        (a < 0 && b >= 0) ? true :
++        /* (a >= 0) == (b >= 0) */ static_cast(a) < static_cast(b);
++}
++
++/// ensure that T is supported by NaturalSum() and friends
++template
++constexpr void
++AssertNaturalType()
++{
++    static_assert(std::numeric_limits::is_bounded, "std::numeric_limits::max() is meaningful");
++    static_assert(std::numeric_limits::is_exact, "no silent loss of precision");
++    static_assert(!std::is_enum::value, "no silent creation of non-enumerated values");
++}
++
++// TODO: Investigate whether this optimization can be expanded to [signed] types
++// A and B when std::numeric_limits::is_modulo is true.
++/// This IncreaseSumInternal() overload is optimized for speed.
++/// \returns a non-overflowing sum of the two unsigned arguments (or nothing)
++/// \prec both argument types are unsigned
++template ::value, int> = 0>
++std::pair
++IncreaseSumInternal(const A a, const B b) {
++    // paranoid: AllUnsigned precondition established that already
++    static_assert(std::is_unsigned::value, "AllUnsigned dispatch worked for A");
++    static_assert(std::is_unsigned::value, "AllUnsigned dispatch worked for B");
++
++    AssertNaturalType();
++    AssertNaturalType();
++    AssertNaturalType();
++
++    // we should only be called by IncreaseSum(); it forces integer promotion
++    static_assert(std::is_same::value, "a will not be promoted");
++    static_assert(std::is_same::value, "b will not be promoted");
++    // and without integer promotions, a sum of unsigned integers is unsigned
++    static_assert(std::is_unsigned::value, "a+b is unsigned");
++
++    // with integer promotions ruled out, a or b can only undergo integer
++    // conversion to the higher rank type (A or B, we do not know which)
++    using AB = typename std::common_type::type;
++    static_assert(std::is_same::value || std::is_same::value, "no unexpected conversions");
++    static_assert(std::is_same::value, "lossless assignment");
++    const AB sum = a + b;
++
++    static_assert(std::numeric_limits::is_modulo, "we can detect overflows");
++    // 1. modulo math: overflowed sum is smaller than any of its operands
++    // 2. the sum may overflow S (i.e. the return base type)
++    // We do not need Less() here because we compare promoted unsigned types.
++    return (sum >= a && sum <= std::numeric_limits::max()) ?
++           std::make_pair(sum, true) : std::make_pair(S(), false);
++}
++
++/// This IncreaseSumInternal() overload supports a larger variety of types.
++/// \returns a non-overflowing sum of the two arguments (or nothing)
++/// \returns nothing if at least one of the arguments is negative
++/// \prec at least one of the argument types is signed
++template ::value, int> = 0>
++std::pair constexpr
++IncreaseSumInternal(const A a, const B b) {
++    AssertNaturalType();
++    AssertNaturalType();
++    AssertNaturalType();
++
++    // we should only be called by IncreaseSum() that does integer promotion
++    static_assert(std::is_same::value, "a will not be promoted");
++    static_assert(std::is_same::value, "b will not be promoted");
++
++    return
++        // We could support a non-under/overflowing sum of negative numbers, but
++        // our callers use negative values specially (e.g., for do-not-use or
++        // do-not-limit settings) and are not supposed to do math with them.
++        (a < 0 || b < 0) ? std::make_pair(S(), false) :
++        // To avoid undefined behavior of signed overflow, we must not compute
++        // the raw a+b sum if it may overflow. When A is not B, a or b undergoes
++        // (safe for non-negatives) integer conversion in these expressions, so
++        // we do not know the resulting a+b type AB and its maximum. We must
++        // also detect subsequent casting-to-S overflows.
++        // Overflow condition: (a + b > maxAB) or (a + b > maxS).
++        // A is an integer promotion of S, so maxS <= maxA <= maxAB.
++        // Since maxS <= maxAB, it is sufficient to just check: a + b > maxS,
++        // which is the same as the overflow-safe condition here: maxS - a < b.
++        // Finally, (maxS - a) cannot overflow because a is not negative and
++        // cannot underflow because a is a promotion of s: 0 <= a <= maxS.
++        Less(std::numeric_limits::max() - a, b) ? std::make_pair(S(), false) :
++        std::make_pair(S(a + b), true);
++}
++
++/// argument pack expansion termination for IncreaseSum()
++template 
++std::pair
++IncreaseSum(const S s, const T t)
++{
++    // Force (always safe) integer promotions now, to give std::enable_if_t<>
++    // promoted types instead of entering IncreaseSumInternal(s,t)
++    // but getting a _signed_ promoted value of s or t in s + t.
++    return IncreaseSumInternal(+s, +t);
++}
++
++/// \returns a non-overflowing sum of the arguments (or nothing)
++template 
++std::pair
++IncreaseSum(const S sum, const T t, const Args... args) {
++    const auto head = IncreaseSum(sum, t);
++    if (head.second) {
++        return IncreaseSum(head.first, args...);
++    } else {
++        // std::optional() triggers bogus -Wmaybe-uninitialized warnings in GCC v10.3
++        return std::make_pair(S(), false);
++    }
++}
++
++/// \returns an exact, non-overflowing sum of the arguments (or nothing)
++template 
++std::pair
++NaturalSum(const Args... args) {
++    return IncreaseSum(0, args...);
++}
++
++/// Safely resets the given variable to NaturalSum() of the given arguments.
++/// If the sum overflows, resets to variable's maximum possible value.
++/// \returns the new variable value (like an assignment operator would)
++template 
++S
++SetToNaturalSumOrMax(S &var, const Args... args)
++{
++    var = NaturalSum(args...).value_or(std::numeric_limits::max());
++    return var;
++}
++
++/// converts a given non-negative integer into an integer of a given type
++/// without loss of information or undefined behavior
++template 
++Result
++NaturalCast(const Source s)
++{
++    return NaturalSum(s).value();
++}
++
+ #endif /* _SQUID_SRC_SQUIDMATH_H */
+ 
+diff --git a/src/Store.h b/src/Store.h
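The helpers above detect overflow before it can corrupt a sum: for unsigned operands they rely on modulo arithmetic (a wrapped-around sum is smaller than either operand), and for signed operands they first compare b against numeric_limits<S>::max() - a. A tiny standalone illustration of the unsigned case, using an illustrative checkedAdd() rather than the IncreaseSumInternal() template itself:

#include <cstdint>
#include <iostream>
#include <limits>
#include <utility>

// Returns {sum, true} when a+b fits in uint64_t, {0, false} on overflow.
static std::pair<uint64_t, bool>
checkedAdd(const uint64_t a, const uint64_t b)
{
    const uint64_t sum = a + b; // well-defined modulo 2^64 for unsigned types
    if (sum < a)                // wrap-around detected
        return {0, false};
    return {sum, true};
}

int main()
{
    const auto ok = checkedAdd(40, 2);
    const auto bad = checkedAdd(std::numeric_limits<uint64_t>::max(), 1);
    std::cout << ok.first << ' ' << ok.second << "\n";   // 42 1
    std::cout << bad.first << ' ' << bad.second << "\n"; // 0 0
    return 0;
}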
+index 3eb6b84..2475fe0 100644
+--- a/src/Store.h
++++ b/src/Store.h
+@@ -49,6 +49,9 @@ public:
+     StoreEntry();
+     virtual ~StoreEntry();
+ 
++    MemObject &mem() { assert(mem_obj); return *mem_obj; }
++    const MemObject &mem() const { assert(mem_obj); return *mem_obj; }
++
+     virtual HttpReply const *getReply() const;
+     virtual void write (StoreIOBuffer);
+ 
 diff --git a/src/StoreClient.h b/src/StoreClient.h
-index 457844a..1d90e5a 100644
+index 65472d8..942f9fc 100644
 --- a/src/StoreClient.h
 +++ b/src/StoreClient.h
-@@ -10,11 +10,24 @@
+@@ -9,11 +9,13 @@
+ #ifndef SQUID_STORECLIENT_H
  #define SQUID_STORECLIENT_H
  
++#include "base/AsyncCall.h"
  #include "dlink.h"
 +#include "store/ParsingBuffer.h"
  #include "StoreIOBuffer.h"
  #include "StoreIOState.h"
- #include "base/AsyncCall.h"
  
 -typedef void STCB(void *, StoreIOBuffer);   /* store callback */
-+/// A storeClientCopy() callback function.
-+///
-+/// Upon storeClientCopy() success, StoreIOBuffer::flags.error is zero, and
-+/// * HTTP response headers (if any) are available via MemObject::freshestReply();
-+/// * HTTP response body bytes (if any) are available via StoreIOBuffer.
-+///
-+/// STCB callbacks may use response semantics to detect certain EOF conditions.
-+/// Callbacks that expect HTTP headers may call store_client::atEof(). Similar
-+/// to clientStreamCallback() callbacks, callbacks dedicated to receiving HTTP
-+/// bodies may use zero StoreIOBuffer::length as an EOF condition.
-+///
-+/// Errors are indicated by setting StoreIOBuffer flags.error.
-+using STCB = void (void *, StoreIOBuffer);
++using STCB = void (void *, StoreIOBuffer);   /* store callback */
  
  class StoreEntry;
  
-@@ -68,7 +81,13 @@ public:
- 
+@@ -39,17 +41,34 @@ class store_client
+ public:
+     store_client(StoreEntry *);
+     ~store_client();
+-    bool memReaderHasLowerOffset(int64_t) const;
++
++    /// the client will not use HTTP response bytes with lower offsets (if any)
++    auto discardableHttpEnd() const { return discardableHttpEnd_; }
++
+     int getType() const;
+-    void fail();
+-    void callback(ssize_t len, bool error = false);
++
++    /// React to the end of reading the response from disk. There will be no
++    /// more readHeader() and readBody() callbacks for the current storeRead()
++    /// swapin after this notification.
++    void noteSwapInDone(bool error);
++
+     void doCopy (StoreEntry *e);
+     void readHeader(const char *buf, ssize_t len);
+     void readBody(const char *buf, ssize_t len);
++
++    /// Request StoreIOBuffer-described response data via an asynchronous STCB
++    /// callback. At most one outstanding request is allowed per store_client.
+     void copy(StoreEntry *, StoreIOBuffer, STCB *, void *);
++
      void dumpStats(MemBuf * output, int clientNumber) const;
  
 -    int64_t cmp_offset;
@@ -544,8 +680,36 @@ index 457844a..1d90e5a 100644
  #if STORE_CLIENT_LIST_DEBUG
  
      void *owner;
-@@ -103,19 +122,28 @@ public:
+@@ -59,33 +78,86 @@ public:
+     StoreIOState::Pointer swapin_sio;
+ 
+     struct {
++        /// whether we are expecting a response to be swapped in from disk
++        /// (i.e. whether async storeRead() is currently in progress)
++        // TODO: a better name reflecting the 'in' scope of the flag
+         bool disk_io_pending;
++
++        /// whether the store_client::doCopy()-initiated STCB sequence is
++        /// currently in progress
+         bool store_copying;
+-        bool copy_event_pending;
+     } flags;
+ 
+ #if USE_DELAY_POOLS
+     DelayId delayId;
++
++    /// The maximum number of bytes the Store client can read/copy next without
++    /// overflowing its buffer and without violating delay pool limits. Store
++    /// I/O is not rate-limited, but we assume that the same number of bytes may
++    /// be read from the Squid-to-server connection that may be rate-limited.
++    int bytesWanted() const;
++
+     void setDelayId(DelayId delay_id);
+ #endif
+ 
      dlink_node node;
+-    /* Below here is private - do no alter outside storeClient calls */
+-    StoreIOBuffer copyInto;
  
  private:
 -    bool moreToSend() const;
@@ -568,28 +732,32 @@ index 457844a..1d90e5a 100644
 +    bool parseHttpHeadersFromDisk();
 +    bool tryParsingHttpHeaders();
 +    void skipHttpHeadersFromDisk();
++
++    void fail();
++    void callback(ssize_t);
++    void noteCopiedBytes(size_t);
++    void noteNews();
++    void finishCallback();
++    static void FinishCallback(store_client *);
  
-     void fail();
-     void callback(ssize_t);
-     void noteCopiedBytes(size_t);
--    void noteEof();
-     void noteNews();
-     void finishCallback();
-     static void FinishCallback(store_client *);
-@@ -123,13 +151,23 @@ private:
      int type;
      bool object_ok;
  
 +    /// \copydoc atEof()
 +    bool atEof_;
 +
-     /// Storage and metadata associated with the current copy() request. Ought
-     /// to be ignored when not answering a copy() request.
-     StoreIOBuffer copyInto;
- 
--    /// The number of bytes loaded from Store into copyInto while answering the
--    /// current copy() request. Ought to be ignored when not answering.
--    size_t copiedSize;
++    /// Storage and metadata associated with the current copy() request. Ought
++    /// to be ignored when not answering a copy() request.
++    /// * copyInto.offset is the requested HTTP response body offset;
++    /// * copyInto.data is the client-owned, client-provided result buffer;
++    /// * copyInto.length is the size of the .data result buffer;
++    /// * copyInto.flags are unused by this class.
++    StoreIOBuffer copyInto;
++
++    // TODO: Convert to uint64_t after fixing mem_hdr::endOffset() and friends.
++    /// \copydoc discardableHttpEnd()
++    int64_t discardableHttpEnd_ = 0;
++
 +    /// the total number of finishCallback() calls
 +    uint64_t answers;
 +
@@ -597,31 +765,21 @@ index 457844a..1d90e5a 100644
 +    /// request. Buffer contents depends on the source and parsing stage; it may
 +    /// hold (parts of) swap metadata, HTTP response headers, and/or HTTP
 +    /// response body bytes.
-+    std::optional<Store::ParsingBuffer> parsingBuffer;
++    std::pair<Store::ParsingBuffer, bool> parsingBuffer = std::make_pair(Store::ParsingBuffer(), false);
 +
 +    StoreIOBuffer lastDiskRead; ///< buffer used for the last storeRead() call
- 
++
      /* Until we finish stuffing code into store_client */
  
-@@ -152,7 +190,18 @@ public:
+ public:
+@@ -97,6 +169,7 @@ public:
+         bool pending() const;
+         STCB *callback_handler;
+         void *callback_data;
++        AsyncCall::Pointer notifier;
      } _callback;
  };
  
-+/// Asynchronously read HTTP response headers and/or body bytes from Store.
-+///
-+/// The requested zero-based HTTP body offset is specified via the
-+/// StoreIOBuffer::offset field. The first call (for a given store_client
-+/// object) must specify zero offset.
-+///
-+/// The requested HTTP body portion size is specified via the
-+/// StoreIOBuffer::length field. The function may return fewer body bytes.
-+///
-+/// See STCB for result delivery details.
- void storeClientCopy(store_client *, StoreEntry *, StoreIOBuffer, STCB *, void *);
-+
- store_client* storeClientListAdd(StoreEntry * e, void *data);
- int storeClientCopyPending(store_client *, StoreEntry * e, void *data);
- int storeUnregister(store_client * sc, StoreEntry * e, void *data);
 diff --git a/src/StoreIOBuffer.h b/src/StoreIOBuffer.h
 index 009aafe..ad1c491 100644
 --- a/src/StoreIOBuffer.h
@@ -637,7 +795,7 @@ index 009aafe..ad1c491 100644
          if (fwrite(data, length, 1, stderr)) {}
          if (fwrite("\n", 1, 1, stderr)) {}
 diff --git a/src/acl/Asn.cc b/src/acl/Asn.cc
-index 94ec862..ad450c0 100644
+index 94ec862..07353d6 100644
 --- a/src/acl/Asn.cc
 +++ b/src/acl/Asn.cc
 @@ -16,20 +16,22 @@
@@ -664,7 +822,15 @@ index 94ec862..ad450c0 100644
  
  /* BEGIN of definitions for radix tree entries */
  
-@@ -77,10 +79,9 @@ public:
+@@ -70,33 +72,18 @@ class ASState
+     CBDATA_CLASS(ASState);
+ 
+ public:
+-    ASState();
++    ASState() = default;
+     ~ASState();
+ 
+     StoreEntry *entry;
      store_client *sc;
      HttpRequest::Pointer request;
      int as_number;
@@ -672,13 +838,27 @@ index 94ec862..ad450c0 100644
 -    int reqofs;
 -    char reqbuf[AS_REQBUF_SZ];
 -    bool dataRead;
-+
-+    /// for receiving a WHOIS reply body from Store and interpreting it
 +    Store::ParsingBuffer parsingBuffer;
  };
  
  CBDATA_CLASS_INIT(ASState);
-@@ -112,7 +113,7 @@ struct rtentry_t {
+ 
+-ASState::ASState() :
+-    entry(NULL),
+-    sc(NULL),
+-    request(NULL),
+-    as_number(0),
+-    offset(0),
+-    reqofs(0),
+-    dataRead(false)
+-{
+-    memset(reqbuf, 0, AS_REQBUF_SZ);
+-}
+-
+ ASState::~ASState()
+ {
+     debugs(53, 3, entry->url());
+@@ -112,7 +99,7 @@ struct rtentry_t {
      m_ADDR e_mask;
  };
  
@@ -687,7 +867,7 @@ index 94ec862..ad450c0 100644
  
  static void asnCacheStart(int as);
  
-@@ -256,8 +257,7 @@ asnCacheStart(int as)
+@@ -256,8 +243,7 @@ asnCacheStart(int as)
      }
  
      asState->entry = e;
@@ -697,7 +877,7 @@ index 94ec862..ad450c0 100644
  }
  
  static void
-@@ -265,13 +265,8 @@ asHandleReply(void *data, StoreIOBuffer result)
+@@ -265,13 +251,8 @@ asHandleReply(void *data, StoreIOBuffer result)
  {
      ASState *asState = (ASState *)data;
      StoreEntry *e = asState->entry;
@@ -712,7 +892,7 @@ index 94ec862..ad450c0 100644
  
      /* First figure out whether we should abort the request */
  
-@@ -280,11 +275,7 @@ asHandleReply(void *data, StoreIOBuffer result)
+@@ -280,11 +261,7 @@ asHandleReply(void *data, StoreIOBuffer result)
          return;
      }
  
@@ -725,7 +905,7 @@ index 94ec862..ad450c0 100644
          debugs(53, DBG_IMPORTANT, "asHandleReply: Called with Error set and size=" << (unsigned int) result.length);
          delete asState;
          return;
-@@ -294,78 +285,39 @@ asHandleReply(void *data, StoreIOBuffer result)
+@@ -294,117 +271,85 @@ asHandleReply(void *data, StoreIOBuffer result)
          return;
      }
  
@@ -734,26 +914,6 @@ index 94ec862..ad450c0 100644
 -     * Remembering that the actual buffer size is retsize + reqofs!
 -     */
 -    s = buf;
--
--    while ((size_t)(s - buf) < result.length + asState->reqofs && *s != '\0') {
--        while (*s && xisspace(*s))
--            ++s;
--
--        for (t = s; *t; ++t) {
--            if (xisspace(*t))
--                break;
--        }
--
--        if (*t == '\0') {
--            /* oof, word should continue on next block */
--            break;
--        }
--
--        *t = '\0';
--        debugs(53, 3, "asHandleReply: AS# " << s << " (" << asState->as_number << ")");
--        asnAddNet(s, asState->as_number);
--        s = t + 1;
--        asState->dataRead = true;
 +    asState->parsingBuffer.appended(result.data, result.length);
 +    Parser::Tokenizer tok(SBuf(asState->parsingBuffer.content().data, asState->parsingBuffer.contentSize()));
 +    SBuf address;
@@ -762,8 +922,51 @@ index 94ec862..ad450c0 100644
 +    static const auto WhoisSpaces = CharacterSet("ASCII_spaces", " \f\r\n\t\v");
 +    while (tok.token(address, WhoisSpaces)) {
 +        (void)asnAddNet(address, asState->as_number);
++    }
++    asState->parsingBuffer.consume(tok.parsedSize());
++    const auto leftoverBytes = asState->parsingBuffer.contentSize();
+ 
+-    while ((size_t)(s - buf) < result.length + asState->reqofs && *s != '\0') {
+-        while (*s && xisspace(*s))
+-            ++s;
++    if (asState->sc->atEof()) {
++        if (leftoverBytes)
++            debugs(53, 2, "WHOIS: Discarding the last " << leftoverBytes << " received bytes of a truncated AS response");
++        delete asState;
++        return;
++    }
+ 
+-        for (t = s; *t; ++t) {
+-            if (xisspace(*t))
+-                break;
+-        }
++    if (asState->sc->atEof()) {
++        if (leftoverBytes)
++            debugs(53, 2, "WHOIS: Discarding the last " << leftoverBytes << " received bytes of a truncated AS response");
++        delete asState;
++        return;
++    }
+ 
+-        if (*t == '\0') {
+-            /* oof, word should continue on next block */
+-            break;
+-        }
++    const auto remainingSpace = asState->parsingBuffer.space().positionAt(result.offset + result.length);
+ 
+-        *t = '\0';
+-        debugs(53, 3, "asHandleReply: AS# " << s << " (" << asState->as_number << ")");
+-        asnAddNet(s, asState->as_number);
+-        s = t + 1;
+-        asState->dataRead = true;
++    if (!remainingSpace.length) {
++        Assure(leftoverBytes);
++        debugs(53, DBG_IMPORTANT, "WARNING: Ignoring the tail of a WHOIS AS response" <<
++               " with an unparsable section of " << leftoverBytes <<
++               " bytes ending at offset " << remainingSpace.offset);
++        delete asState;
++        return;
      }
--
+ 
 -    /*
 -     * Next, grab the end of the 'valid data' in the buffer, and figure
 -     * out how much data is left in our buffer, which we need to keep
@@ -809,23 +1012,7 @@ index 94ec862..ad450c0 100644
 -                        tempBuffer,
 -                        asHandleReply,
 -                        asState);
-+    asState->parsingBuffer.consume(tok.parsedSize());
-+    const auto leftoverBytes = asState->parsingBuffer.contentSize();
-+    if (asState->sc->atEof()) {
-+        if (leftoverBytes)
-+            debugs(53, 2, "WHOIS: Discarding the last " << leftoverBytes << " received bytes of a truncated AS response");
-+        delete asState;
-+        return;
-+    }
-+    const auto remainingSpace = asState->parsingBuffer.space().positionAt(result.offset + result.length);
-+    if (!remainingSpace.length) {
-+        Assure(leftoverBytes);
-+        debugs(53, DBG_IMPORTANT, "WARNING: Ignoring the tail of a WHOIS AS response" <<
-+               " with an unparsable section of " << leftoverBytes <<
-+               " bytes ending at offset " << remainingSpace.offset);
-+        delete asState;
-+        return;
-+    }
+-    }
 +    const decltype(StoreIOBuffer::offset) stillReasonableOffset = 100000; // an arbitrary limit in bytes
 +    if (remainingSpace.offset > stillReasonableOffset) {
 +        // stop suspicious accumulation of parsed addresses and/or work
@@ -833,10 +1020,12 @@ index 94ec862..ad450c0 100644
 +               " exceeding " << stillReasonableOffset << " bytes");
 +        delete asState;
 +        return;
-     }
++     }
++
++    storeClientCopy(asState->sc, e, remainingSpace, asHandleReply, asState);
  }
  
-@@ -373,38 +325,29 @@ asHandleReply(void *data, StoreIOBuffer result)
+ /**
   * add a network (addr, mask) to the radix tree, with matching AS number
   */
  static int
@@ -863,19 +1052,19 @@ index 94ec862..ad450c0 100644
          debugs(53, 3, "asnAddNet: failed, invalid response from whois server.");
          return 0;
      }
--
+ 
 -    *t = '\0';
 -    addr = as_string;
 -    bitl = atoi(t + 1);
 -
 -    if (bitl < 0)
 -        bitl = 0;
--
 +    const Ip::Address addr = addressToken.c_str();
+ 
      // INET6 TODO : find a better way of identifying the base IPA family for mask than this.
 -    t = strchr(as_string, '.');
--
 +    const auto addrFamily = (addressToken.find('.') != SBuf::npos) ? AF_INET : AF_INET6;
+ 
      // generate Netbits Format Mask
 +    Ip::Address mask;
      mask.setNoAddr();
@@ -886,6 +1075,282 @@ index 94ec862..ad450c0 100644
  
      debugs(53, 3, "asnAddNet: called for " << addr << "/" << mask );
  
+diff --git a/src/acl/FilledChecklist.cc b/src/acl/FilledChecklist.cc
+index 9826c24..33eeb67 100644
+--- a/src/acl/FilledChecklist.cc
++++ b/src/acl/FilledChecklist.cc
+@@ -116,7 +116,6 @@ ACLFilledChecklist::verifyAle() const
+     if (reply && !al->reply) {
+         showDebugWarning("HttpReply object");
+         al->reply = reply;
+-        HTTPMSGLOCK(al->reply);
+     }
+ 
+ #if USE_IDENT
+diff --git a/src/adaptation/icap/ModXact.cc b/src/adaptation/icap/ModXact.cc
+index 370f077..2bcc917 100644
+--- a/src/adaptation/icap/ModXact.cc
++++ b/src/adaptation/icap/ModXact.cc
+@@ -1292,11 +1292,8 @@ void Adaptation::Icap::ModXact::finalizeLogInfo()
+     al.adapted_request = adapted_request_;
+     HTTPMSGLOCK(al.adapted_request);
+ 
+-    if (adapted_reply_) {
+-        al.reply = adapted_reply_;
+-        HTTPMSGLOCK(al.reply);
+-    } else
+-        al.reply = NULL;
++    // XXX: This reply (and other ALE members!) may have been needed earlier.
++    al.reply = adapted_reply_;
+ 
+     if (h->rfc931.size())
+         al.cache.rfc931 = h->rfc931.termedBuf();
+@@ -1331,12 +1328,6 @@ void Adaptation::Icap::ModXact::finalizeLogInfo()
+         if (replyHttpBodySize >= 0)
+             al.cache.highOffset = replyHttpBodySize;
+         //don't set al.cache.objectSize because it hasn't exist yet
+-
+-        MemBuf mb;
+-        mb.init();
+-        adapted_reply_->header.packInto(&mb);
+-        al.headers.reply = xstrdup(mb.buf);
+-        mb.clean();
+     }
+     prepareLogWithRequestDetails(adapted_request_, alep);
+     Xaction::finalizeLogInfo();
+diff --git a/src/adaptation/icap/icap_log.cc b/src/adaptation/icap/icap_log.cc
+index ecc4baf..6bb5a6d 100644
+--- a/src/adaptation/icap/icap_log.cc
++++ b/src/adaptation/icap/icap_log.cc
+@@ -62,7 +62,7 @@ void icapLogLog(AccessLogEntry::Pointer &al)
+     if (IcapLogfileStatus == LOG_ENABLE) {
+         ACLFilledChecklist checklist(NULL, al->adapted_request, NULL);
+         if (al->reply) {
+-            checklist.reply = al->reply;
++            checklist.reply = al->reply.getRaw();
+             HTTPMSGLOCK(checklist.reply);
+         }
+         accessLogLogTo(Config.Log.icaplogs, al, &checklist);
+diff --git a/src/base/Assure.cc b/src/base/Assure.cc
+new file mode 100644
+index 0000000..cb69fc5
+--- /dev/null
++++ b/src/base/Assure.cc
+@@ -0,0 +1,24 @@
++/*
++ * Copyright (C) 1996-2022 The Squid Software Foundation and contributors
++ *
++ * Squid software is distributed under GPLv2+ license and includes
++ * contributions from numerous individuals and organizations.
++ * Please see the COPYING and CONTRIBUTORS files for details.
++ */
++
++#include "squid.h"
++#include "base/Assure.h"
++#include "base/TextException.h"
++#include "sbuf/Stream.h"
++
++[[ noreturn ]] void
++ReportAndThrow_(const int debugLevel, const char *description, const SourceLocation &location)
++{
++    const TextException ex(description, location);
++    const auto label = debugLevel <= DBG_IMPORTANT ? "ERROR: Squid BUG: " : "";
++    // TODO: Consider also printing the number of BUGs reported so far. It would
++    // require GC, but we could even print the number of same-location reports.
++    debugs(0, debugLevel, label << ex);
++    throw ex;
++}
++
+diff --git a/src/base/Assure.h b/src/base/Assure.h
+new file mode 100644
+index 0000000..bb571d2
+--- /dev/null
++++ b/src/base/Assure.h
+@@ -0,0 +1,52 @@
++/*
++ * Copyright (C) 1996-2022 The Squid Software Foundation and contributors
++ *
++ * Squid software is distributed under GPLv2+ license and includes
++ * contributions from numerous individuals and organizations.
++ * Please see the COPYING and CONTRIBUTORS files for details.
++ */
++
++#ifndef SQUID_SRC_BASE_ASSURE_H
++#define SQUID_SRC_BASE_ASSURE_H
++
++#include "base/Here.h"
++
++/// Reports the description (at the given debugging level) and throws
++/// the corresponding exception. Reduces compiled code size of Assure() and
++/// Must() callers. Do not call directly; use Assure() instead.
++/// \param description explains the condition (i.e. what MUST happen)
++[[ noreturn ]] void ReportAndThrow_(int debugLevel, const char *description, const SourceLocation &);
++
++/// Calls ReportAndThrow() if needed. Reduces caller code duplication.
++/// Do not call directly; use Assure() instead.
++/// \param description c-string explaining the condition (i.e. what MUST happen)
++#define Assure_(debugLevel, condition, description, location) \
++    while (!(condition)) \
++        ReportAndThrow_((debugLevel), (description), (location))
++
++#if !defined(NDEBUG)
++
++/// Like assert() but throws an exception instead of aborting the process. Use
++/// this macro to detect code logic mistakes (i.e. bugs) where aborting the
++/// current AsyncJob or a similar task is unlikely to jeopardize Squid service
++/// integrity. For example, this macro is _not_ appropriate for detecting bugs
++/// that indicate a dangerous global state corruption which may go unnoticed by
++/// other jobs after the current job or task is aborted.
++#define Assure(condition) \
++        Assure2((condition), #condition)
++
++/// Like Assure() but allows the caller to customize the exception message.
++/// \param description string literal describing the condition (i.e. what MUST happen)
++#define Assure2(condition, description) \
++        Assure_(0, (condition), ("assurance failed: " description), Here())
++
++#else
++
++/* do-nothing implementations for NDEBUG builds */
++#define Assure(condition) ((void)0)
++#define Assure2(condition, description) ((void)0)
++
++#endif /* NDEBUG */
++
++#endif /* SQUID_SRC_BASE_ASSURE_H */
++
+diff --git a/src/base/Makefile.am b/src/base/Makefile.am
+index 9b0f4cf..d5f4c01 100644
+--- a/src/base/Makefile.am
++++ b/src/base/Makefile.am
+@@ -11,6 +11,8 @@ include $(top_srcdir)/src/TestHeaders.am
+ noinst_LTLIBRARIES = libbase.la
+ 
+ libbase_la_SOURCES = \
++	Assure.cc \
++	Assure.h \
+ 	AsyncCall.cc \
+ 	AsyncCall.h \
+ 	AsyncCallQueue.cc \
+diff --git a/src/base/Makefile.in b/src/base/Makefile.in
+index 90a4f5b..6a83aa4 100644
+--- a/src/base/Makefile.in
++++ b/src/base/Makefile.in
+@@ -163,7 +163,7 @@ CONFIG_CLEAN_FILES =
+ CONFIG_CLEAN_VPATH_FILES =
+ LTLIBRARIES = $(noinst_LTLIBRARIES)
+ libbase_la_LIBADD =
+-am_libbase_la_OBJECTS = AsyncCall.lo AsyncCallQueue.lo AsyncJob.lo \
++am_libbase_la_OBJECTS = Assure.lo AsyncCall.lo AsyncCallQueue.lo AsyncJob.lo \
+ 	CharacterSet.lo File.lo Here.lo RegexPattern.lo \
+ 	RunnersRegistry.lo TextException.lo
+ libbase_la_OBJECTS = $(am_libbase_la_OBJECTS)
+@@ -186,7 +186,7 @@ am__v_at_1 =
+ DEFAULT_INCLUDES = 
+ depcomp = $(SHELL) $(top_srcdir)/cfgaux/depcomp
+ am__maybe_remake_depfiles = depfiles
+-am__depfiles_remade = ./$(DEPDIR)/AsyncCall.Plo \
++am__depfiles_remade = ./$(DEPDIR)/Assure.Plo ./$(DEPDIR)/AsyncCall.Plo \
+ 	./$(DEPDIR)/AsyncCallQueue.Plo ./$(DEPDIR)/AsyncJob.Plo \
+ 	./$(DEPDIR)/CharacterSet.Plo ./$(DEPDIR)/File.Plo \
+ 	./$(DEPDIR)/Here.Plo ./$(DEPDIR)/RegexPattern.Plo \
+@@ -729,6 +729,8 @@ COMPAT_LIB = $(top_builddir)/compat/libcompatsquid.la $(LIBPROFILER)
+ subst_perlshell = sed -e 's,[@]PERL[@],$(PERL),g' <$(srcdir)/$@.pl.in >$@ || ($(RM) -f $@ ; exit 1)
+ noinst_LTLIBRARIES = libbase.la
+ libbase_la_SOURCES = \
++	Assure.cc \
++	Assure.h \
+ 	AsyncCall.cc \
+ 	AsyncCall.h \
+ 	AsyncCallQueue.cc \
+@@ -827,6 +829,7 @@ mostlyclean-compile:
+ distclean-compile:
+ 	-rm -f *.tab.c
+ 
++@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/Assure.Plo@am__quote@ # am--include-marker
+ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/AsyncCall.Plo@am__quote@ # am--include-marker
+ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/AsyncCallQueue.Plo@am__quote@ # am--include-marker
+ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/AsyncJob.Plo@am__quote@ # am--include-marker
+@@ -1167,7 +1170,8 @@ clean-am: clean-checkPROGRAMS clean-generic clean-libtool \
+ 	clean-noinstLTLIBRARIES mostlyclean-am
+ 
+ distclean: distclean-am
+-		-rm -f ./$(DEPDIR)/AsyncCall.Plo
++		-rm -f ./$(DEPDIR)/Assure.Plo
++	-rm -f ./$(DEPDIR)/AsyncCall.Plo
+ 	-rm -f ./$(DEPDIR)/AsyncCallQueue.Plo
+ 	-rm -f ./$(DEPDIR)/AsyncJob.Plo
+ 	-rm -f ./$(DEPDIR)/CharacterSet.Plo
+@@ -1221,7 +1225,8 @@ install-ps-am:
+ installcheck-am:
+ 
+ maintainer-clean: maintainer-clean-am
+-		-rm -f ./$(DEPDIR)/AsyncCall.Plo
++		-rm -f ./$(DEPDIR)/Assure.Plo
++	-rm -f ./$(DEPDIR)/AsyncCall.Plo
+ 	-rm -f ./$(DEPDIR)/AsyncCallQueue.Plo
+ 	-rm -f ./$(DEPDIR)/AsyncJob.Plo
+ 	-rm -f ./$(DEPDIR)/CharacterSet.Plo
+diff --git a/src/base/TextException.cc b/src/base/TextException.cc
+index 5cfeb26..f895ae9 100644
+--- a/src/base/TextException.cc
++++ b/src/base/TextException.cc
+@@ -58,6 +58,13 @@ TextException::what() const throw()
+     return result.what();
+ }
+ 
++std::ostream &
++operator <<(std::ostream &os, const TextException &ex)
++{
++    ex.print(os);
++    return os;
++}
++
+ std::ostream &
+ CurrentException(std::ostream &os)
+ {
+diff --git a/src/base/TextException.h b/src/base/TextException.h
+index 6a79536..1f9ca11 100644
+--- a/src/base/TextException.h
++++ b/src/base/TextException.h
+@@ -9,6 +9,7 @@
+ #ifndef SQUID__TEXTEXCEPTION_H
+ #define SQUID__TEXTEXCEPTION_H
+ 
++#include "base/Assure.h"
+ #include "base/Here.h"
+ 
+ #include 
+@@ -51,11 +52,12 @@ public:
+ /// prints active (i.e., thrown but not yet handled) exception
+ std::ostream &CurrentException(std::ostream &);
+ 
++/// efficiently prints TextException
++std::ostream &operator <<(std::ostream &, const TextException &);
++
+ /// legacy convenience macro; it is not difficult to type Here() now
+ #define TexcHere(msg) TextException((msg), Here())
+ 
+-/// Like assert() but throws an exception instead of aborting the process
+-/// and allows the caller to specify a custom exception message.
+ #define Must2(condition, message) \
+     do { \
+         if (!(condition)) { \
+@@ -65,8 +67,13 @@ std::ostream &CurrentException(std::ostream &);
+         } \
+     } while (/*CONSTCOND*/ false)
+ 
++/// Like assert() but throws an exception instead of aborting the process
++/// and allows the caller to specify a custom exception message.
++#define Must3(condition, description, location) \
++    Assure_(3, (condition), ("check failed: " description), (location))
++
+ /// Like assert() but throws an exception instead of aborting the process.
+-#define Must(condition) Must2((condition), "check failed: " #condition)
++#define Must(condition) Must3((condition), #condition, Here())
+ 
+ /// Reports and swallows all exceptions to prevent compiler warnings and runtime
+ /// errors related to throwing class destructors. Should be used for most dtors.
 diff --git a/src/clientStream.cc b/src/clientStream.cc
 index 04d89c0..bd5dd09 100644
 --- a/src/clientStream.cc
@@ -900,18 +1365,61 @@ index 04d89c0..bd5dd09 100644
      next->callback(next, http, rep, replyBuffer);
  }
  
+diff --git a/src/client_side.cc b/src/client_side.cc
+index ab393e4..c46a845 100644
+--- a/src/client_side.cc
++++ b/src/client_side.cc
+@@ -429,7 +429,7 @@ ClientHttpRequest::logRequest()
+         // The al->notes and request->notes must point to the same object.
+         (void)SyncNotes(*al, *request);
+         for (auto i = Config.notes.begin(); i != Config.notes.end(); ++i) {
+-            if (const char *value = (*i)->match(request, al->reply, al)) {
++            if (const char *value = (*i)->match(request, al->reply.getRaw(), al)) {
+                 NotePairs &notes = SyncNotes(*al, *request);
+                 notes.add((*i)->key.termedBuf(), value);
+                 debugs(33, 3, (*i)->key.termedBuf() << " " << value);
+@@ -439,7 +439,7 @@ ClientHttpRequest::logRequest()
+ 
+     ACLFilledChecklist checklist(NULL, request, NULL);
+     if (al->reply) {
+-        checklist.reply = al->reply;
++        checklist.reply = al->reply.getRaw();
+         HTTPMSGLOCK(checklist.reply);
+     }
+ 
+@@ -457,7 +457,7 @@ ClientHttpRequest::logRequest()
+         ACLFilledChecklist statsCheck(Config.accessList.stats_collection, request, NULL);
+         statsCheck.al = al;
+         if (al->reply) {
+-            statsCheck.reply = al->reply;
++            statsCheck.reply = al->reply.getRaw();
+             HTTPMSGLOCK(statsCheck.reply);
+         }
+         updatePerformanceCounters = statsCheck.fastCheck().allowed();
+@@ -3844,6 +3844,11 @@ ConnStateData::finishDechunkingRequest(bool withSuccess)
+ void
+ ConnStateData::sendControlMsg(HttpControlMsg msg)
+ {
++    if (const auto context = pipeline.front()) {
++        if (context->http)
++            context->http->al->reply = msg.reply;
++    }
++
+     if (!isOpen()) {
+         debugs(33, 3, HERE << "ignoring 1xx due to earlier closure");
+         return;
 diff --git a/src/client_side_reply.cc b/src/client_side_reply.cc
-index c919af4..861f4b4 100644
+index c919af4..fea5ecb 100644
 --- a/src/client_side_reply.cc
 +++ b/src/client_side_reply.cc
-@@ -33,6 +33,7 @@
- #include "refresh.h"
+@@ -34,6 +34,7 @@
  #include "RequestFlags.h"
  #include "SquidConfig.h"
-+#include "SquidMath.h"
  #include "SquidTime.h"
++#include "SquidMath.h"
  #include "Store.h"
  #include "StrList.h"
+ #include "tools.h"
 @@ -76,11 +77,7 @@ clientReplyContext::clientReplyContext(ClientHttpRequest *clientContext) :
      purgeStatus(Http::scNone),
      lookingforstore(0),
@@ -933,12 +1441,13 @@ index c919af4..861f4b4 100644
      if (http->request)
          http->request->ignoreRange(reason);
      flags.storelogiccomplete = 1;
-@@ -206,13 +201,9 @@ clientReplyContext::saveState()
+@@ -206,13 +201,10 @@ clientReplyContext::saveState()
      old_sc = sc;
      old_lastmod = http->request->lastmod;
      old_etag = http->request->etag;
 -    old_reqsize = reqsize;
 -    tempBuffer.offset = reqofs;
++
      /* Prevent accessing the now saved entries */
      http->storeEntry(NULL);
      sc = NULL;
@@ -947,7 +1456,7 @@ index c919af4..861f4b4 100644
  }
  
  void
-@@ -223,8 +214,6 @@ clientReplyContext::restoreState()
+@@ -223,8 +215,6 @@ clientReplyContext::restoreState()
      removeClientStoreReference(&sc, http);
      http->storeEntry(old_entry);
      sc = old_sc;
@@ -956,15 +1465,16 @@ index c919af4..861f4b4 100644
      http->request->lastmod = old_lastmod;
      http->request->etag = old_etag;
      /* Prevent accessed the old saved entries */
-@@ -232,7 +221,6 @@ clientReplyContext::restoreState()
+@@ -232,7 +222,7 @@ clientReplyContext::restoreState()
      old_sc = NULL;
      old_lastmod = -1;
      old_etag.clean();
 -    old_reqsize = 0;
++
      tempBuffer.offset = 0;
  }
  
-@@ -250,18 +238,27 @@ clientReplyContext::getNextNode() const
+@@ -250,18 +240,27 @@ clientReplyContext::getNextNode() const
      return (clientStreamNode *)ourNode->node.next->data;
  }
  
@@ -1001,7 +1511,7 @@ index c919af4..861f4b4 100644
  }
  
  /* there is an expired entry in the store.
-@@ -358,30 +355,22 @@ clientReplyContext::processExpired()
+@@ -358,30 +357,23 @@ clientReplyContext::processExpired()
      {
          /* start counting the length from 0 */
          StoreIOBuffer localTempBuffer(HTTP_REQBUF_SZ, 0, tempbuf);
@@ -1032,11 +1542,12 @@ index c919af4..861f4b4 100644
 -    tempresult.length = reqsize;
 -    tempresult.data = tempbuf;
 -    sendMoreData(tempresult);
++
 +    sendMoreData(upstreamResponse);
  }
  
  void
-@@ -398,11 +387,9 @@ clientReplyContext::sendClientOldEntry()
+@@ -398,11 +390,9 @@ clientReplyContext::sendClientOldEntry()
      restoreState();
      /* here the data to send is in the next nodes buffers already */
      assert(!EBIT_TEST(http->storeEntry()->flags, ENTRY_ABORTED));
@@ -1051,13 +1562,7 @@ index c919af4..861f4b4 100644
  }
  
  /* This is the workhorse of the HandleIMSReply callback.
-@@ -411,16 +398,16 @@ clientReplyContext::sendClientOldEntry()
-  * IMS request to revalidate a stale entry.
-  */
- void
--clientReplyContext::handleIMSReply(StoreIOBuffer result)
-+clientReplyContext::handleIMSReply(const StoreIOBuffer result)
- {
+@@ -416,11 +406,11 @@ clientReplyContext::handleIMSReply(StoreIOBuffer result)
      if (deleting)
          return;
  
@@ -1071,7 +1576,7 @@ index c919af4..861f4b4 100644
      if (result.flags.error && !EBIT_TEST(http->storeEntry()->flags, ENTRY_ABORTED))
          return;
  
-@@ -433,9 +420,6 @@ clientReplyContext::handleIMSReply(StoreIOBuffer result)
+@@ -433,9 +423,6 @@ clientReplyContext::handleIMSReply(StoreIOBuffer result)
          return;
      }
  
@@ -1081,7 +1586,7 @@ index c919af4..861f4b4 100644
      const Http::StatusCode status = http->storeEntry()->getReply()->sline.status();
  
      // request to origin was aborted
-@@ -460,7 +444,7 @@ clientReplyContext::handleIMSReply(StoreIOBuffer result)
+@@ -460,7 +447,7 @@ clientReplyContext::handleIMSReply(StoreIOBuffer result)
          if (http->request->flags.ims && !old_entry->modifiedSince(http->request->ims, http->request->imslen)) {
              // forward the 304 from origin
              debugs(88, 3, "origin replied 304, revalidating existing entry and forwarding 304 to client");
@@ -1090,7 +1595,7 @@ index c919af4..861f4b4 100644
          } else {
              // send existing entry, it's still valid
              debugs(88, 3, "origin replied 304, revalidating existing entry and sending " <<
-@@ -484,7 +468,7 @@ clientReplyContext::handleIMSReply(StoreIOBuffer result)
+@@ -484,7 +471,7 @@ clientReplyContext::handleIMSReply(StoreIOBuffer result)
              http->logType = LOG_TCP_REFRESH_MODIFIED;
              debugs(88, 3, "origin replied " << status <<
                     ", replacing existing entry and forwarding to client");
@@ -1099,7 +1604,7 @@ index c919af4..861f4b4 100644
          }
      }
  
-@@ -493,7 +477,7 @@ clientReplyContext::handleIMSReply(StoreIOBuffer result)
+@@ -493,7 +480,7 @@ clientReplyContext::handleIMSReply(StoreIOBuffer result)
          http->logType = LOG_TCP_REFRESH_FAIL_ERR;
          debugs(88, 3, "origin replied with error " << status <<
                 ", forwarding to client due to fail_on_validation_err");
@@ -1108,7 +1613,7 @@ index c919af4..861f4b4 100644
      } else {
          // ignore and let client have old entry
          http->logType = LOG_TCP_REFRESH_FAIL_OLD;
-@@ -506,13 +490,7 @@ clientReplyContext::handleIMSReply(StoreIOBuffer result)
+@@ -506,13 +493,7 @@ clientReplyContext::handleIMSReply(StoreIOBuffer result)
  SQUIDCEXTERN CSR clientGetMoreData;
  SQUIDCEXTERN CSD clientReplyDetach;
  
@@ -1123,7 +1628,7 @@ index c919af4..861f4b4 100644
  void
  clientReplyContext::CacheHit(void *data, StoreIOBuffer result)
  {
-@@ -520,11 +498,11 @@ clientReplyContext::CacheHit(void *data, StoreIOBuffer result)
+@@ -520,11 +501,11 @@ clientReplyContext::CacheHit(void *data, StoreIOBuffer result)
      context->cacheHit(result);
  }
  
@@ -1139,7 +1644,7 @@ index c919af4..861f4b4 100644
  {
      /** Ignore if the HIT object is being deleted. */
      if (deleting) {
-@@ -536,7 +514,7 @@ clientReplyContext::cacheHit(StoreIOBuffer result)
+@@ -536,7 +517,7 @@ clientReplyContext::cacheHit(StoreIOBuffer result)
  
      HttpRequest *r = http->request;
  
@@ -1148,7 +1653,7 @@ index c919af4..861f4b4 100644
  
      if (http->storeEntry() == NULL) {
          debugs(88, 3, "clientCacheHit: request aborted");
-@@ -560,20 +538,7 @@ clientReplyContext::cacheHit(StoreIOBuffer result)
+@@ -560,20 +541,7 @@ clientReplyContext::cacheHit(StoreIOBuffer result)
          return;
      }
  
@@ -1169,7 +1674,7 @@ index c919af4..861f4b4 100644
  
      /*
       * Got the headers, now grok them
-@@ -587,6 +552,8 @@ clientReplyContext::cacheHit(StoreIOBuffer result)
+@@ -587,6 +555,8 @@ clientReplyContext::cacheHit(StoreIOBuffer result)
          return;
      }
  
@@ -1178,7 +1683,7 @@ index c919af4..861f4b4 100644
      switch (varyEvaluateMatch(e, r)) {
  
      case VARY_NONE:
-@@ -687,7 +654,7 @@ clientReplyContext::cacheHit(StoreIOBuffer result)
+@@ -687,7 +657,7 @@ clientReplyContext::cacheHit(StoreIOBuffer result)
          return;
      } else if (r->conditional()) {
          debugs(88, 5, "conditional HIT");
@@ -1187,7 +1692,7 @@ index c919af4..861f4b4 100644
              return;
      }
  
-@@ -806,7 +773,7 @@ clientReplyContext::processOnlyIfCachedMiss()
+@@ -806,7 +776,7 @@ clientReplyContext::processOnlyIfCachedMiss()
  
  /// process conditional request from client
  bool
@@ -1196,7 +1701,7 @@ index c919af4..861f4b4 100644
  {
      StoreEntry *const e = http->storeEntry();
  
-@@ -984,16 +951,7 @@ clientReplyContext::purgeFoundObject(StoreEntry *entry)
+@@ -984,16 +954,7 @@ clientReplyContext::purgeFoundObject(StoreEntry *entry)
  
      http->logType = LOG_TCP_HIT;
  
@@ -1214,7 +1719,7 @@ index c919af4..861f4b4 100644
  }
  
  void
-@@ -1111,16 +1069,10 @@ clientReplyContext::purgeDoPurgeHead(StoreEntry *newEntry)
+@@ -1111,16 +1072,10 @@ clientReplyContext::purgeDoPurgeHead(StoreEntry *newEntry)
  }
  
  void
@@ -1233,7 +1738,7 @@ index c919af4..861f4b4 100644
      http->storeEntry()->releaseRequest();
      http->storeEntry()->buffer();
      HttpReply *rep = new HttpReply;
-@@ -1169,16 +1121,15 @@ int
+@@ -1169,16 +1124,16 @@ int
  clientReplyContext::storeOKTransferDone() const
  {
      assert(http->storeEntry()->objectLen() >= 0);
@@ -1246,7 +1751,7 @@ index c919af4..861f4b4 100644
 -               " headers_sz=" << headers_sz);
 -        return 1;
 -    }
--
+ 
 -    return 0;
 +    const auto done = http->out.offset >= http->storeEntry()->objectLen() - headers_sz;
 +    const auto debugLevel = done ? 3 : 5;
@@ -1258,20 +1763,20 @@ index c919af4..861f4b4 100644
  }
  
  int
-@@ -1190,11 +1141,8 @@ clientReplyContext::storeNotOKTransferDone() const
+@@ -1190,10 +1145,9 @@ clientReplyContext::storeNotOKTransferDone() const
      MemObject *mem = http->storeEntry()->mem_obj;
      assert(mem != NULL);
      assert(http->request != NULL);
 -    /* mem->reply was wrong because it uses the UPSTREAM header length!!! */
 -    HttpReply const *curReply = mem->getReply();
++    const auto expectedBodySize = mem->baseReply().content_length;
  
 -    if (headers_sz == 0)
--        /* haven't found end of headers yet */
-+    if (mem->baseReply().pstate != Http::Message::psParsed)
++    if (mem->baseReply().pstate != psParsed)
+         /* haven't found end of headers yet */
          return 0;
  
-     /*
-@@ -1202,19 +1150,12 @@ clientReplyContext::storeNotOKTransferDone() const
+@@ -1202,19 +1156,14 @@ clientReplyContext::storeNotOKTransferDone() const
       * If we are sending a body and we don't have a content-length,
       * then we must wait for the object to become STORE_OK.
       */
@@ -1281,7 +1786,8 @@ index c919af4..861f4b4 100644
 -    uint64_t expectedLength = curReply->content_length + http->out.headers_sz;
 -
 -    if (http->out.size < expectedLength)
--        return 0;
++    if (expectedBodySize < 0)
+         return 0;
 -    else {
 -        debugs(88,3,HERE << "storeNotOKTransferDone " <<
 -               " out.size=" << http->out.size <<
@@ -1297,7 +1803,16 @@ index c919af4..861f4b4 100644
  }
  
  /* A write has completed, what is the next status based on the
-@@ -1778,20 +1719,12 @@ clientGetMoreData(clientStreamNode * aNode, ClientHttpRequest * http)
+@@ -1632,6 +1581,8 @@ clientReplyContext::cloneReply()
+     reply = http->storeEntry()->getReply()->clone();
+     HTTPMSGLOCK(reply);
+ 
++    http->al->reply = reply;
++
+     if (reply->sline.protocol == AnyP::PROTO_HTTP) {
+         /* RFC 2616 requires us to advertise our version (but only on real HTTP traffic) */
+         reply->sline.version = Http::ProtocolVersion();
+@@ -1778,20 +1729,12 @@ clientGetMoreData(clientStreamNode * aNode, ClientHttpRequest * http)
      assert (context);
      assert(context->http == http);
  
@@ -1319,7 +1834,7 @@ index c919af4..861f4b4 100644
          return;
      }
  
-@@ -1804,7 +1737,7 @@ clientGetMoreData(clientStreamNode * aNode, ClientHttpRequest * http)
+@@ -1804,7 +1747,7 @@ clientGetMoreData(clientStreamNode * aNode, ClientHttpRequest * http)
  
      if (context->http->request->method == Http::METHOD_TRACE) {
          if (context->http->request->header.getInt64(Http::HdrType::MAX_FORWARDS) == 0) {
@@ -1328,7 +1843,7 @@ index c919af4..861f4b4 100644
              return;
          }
  
-@@ -1834,7 +1767,6 @@ clientReplyContext::doGetMoreData()
+@@ -1834,7 +1777,6 @@ clientReplyContext::doGetMoreData()
  #endif
  
          assert(http->logType.oldType == LOG_TCP_HIT);
@@ -1336,7 +1851,7 @@ index c919af4..861f4b4 100644
          /* guarantee nothing has been sent yet! */
          assert(http->out.size == 0);
          assert(http->out.offset == 0);
-@@ -1849,10 +1781,7 @@ clientReplyContext::doGetMoreData()
+@@ -1849,10 +1791,7 @@ clientReplyContext::doGetMoreData()
              }
          }
  
@@ -1348,40 +1863,7 @@ index c919af4..861f4b4 100644
      } else {
          /* MISS CASE, http->logType is already set! */
          processMiss();
-@@ -1878,6 +1807,32 @@ clientReplyContext::SendMoreData(void *data, StoreIOBuffer result)
-     context->sendMoreData (result);
- }
- 
-+/// Whether the given body area describes the start of our Client Stream buffer.
-+/// An empty area does.
-+bool
-+clientReplyContext::matchesStreamBodyBuffer(const StoreIOBuffer &their) const
-+{
-+    // the answer is undefined for errors; they are not really "body buffers"
-+    Assure(!their.flags.error);
-+
-+    if (!their.length)
-+        return true; // an empty body area always matches our body area
-+
-+    if (their.data != next()->readBuffer.data) {
-+        debugs(88, 7, "no: " << their << " vs. " << next()->readBuffer);
-+        return false;
-+    }
-+
-+    return true;
-+}
-+
-+void
-+clientReplyContext::noteStreamBufferredBytes(const StoreIOBuffer &result)
-+{
-+    Assure(matchesStreamBodyBuffer(result));
-+    lastStreamBufferedBytes = result; // may be unchanged and/or zero-length
-+}
-+
- void
- clientReplyContext::makeThisHead()
- {
-@@ -1887,12 +1842,11 @@ clientReplyContext::makeThisHead()
+@@ -1887,12 +1826,11 @@ clientReplyContext::makeThisHead()
  }
  
  bool
@@ -1396,7 +1878,7 @@ index c919af4..861f4b4 100644
  }
  
  void
-@@ -1913,24 +1867,16 @@ clientReplyContext::sendStreamError(StoreIOBuffer const &result)
+@@ -1913,24 +1851,17 @@ clientReplyContext::sendStreamError(StoreIOBuffer const &result)
  }
  
  void
@@ -1416,15 +1898,15 @@ index c919af4..861f4b4 100644
 -
 -    if (localTempBuffer.length)
 -        localTempBuffer.data = source;
--
 +    assert(!result.length || result.offset == next()->readBuffer.offset);
+ 
      clientStreamCallback((clientStreamNode*)http->client_stream.head->data, http, NULL,
 -                         localTempBuffer);
 +                         result);
  }
  
  clientStreamNode *
-@@ -2022,7 +1968,6 @@ clientReplyContext::processReplyAccess ()
+@@ -2022,7 +1953,6 @@ clientReplyContext::processReplyAccess ()
      if (http->logType.oldType == LOG_TCP_DENIED ||
              http->logType.oldType == LOG_TCP_DENIED_REPLY ||
              alwaysAllowResponse(reply->sline.status())) {
@@ -1432,7 +1914,7 @@ index c919af4..861f4b4 100644
          processReplyAccessResult(ACCESS_ALLOWED);
          return;
      }
-@@ -2033,8 +1978,6 @@ clientReplyContext::processReplyAccess ()
+@@ -2033,8 +1963,6 @@ clientReplyContext::processReplyAccess ()
          return;
      }
  
@@ -1441,7 +1923,7 @@ index c919af4..861f4b4 100644
      /** check for absent access controls (permit by default) */
      if (!Config.accessList.reply) {
          processReplyAccessResult(ACCESS_ALLOWED);
-@@ -2091,11 +2034,9 @@ clientReplyContext::processReplyAccessResult(const allow_t &accessAllowed)
+@@ -2091,11 +2019,9 @@ clientReplyContext::processReplyAccessResult(const allow_t &accessAllowed)
      /* Ok, the reply is allowed, */
      http->loggingEntry(http->storeEntry());
  
@@ -1456,7 +1938,7 @@ index c919af4..861f4b4 100644
  
      debugs(88, 3, "clientReplyContext::sendMoreData: Appending " <<
             (int) body_size << " bytes after " << reply->hdr_sz <<
-@@ -2123,19 +2064,27 @@ clientReplyContext::processReplyAccessResult(const allow_t &accessAllowed)
+@@ -2123,19 +2049,27 @@ clientReplyContext::processReplyAccessResult(const allow_t &accessAllowed)
      assert (!flags.headersSent);
      flags.headersSent = true;
  
@@ -1490,7 +1972,7 @@ index c919af4..861f4b4 100644
              /* Can't use any of the body we received. send nothing */
              localTempBuffer.length = 0;
              localTempBuffer.data = NULL;
-@@ -2148,7 +2097,6 @@ clientReplyContext::processReplyAccessResult(const allow_t &accessAllowed)
+@@ -2148,7 +2082,6 @@ clientReplyContext::processReplyAccessResult(const allow_t &accessAllowed)
          localTempBuffer.data = body_buf;
      }
  
@@ -1498,7 +1980,7 @@ index c919af4..861f4b4 100644
      clientStreamCallback((clientStreamNode *)http->client_stream.head->data,
                           http, reply, localTempBuffer);
  
-@@ -2161,6 +2109,8 @@ clientReplyContext::sendMoreData (StoreIOBuffer result)
+@@ -2161,6 +2094,8 @@ clientReplyContext::sendMoreData (StoreIOBuffer result)
      if (deleting)
          return;
  
@@ -1507,7 +1989,7 @@ index c919af4..861f4b4 100644
      StoreEntry *entry = http->storeEntry();
  
      if (ConnStateData * conn = http->getConn()) {
-@@ -2173,7 +2123,9 @@ clientReplyContext::sendMoreData (StoreIOBuffer result)
+@@ -2173,7 +2108,9 @@ clientReplyContext::sendMoreData (StoreIOBuffer result)
              return;
          }
  
@@ -1518,7 +2000,7 @@ index c919af4..861f4b4 100644
              if (Ip::Qos::TheConfig.isHitTosActive()) {
                  Ip::Qos::doTosLocalMiss(conn->clientConnection, http->request->hier.code);
              }
-@@ -2187,21 +2139,9 @@ clientReplyContext::sendMoreData (StoreIOBuffer result)
+@@ -2187,21 +2124,9 @@ clientReplyContext::sendMoreData (StoreIOBuffer result)
                 " out.offset=" << http->out.offset);
      }
  
@@ -1540,7 +2022,7 @@ index c919af4..861f4b4 100644
      assert(http->request != NULL);
  
      /* ESI TODO: remove this assert once everything is stable */
-@@ -2210,20 +2150,25 @@ clientReplyContext::sendMoreData (StoreIOBuffer result)
+@@ -2210,20 +2135,25 @@ clientReplyContext::sendMoreData (StoreIOBuffer result)
  
      makeThisHead();
  
@@ -1575,6 +2057,56 @@ index c919af4..861f4b4 100644
          return;
      }
  
+@@ -2234,23 +2164,38 @@ clientReplyContext::sendMoreData (StoreIOBuffer result)
+         sc->setDelayId(DelayId::DelayClient(http,reply));
+ #endif
+ 
+-    /* handle headers */
++    holdingBuffer = result;
++    processReplyAccess();
++    return;
++}
++
++/// Whether the given body area describes the start of our Client Stream buffer.
++/// An empty area does.
++bool
++clientReplyContext::matchesStreamBodyBuffer(const StoreIOBuffer &their) const
++{
++    // the answer is undefined for errors; they are not really "body buffers"
++    Assure(!their.flags.error);
+ 
+-    if (Config.onoff.log_mime_hdrs) {
+-        size_t k;
++    if (!their.length)
++        return true; // an empty body area always matches our body area
+ 
+-        if ((k = headersEnd(buf, reqofs))) {
+-            safe_free(http->al->headers.reply);
+-            http->al->headers.reply = (char *)xcalloc(k + 1, 1);
+-            xstrncpy(http->al->headers.reply, buf, k);
+-        }
++    if (their.data != next()->readBuffer.data) {
++        debugs(88, 7, "no: " << their << " vs. " << next()->readBuffer);
++        return false;
+     }
+ 
+-    holdingBuffer = result;
+-    processReplyAccess();
+-    return;
++    return true;
++}
++
++void
++clientReplyContext::noteStreamBufferredBytes(const StoreIOBuffer &result)
++{
++    Assure(matchesStreamBodyBuffer(result));
++    lastStreamBufferedBytes = result; // may be unchanged and/or zero-length
+ }
+ 
++
+ /* Using this breaks the client layering just a little!
+  */
+ void
 @@ -2289,13 +2234,6 @@ clientReplyContext::createStoreEntry(const HttpRequestMethod& m, RequestFlags re
      sc->setDelayId(DelayId::DelayClient(http));
  #endif
@@ -1590,7 +2122,7 @@ index c919af4..861f4b4 100644
       * buffers have been set up
       */
 diff --git a/src/client_side_reply.h b/src/client_side_reply.h
-index dddab1a..bc702e3 100644
+index dddab1a..bf705a4 100644
 --- a/src/client_side_reply.h
 +++ b/src/client_side_reply.h
 @@ -39,7 +39,6 @@ public:
@@ -1617,11 +2149,10 @@ index dddab1a..bc702e3 100644
 -    int headers_sz;
      store_client *sc;       /* The store_client we're using */
      StoreIOBuffer tempBuffer;   /* For use in validating requests via IMS */
--    int old_reqsize;        /* ... again, for the buffer */
+     int old_reqsize;        /* ... again, for the buffer */
 -    size_t reqsize;
 -    size_t reqofs;
 -    char tempbuf[HTTP_REQBUF_SZ];   ///< a temporary buffer if we need working storage
-+
 +    /// Buffer dedicated to receiving storeClientCopy() responses to generated
 +    /// revalidation requests. These requests cannot use next()->readBuffer
 +    /// because the latter keeps the contents of the stale HTTP response during
@@ -1686,6 +2217,49 @@ index dddab1a..bc702e3 100644
  };
  
  #endif /* SQUID_CLIENTSIDEREPLY_H */
+diff --git a/src/client_side_request.cc b/src/client_side_request.cc
+index ab08fd2..92da530 100644
+--- a/src/client_side_request.cc
++++ b/src/client_side_request.cc
+@@ -2045,6 +2045,8 @@ ClientHttpRequest::handleAdaptedHeader(HttpMsg *msg)
+         storeEntry()->replaceHttpReply(new_rep);
+         storeEntry()->timestampsSet();
+ 
++        al->reply = new_rep;
++
+         if (!adaptedBodySource) // no body
+             storeEntry()->complete();
+         clientGetMoreData(node, this);
+diff --git a/src/clients/Client.cc b/src/clients/Client.cc
+index f5defbb..cada70e 100644
+--- a/src/clients/Client.cc
++++ b/src/clients/Client.cc
+@@ -136,6 +136,8 @@ Client::setVirginReply(HttpReply *rep)
+     assert(rep);
+     theVirginReply = rep;
+     HTTPMSGLOCK(theVirginReply);
++    if (fwd->al)
++        fwd->al->reply = theVirginReply;
+     return theVirginReply;
+ }
+ 
+@@ -155,6 +157,8 @@ Client::setFinalReply(HttpReply *rep)
+     assert(rep);
+     theFinalReply = rep;
+     HTTPMSGLOCK(theFinalReply);
++    if (fwd->al)
++        fwd->al->reply = theFinalReply;
+ 
+     // give entry the reply because haveParsedReplyHeaders() expects it there
+     entry->replaceHttpReply(theFinalReply, false); // but do not write yet
+@@ -550,6 +554,7 @@ Client::blockCaching()
+         ACLFilledChecklist ch(acl, originalRequest(), NULL);
+         ch.reply = const_cast<HttpReply*>(entry->getReply()); // ACLFilledChecklist API bug
+         HTTPMSGLOCK(ch.reply);
++        ch.al = fwd->al;
+         if (!ch.fastCheck().allowed()) { // when in doubt, block
+             debugs(20, 3, "store_miss prohibits caching");
+             return true;
 diff --git a/src/enums.h b/src/enums.h
 index 4a860d8..262d62c 100644
 --- a/src/enums.h
@@ -1698,8 +2272,74 @@ index 4a860d8..262d62c 100644
      DIGEST_READ_CBLOCK,
      DIGEST_READ_MASK,
      DIGEST_READ_DONE
+diff --git a/src/format/Format.cc b/src/format/Format.cc
+index 3b6a44b..689bdf9 100644
+--- a/src/format/Format.cc
++++ b/src/format/Format.cc
+@@ -330,7 +330,7 @@ log_quoted_string(const char *str, char *out)
+ static const HttpMsg *
+ actualReplyHeader(const AccessLogEntry::Pointer &al)
+ {
+-    const HttpMsg *msg = al->reply;
++    const HttpMsg *msg = al->reply.getRaw();
+ #if ICAP_CLIENT
+     // al->icap.reqMethod is methodNone in access.log context
+     if (!msg && al->icap.reqMethod == Adaptation::methodReqmod)
+@@ -853,24 +853,35 @@ Format::Format::assemble(MemBuf &mb, const AccessLogEntry::Pointer &al, int logS
+             } else
+ #endif
+             {
++                // just headers without start-line and CRLF
++                // XXX: reconcile with 'headers.request;
+                 quote = 1;
+             }
+             break;
+ 
+         case LFT_ADAPTED_REQUEST_ALL_HEADERS:
++            // just headers without start-line and CRLF
++            // XXX: reconcile with 'headers.adapted_request;
+             quote = 1;
+             break;
+ 
+-        case LFT_REPLY_ALL_HEADERS:
+-            out = al->headers.reply;
++        case LFT_REPLY_ALL_HEADERS: {
++            MemBuf allHeaders;
++            allHeaders.init();
++            // status-line + headers + CRLF
++            // XXX: reconcile with '>h' and '>ha'
++            al->packReplyHeaders(allHeaders);
++            sb.assign(allHeaders.content(), allHeaders.contentSize());
++            out = sb.c_str();
+ #if ICAP_CLIENT
+             if (!out && al->icap.reqMethod == Adaptation::methodReqmod)
+                 out = al->headers.adapted_request;
+ #endif
+             quote = 1;
+-            break;
++        }
++        break;
+ 
+         case LFT_USER_NAME:
+ #if USE_AUTH
+diff --git a/src/http.cc b/src/http.cc
+index 017e492..877172d 100644
+--- a/src/http.cc
++++ b/src/http.cc
+@@ -775,6 +775,9 @@ HttpStateData::processReplyHeader()
+ void
+ HttpStateData::handle1xx(HttpReply *reply)
+ {
++    if (fwd->al)
++        fwd->al->reply = reply;
++
+     HttpReply::Pointer msg(reply); // will destroy reply if unused
+ 
+     // one 1xx at a time: we must not be called while waiting for previous 1xx
 diff --git a/src/icmp/net_db.cc b/src/icmp/net_db.cc
-index 7dc42a2..ce8067a 100644
+index 7dc42a2..52595f6 100644
 --- a/src/icmp/net_db.cc
 +++ b/src/icmp/net_db.cc
 @@ -33,6 +33,7 @@
@@ -1719,15 +2359,20 @@ index 7dc42a2..ce8067a 100644
  typedef enum {
      STATE_NONE,
      STATE_HEADER,
-@@ -72,7 +71,6 @@ public:
-         buf_ofs(0),
+@@ -67,12 +66,8 @@ public:
+         e(NULL),
+         sc(NULL),
+         r(theReq),
+-        used(0),
+-        buf_sz(NETDB_REQBUF_SZ),
+-        buf_ofs(0),
          connstate(STATE_HEADER)
      {
 -        *buf = 0;
  
          assert(NULL != r);
          HTTPMSGLOCK(r);
-@@ -92,10 +90,10 @@ public:
+@@ -92,10 +87,10 @@ public:
      StoreEntry *e;
      store_client *sc;
      HttpRequest *r;
@@ -1742,7 +2387,7 @@ index 7dc42a2..ce8067a 100644
      netdb_conn_state_t connstate;
  };
  
-@@ -698,24 +696,20 @@ netdbExchangeHandleReply(void *data, StoreIOBuffer receivedData)
+@@ -698,24 +693,19 @@ netdbExchangeHandleReply(void *data, StoreIOBuffer receivedData)
      Ip::Address addr;
  
      netdbExchangeState *ex = (netdbExchangeState *)data;
@@ -1765,18 +2410,17 @@ index 7dc42a2..ce8067a 100644
      rec_sz += 1 + sizeof(struct in_addr);
      rec_sz += 1 + sizeof(int);
      rec_sz += 1 + sizeof(int);
-+    // to make progress without growing buffer space, we must parse at least one record per call
 +    Assure(rec_sz <= ex->parsingBuffer.capacity());
      debugs(38, 3, "netdbExchangeHandleReply: " << receivedData.length << " read bytes");
  
      if (!cbdataReferenceValid(ex->p)) {
-@@ -726,64 +720,28 @@ netdbExchangeHandleReply(void *data, StoreIOBuffer receivedData)
+@@ -726,64 +716,29 @@ netdbExchangeHandleReply(void *data, StoreIOBuffer receivedData)
  
      debugs(38, 3, "netdbExchangeHandleReply: for '" << ex->p->host << ":" << ex->p->http_port << "'");
  
 -    if (receivedData.length == 0 && !receivedData.flags.error) {
--        debugs(38, 3, "netdbExchangeHandleReply: Done");
 +    if (receivedData.flags.error) {
+         debugs(38, 3, "netdbExchangeHandleReply: Done");
          delete ex;
          return;
      }
@@ -1833,7 +2477,8 @@ index 7dc42a2..ce8067a 100644
 +        if (scode != Http::scOkay) {
 +            delete ex;
              return;
-         }
+-        }
++         }
 +        ex->connstate = STATE_BODY;
      }
  
@@ -1846,7 +2491,7 @@ index 7dc42a2..ce8067a 100644
      /* If we get here, we have some body to parse .. */
      debugs(38, 5, "netdbExchangeHandleReply: start parsing loop, size = " << size);
  
-@@ -792,6 +750,7 @@ netdbExchangeHandleReply(void *data, StoreIOBuffer receivedData)
+@@ -792,6 +747,7 @@ netdbExchangeHandleReply(void *data, StoreIOBuffer receivedData)
          addr.setAnyAddr();
          hops = rtt = 0.0;
  
@@ -1854,7 +2499,7 @@ index 7dc42a2..ce8067a 100644
          for (o = 0; o < rec_sz;) {
              switch ((int) *(p + o)) {
  
-@@ -829,8 +788,6 @@ netdbExchangeHandleReply(void *data, StoreIOBuffer receivedData)
+@@ -829,8 +785,6 @@ netdbExchangeHandleReply(void *data, StoreIOBuffer receivedData)
  
          assert(o == rec_sz);
  
@@ -1863,7 +2508,7 @@ index 7dc42a2..ce8067a 100644
          size -= rec_sz;
  
          p += rec_sz;
-@@ -838,32 +795,8 @@ netdbExchangeHandleReply(void *data, StoreIOBuffer receivedData)
+@@ -838,32 +792,8 @@ netdbExchangeHandleReply(void *data, StoreIOBuffer receivedData)
          ++nused;
      }
  
@@ -1898,7 +2543,7 @@ index 7dc42a2..ce8067a 100644
  
      debugs(38, 3, "netdbExchangeHandleReply: size left over in this buffer: " << size << " bytes");
  
-@@ -871,20 +804,26 @@ netdbExchangeHandleReply(void *data, StoreIOBuffer receivedData)
+@@ -871,20 +801,26 @@ netdbExchangeHandleReply(void *data, StoreIOBuffer receivedData)
             " entries, (x " << rec_sz << " bytes) == " << nused * rec_sz <<
             " bytes total");
  
@@ -1935,7 +2580,7 @@ index 7dc42a2..ce8067a 100644
  }
  
  #endif /* USE_ICMP */
-@@ -1296,14 +1235,9 @@ netdbExchangeStart(void *data)
+@@ -1296,14 +1232,9 @@ netdbExchangeStart(void *data)
      ex->e = storeCreateEntry(uri, uri, RequestFlags(), Http::METHOD_GET);
      assert(NULL != ex->e);
  
@@ -1951,8 +2596,68 @@ index 7dc42a2..ce8067a 100644
      ex->r->flags.loopDetected = true;   /* cheat! -- force direct */
  
      // XXX: send as Proxy-Authenticate instead
+diff --git a/src/internal.cc b/src/internal.cc
+index 81d5175..3a04ce0 100644
+--- a/src/internal.cc
++++ b/src/internal.cc
+@@ -9,6 +9,7 @@
+ /* DEBUG: section 76    Internal Squid Object handling */
+ 
+ #include "squid.h"
++#include "base/Assure.h"
+ #include "CacheManager.h"
+ #include "comm/Connection.h"
+ #include "errorpage.h"
+diff --git a/src/log/FormatHttpdCombined.cc b/src/log/FormatHttpdCombined.cc
+index 6639e88..70ea336 100644
+--- a/src/log/FormatHttpdCombined.cc
++++ b/src/log/FormatHttpdCombined.cc
+@@ -69,7 +69,10 @@ Log::Format::HttpdCombined(const AccessLogEntry::Pointer &al, Logfile * logfile)
+ 
+     if (Config.onoff.log_mime_hdrs) {
+         char *ereq = ::Format::QuoteMimeBlob(al->headers.request);
+-        char *erep = ::Format::QuoteMimeBlob(al->headers.reply);
++        MemBuf mb;
++        mb.init();
++        al->packReplyHeaders(mb);
++        auto erep = ::Format::QuoteMimeBlob(mb.content());
+         logfilePrintf(logfile, " [%s] [%s]\n", ereq, erep);
+         safe_free(ereq);
+         safe_free(erep);
+diff --git a/src/log/FormatHttpdCommon.cc b/src/log/FormatHttpdCommon.cc
+index 1613d0e..9e933a0 100644
+--- a/src/log/FormatHttpdCommon.cc
++++ b/src/log/FormatHttpdCommon.cc
+@@ -54,7 +54,10 @@ Log::Format::HttpdCommon(const AccessLogEntry::Pointer &al, Logfile * logfile)
+ 
+     if (Config.onoff.log_mime_hdrs) {
+         char *ereq = ::Format::QuoteMimeBlob(al->headers.request);
+-        char *erep = ::Format::QuoteMimeBlob(al->headers.reply);
++        MemBuf mb;
++        mb.init();
++        al->packReplyHeaders(mb);
++        auto erep = ::Format::QuoteMimeBlob(mb.content());
+         logfilePrintf(logfile, " [%s] [%s]\n", ereq, erep);
+         safe_free(ereq);
+         safe_free(erep);
+diff --git a/src/log/FormatSquidNative.cc b/src/log/FormatSquidNative.cc
+index 0ab97e4..23076b2 100644
+--- a/src/log/FormatSquidNative.cc
++++ b/src/log/FormatSquidNative.cc
+@@ -71,7 +71,10 @@ Log::Format::SquidNative(const AccessLogEntry::Pointer &al, Logfile * logfile)
+ 
+     if (Config.onoff.log_mime_hdrs) {
+         char *ereq = ::Format::QuoteMimeBlob(al->headers.request);
+-        char *erep = ::Format::QuoteMimeBlob(al->headers.reply);
++        MemBuf mb;
++        mb.init();
++        al->packReplyHeaders(mb);
++        auto erep = ::Format::QuoteMimeBlob(mb.content());
+         logfilePrintf(logfile, " [%s] [%s]\n", ereq, erep);
+         safe_free(ereq);
+         safe_free(erep);
 diff --git a/src/peer_digest.cc b/src/peer_digest.cc
-index 7b6314d..abfea4a 100644
+index 7b6314d..8a66277 100644
 --- a/src/peer_digest.cc
 +++ b/src/peer_digest.cc
 @@ -39,7 +39,6 @@ static EVH peerDigestCheck;
@@ -1973,18 +2678,25 @@ index 7b6314d..abfea4a 100644
      if (old_e)
          e->lastModified(old_e->lastModified());
  
-@@ -408,6 +410,11 @@ peerDigestHandleReply(void *data, StoreIOBuffer receivedData)
+@@ -408,11 +410,16 @@ peerDigestHandleReply(void *data, StoreIOBuffer receivedData)
      digest_read_state_t prevstate;
      int newsize;
  
+-    assert(fetch->pd && receivedData.data);
 +    if (receivedData.flags.error) {
 +        peerDigestFetchAbort(fetch, fetch->buf, "failure loading digest reply from Store");
 +        return;
 +    }
 +
-     assert(fetch->pd && receivedData.data);
++    assert(fetch->pd);
      /* The existing code assumes that the received pointer is
       * where we asked the data to be put
+      */
+-    assert(fetch->buf + fetch->bufofs == receivedData.data);
++    assert(!receivedData.data || fetch->buf + fetch->bufofs == receivedData.data);
+ 
+     /* Update the buffer size */
+     fetch->bufofs += receivedData.length;
 @@ -444,10 +451,6 @@ peerDigestHandleReply(void *data, StoreIOBuffer receivedData)
              retsize = peerDigestFetchReply(fetch, fetch->buf, fetch->bufofs);
              break;
@@ -2051,18 +2763,17 @@ index 7b6314d..abfea4a 100644
          } else if (status == Http::scOkay) {
              /* get rid of old entry if any */
  
-@@ -573,70 +578,15 @@ peerDigestFetchReply(void *data, char *buf, ssize_t size)
+@@ -573,67 +578,12 @@ peerDigestFetchReply(void *data, char *buf, ssize_t size)
                  fetch->old_entry->unlock("peerDigestFetchReply 200");
                  fetch->old_entry = NULL;
              }
-+
 +            fetch->state = DIGEST_READ_CBLOCK;
          } else {
              /* some kind of a bug */
              peerDigestFetchAbort(fetch, buf, reply->sline.reason());
              return -1;      /* XXX -1 will abort stuff in ReadReply! */
          }
- 
+-
 -        /* must have a ready-to-use store entry if we got here */
 -        /* can we stay with the old in-memory digest? */
 -        if (status == Http::scNotModified && fetch->pd->cd) {
@@ -2118,13 +2829,62 @@ index 7b6314d..abfea4a 100644
 -    if (size >= SM_PAGE_SIZE) {
 -        peerDigestFetchAbort(fetch, buf, "stored header too big");
 -        return -1;
+     }
+ 
+     return 0;       /* We need to read more to parse .. */
+@@ -755,7 +705,7 @@ peerDigestFetchedEnough(DigestFetchState * fetch, char *buf, ssize_t size, const
+     }
+ 
+     /* continue checking (maybe-successful eof case) */
+-    if (!reason && !size) {
++    if (!reason && !size && fetch->state != DIGEST_READ_REPLY) {
+         if (!pd->cd)
+             reason = "null digest?!";
+         else if (fetch->mask_offset != pd->cd->mask_size)
+diff --git a/src/servers/FtpServer.cc b/src/servers/FtpServer.cc
+index fab26cf..d3faa8d 100644
+--- a/src/servers/FtpServer.cc
++++ b/src/servers/FtpServer.cc
+@@ -777,12 +777,6 @@ Ftp::Server::handleReply(HttpReply *reply, StoreIOBuffer data)
+     Http::StreamPointer context = pipeline.front();
+     assert(context != nullptr);
+ 
+-    if (context->http && context->http->al != NULL &&
+-            !context->http->al->reply && reply) {
+-        context->http->al->reply = reply;
+-        HTTPMSGLOCK(context->http->al->reply);
 -    }
 -
--    return 0;       /* We need to read more to parse .. */
-+    return 0; // we consumed/used no buffered bytes
+     static ReplyHandler handlers[] = {
+         NULL, // fssBegin
+         NULL, // fssConnected
+diff --git a/src/servers/Http1Server.cc b/src/servers/Http1Server.cc
+index 7514779..e76fb3e 100644
+--- a/src/servers/Http1Server.cc
++++ b/src/servers/Http1Server.cc
+@@ -310,9 +310,6 @@ Http::One::Server::handleReply(HttpReply *rep, StoreIOBuffer receivedData)
+     }
+ 
+     assert(rep);
+-    HTTPMSGUNLOCK(http->al->reply);
+-    http->al->reply = rep;
+-    HTTPMSGLOCK(http->al->reply);
+     context->sendStartOfMessage(rep, receivedData);
+ }
+ 
+diff --git a/src/stmem.cc b/src/stmem.cc
+index d117c15..b627005 100644
+--- a/src/stmem.cc
++++ b/src/stmem.cc
+@@ -95,8 +95,6 @@ mem_hdr::freeDataUpto(int64_t target_offset)
+             break;
+     }
+ 
+-    assert (lowestOffset () <= target_offset);
+-
+     return lowestOffset ();
  }
  
- int
 diff --git a/src/store.cc b/src/store.cc
 index 1948447..b4c7f82 100644
 --- a/src/store.cc
@@ -2177,7 +2937,7 @@ index be177d8..ccfc2dd 100644
 +	ParsingBuffer.h \
  	Storage.h
 diff --git a/src/store/Makefile.in b/src/store/Makefile.in
-index bb4387d..1ea6c45 100644
+index bb4387d..1959c99 100644
 --- a/src/store/Makefile.in
 +++ b/src/store/Makefile.in
 @@ -163,7 +163,7 @@ CONFIG_CLEAN_FILES =
@@ -2189,15 +2949,15 @@ index bb4387d..1ea6c45 100644
  libstore_la_OBJECTS = $(am_libstore_la_OBJECTS)
  AM_V_lt = $(am__v_lt_@AM_V@)
  am__v_lt_ = $(am__v_lt_@AM_DEFAULT_V@)
-@@ -184,7 +184,7 @@ am__v_at_1 =
- DEFAULT_INCLUDES = 
+@@ -185,7 +185,7 @@ DEFAULT_INCLUDES =
  depcomp = $(SHELL) $(top_srcdir)/cfgaux/depcomp
  am__maybe_remake_depfiles = depfiles
--am__depfiles_remade = ./$(DEPDIR)/Controller.Plo ./$(DEPDIR)/Disk.Plo \
-+am__depfiles_remade = ./$(DEPDIR)/Controller.Plo ./$(DEPDIR)/Disk.Plo ./$(DEPDIR)/ParsingBuffer.Plo \
- 	./$(DEPDIR)/Disks.Plo ./$(DEPDIR)/LocalSearch.Plo
+ am__depfiles_remade = ./$(DEPDIR)/Controller.Plo ./$(DEPDIR)/Disk.Plo \
+-	./$(DEPDIR)/Disks.Plo ./$(DEPDIR)/LocalSearch.Plo
++	./$(DEPDIR)/Disks.Plo ./$(DEPDIR)/LocalSearch.Plo ./$(DEPDIR)/ParsingBuffer.Plo
  am__mv = mv -f
  CXXCOMPILE = $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) \
+ 	$(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS)
 @@ -776,6 +776,8 @@ libstore_la_SOURCES = \
  	forward.h \
  	LocalSearch.cc \
@@ -2233,10 +2993,10 @@ index bb4387d..1ea6c45 100644
  
 diff --git a/src/store/ParsingBuffer.cc b/src/store/ParsingBuffer.cc
 new file mode 100644
-index 0000000..e948fe2
+index 0000000..ca6be72
 --- /dev/null
 +++ b/src/store/ParsingBuffer.cc
-@@ -0,0 +1,198 @@
+@@ -0,0 +1,199 @@
 +/*
 + * Copyright (C) 1996-2023 The Squid Software Foundation and contributors
 + *
@@ -2249,7 +3009,7 @@ index 0000000..e948fe2
 +#include "sbuf/Stream.h"
 +#include "SquidMath.h"
 +#include "store/ParsingBuffer.h"
-+#include "base/Assure.h"
++
 +#include 
 +
 +// Several Store::ParsingBuffer() methods use assert() because the corresponding
@@ -2267,19 +3027,19 @@ index 0000000..e948fe2
 +const char *
 +Store::ParsingBuffer::memory() const
 +{
-+    return extraMemory_ ? extraMemory_->rawContent() : readerSuppliedMemory_.data;
++    return extraMemory_.second ? extraMemory_.first.rawContent() : readerSuppliedMemory_.data;
 +}
 +
 +size_t
 +Store::ParsingBuffer::capacity() const
 +{
-+    return extraMemory_ ? (extraMemory_->length() + extraMemory_->spaceSize()) : readerSuppliedMemory_.length;
++    return extraMemory_.second ? (extraMemory_.first.length() + extraMemory_.first.spaceSize()) : readerSuppliedMemory_.length;
 +}
 +
 +size_t
 +Store::ParsingBuffer::contentSize() const
 +{
-+    return extraMemory_ ? extraMemory_->length() : readerSuppliedMemoryContentSize_;
++    return extraMemory_.second ? extraMemory_.first.length() : readerSuppliedMemoryContentSize_;
 +}
 +
 +void
@@ -2295,10 +3055,10 @@ index 0000000..e948fe2
 +    assert(memory() + contentSize() == newBytes); // the new bytes start in our space
 +    // and now we know that newBytes is not nil either
 +
-+    if (extraMemory_)
-+        extraMemory_->rawAppendFinish(newBytes, newByteCount);
++    if (extraMemory_.second)
++        extraMemory_.first.rawAppendFinish(newBytes, newByteCount);
 +    else
-+        readerSuppliedMemoryContentSize_ = *IncreaseSum(readerSuppliedMemoryContentSize_, newByteCount);
++        readerSuppliedMemoryContentSize_ = IncreaseSum(readerSuppliedMemoryContentSize_, newByteCount).first;
 +
 +    assert(contentSize() <= capacity()); // paranoid
 +}
@@ -2307,8 +3067,8 @@ index 0000000..e948fe2
 +Store::ParsingBuffer::consume(const size_t parsedBytes)
 +{
 +    Assure(contentSize() >= parsedBytes); // more conservative than extraMemory_->consume()
-+    if (extraMemory_) {
-+        extraMemory_->consume(parsedBytes);
++    if (extraMemory_.second) {
++        extraMemory_.first.consume(parsedBytes);
 +    } else {
 +        readerSuppliedMemoryContentSize_ -= parsedBytes;
 +        if (parsedBytes && readerSuppliedMemoryContentSize_)
@@ -2320,8 +3080,8 @@ index 0000000..e948fe2
 +Store::ParsingBuffer::space()
 +{
 +    const auto size = spaceSize();
-+    const auto start = extraMemory_ ?
-+                       extraMemory_->rawAppendStart(size) :
++    const auto start = extraMemory_.second ?
++                       extraMemory_.first.rawAppendStart(size) :
 +                       (readerSuppliedMemory_.data + readerSuppliedMemoryContentSize_);
 +    return StoreIOBuffer(spaceSize(), 0, start);
 +}
@@ -2349,22 +3109,23 @@ index 0000000..e948fe2
 +Store::ParsingBuffer::growSpace(const size_t minimumSpaceSize)
 +{
 +    const auto capacityIncreaseAttempt = IncreaseSum(contentSize(), minimumSpaceSize);
-+    if (!capacityIncreaseAttempt)
++    if (!capacityIncreaseAttempt.second)
 +        throw TextException(ToSBuf("no support for a single memory block of ", contentSize(), '+', minimumSpaceSize, " bytes"), Here());
-+    const auto newCapacity = *capacityIncreaseAttempt;
++    const auto newCapacity = capacityIncreaseAttempt.first;
 +
 +    if (newCapacity <= capacity())
 +        return; // already have enough space; no reallocation is needed
 +
 +    debugs(90, 7, "growing to provide " << minimumSpaceSize << " in " << *this);
 +
-+    if (extraMemory_) {
-+        extraMemory_->reserveCapacity(newCapacity);
++    if (extraMemory_.second) {
++        extraMemory_.first.reserveCapacity(newCapacity);
 +    } else {
 +        SBuf newStorage;
 +        newStorage.reserveCapacity(newCapacity);
 +        newStorage.append(readerSuppliedMemory_.data, readerSuppliedMemoryContentSize_);
-+        extraMemory_ = std::move(newStorage);
++        extraMemory_.first = std::move(newStorage);
++        extraMemory_.second = true;
 +    }
 +    Assure(spaceSize() >= minimumSpaceSize);
 +}
@@ -2372,14 +3133,14 @@ index 0000000..e948fe2
 +SBuf
 +Store::ParsingBuffer::toSBuf() const
 +{
-+    return extraMemory_ ? *extraMemory_ : SBuf(content().data, content().length);
++    return extraMemory_.second ? extraMemory_.first : SBuf(content().data, content().length);
 +}
 +
 +size_t
 +Store::ParsingBuffer::spaceSize() const
 +{
-+    if (extraMemory_)
-+        return extraMemory_->spaceSize();
++    if (extraMemory_.second)
++        return extraMemory_.first.spaceSize();
 +
 +    assert(readerSuppliedMemoryContentSize_ <= readerSuppliedMemory_.length);
 +    return readerSuppliedMemory_.length - readerSuppliedMemoryContentSize_;
@@ -2408,12 +3169,12 @@ index 0000000..e948fe2
 +    result.length = bytesToPack;
 +    Assure(result.data);
 +
-+    if (!extraMemory_) {
++    if (!extraMemory_.second) {
 +        // no accumulated bytes copying because they are in readerSuppliedMemory_
 +        debugs(90, 7, "quickly exporting " << result.length << " bytes via " << readerSuppliedMemory_);
 +    } else {
-+        debugs(90, 7, "slowly exporting " << result.length << " bytes from " << extraMemory_->id << " back into " << readerSuppliedMemory_);
-+        memmove(result.data, extraMemory_->rawContent(), result.length);
++        debugs(90, 7, "slowly exporting " << result.length << " bytes from " << extraMemory_.first.id << " back into " << readerSuppliedMemory_);
++        memmove(result.data, extraMemory_.first.rawContent(), result.length);
 +    }
 +
 +    return result;
@@ -2424,9 +3185,9 @@ index 0000000..e948fe2
 +{
 +    os << "size=" << contentSize();
 +
-+    if (extraMemory_) {
++    if (extraMemory_.second) {
 +        os << " capacity=" << capacity();
-+        os << " extra=" << extraMemory_->id;
++        os << " extra=" << extraMemory_.first.id;
 +    }
 +
 +    // report readerSuppliedMemory_ (if any) even if we are no longer using it
@@ -2437,7 +3198,7 @@ index 0000000..e948fe2
 +
 diff --git a/src/store/ParsingBuffer.h b/src/store/ParsingBuffer.h
 new file mode 100644
-index 0000000..b8aa957
+index 0000000..b473ac6
 --- /dev/null
 +++ b/src/store/ParsingBuffer.h
 @@ -0,0 +1,128 @@
@@ -2555,7 +3316,7 @@ index 0000000..b8aa957
 +
 +    /// our internal buffer that takes over readerSuppliedMemory_ when the
 +    /// latter becomes full and more memory is needed
-+    std::optional<SBuf> extraMemory_;
++    std::pair<SBuf, bool> extraMemory_ = std::make_pair(SBuf(), false);
 +};
 +
 +inline std::ostream &
@@ -2582,90 +3343,110 @@ index 1422a85..db5ee1c 100644
  typedef ::StoreEntry Entry;
  typedef ::MemStore Memory;
 diff --git a/src/store_client.cc b/src/store_client.cc
-index 207c96b..1731c4c 100644
+index 1b54f04..a5f2440 100644
 --- a/src/store_client.cc
 +++ b/src/store_client.cc
-@@ -16,9 +16,11 @@
- #include "HttpRequest.h"
+@@ -9,6 +9,7 @@
+ /* DEBUG: section 90    Storage Manager Client-Side Interface */
+ 
+ #include "squid.h"
++#include "base/AsyncCbdataCalls.h"
+ #include "event.h"
+ #include "globals.h"
+ #include "HttpReply.h"
+@@ -16,8 +17,10 @@
  #include "MemBuf.h"
  #include "MemObject.h"
-+#include "sbuf/Stream.h"
  #include "mime_header.h"
++#include "sbuf/Stream.h"
  #include "profiler/Profiler.h"
  #include "SquidConfig.h"
 +#include "SquidMath.h"
  #include "StatCounters.h"
  #include "Store.h"
  #include "store_swapin.h"
-@@ -98,19 +100,6 @@ storeClientListAdd(StoreEntry * e, void *data)
-     return sc;
+@@ -39,17 +42,10 @@
+ static StoreIOState::STRCB storeClientReadBody;
+ static StoreIOState::STRCB storeClientReadHeader;
+ static void storeClientCopy2(StoreEntry * e, store_client * sc);
+-static EVH storeClientCopyEvent;
+ static bool CheckQuickAbortIsReasonable(StoreEntry * entry);
+ 
+ CBDATA_CLASS_INIT(store_client);
+ 
+-bool
+-store_client::memReaderHasLowerOffset(int64_t anOffset) const
+-{
+-    return getType() == STORE_MEM_CLIENT && copyInto.offset < anOffset;
+-}
+-
+ int
+ store_client::getType() const
+ {
+@@ -105,25 +101,35 @@ storeClientListAdd(StoreEntry * e, void *data)
  }
  
--/// schedules asynchronous STCB call to relay disk or memory read results
--/// \param outcome an error signal (if negative), an EOF signal (if zero), or the number of bytes read
--void
--store_client::callback(const ssize_t outcome)
--{
--    if (outcome > 0)
--        return noteCopiedBytes(outcome);
--
--    if (outcome < 0)
--        return fail();
--
--    noteEof();
--}
- /// finishCallback() wrapper; TODO: Add NullaryMemFunT for non-jobs.
  void
- store_client::FinishCallback(store_client * const sc)
-@@ -125,14 +114,20 @@ store_client::finishCallback()
-     Assure(_callback.callback_handler);
-     Assure(_callback.notifier);
+-store_client::callback(ssize_t sz, bool error)
++store_client::FinishCallback(store_client * const sc)
+ {
+-    size_t bSz = 0;
++    sc->finishCallback();
++}
  
--    // callers are not ready to handle a content+error combination
--    Assure(object_ok || !copiedSize);
--
--    StoreIOBuffer result(copiedSize, copyInto.offset, copyInto.data);
+-    if (sz >= 0 && !error)
+-        bSz = sz;
++void
++store_client::finishCallback()
++{
++    Assure(_callback.callback_handler);
++    Assure(_callback.notifier);
+ 
+-    StoreIOBuffer result(bSz, 0 ,copyInto.data);
 +    // XXX: Some legacy code relies on zero-length buffers having nil data
 +    // pointers. Some other legacy code expects "correct" result.offset even
 +    // when there is no body to return. Accommodate all those expectations.
 +    auto result = StoreIOBuffer(0, copyInto.offset, nullptr);
-+    if (object_ok && parsingBuffer && parsingBuffer->contentSize())
-+        result = parsingBuffer->packBack();
-     result.flags.error = object_ok ? 0 : 1;
--    copiedSize = 0;
++    if (object_ok && parsingBuffer.second && parsingBuffer.first.contentSize())
++        result = parsingBuffer.first.packBack();
++    result.flags.error = object_ok ? 0 : 1;
  
--    cmp_offset = result.offset + result.length;
+-    if (sz < 0 || error)
+-        result.flags.error = 1;
 +    // no HTTP headers and no body bytes (but not because there was no space)
 +    atEof_ = !sendingHttpHeaders() && !result.length && copyInto.length;
 +
-+    parsingBuffer.reset();
++    parsingBuffer.second = false;
 +    ++answers;
-+
+ 
+-    result.offset = cmp_offset;
+-    assert(_callback.pending());
+-    cmp_offset = copyInto.offset + bSz;
      STCB *temphandler = _callback.callback_handler;
      void *cbdata = _callback.callback_data;
-     _callback = Callback(NULL, NULL);
-@@ -144,35 +139,15 @@ store_client::finishCallback()
+-    _callback = Callback(NULL, NULL);
+-    copyInto.data = NULL;
++    _callback = Callback(nullptr, nullptr);
++    copyInto.data = nullptr;
+ 
+     if (cbdataReferenceValid(cbdata))
+         temphandler(cbdata, result);
+@@ -131,32 +137,18 @@ store_client::callback(ssize_t sz, bool error)
      cbdataReferenceDone(cbdata);
  }
  
--/// schedules asynchronous STCB call to relay a successful disk or memory read
--/// \param bytesCopied the number of response bytes copied into copyInto
--void
--store_client::noteCopiedBytes(const size_t bytesCopied)
+-static void
+-storeClientCopyEvent(void *data)
 -{
--    debugs(90, 5, bytesCopied);
--    Assure(bytesCopied > 0);
--    Assure(!copiedSize);
--    copiedSize = bytesCopied;
--    noteNews();
--}
+-    store_client *sc = (store_client *)data;
+-    debugs(90, 3, "storeClientCopyEvent: Running");
+-    assert (sc->flags.copy_event_pending);
+-    sc->flags.copy_event_pending = false;
 -
--void
--store_client::noteEof()
--{
--    debugs(90, 5, copiedSize);
--    Assure(!copiedSize);
--    noteNews();
+-    if (!sc->_callback.pending())
+-        return;
+-
+-    storeClientCopy2(sc->entry, sc);
 -}
 -
  store_client::store_client(StoreEntry *e) :
@@ -2675,14 +3456,18 @@ index 207c96b..1731c4c 100644
  #endif
      entry(e),
      type(e->storeClientType()),
-     object_ok(true),
--    copiedSize(0)
+-    object_ok(true)
++    object_ok(true),
 +    atEof_(false),
 +    answers(0)
  {
      flags.disk_io_pending = false;
      flags.store_copying = false;
-@@ -221,16 +196,29 @@ store_client::copy(StoreEntry * anEntry,
+-    flags.copy_event_pending = false;
+     ++ entry->refcount;
+ 
+     if (getType() == STORE_DISK_CLIENT) {
+@@ -202,16 +194,33 @@ store_client::copy(StoreEntry * anEntry,
  #endif
  
      assert(!_callback.pending());
@@ -2714,11 +3499,15 @@ index 207c96b..1731c4c 100644
 +    // when we already can respond with HTTP headers.
 +    Assure(!copyInto.offset || answeredOnce());
 +
-+    parsingBuffer.emplace(copyInto);
++    parsingBuffer.first = Store::ParsingBuffer(copyInto);
++    parsingBuffer.second = true;
++
++    discardableHttpEnd_ = nextHttpReadOffset();
++    debugs(90, 7, "discardableHttpEnd_=" << discardableHttpEnd_);
  
      static bool copying (false);
      assert (!copying);
-@@ -258,33 +246,30 @@ store_client::copy(StoreEntry * anEntry,
+@@ -239,50 +248,41 @@ store_client::copy(StoreEntry * anEntry,
      // Add no code here. This object may no longer exist.
  }
  
@@ -2744,17 +3533,17 @@ index 207c96b..1731c4c 100644
 -    const bool canSwapIn = entry->hasDisk();
 -    if (len < 0)
 -        return canSwapIn;
--
--    if (copyInto.offset >= len)
--        return false; // sent everything there is
 +    if (!entry->hasDisk())
 +        return false; // cannot read anything from disk either
  
--    if (canSwapIn)
--        return true; // if we lack prefix, we can swap it in
+-    if (copyInto.offset >= len)
+-        return false; // sent everything there is
 +    if (entry->objectLen() >= 0 && copyInto.offset >= entry->contentLen())
 +        return false; // the disk cannot have byte(s) wanted by the client
  
+-    if (canSwapIn)
+-        return true; // if we lack prefix, we can swap it in
+-
 -    // If we cannot swap in, make sure we have what we want in RAM. Otherwise,
 -    // scheduleRead calls scheduleDiskRead which asserts without a swap file.
 -    const MemObject *mem = entry->mem_obj;
@@ -2766,22 +3555,57 @@ index 207c96b..1731c4c 100644
  }
  
  static void
-@@ -311,6 +296,14 @@ storeClientCopy2(StoreEntry * e, store_client * sc)
-     sc->doCopy(e);
- }
+ storeClientCopy2(StoreEntry * e, store_client * sc)
+ {
+     /* reentrancy not allowed  - note this could lead to
+-     * dropped events
++     * dropped notifications about response data availability
+      */
  
+-    if (sc->flags.copy_event_pending) {
+-        return;
+-    }
+-
+     if (sc->flags.store_copying) {
+-        sc->flags.copy_event_pending = true;
+-        debugs(90, 3, "storeClientCopy2: Queueing storeClientCopyEvent()");
+-        eventAdd("storeClientCopyEvent", storeClientCopyEvent, sc, 0.0, 0);
++        debugs(90, 3, "prevented recursive copying for " << *e);
+         return;
+     }
+ 
+@@ -295,39 +295,44 @@ storeClientCopy2(StoreEntry * e, store_client * sc)
+      * if the peer aborts, we want to give the client(s)
+      * everything we got before the abort condition occurred.
+      */
+-    /* Warning: doCopy may indirectly free itself in callbacks,
+-     * hence the lock to keep it active for the duration of
+-     * this function
+-     * XXX: Locking does not prevent calling sc destructor (it only prevents
+-     * freeing sc memory) so sc may become invalid from C++ p.o.v.
+-     */
+-    CbcPointer<store_client> tmpLock = sc;
+-    assert (!sc->flags.store_copying);
+     sc->doCopy(e);
+-    assert(!sc->flags.store_copying);
++}
++
 +/// Whether our answer, if sent right now, will announce the availability of
 +/// HTTP response headers (to the STCB callback) for the first time.
 +bool
 +store_client::sendingHttpHeaders() const
 +{
 +    return !answeredOnce() && entry->mem().baseReply().hdr_sz > 0;
-+}
-+
+ }
+ 
  void
  store_client::doCopy(StoreEntry *anEntry)
  {
-@@ -322,20 +315,22 @@ store_client::doCopy(StoreEntry *anEntry)
++    Assure(_callback.pending());
++    Assure(!flags.disk_io_pending);
++    Assure(!flags.store_copying);
++
+     assert (anEntry == entry);
      flags.store_copying = true;
      MemObject *mem = entry->mem_obj;
  
@@ -2799,7 +3623,7 @@ index 207c96b..1731c4c 100644
 +    if (!sendHttpHeaders && !moreToRead()) {
          /* There is no more to send! */
          debugs(33, 3, HERE << "There is no more to send!");
--        noteEof();
+-        callback(0);
 +        noteNews();
          flags.store_copying = false;
          return;
@@ -2811,7 +3635,7 @@ index 207c96b..1731c4c 100644
          debugs(90, 3, "store_client::doCopy: Waiting for more");
          flags.store_copying = false;
          return;
-@@ -357,7 +352,24 @@ store_client::doCopy(StoreEntry *anEntry)
+@@ -349,7 +354,24 @@ store_client::doCopy(StoreEntry *anEntry)
          if (!startSwapin())
              return; // failure
      }
@@ -2837,27 +3661,27 @@ index 207c96b..1731c4c 100644
  }
  
  /// opens the swapin "file" if possible; otherwise, fail()s and returns false
-@@ -397,18 +409,7 @@ store_client::noteSwapInDone(const bool error)
-     if (error)
-         fail();
-     else
--        noteEof();
--}
--
--void
+@@ -383,14 +405,13 @@ store_client::startSwapin()
+ }
+ 
+ void
 -store_client::scheduleRead()
--{
++store_client::noteSwapInDone(const bool error)
+ {
 -    MemObject *mem = entry->mem_obj;
 -
 -    if (copyInto.offset >= mem->inmem_lo && copyInto.offset < mem->endOffset())
 -        scheduleMemRead();
--    else
++    Assure(_callback.pending());
++    if (error)
++        fail();
+     else
 -        scheduleDiskRead();
 +        noteNews();
  }
  
  void
-@@ -433,15 +434,44 @@ store_client::scheduleDiskRead()
+@@ -415,15 +436,44 @@ store_client::scheduleDiskRead()
      flags.store_copying = false;
  }
  
@@ -2868,14 +3692,14 @@ index 207c96b..1731c4c 100644
 +    const auto &mem = entry->mem();
 +    const auto memReadOffset = nextHttpReadOffset();
 +    return mem.inmem_lo <= memReadOffset && memReadOffset < mem.endOffset() &&
-+           parsingBuffer->spaceSize();
++           parsingBuffer.first.spaceSize();
 +}
 +
 +/// The offset of the next stored HTTP response byte wanted by the client.
 +int64_t
 +store_client::nextHttpReadOffset() const
 +{
-+    Assure(parsingBuffer);
++    Assure(parsingBuffer.second);
 +    const auto &mem = entry->mem();
 +    const auto hdr_sz = mem.baseReply().hdr_sz;
 +    // Certain SMP cache manager transactions do not store HTTP headers in
@@ -2883,7 +3707,7 @@ index 207c96b..1731c4c 100644
 +    // In such cases, hdr_sz ought to be zero. In all other (known) cases,
 +    // mem_hdr contains HTTP response headers (positive hdr_sz if parsed)
 +    // followed by HTTP response body. This code math accommodates all cases.
-+    return NaturalSum(hdr_sz, copyInto.offset, parsingBuffer->contentSize()).value();
++    return NaturalSum<int64_t>(hdr_sz, copyInto.offset, parsingBuffer.first.contentSize()).first;
 +}
 +
 +/// Copies at least some of the requested body bytes from MemObject memory,
@@ -2896,25 +3720,25 @@ index 207c96b..1731c4c 100644
 -    /* What the client wants is in memory */
 -    /* Old style */
 -    debugs(90, 3, "store_client::doCopy: Copying normal from memory");
--    const auto sz = entry->mem_obj->data_hdr.copy(copyInto); // may be <= 0 per copy() API
+-    size_t sz = entry->mem_obj->data_hdr.copy(copyInto);
 -    callback(sz);
 -    flags.store_copying = false;
-+    Assure(parsingBuffer);
-+    const auto readInto = parsingBuffer->space().positionAt(nextHttpReadOffset());
++    Assure(parsingBuffer.second);
++    const auto readInto = parsingBuffer.first.space().positionAt(nextHttpReadOffset());
 +
 +    debugs(90, 3, "copying HTTP body bytes from memory into " << readInto);
 +    const auto sz = entry->mem_obj->data_hdr.copy(readInto);
 +    Assure(sz > 0); // our canReadFromMemory() precondition guarantees that
-+    parsingBuffer->appended(readInto.data, sz);
++    parsingBuffer.first.appended(readInto.data, sz);
  }
  
  void
-@@ -453,59 +483,136 @@ store_client::fileRead()
+@@ -435,65 +485,150 @@ store_client::fileRead()
      assert(!flags.disk_io_pending);
      flags.disk_io_pending = true;
  
 +    // mem->swap_hdr_sz is zero here during initial read(s)
-+    const auto nextStoreReadOffset = NaturalSum(mem->swap_hdr_sz, nextHttpReadOffset()).value();
++    const auto nextStoreReadOffset = NaturalSum<int64_t>(mem->swap_hdr_sz, nextHttpReadOffset()).first;
 +
 +    // XXX: If fileRead() is called when we do not yet know mem->swap_hdr_sz,
 +    // then we must start reading from disk offset zero to learn it: we cannot
@@ -2927,7 +3751,6 @@ index 207c96b..1731c4c 100644
 +    // longer do that because trimMemory() path checks lowestMemReaderOffset().
 +    // It is also misplaced: We are not swapping out anything here and should
 +    // not care about any swapout invariants.
-+
      if (mem->swap_hdr_sz != 0)
          if (entry->swappingOut())
 -            assert(mem->swapout.sio->offset() > copyInto.offset + (int64_t)mem->swap_hdr_sz);
@@ -2940,10 +3763,10 @@ index 207c96b..1731c4c 100644
 +    // * performance effects of larger disk reads may be negative somewhere.
 +    const decltype(StoreIOBuffer::length) maxReadSize = SM_PAGE_SIZE;
 +
-+    Assure(parsingBuffer);
++    Assure(parsingBuffer.second);
 +    // also, do not read more than we can return (via a copyInto.length buffer)
 +    const auto readSize = std::min(copyInto.length, maxReadSize);
-+    lastDiskRead = parsingBuffer->makeSpace(readSize).positionAt(nextStoreReadOffset);
++    lastDiskRead = parsingBuffer.first.makeSpace(readSize).positionAt(nextStoreReadOffset);
 +    debugs(90, 5, "into " << lastDiskRead);
  
      storeRead(swapin_sio,
@@ -2962,39 +3785,45 @@ index 207c96b..1731c4c 100644
 -store_client::readBody(const char *, ssize_t len)
 +store_client::readBody(const char * const buf, const ssize_t lastIoResult)
  {
-     int parsed_header = 0;
- 
+-    int parsed_header = 0;
+-
 -    // Don't assert disk_io_pending here.. may be called by read_header
 +    Assure(flags.disk_io_pending);
      flags.disk_io_pending = false;
      assert(_callback.pending());
 -    debugs(90, 3, "storeClientReadBody: len " << len << "");
-+    Assure(parsingBuffer);
-+    debugs(90, 3, "got " << lastIoResult << " using " << *parsingBuffer);
-+    if (lastIoResult < 0)
-+        return fail();
++    Assure(parsingBuffer.second);
++    debugs(90, 3, "got " << lastIoResult << " using " << parsingBuffer.first);
  
 -    if (len < 0)
-+    if (!lastIoResult) {
-+        if (answeredOnce())
-+            return noteNews();
-+
-+        debugs(90, DBG_CRITICAL, "ERROR: Truncated HTTP headers in on-disk object");
++    if (lastIoResult < 0)
          return fail();
-+    }
-+    assert(lastDiskRead.data == buf);
-+    lastDiskRead.length = lastIoResult;
  
 -    if (copyInto.offset == 0 && len > 0 && entry->getReply()->sline.status() == Http::scNone) {
 -        /* Our structure ! */
 -        HttpReply *rep = (HttpReply *) entry->getReply(); // bypass const
-+    parsingBuffer->appended(buf, lastIoResult);
++    if (!lastIoResult) {
++        if (answeredOnce())
++            return noteNews();
  
 -        if (!rep->parseCharBuf(copyInto.data, headersEnd(copyInto.data, len))) {
 -            debugs(90, DBG_CRITICAL, "Could not parse headers from on disk object");
 -        } else {
 -            parsed_header = 1;
 -        }
++        debugs(90, DBG_CRITICAL, "ERROR: Truncated HTTP headers in on-disk object");
++        return fail();
+     }
+ 
+-    const HttpReply *rep = entry->getReply();
+-    if (len > 0 && rep && entry->mem_obj->inmem_lo == 0 && entry->objectLen() <= (int64_t)Config.Store.maxInMemObjSize && Config.onoff.memory_cache_disk) {
+-        storeGetMemSpace(len);
+-        // The above may start to free our object so we need to check again
++    assert(lastDiskRead.data == buf);
++    lastDiskRead.length = lastIoResult;
++
++    parsingBuffer.first.appended(buf, lastIoResult);
++
 +    // we know swap_hdr_sz by now and were reading beyond swap metadata because
 +    // readHead() would have been called otherwise (to read swap metadata)
 +    const auto swap_hdr_sz = entry->mem().swap_hdr_sz;
@@ -3022,15 +3851,12 @@ index 207c96b..1731c4c 100644
 +    if (!answeredOnce()) {
 +        // All on-disk responses have HTTP headers. First disk body read(s)
 +        // include HTTP headers that we must parse (if needed) and skip.
-+        const auto haveHttpHeaders = entry->mem_obj->baseReply().pstate == Http::Message::psParsed;
++        const auto haveHttpHeaders = entry->mem_obj->baseReply().pstate == psParsed;
 +        if (!haveHttpHeaders && !parseHttpHeadersFromDisk())
 +            return;
 +        skipHttpHeadersFromDisk();
-     }
- 
-     const HttpReply *rep = entry->getReply();
--    if (len > 0 && rep && entry->mem_obj->inmem_lo == 0 && entry->objectLen() <= (int64_t)Config.Store.maxInMemObjSize && Config.onoff.memory_cache_disk) {
--        storeGetMemSpace(len);
++    }
++
 +    noteNews();
 +}
 +
@@ -3052,7 +3878,7 @@ index 207c96b..1731c4c 100644
 +        // purge mem_hdr bytes of a locked entry, and we do lock ours. And
 +        // inmem_lo offset itself should not be relevant to appending new bytes.
 +        //
-         // The above may start to free our object so we need to check again
++        // recheck for the above call may purge entry's data from the memory cache
          if (entry->mem_obj->inmem_lo == 0) {
 -            /* Copy read data back into memory.
 -             * copyInto.offset includes headers, which is what mem cache needs
@@ -3060,34 +3886,76 @@ index 207c96b..1731c4c 100644
 -            int64_t mem_offset = entry->mem_obj->endOffset();
 -            if ((copyInto.offset == mem_offset) || (parsed_header && mem_offset == rep->hdr_sz)) {
 -                entry->mem_obj->write(StoreIOBuffer(len, copyInto.offset, copyInto.data));
+-            }
 +            // XXX: This code assumes a non-shared memory cache.
 +            if (httpResponsePart.offset == entry->mem_obj->endOffset())
 +                entry->mem_obj->write(httpResponsePart);
-             }
          }
      }
- 
+-
 -    callback(len);
  }
  
  void
-@@ -615,38 +722,21 @@ store_client::readHeader(char const *buf, ssize_t len)
+ store_client::fail()
+ {
++    debugs(90, 3, (object_ok ? "once" : "again"));
++    if (!object_ok)
++        return; // we failed earlier; nothing to do now
++
+     object_ok = false;
++
++    noteNews();
++}
++
++/// if necessary and possible, informs the Store reader about copy() result
++void
++store_client::noteNews()
++{
+     /* synchronous open failures callback from the store,
+      * before startSwapin detects the failure.
+      * TODO: fix this inconsistent behaviour - probably by
+@@ -501,8 +636,20 @@ store_client::fail()
+      * not synchronous
+      */
+ 
+-    if (_callback.pending())
+-        callback(0, true);
++    if (!_callback.callback_handler) {
++        debugs(90, 5, "client lost interest");
++        return;
++    }
++
++    if (_callback.notifier) {
++        debugs(90, 5, "earlier news is being delivered by " << _callback.notifier);
++        return;
++    }
++
++    _callback.notifier = asyncCall(90, 4, "store_client::FinishCallback", cbdataDialer(store_client::FinishCallback, this));
++    ScheduleCallHere(_callback.notifier);
++
++    Assure(!_callback.pending());
+ }
+ 
+ static void
+@@ -573,38 +720,22 @@ store_client::readHeader(char const *buf, ssize_t len)
      if (!object_ok)
          return;
  
-+    Assure(parsingBuffer);
-+    debugs(90, 3, "got " << len << " using " << *parsingBuffer);
++    Assure(parsingBuffer.second);
++    debugs(90, 3, "got " << len << " using " << parsingBuffer.first);
 +
      if (len < 0)
          return fail();
  
-+    Assure(!parsingBuffer->contentSize());
-+    parsingBuffer->appended(buf, len);
++    Assure(!parsingBuffer.first.contentSize());
++    parsingBuffer.first.appended(buf, len);
      if (!unpackHeader(buf, len)) {
          fail();
          return;
      }
--
++    parsingBuffer.first.consume(mem->swap_hdr_sz);
+ 
 -    /*
 -     * If our last read got some data the client wants, then give
 -     * it to them, otherwise schedule another read.
@@ -3112,13 +3980,48 @@ index 207c96b..1731c4c 100644
 -     * know the swap header size.
 -     */
 -    fileRead();
-+    parsingBuffer->consume(mem->swap_hdr_sz);
-+    maybeWriteFromDiskToMemory(parsingBuffer->content());
++    maybeWriteFromDiskToMemory(parsingBuffer.first.content());
 +    handleBodyFromDisk();
  }
  
  int
-@@ -903,6 +993,63 @@ CheckQuickAbortIsReasonable(StoreEntry * entry)
+@@ -673,10 +804,12 @@ storeUnregister(store_client * sc, StoreEntry * e, void *data)
+         ++statCounter.swap.ins;
+     }
+ 
+-    if (sc->_callback.pending()) {
+-        /* callback with ssize = -1 to indicate unexpected termination */
+-        debugs(90, 3, "store_client for " << *e << " has a callback");
+-        sc->fail();
++    if (sc->_callback.callback_handler || sc->_callback.notifier) {
++        debugs(90, 3, "forgetting store_client callback for " << *e);
++        // Do not notify: Callers want to stop copying and forget about this
++        // pending copy request. Some would mishandle a notification from here.
++        if (sc->_callback.notifier)
++            sc->_callback.notifier->cancel("storeUnregister");
+     }
+ 
+ #if STORE_CLIENT_LIST_DEBUG
+@@ -684,6 +817,8 @@ storeUnregister(store_client * sc, StoreEntry * e, void *data)
+ 
+ #endif
+ 
++    // XXX: We might be inside sc store_client method somewhere up the call
++    // stack. TODO: Convert store_client to AsyncJob to make destruction async.
+     delete sc;
+ 
+     assert(e->locked());
+@@ -740,6 +875,9 @@ StoreEntry::invokeHandlers()
+ 
+         if (sc->flags.disk_io_pending)
+             continue;
++        
++        if (sc->flags.store_copying)
++            continue;
+ 
+         storeClientCopy2(this, sc);
+     }
+@@ -847,6 +985,63 @@ CheckQuickAbortIsReasonable(StoreEntry * entry)
      return true;
  }
  
@@ -3136,8 +4039,8 @@ index 207c96b..1731c4c 100644
 +        // cache a header that we cannot parse and get here. Same for MemStore.
 +        debugs(90, DBG_CRITICAL, "ERROR: Cannot parse on-disk HTTP headers" <<
 +               Debug::Extra << "exception: " << CurrentException <<
-+               Debug::Extra << "raw input size: " << parsingBuffer->contentSize() << " bytes" <<
-+               Debug::Extra << "current buffer capacity: " << parsingBuffer->capacity() << " bytes");
++               Debug::Extra << "raw input size: " << parsingBuffer.first.contentSize() << " bytes" <<
++               Debug::Extra << "current buffer capacity: " << parsingBuffer.first.capacity() << " bytes");
 +        fail();
 +        return false;
 +    }
@@ -3148,10 +4051,10 @@ index 207c96b..1731c4c 100644
 +bool
 +store_client::tryParsingHttpHeaders()
 +{
-+    Assure(parsingBuffer);
++    Assure(parsingBuffer.second);
 +    Assure(!copyInto.offset); // otherwise, parsingBuffer cannot have HTTP response headers
-+    auto &adjustableReply = entry->mem().adjustableBaseReply();
-+    if (adjustableReply.parseTerminatedPrefix(parsingBuffer->c_str(), parsingBuffer->contentSize()))
++    auto &adjustableReply = entry->mem().baseReply();
++    if (adjustableReply.parseTerminatedPrefix(parsingBuffer.first.c_str(), parsingBuffer.first.contentSize()))
 +        return true;
 +
 +    // TODO: Optimize by checking memory as well. For simplicity sake, we
@@ -3168,12 +4071,12 @@ index 207c96b..1731c4c 100644
 +{
 +    const auto hdr_sz = entry->mem_obj->baseReply().hdr_sz;
 +    Assure(hdr_sz > 0); // all on-disk responses have HTTP headers
-+    if (Less(parsingBuffer->contentSize(), hdr_sz)) {
-+        debugs(90, 5, "discovered " << hdr_sz << "-byte HTTP headers in memory after reading some of them from disk: " << *parsingBuffer);
-+        parsingBuffer->consume(parsingBuffer->contentSize()); // skip loaded HTTP header prefix
++    if (Less(parsingBuffer.first.contentSize(), hdr_sz)) {
++        debugs(90, 5, "discovered " << hdr_sz << "-byte HTTP headers in memory after reading some of them from disk: " << parsingBuffer.first);
++        parsingBuffer.first.consume(parsingBuffer.first.contentSize()); // skip loaded HTTP header prefix
 +    } else {
-+        parsingBuffer->consume(hdr_sz); // skip loaded HTTP headers
-+        const auto httpBodyBytesAfterHeader = parsingBuffer->contentSize(); // may be zero
++        parsingBuffer.first.consume(hdr_sz); // skip loaded HTTP headers
++        const auto httpBodyBytesAfterHeader = parsingBuffer.first.contentSize(); // may be zero
 +        Assure(httpBodyBytesAfterHeader <= copyInto.length);
 +        debugs(90, 5, "read HTTP body prefix: " << httpBodyBytesAfterHeader);
 +    }
@@ -3182,6 +4085,51 @@ index 207c96b..1731c4c 100644
  void
  store_client::dumpStats(MemBuf * output, int clientNumber) const
  {
+@@ -864,8 +1059,8 @@ store_client::dumpStats(MemBuf * output, int clientNumber) const
+     if (flags.store_copying)
+         output->append(" store_copying", 14);
+ 
+-    if (flags.copy_event_pending)
+-        output->append(" copy_event_pending", 19);
++    if (_callback.notifier)
++        output->append(" notifying", 10);
+ 
+     output->append("\n",1);
+ }
+@@ -873,12 +1068,19 @@ store_client::dumpStats(MemBuf * output, int clientNumber) const
+ bool
+ store_client::Callback::pending() const
+ {
+-    return callback_handler && callback_data;
++    return callback_handler && !notifier;
+ }
+ 
+ store_client::Callback::Callback(STCB *function, void *data) : callback_handler(function), callback_data (data) {}
+ 
+ #if USE_DELAY_POOLS
++int
++store_client::bytesWanted() const
++{
++    // TODO: To avoid using stale copyInto, return zero if !_callback.pending()?
++    return delayId.bytesWanted(0, copyInto.length);
++}
++
+ void
+ store_client::setDelayId(DelayId delay_id)
+ {
+diff --git a/src/store_swapin.cc b/src/store_swapin.cc
+index a05d7e3..cd32e94 100644
+--- a/src/store_swapin.cc
++++ b/src/store_swapin.cc
+@@ -56,7 +56,7 @@ storeSwapInFileClosed(void *data, int errflag, StoreIOState::Pointer)
+ 
+     if (sc->_callback.pending()) {
+         assert (errflag <= 0);
+-        sc->callback(0, errflag ? true : false);
++        sc->noteSwapInDone(errflag);
+     }
+ 
+     ++statCounter.swap.ins;
 diff --git a/src/tests/stub_HttpReply.cc b/src/tests/stub_HttpReply.cc
 index 8ca7f9e..5cde8e6 100644
 --- a/src/tests/stub_HttpReply.cc
@@ -3194,8 +4142,26 @@ index 8ca7f9e..5cde8e6 100644
  bool HttpReply::parseFirstLine(const char *start, const char *end) STUB_RETVAL(false)
  void HttpReply::hdrCacheInit() STUB
  HttpReply * HttpReply::clone() const STUB_RETVAL(NULL)
+diff --git a/src/tests/stub_store_client.cc b/src/tests/stub_store_client.cc
+index 2a13874..debe24e 100644
+--- a/src/tests/stub_store_client.cc
++++ b/src/tests/stub_store_client.cc
+@@ -34,7 +34,12 @@ void storeLogOpen(void) STUB
+ void storeDigestInit(void) STUB
+ void storeRebuildStart(void) STUB
+ void storeReplSetup(void) STUB
+-bool store_client::memReaderHasLowerOffset(int64_t anOffset) const STUB_RETVAL(false)
+ void store_client::dumpStats(MemBuf * output, int clientNumber) const STUB
+ int store_client::getType() const STUB_RETVAL(0)
++void store_client::noteSwapInDone(bool) STUB
++#if USE_DELAY_POOLS
++int store_client::bytesWanted() const STUB_RETVAL(0)
++#endif
++
++
+ 
 diff --git a/src/urn.cc b/src/urn.cc
-index 74453e1..9f5e89d 100644
+index 74453e1..6efdec1 100644
 --- a/src/urn.cc
 +++ b/src/urn.cc
 @@ -26,8 +26,6 @@
@@ -3252,19 +4218,19 @@ index 74453e1..9f5e89d 100644
      url_entry *urls;
      url_entry *u;
      url_entry *min_u;
-@@ -234,10 +224,8 @@ urnHandleReply(void *data, StoreIOBuffer result)
+@@ -234,10 +224,7 @@ urnHandleReply(void *data, StoreIOBuffer result)
      ErrorState *err;
      int i;
      int urlcnt = 0;
 -    char *buf = urnState->reqbuf;
 -    StoreIOBuffer tempBuffer;
- 
+-
 -    debugs(52, 3, "urnHandleReply: Called with size=" << result.length << ".");
 +    debugs(52, 3, result << " with " << *e);
  
      if (EBIT_TEST(urlres_e->flags, ENTRY_ABORTED) || result.flags.error) {
          delete urnState;
-@@ -250,59 +238,38 @@ urnHandleReply(void *data, StoreIOBuffer result)
+@@ -250,59 +237,39 @@ urnHandleReply(void *data, StoreIOBuffer result)
          return;
      }
  
@@ -3276,7 +4242,7 @@ index 74453e1..9f5e89d 100644
 -        delete urnState;
 -        return;
 -    }
-++    urnState->parsingBuffer.appended(result.data, result.length);
++    urnState->parsingBuffer.appended(result.data, result.length);
  
      /* If we haven't received the entire object (urn), copy more */
 -    if (urlres_e->store_status == STORE_PENDING) {
@@ -3294,6 +4260,7 @@ index 74453e1..9f5e89d 100644
 +            delete urnState;
 +            return;
 +        }
++
          storeClientCopy(urnState->sc, urlres_e,
 -                        tempBuffer,
 +                        remainingSpace,
@@ -3383,6 +4350,3 @@ index 74453e1..9f5e89d 100644
      return list;
  }
  
--- 
-2.39.3
-
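
Throughout the renamed squid-4.15-CVE-2023-5824.patch above, the backport swaps std::optional<SBuf> members (and their .value()/operator* accesses) for a std::pair<SBuf, bool> with .first/.second accesses, presumably because C++17 std::optional is unavailable on the target toolchain. Below is a minimal, self-contained sketch of that pair-plus-flag idiom; it uses std::string as a stand-in for SBuf, and every name in it is illustrative rather than taken from Squid.

    #include <iostream>
    #include <string>
    #include <utility>

    // extraMemory.second plays the role of std::optional's "has value" state;
    // extraMemory.first holds the value once the flag is set.
    static std::pair<std::string, bool> extraMemory = std::make_pair(std::string(), false);

    static void appendBytes(const std::string &bytes)
    {
        if (!extraMemory.second) {       // was: if (!extraMemory_) with std::optional
            extraMemory.first = bytes;   // engage the "optional"
            extraMemory.second = true;
        } else {
            extraMemory.first += bytes;  // keep accumulating into the engaged value
        }
    }

    int main()
    {
        appendBytes("hello ");
        appendBytes("world");
        std::cout << (extraMemory.second ? extraMemory.first : std::string("<empty>")) << "\n";
        return 0;
    }
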
diff --git a/SOURCES/squid-4.15-CVE-2024-25111.patch b/SOURCES/squid-4.15-CVE-2024-25111.patch
new file mode 100644
index 0000000..e8ea010
--- /dev/null
+++ b/SOURCES/squid-4.15-CVE-2024-25111.patch
@@ -0,0 +1,193 @@
+diff --git a/src/http.cc b/src/http.cc
+index b006300..023e411 100644
+--- a/src/http.cc
++++ b/src/http.cc
+@@ -52,6 +52,7 @@
+ #include "rfc1738.h"
+ #include "SquidConfig.h"
+ #include "SquidTime.h"
++#include "SquidMath.h"
+ #include "StatCounters.h"
+ #include "Store.h"
+ #include "StrList.h"
+@@ -1150,18 +1151,26 @@ HttpStateData::readReply(const CommIoCbParams &io)
+      * Plus, it breaks our lame *HalfClosed() detection
+      */
+ 
+-    Must(maybeMakeSpaceAvailable(true));
+-    CommIoCbParams rd(this); // will be expanded with ReadNow results
+-    rd.conn = io.conn;
+-    rd.size = entry->bytesWanted(Range<size_t>(0, inBuf.spaceSize()));
++    size_t moreDataPermission = 0;
++    if ((!canBufferMoreReplyBytes(&moreDataPermission) || !moreDataPermission)) {
++        abortTransaction("ready to read required data, but the read buffer is full and cannot be drained");
++        return;
++    }
++
++    const auto readSizeMax = maybeMakeSpaceAvailable(moreDataPermission);
++    // TODO: Move this logic inside maybeMakeSpaceAvailable():
++    const auto readSizeWanted = readSizeMax ? entry->bytesWanted(Range<size_t>(0, readSizeMax)) : 0;
+ 
+-    if (rd.size <= 0) {
++    if (readSizeWanted <= 0) {
+         assert(entry->mem_obj);
+         AsyncCall::Pointer nilCall;
+         entry->mem_obj->delayRead(DeferredRead(readDelayed, this, CommRead(io.conn, NULL, 0, nilCall)));
+         return;
+     }
+ 
++    CommIoCbParams rd(this); // will be expanded with ReadNow results
++    rd.conn = io.conn;
++    rd.size = readSizeWanted;
+     switch (Comm::ReadNow(rd, inBuf)) {
+     case Comm::INPROGRESS:
+         if (inBuf.isEmpty())
+@@ -1520,8 +1529,11 @@ HttpStateData::maybeReadVirginBody()
+     if (!Comm::IsConnOpen(serverConnection) || fd_table[serverConnection->fd].closing())
+         return;
+ 
+-    if (!maybeMakeSpaceAvailable(false))
++    size_t moreDataPermission = 0;
++    if ((!canBufferMoreReplyBytes(&moreDataPermission)) || !moreDataPermission) {
++        abortTransaction("more response bytes required, but the read buffer is full and cannot be drained");
+         return;
++    }
+ 
+     // XXX: get rid of the do_next_read flag
+     // check for the proper reasons preventing read(2)
+@@ -1539,40 +1551,79 @@ HttpStateData::maybeReadVirginBody()
+     Comm::Read(serverConnection, call);
+ }
+ 
++/// Desired inBuf capacity based on various capacity preferences/limits:
++/// * a smaller buffer may not hold enough for look-ahead header/body parsers;
++/// * a smaller buffer may result in inefficient tiny network reads;
++/// * a bigger buffer may waste memory;
++/// * a bigger buffer may exceed SBuf storage capabilities (SBuf::maxSize);
++size_t
++HttpStateData::calcReadBufferCapacityLimit() const
++{
++    if (!flags.headers_parsed)
++        return Config.maxReplyHeaderSize;
++
++    // XXX: Our inBuf is not used to maintain the read-ahead gap, and using
++    // Config.readAheadGap like this creates huge read buffers for large
++    // read_ahead_gap values. TODO: Switch to using tcp_recv_bufsize as the
++    // primary read buffer capacity factor.
++    //
++    // TODO: Cannot reuse throwing NaturalCast() here. Consider removing
++    // .value() dereference in NaturalCast() or add/use NaturalCastOrMax().
++    const auto configurationPreferences = NaturalSum<SBuf::size_type>(Config.readAheadGap).second ? NaturalSum<SBuf::size_type>(Config.readAheadGap).first : SBuf::maxSize;
++
++    // TODO: Honor TeChunkedParser look-ahead and trailer parsing requirements
++    // (when explicit configurationPreferences are set too low).
++
++    return std::min(configurationPreferences, SBuf::maxSize);
++}
++
++/// The maximum number of virgin reply bytes we may buffer before we violate
++/// the currently configured response buffering limits.
++/// \retval std::nullopt means that no more virgin response bytes can be read
++/// \retval 0 means that more virgin response bytes may be read later
++/// \retval >0 is the number of bytes that can be read now (subject to other constraints)
+ bool
+-HttpStateData::maybeMakeSpaceAvailable(bool doGrow)
++HttpStateData::canBufferMoreReplyBytes(size_t *maxReadSize) const
+ {
+-    // how much we are allowed to buffer
+-    const int limitBuffer = (flags.headers_parsed ? Config.readAheadGap : Config.maxReplyHeaderSize);
+-
+-    if (limitBuffer < 0 || inBuf.length() >= (SBuf::size_type)limitBuffer) {
+-        // when buffer is at or over limit already
+-        debugs(11, 7, "will not read up to " << limitBuffer << ". buffer has (" << inBuf.length() << "/" << inBuf.spaceSize() << ") from " << serverConnection);
+-        debugs(11, DBG_DATA, "buffer has {" << inBuf << "}");
+-        // Process next response from buffer
+-        processReply();
+-        return false;
++#if USE_ADAPTATION
++    // If we do not check this now, we may say the final "no" prematurely below
++    // because inBuf.length() will decrease as adaptation drains buffered bytes.
++    if (responseBodyBuffer) {
++        debugs(11, 3, "yes, but waiting for adaptation to drain read buffer");
++        *maxReadSize = 0; // yes, we may be able to buffer more (but later)
++        return true;
++    }
++#endif
++
++    const auto maxCapacity = calcReadBufferCapacityLimit();
++    if (inBuf.length() >= maxCapacity) {
++        debugs(11, 3, "no, due to a full buffer: " << inBuf.length() << '/' << inBuf.spaceSize() << "; limit: " << maxCapacity);
++        return false; // no, configuration prohibits buffering more
+     }
+ 
++    *maxReadSize = (maxCapacity - inBuf.length()); // positive
++    debugs(11, 7, "yes, may read up to " << *maxReadSize << " into " << inBuf.length() << '/' << inBuf.spaceSize());
++    return true; // yes, can read up to this many bytes (subject to other constraints)
++}
++
++/// prepare read buffer for reading
++/// \return the maximum number of bytes the caller should attempt to read
++/// \retval 0 means that the caller should delay reading
++size_t
++HttpStateData::maybeMakeSpaceAvailable(const size_t maxReadSize)
++{
+     // how much we want to read
+-    const size_t read_size = calcBufferSpaceToReserve(inBuf.spaceSize(), (limitBuffer - inBuf.length()));
++    const size_t read_size = calcBufferSpaceToReserve(inBuf.spaceSize(), maxReadSize);
+ 
+-    if (!read_size) {
++    if (read_size < 2) {
+         debugs(11, 7, "will not read up to " << read_size << " into buffer (" << inBuf.length() << "/" << inBuf.spaceSize() << ") from " << serverConnection);
+-        return false;
++        return 0;
+     }
+ 
+-    // just report whether we could grow or not, do not actually do it
+-    if (doGrow)
+-        return (read_size >= 2);
+-
+     // we may need to grow the buffer
+     inBuf.reserveSpace(read_size);
+-    debugs(11, 8, (!flags.do_next_read ? "will not" : "may") <<
+-           " read up to " << read_size << " bytes info buf(" << inBuf.length() << "/" << inBuf.spaceSize() <<
+-           ") from " << serverConnection);
+-
+-    return (inBuf.spaceSize() >= 2); // only read if there is 1+ bytes of space available
++    debugs(11, 7, "may read up to " << read_size << " bytes info buffer (" << inBuf.length() << "/" << inBuf.spaceSize() << ") from " << serverConnection);
++    return read_size;
+ }
+ 
+ /// called after writing the very last request byte (body, last-chunk, etc)
+diff --git a/src/http.h b/src/http.h
+index 8965b77..007d2e6 100644
+--- a/src/http.h
++++ b/src/http.h
+@@ -15,6 +15,8 @@
+ #include "http/StateFlags.h"
+ #include "sbuf/SBuf.h"
+ 
++#include 
++
+ class FwdState;
+ class HttpHeader;
+ 
+@@ -107,16 +109,9 @@ private:
+ 
+     void abortTransaction(const char *reason) { abortAll(reason); } // abnormal termination
+ 
+-    /**
+-     * determine if read buffer can have space made available
+-     * for a read.
+-     *
+-     * \param grow  whether to actually expand the buffer
+-     *
+-     * \return whether the buffer can be grown to provide space
+-     *         regardless of whether the grow actually happened.
+-     */
+-    bool maybeMakeSpaceAvailable(bool grow);
++    size_t calcReadBufferCapacityLimit() const;
++    bool canBufferMoreReplyBytes(size_t *maxReadSize) const;
++    size_t maybeMakeSpaceAvailable(size_t maxReadSize);
+ 
+     // consuming request body
+     virtual void handleMoreRequestBodyAvailable();
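
The squid-4.15-CVE-2024-25111.patch above replaces the single maybeMakeSpaceAvailable(bool) check with two steps: canBufferMoreReplyBytes() decides whether any more reply bytes may be buffered at all (and how many), and maybeMakeSpaceAvailable() then reserves space or returns 0 to delay the read. The stand-alone sketch below models that flow with plain integers; the 64 KB capacity and the helper names are invented for illustration and do not come from the patch.

    #include <cstddef>
    #include <iostream>

    static const std::size_t maxCapacity = 64 * 1024; // stand-in for calcReadBufferCapacityLimit()

    // Stand-in for canBufferMoreReplyBytes(): false means "stop reading";
    // true with *maxReadSize left at 0 would mean "maybe later".
    static bool canBufferMore(std::size_t buffered, std::size_t *maxReadSize)
    {
        if (buffered >= maxCapacity)
            return false;                       // configured limit already reached
        *maxReadSize = maxCapacity - buffered;  // positive number of bytes we may still buffer
        return true;
    }

    // Stand-in for maybeMakeSpaceAvailable(): returning 0 tells the caller to delay the read.
    static std::size_t spaceToRead(std::size_t maxReadSize)
    {
        return maxReadSize < 2 ? 0 : maxReadSize;
    }

    int main()
    {
        std::size_t permitted = 0;
        if (canBufferMore(16 * 1024, &permitted))
            std::cout << "may read up to " << spaceToRead(permitted) << " bytes\n";
        else
            std::cout << "read buffer is full and cannot be drained\n";
        return 0;
    }
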
diff --git a/SOURCES/squid-4.15-CVE-2024-25617.patch b/SOURCES/squid-4.15-CVE-2024-25617.patch
new file mode 100644
index 0000000..86e391a
--- /dev/null
+++ b/SOURCES/squid-4.15-CVE-2024-25617.patch
@@ -0,0 +1,105 @@
+diff --git a/src/SquidString.h b/src/SquidString.h
+index a791885..b9aef38 100644
+--- a/src/SquidString.h
++++ b/src/SquidString.h
+@@ -114,7 +114,16 @@ private:
+ 
+     size_type len_;  /* current length  */
+ 
+-    static const size_type SizeMax_ = 65535; ///< 64K limit protects some fixed-size buffers
++    /// An earlier 64KB limit was meant to protect some fixed-size buffers, but
++    /// (a) we do not know where those buffers are (or whether they still exist)
++    /// (b) too many String users unknowingly exceeded that limit and asserted.
++    /// We are now using a larger limit to reduce the number of (b) cases,
++    /// especially cases where "compact" lists of items grow 50% in size when we
++    /// convert them to canonical form. The new limit is selected to withstand
++    /// concatenation and ~50% expansion of two HTTP headers limited by default
++    /// request_header_max_size and reply_header_max_size settings.
++    static const size_type SizeMax_ = 3*64*1024 - 1;
++
+     /// returns true after increasing the first argument by extra if the sum does not exceed SizeMax_
+     static bool SafeAdd(size_type &base, size_type extra) { if (extra <= SizeMax_ && base <= SizeMax_ - extra) { base += extra; return true; } return false; }
+ 
+diff --git a/src/cache_cf.cc b/src/cache_cf.cc
+index a9c1b7e..46f07bb 100644
+--- a/src/cache_cf.cc
++++ b/src/cache_cf.cc
+@@ -935,6 +935,18 @@ configDoConfigure(void)
+                (uint32_t)Config.maxRequestBufferSize, (uint32_t)Config.maxRequestHeaderSize);
+     }
+ 
++    // Warn about the dangers of exceeding String limits when manipulating HTTP
++    // headers. Technically, we do not concatenate _requests_, so we could relax
++    // their check, but we keep the two checks the same for simplicity sake.
++    const auto safeRawHeaderValueSizeMax = (String::SizeMaxXXX()+1)/3;
++    // TODO: static_assert(safeRawHeaderValueSizeMax >= 64*1024); // no WARNINGs for default settings
++    if (Config.maxRequestHeaderSize > safeRawHeaderValueSizeMax)
++        debugs(3, DBG_CRITICAL, "WARNING: Increasing request_header_max_size beyond " << safeRawHeaderValueSizeMax <<
++               " bytes makes Squid more vulnerable to denial-of-service attacks; configured value: " << Config.maxRequestHeaderSize << " bytes");
++    if (Config.maxReplyHeaderSize > safeRawHeaderValueSizeMax)
++        debugs(3, DBG_CRITICAL, "WARNING: Increasing reply_header_max_size beyond " << safeRawHeaderValueSizeMax <<
++               " bytes makes Squid more vulnerable to denial-of-service attacks; configured value: " << Config.maxReplyHeaderSize << " bytes");
++
+     /*
+      * Disable client side request pipelining if client_persistent_connections OFF.
+      * Waste of resources queueing any pipelined requests when the first will close the connection.
+diff --git a/src/cf.data.pre b/src/cf.data.pre
+index bc2ddcd..d55b870 100644
+--- a/src/cf.data.pre
++++ b/src/cf.data.pre
+@@ -6196,11 +6196,14 @@ TYPE: b_size_t
+ DEFAULT: 64 KB
+ LOC: Config.maxRequestHeaderSize
+ DOC_START
+-	This specifies the maximum size for HTTP headers in a request.
+-	Request headers are usually relatively small (about 512 bytes).
+-	Placing a limit on the request header size will catch certain
+-	bugs (for example with persistent connections) and possibly
+-	buffer-overflow or denial-of-service attacks.
++	This directive limits the header size of a received HTTP request
++	(including request-line). Increasing this limit beyond its 64 KB default
++	exposes certain old Squid code to various denial-of-service attacks. This
++	limit also applies to received FTP commands.
++
++	This limit has no direct effect on Squid memory consumption.
++
++	Squid does not check this limit when sending requests.
+ DOC_END
+ 
+ NAME: reply_header_max_size
+@@ -6209,11 +6212,14 @@ TYPE: b_size_t
+ DEFAULT: 64 KB
+ LOC: Config.maxReplyHeaderSize
+ DOC_START
+-	This specifies the maximum size for HTTP headers in a reply.
+-	Reply headers are usually relatively small (about 512 bytes).
+-	Placing a limit on the reply header size will catch certain
+-	bugs (for example with persistent connections) and possibly
+-	buffer-overflow or denial-of-service attacks.
++	This directive limits the header size of a received HTTP response
++	(including status-line). Increasing this limit beyond its 64 KB default
++	exposes certain old Squid code to various denial-of-service attacks. This
++	limit also applies to FTP command responses.
++
++	Squid also checks this limit when loading hit responses from disk cache.
++
++	Squid does not check this limit when sending responses.
+ DOC_END
+ 
+ NAME: request_body_max_size
+diff --git a/src/http.cc b/src/http.cc
+index 877172d..b006300 100644
+--- a/src/http.cc
++++ b/src/http.cc
+@@ -1820,8 +1820,9 @@ HttpStateData::httpBuildRequestHeader(HttpRequest * request,
+ 
+         String strFwd = hdr_in->getList(Http::HdrType::X_FORWARDED_FOR);
+ 
+-        // if we cannot double strFwd size, then it grew past 50% of the limit
+-        if (!strFwd.canGrowBy(strFwd.size())) {
++        // Detect unreasonably long header values. And paranoidly check String
++        // limits: a String ought to accommodate two reasonable-length values.
++        if (strFwd.size() > 32*1024 || !strFwd.canGrowBy(strFwd.size())) {
+             // There is probably a forwarding loop with Via detection disabled.
+             // If we do nothing, String will assert on overflow soon.
+             // TODO: Terminate all transactions with huge XFF?
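The http.cc hunk above refuses to append to an X-Forwarded-For value that is already unreasonably long, or that could not double in size without hitting String limits. A simplified sketch of that guard, using std::string and an assumed 64 KB cap in place of Squid's String class and its canGrowBy() method:

    #include <iostream>
    #include <string>

    static const std::string::size_type kStringCap = 64 * 1024; // assumed String capacity

    // Can s gain extra bytes without exceeding the cap?
    static bool canGrowBy(const std::string &s, std::string::size_type extra) {
        return extra <= kStringCap && s.size() <= kStringCap - extra;
    }

    int main() {
        std::string strFwd(40 * 1024, 'x'); // stand-in for an accumulated X-Forwarded-For value
        if (strFwd.size() > 32 * 1024 || !canGrowBy(strFwd, strFwd.size()))
            std::cout << "suspiciously long X-Forwarded-For; likely a forwarding loop\n";
    }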
diff --git a/SOURCES/squid-4.15.tar.xz.asc b/SOURCES/squid-4.15.tar.xz.asc
index e69de29..7305eaa 100644
--- a/SOURCES/squid-4.15.tar.xz.asc
+++ b/SOURCES/squid-4.15.tar.xz.asc
@@ -0,0 +1,25 @@
+File: squid-4.15.tar.xz
+Date: Mon 10 May 2021 10:50:22 UTC
+Size: 2454176
+MD5 : a593de9dc888dfeca4f1f7db2cd7d3b9
+SHA1: 60bda34ba39657e2d870c8c1d2acece8a69c3075
+Key : CD6DBF8EF3B17D3E 
+            B068 84ED B779 C89B 044E  64E3 CD6D BF8E F3B1 7D3E
+      keyring = http://www.squid-cache.org/pgp.asc
+      keyserver = pool.sks-keyservers.net
+-----BEGIN PGP SIGNATURE-----
+
+iQIzBAABCgAdFiEEsGiE7bd5yJsETmTjzW2/jvOxfT4FAmCZD/UACgkQzW2/jvOx
+fT6zZg/+N8JMIYpmVJ7jm4lF0Ub2kEHGTOrc+tnlA3LGnlMQuTm61+BYk58g0SKW
+96NbJ0cycW215Q34L+Y0tWuxEbIU01vIc3AA7rQd0LKy+fQU0OtBuhk5Vf4bKilW
+uHEVIQZs9HmY6bqC+kgtCf49tVZvR8FZYNuilg/68+i/pQdwaDDmVb+j2oF7w+y2
+dgkTFWtM5NTL6bqUVC0E7lLFPjzMefKfxkkpWFdV/VrAhU25jN24kpnjcfotQhdW
+LDFy5okduz3ljso9pBYJfLeMXM1FZPpceC91zj32x3tcUyrD3yIoXob58rEKvfe4
+RDXN4SuClsNe4UQ4oNoGIES9XtaYlOzPR1PlbqPUrdp1cDnhgLJ+1fkAixlMqCml
+wuI1VIKSEY+nvRzQzFHnXJK9otV8QwMF76AHaytO9y+X6JuZmu/CcV1pq61qY9qv
+t1/8z99wWSxpu17zthZgq64J225GF/hkBedaFlYoS5k5YUMDLPlRSCC0yPmb8JBF
+Cns5i/aq2PmOx2ZhQ2RQIF416J3HK8Galw8ytFOjnEcn4ux9yzKNjL38p4+PJJA0
+7GCMAqYYNjok3LSkGbiR7cPgbHnkqRfYbPFLMj4FtruoFlZ9L5MIU3oFvqA3ZR6l
+Az6LaKLsAYPUmukAOPUSIrqpKXZHc7hdBWkT+7RYA4qaoU+9oIo=
+=1Re1
+-----END PGP SIGNATURE-----
diff --git a/SOURCES/squid.nm b/SOURCES/squid.nm
old mode 100644
new mode 100755
diff --git a/SPECS/squid.spec b/SPECS/squid.spec
index 2f8c257..a7017e6 100644
--- a/SPECS/squid.spec
+++ b/SPECS/squid.spec
@@ -2,7 +2,7 @@
 
 Name:     squid
 Version:  4.15
-Release:  7%{?dist}.5
+Release:  7%{?dist}.10
 Summary:  The Squid proxy caching server
 Epoch:    7
 # See CREDITS for breakdown of non GPLv2+ code
@@ -53,20 +53,22 @@ Patch302: squid-4.15-CVE-2022-41318.patch
 Patch303: squid-4.15-CVE-2023-46846.patch
 # https://bugzilla.redhat.com/show_bug.cgi?id=2245916
 Patch304: squid-4.15-CVE-2023-46847.patch
-
-#Oracle patches
-Patch1001: 0001-Break-long-store_client-call-chains-with-async-calls.patch
-Patch1002: 0002-Remove-serialized-HTTP-headers-from-storeClientCopy.patch
-Patch1003: 0003-Bug-5309-frequent-lowestOffset-target_offset-asserti.patch
-Patch1004: 0004-Remove-mem_hdr-freeDataUpto-assertion-1562.patch
-Patch1005: 0005-Backport-Add-Assure-as-a-replacement-for-problematic.patch
-Patch1006: 0006-Backport-additional-functions-for-SquidMath.patch
-Patch1007: 0007-Adapt-to-older-gcc-cleanup.patch
-Patch1008: squid-4.15-CVE-2023-46724.patch 
-Patch1009: squid-4.15-CVE-2023-46728.patch
-Patch1010: squid-4.15-CVE-2023-49285.patch
-Patch1011: squid-4.15-CVE-2023-49286.patch
-
+# https://issues.redhat.com/browse/RHEL-14792
+Patch305: squid-4.15-CVE-2023-5824.patch
+# https://bugzilla.redhat.com/show_bug.cgi?id=2248521
+Patch306: squid-4.15-CVE-2023-46728.patch
+# https://bugzilla.redhat.com/show_bug.cgi?id=2247567
+Patch307: squid-4.15-CVE-2023-46724.patch
+# https://bugzilla.redhat.com/show_bug.cgi?id=2252926
+Patch308: squid-4.15-CVE-2023-49285.patch
+# https://bugzilla.redhat.com/show_bug.cgi?id=2252923
+Patch309: squid-4.15-CVE-2023-49286.patch
+# https://bugzilla.redhat.com/show_bug.cgi?id=2264309
+Patch310: squid-4.15-CVE-2024-25617.patch
+# https://bugzilla.redhat.com/show_bug.cgi?id=2268366
+Patch311: squid-4.15-CVE-2024-25111.patch
+# https://bugzilla.redhat.com/show_bug.cgi?id=2254663
+Patch312: squid-4.15-CVE-2023-50269.patch
 
 Requires: bash >= 2.0
 Requires(pre): shadow-utils
@@ -136,20 +138,14 @@ lookup program (dnsserver), a program for retrieving FTP data
 %patch302 -p1 -b .CVE-2022-41318
 %patch303 -p1 -b .CVE-2023-46846
 %patch304 -p1 -b .CVE-2023-46847
-
-
-# Oracle patches
-%patch1001 -p1
-%patch1002 -p1
-%patch1003 -p1
-%patch1004 -p1
-%patch1005 -p1
-%patch1006 -p1
-%patch1007 -p1
-%patch1008 -p1
-%patch1009 -p1
-%patch1010 -p1
-%patch1011 -p1
+%patch305 -p1 -b .CVE-2023-5824
+%patch306 -p1 -b .CVE-2023-46728
+%patch307 -p1 -b .CVE-2023-46724
+%patch308 -p1 -b .CVE-2023-49285
+%patch309 -p1 -b .CVE-2023-49286
+%patch310 -p1 -b .CVE-2024-25617
+%patch311 -p1 -b .CVE-2024-25111
+%patch312 -p1 -b .CVE-2023-50269
 
 # https://bugzilla.redhat.com/show_bug.cgi?id=1679526
 # Patch in the vendor documentation and used different location for documentation
@@ -366,14 +362,37 @@ fi
 
 
 %changelog
-* Wed Jan 03 2024 Tianyue Lan  - 7:4.15-7.5
-- Fix squid: Denial of Service in SSL Certificate validation (CVE-2023-46724)
-- Fix squid: NULL pointer dereference in the gopher protocol code (CVE-2023-46728)
-- Fix squid: Buffer over-read in the HTTP Message processing feature (CVE-2023-49285)
-- Fix squid: Incorrect Check of Function Return Value In Helper Process management(CVE-2023-49286)
+* Thu Mar 14 2024 Luboš Uhliarik  - 7:4.15-7.10
+- Resolves: RHEL-19551 - squid:4/squid: denial of service in HTTP request
+  parsing (CVE-2023-50269)
 
-* Sun Dec 09 2023 Alex Burmashev  - 7:4.15-7.3
-- Fix squid: DoS against HTTP and HTTPS (CVE-2023-5824)
+* Fri Mar 08 2024 Luboš Uhliarik  - 7:4.15-7.9
+- Resolves: RHEL-28611 - squid:4/squid: Denial of Service in HTTP Chunked
+  Decoding (CVE-2024-25111)
+
+* Mon Feb 26 2024 Luboš Uhliarik  - 7:4.15-7.6
+- Resolves: RHEL-26087 - squid:4/squid: denial of service in HTTP header
+  parser (CVE-2024-25617)
+
+* Thu Dec 07 2023 Luboš Uhliarik  - 7:4.15-7.5
+- Resolves: RHEL-18483 - squid:4/squid: Buffer over-read in the HTTP Message
+  processing feature (CVE-2023-49285)
+- Resolves: RHEL-18485 - squid:4/squid: Incorrect Check of Function Return
+  Value In Helper Process management (CVE-2023-49286)
+
+* Wed Dec 06 2023 Luboš Uhliarik  - 7:4.15-7.4
+- Resolves: RHEL-16764 - squid:4/squid: Denial of Service in SSL Certificate
+  validation (CVE-2023-46724)
+- Resolves: RHEL-16775 - squid:4/squid: NULL pointer dereference in the gopher
+  protocol code (CVE-2023-46728)
+- Resolves: RHEL-18257 - squid crashes in assertion when a parent peer exists
+
+* Thu Nov 30 2023 Tomas Korbar  - 7:4.15-7.3
+- Related: RHEL-14792 - squid: squid multiple issues in HTTP response caching
+- Fix mistake in the patch
+
+* Tue Nov 21 2023 Tomas Korbar  - 7:4.15-7.2
+- Resolves: RHEL-14792 - squid: squid multiple issues in HTTP response caching
 
 * Mon Oct 30 2023 Luboš Uhliarik  - 7:4.15-7.1
 - Resolves: RHEL-14801 - squid: squid: Denial of Service in HTTP Digest
