Compare commits

4 Commits

21 changed files with 4477 additions and 2471 deletions

From 4896d07bf753683a3dbba4210384b0d862ff2d11 Mon Sep 17 00:00:00 2001
From: Eduard Bagdasaryan <eduard.bagdasaryan@measurement-factory.com>
Date: Thu, 7 Dec 2023 16:47:08 +0000
Subject: [PATCH 1/7] Break long store_client call chains with async calls
(#1056)

The store_client class design created very long call chains spanning
Squid-client and Squid-server processing and multiple transactions.
These call chains also create ideal conditions for dangerous recursive
relationships between communicating classes (a.k.a. "reentrancy" among
Squid developers). For example, storeClientCopy() enters store_client
and triggers disk I/O that triggers invokeHandlers() that re-enters the
same store_client object and starts competing with the original
storeClientCopy() processing state.

The official code prevented the worst recursion cases with three(!)
boolean flags and time-based events abused to break some of the call
chains, but that approach did not solve all of the problems while also
losing transaction context information across time-based events.

This change effectively makes STCB storeClientCopy() callbacks
asynchronous, eliminating the need for time-based events and one of the
flags. It shortens many call chains and preserves transaction context.
The remaining problems can and should be eliminated by converting
store_client into AsyncJob, but those changes deserve a dedicated PR.

store_client orchestrates cooperation of multiple asynchronous players:

* Sink: A Store client requests a STCB callback via a
storeClientCopy()/copy() call. A set _callback.callback_handler
implies that the client is waiting for this callback.

* Source1: A Store disk reading subsystem activated by the storeRead()
call "spontaneously" delivers response bytes via storeClientRead*()
callbacks. The disk_io_pending flag implies waiting for them.

* Source2: Store memory subsystem activated by storeClientListAdd()
"spontaneously" delivers response bytes via invokeHandlers().

* Source3: Store disk subsystem activated by storeSwapInStart()
"spontaneously" notifies of EOF/error by calling noteSwapInDone().

* Source4: A store_client object owner may delete the object by
"spontaneously" calling storeUnregister(). The official code was
converting this event into an error-notifying callback.

We continue to answer each storeClientCopy() request with the first
available information even though several SourceN calls are possible
while we are waiting to complete the STCB callback. The StoreIOBuffer
API and STCB recipients do not support data+error/eof combinations, and
future code will move this wait to the main event loop anyway. This
first-available approach means that the creation of the notifier call
effectively ends answer processing -- store_client just waits for that
call to fire so that it can relay the answer to still-synchronous STCB.
When STCB itself becomes asynchronous, this logic will continue to work.
Also stopped calling STCB from storeUnregister(). Conceptually, the
storeUnregister() and storeClientCopy() callers ought to represent the
same get-content-from-Store task; there should be no need to notify that
task about what it is doing. Technically, analysis of STCB callbacks
showed that many such notifications would be dangerous (if they are or
become reachable). At the time of the storeUnregister() call, the STCB
callbacks are usually unset (e.g., when storeUnregister() is called from
the destructor, after that object has finished copying -- a very common
case) or do not do anything (useful).

Also removed callback_data from the Callback::pending() condition. It is
conceptually wrong to require a non-nil callback parameter, and it is
never cleared separately from the callback_handler data member anyway.

Also hid copyInto in the private store_client section to make sure it
is not modified while we are waiting to complete the STCB callback. This
move required adding a couple of read-only wrapper methods like
bytesWanted() and noteSwapInDone().

Also simplified error/EOF/bytes handling on the copy()-STCB path using
dedicated methods (e.g., the store_client::callback() API is no longer
mixing EOF and error signals).
Modified-by: Alex Burmashev <alexander.burmashev@oracle.com>
Signed-off-by: Alex Burmashev <alexander.burmashev@oracle.com>
---
src/MemObject.cc | 6 +-
src/StoreClient.h | 64 ++++++++++--
src/store_client.cc | 177 ++++++++++++++++++++++-----------
src/store_swapin.cc | 2 +-
src/tests/stub_store_client.cc | 5 +-
5 files changed, 186 insertions(+), 68 deletions(-)
diff --git a/src/MemObject.cc b/src/MemObject.cc
index df7791f..4ba63cc 100644
--- a/src/MemObject.cc
+++ b/src/MemObject.cc
@@ -196,8 +196,8 @@ struct LowestMemReader : public unary_function<store_client, void> {
LowestMemReader(int64_t seed):current(seed) {}
void operator() (store_client const &x) {
- if (x.memReaderHasLowerOffset(current))
- current = x.copyInto.offset;
+ if (x.getType() == STORE_MEM_CLIENT)
+ current = std::min(current, x.readOffset());
}
int64_t current;
@@ -492,7 +492,7 @@ MemObject::mostBytesAllowed() const
#endif
- j = sc->delayId.bytesWanted(0, sc->copyInto.length);
+ j = sc->bytesWanted();
if (j > jmax) {
jmax = j;
diff --git a/src/StoreClient.h b/src/StoreClient.h
index 65472d8..457844a 100644
--- a/src/StoreClient.h
+++ b/src/StoreClient.h
@@ -12,6 +12,7 @@
#include "dlink.h"
#include "StoreIOBuffer.h"
#include "StoreIOState.h"
+#include "base/AsyncCall.h"
typedef void STCB(void *, StoreIOBuffer); /* store callback */
@@ -39,14 +40,32 @@ class store_client
public:
store_client(StoreEntry *);
~store_client();
- bool memReaderHasLowerOffset(int64_t) const;
+
+ /// An offset into the stored response bytes, including the HTTP response
+ /// headers (if any). Note that this offset does not include Store entry
+ /// metadata, because it is not a part of the stored response.
+ /// \retval 0 means the client wants to read HTTP response headers.
+ /// \retval +N the response byte that the client wants to read next.
+ /// \retval -N should not occur.
+ // TODO: Callers do not expect negative offset. Verify that the return
+ // value cannot be negative and convert to unsigned in this case.
+ int64_t readOffset() const { return copyInto.offset; }
+
int getType() const;
- void fail();
- void callback(ssize_t len, bool error = false);
+
+ /// React to the end of reading the response from disk. There will be no
+ /// more readHeader() and readBody() callbacks for the current storeRead()
+ /// swapin after this notification.
+ void noteSwapInDone(bool error);
+
void doCopy (StoreEntry *e);
void readHeader(const char *buf, ssize_t len);
void readBody(const char *buf, ssize_t len);
+
+ /// Request StoreIOBuffer-described response data via an asynchronous STCB
+ /// callback. At most one outstanding request is allowed per store_client.
void copy(StoreEntry *, StoreIOBuffer, STCB *, void *);
+
void dumpStats(MemBuf * output, int clientNumber) const;
int64_t cmp_offset;
@@ -59,19 +78,29 @@ public:
StoreIOState::Pointer swapin_sio;
struct {
+ /// whether we are expecting a response to be swapped in from disk
+ /// (i.e. whether async storeRead() is currently in progress)
+ // TODO: a better name reflecting the 'in' scope of the flag
bool disk_io_pending;
+
+ /// whether the store_client::doCopy()-initiated STCB sequence is
+ /// currently in progress
bool store_copying;
- bool copy_event_pending;
} flags;
#if USE_DELAY_POOLS
DelayId delayId;
+
+ /// The maximum number of bytes the Store client can read/copy next without
+ /// overflowing its buffer and without violating delay pool limits. Store
+ /// I/O is not rate-limited, but we assume that the same number of bytes may
+ /// be read from the Squid-to-server connection that may be rate-limited.
+ int bytesWanted() const;
+
void setDelayId(DelayId delay_id);
#endif
dlink_node node;
- /* Below here is private - do no alter outside storeClient calls */
- StoreIOBuffer copyInto;
private:
bool moreToSend() const;
@@ -83,9 +112,25 @@ private:
bool startSwapin();
bool unpackHeader(char const *buf, ssize_t len);
+ void fail();
+ void callback(ssize_t);
+ void noteCopiedBytes(size_t);
+ void noteEof();
+ void noteNews();
+ void finishCallback();
+ static void FinishCallback(store_client *);
+
int type;
bool object_ok;
+ /// Storage and metadata associated with the current copy() request. Ought
+ /// to be ignored when not answering a copy() request.
+ StoreIOBuffer copyInto;
+
+ /// The number of bytes loaded from Store into copyInto while answering the
+ /// current copy() request. Ought to be ignored when not answering.
+ size_t copiedSize;
+
/* Until we finish stuffing code into store_client */
public:
@@ -94,9 +139,16 @@ public:
Callback ():callback_handler(NULL), callback_data(NULL) {}
Callback (STCB *, void *);
+
+ /// Whether the copy() answer is needed/expected (by the client) and has
+ /// not been computed (by us). False during (asynchronous) answer
+ /// delivery to the STCB callback_handler.
bool pending() const;
STCB *callback_handler;
void *callback_data;
+
+ /// a scheduled asynchronous finishCallback() call (or nil)
+ AsyncCall::Pointer notifier;
} _callback;
};
diff --git a/src/store_client.cc b/src/store_client.cc
index 1b54f04..207c96b 100644
--- a/src/store_client.cc
+++ b/src/store_client.cc
@@ -9,6 +9,7 @@
/* DEBUG: section 90 Storage Manager Client-Side Interface */
#include "squid.h"
+#include "base/AsyncCbdataCalls.h"
#include "event.h"
#include "globals.h"
#include "HttpReply.h"
@@ -39,17 +40,10 @@
static StoreIOState::STRCB storeClientReadBody;
static StoreIOState::STRCB storeClientReadHeader;
static void storeClientCopy2(StoreEntry * e, store_client * sc);
-static EVH storeClientCopyEvent;
static bool CheckQuickAbortIsReasonable(StoreEntry * entry);
CBDATA_CLASS_INIT(store_client);
-bool
-store_client::memReaderHasLowerOffset(int64_t anOffset) const
-{
- return getType() == STORE_MEM_CLIENT && copyInto.offset < anOffset;
-}
-
int
store_client::getType() const
{
@@ -104,22 +98,41 @@ storeClientListAdd(StoreEntry * e, void *data)
return sc;
}
+/// schedules asynchronous STCB call to relay disk or memory read results
+/// \param outcome an error signal (if negative), an EOF signal (if zero), or the number of bytes read
+void
+store_client::callback(const ssize_t outcome)
+{
+ if (outcome > 0)
+ return noteCopiedBytes(outcome);
+
+ if (outcome < 0)
+ return fail();
+
+ noteEof();
+}
+/// finishCallback() wrapper; TODO: Add NullaryMemFunT for non-jobs.
void
-store_client::callback(ssize_t sz, bool error)
+store_client::FinishCallback(store_client * const sc)
{
- size_t bSz = 0;
+ sc->finishCallback();
+}
- if (sz >= 0 && !error)
- bSz = sz;
+/// finishes a copy()-STCB sequence by synchronously calling STCB
+void
+store_client::finishCallback()
+{
+ Assure(_callback.callback_handler);
+ Assure(_callback.notifier);
- StoreIOBuffer result(bSz, 0 ,copyInto.data);
+ // callers are not ready to handle a content+error combination
+ Assure(object_ok || !copiedSize);
- if (sz < 0 || error)
- result.flags.error = 1;
+ StoreIOBuffer result(copiedSize, copyInto.offset, copyInto.data);
+ result.flags.error = object_ok ? 0 : 1;
+ copiedSize = 0;
- result.offset = cmp_offset;
- assert(_callback.pending());
- cmp_offset = copyInto.offset + bSz;
+ cmp_offset = result.offset + result.length;
STCB *temphandler = _callback.callback_handler;
void *cbdata = _callback.callback_data;
_callback = Callback(NULL, NULL);
@@ -131,18 +144,24 @@ store_client::callback(ssize_t sz, bool error)
cbdataReferenceDone(cbdata);
}
-static void
-storeClientCopyEvent(void *data)
+/// schedules asynchronous STCB call to relay a successful disk or memory read
+/// \param bytesCopied the number of response bytes copied into copyInto
+void
+store_client::noteCopiedBytes(const size_t bytesCopied)
{
- store_client *sc = (store_client *)data;
- debugs(90, 3, "storeClientCopyEvent: Running");
- assert (sc->flags.copy_event_pending);
- sc->flags.copy_event_pending = false;
-
- if (!sc->_callback.pending())
- return;
+ debugs(90, 5, bytesCopied);
+ Assure(bytesCopied > 0);
+ Assure(!copiedSize);
+ copiedSize = bytesCopied;
+ noteNews();
+}
- storeClientCopy2(sc->entry, sc);
+void
+store_client::noteEof()
+{
+ debugs(90, 5, copiedSize);
+ Assure(!copiedSize);
+ noteNews();
}
store_client::store_client(StoreEntry *e) :
@@ -152,11 +171,11 @@ store_client::store_client(StoreEntry *e) :
#endif
entry(e),
type(e->storeClientType()),
- object_ok(true)
+ object_ok(true),
+ copiedSize(0)
{
flags.disk_io_pending = false;
flags.store_copying = false;
- flags.copy_event_pending = false;
++ entry->refcount;
if (getType() == STORE_DISK_CLIENT) {
@@ -272,17 +291,11 @@ static void
storeClientCopy2(StoreEntry * e, store_client * sc)
{
/* reentrancy not allowed - note this could lead to
- * dropped events
+ * dropped notifications about response data availability
*/
- if (sc->flags.copy_event_pending) {
- return;
- }
-
if (sc->flags.store_copying) {
- sc->flags.copy_event_pending = true;
- debugs(90, 3, "storeClientCopy2: Queueing storeClientCopyEvent()");
- eventAdd("storeClientCopyEvent", storeClientCopyEvent, sc, 0.0, 0);
+ debugs(90, 3, "prevented recursive copying for " << *e);
return;
}
@@ -295,21 +308,16 @@ storeClientCopy2(StoreEntry * e, store_client * sc)
* if the peer aborts, we want to give the client(s)
* everything we got before the abort condition occurred.
*/
- /* Warning: doCopy may indirectly free itself in callbacks,
- * hence the lock to keep it active for the duration of
- * this function
- * XXX: Locking does not prevent calling sc destructor (it only prevents
- * freeing sc memory) so sc may become invalid from C++ p.o.v.
- */
- CbcPointer<store_client> tmpLock = sc;
- assert (!sc->flags.store_copying);
sc->doCopy(e);
- assert(!sc->flags.store_copying);
}
void
store_client::doCopy(StoreEntry *anEntry)
{
+ Assure(_callback.pending());
+ Assure(!flags.disk_io_pending);
+ Assure(!flags.store_copying);
+
assert (anEntry == entry);
flags.store_copying = true;
MemObject *mem = entry->mem_obj;
@@ -321,7 +329,7 @@ store_client::doCopy(StoreEntry *anEntry)
if (!moreToSend()) {
/* There is no more to send! */
debugs(33, 3, HERE << "There is no more to send!");
- callback(0);
+ noteEof();
flags.store_copying = false;
return;
}
@@ -382,6 +390,16 @@ store_client::startSwapin()
}
}
+void
+store_client::noteSwapInDone(const bool error)
+{
+ Assure(_callback.pending());
+ if (error)
+ fail();
+ else
+ noteEof();
+}
+
void
store_client::scheduleRead()
{
@@ -421,7 +439,7 @@ store_client::scheduleMemRead()
/* What the client wants is in memory */
/* Old style */
debugs(90, 3, "store_client::doCopy: Copying normal from memory");
- size_t sz = entry->mem_obj->data_hdr.copy(copyInto);
+ const auto sz = entry->mem_obj->data_hdr.copy(copyInto); // may be <= 0 per copy() API
callback(sz);
flags.store_copying = false;
}
@@ -493,7 +511,19 @@ store_client::readBody(const char *, ssize_t len)
void
store_client::fail()
{
+ debugs(90, 3, (object_ok ? "once" : "again"));
+ if (!object_ok)
+ return; // we failed earlier; nothing to do now
+
object_ok = false;
+
+ noteNews();
+}
+
+/// if necessary and possible, informs the Store reader about copy() result
+void
+store_client::noteNews()
+{
/* synchronous open failures callback from the store,
* before startSwapin detects the failure.
* TODO: fix this inconsistent behaviour - probably by
@@ -501,8 +531,20 @@ store_client::fail()
* not synchronous
*/
- if (_callback.pending())
- callback(0, true);
+ if (!_callback.callback_handler) {
+ debugs(90, 5, "client lost interest");
+ return;
+ }
+
+ if (_callback.notifier) {
+ debugs(90, 5, "earlier news is being delivered by " << _callback.notifier);
+ return;
+ }
+
+ _callback.notifier = asyncCall(90, 4, "store_client::FinishCallback", cbdataDialer(store_client::FinishCallback, this));
+ ScheduleCallHere(_callback.notifier);
+
+ Assure(!_callback.pending());
}
static void
@@ -673,10 +715,12 @@ storeUnregister(store_client * sc, StoreEntry * e, void *data)
++statCounter.swap.ins;
}
- if (sc->_callback.pending()) {
- /* callback with ssize = -1 to indicate unexpected termination */
- debugs(90, 3, "store_client for " << *e << " has a callback");
- sc->fail();
+ if (sc->_callback.callback_handler || sc->_callback.notifier) {
+ debugs(90, 3, "forgetting store_client callback for " << *e);
+ // Do not notify: Callers want to stop copying and forget about this
+ // pending copy request. Some would mishandle a notification from here.
+ if (sc->_callback.notifier)
+ sc->_callback.notifier->cancel("storeUnregister");
}
#if STORE_CLIENT_LIST_DEBUG
@@ -684,6 +728,8 @@ storeUnregister(store_client * sc, StoreEntry * e, void *data)
#endif
+ // XXX: We might be inside sc store_client method somewhere up the call
+ // stack. TODO: Convert store_client to AsyncJob to make destruction async.
delete sc;
assert(e->locked());
@@ -740,6 +786,16 @@ StoreEntry::invokeHandlers()
if (sc->flags.disk_io_pending)
continue;
+ if (sc->flags.store_copying)
+ continue;
+
+ // XXX: If invokeHandlers() is (indirectly) called from a store_client
+ // method, then the above three conditions may not be sufficient to
+ // prevent us from reentering the same store_client object! This
+ // probably does not happen in the current code, but no observed
+ // invariant prevents this from (accidentally) happening in the future.
+
+ // TODO: Convert store_client into AsyncJob; make this call asynchronous
storeClientCopy2(this, sc);
}
@@ -864,8 +920,8 @@ store_client::dumpStats(MemBuf * output, int clientNumber) const
if (flags.store_copying)
output->append(" store_copying", 14);
- if (flags.copy_event_pending)
- output->append(" copy_event_pending", 19);
+ if (_callback.notifier)
+ output->append(" notifying", 10);
output->append("\n",1);
}
@@ -873,12 +929,19 @@ store_client::dumpStats(MemBuf * output, int clientNumber) const
bool
store_client::Callback::pending() const
{
- return callback_handler && callback_data;
+ return callback_handler && !notifier;
}
store_client::Callback::Callback(STCB *function, void *data) : callback_handler(function), callback_data (data) {}
#if USE_DELAY_POOLS
+int
+store_client::bytesWanted() const
+{
+ // TODO: To avoid using stale copyInto, return zero if !_callback.pending()?
+ return delayId.bytesWanted(0, copyInto.length);
+}
+
void
store_client::setDelayId(DelayId delay_id)
{
diff --git a/src/store_swapin.cc b/src/store_swapin.cc
index a05d7e3..cd32e94 100644
--- a/src/store_swapin.cc
+++ b/src/store_swapin.cc
@@ -56,7 +56,7 @@ storeSwapInFileClosed(void *data, int errflag, StoreIOState::Pointer)
if (sc->_callback.pending()) {
assert (errflag <= 0);
- sc->callback(0, errflag ? true : false);
+ sc->noteSwapInDone(errflag);
}
++statCounter.swap.ins;
diff --git a/src/tests/stub_store_client.cc b/src/tests/stub_store_client.cc
index 2a13874..4a73863 100644
--- a/src/tests/stub_store_client.cc
+++ b/src/tests/stub_store_client.cc
@@ -34,7 +34,10 @@ void storeLogOpen(void) STUB
void storeDigestInit(void) STUB
void storeRebuildStart(void) STUB
void storeReplSetup(void) STUB
-bool store_client::memReaderHasLowerOffset(int64_t anOffset) const STUB_RETVAL(false)
+void store_client::noteSwapInDone(bool) STUB
+#if USE_DELAY_POOLS
+int store_client::bytesWanted() const STUB_RETVAL(0)
+#endif
void store_client::dumpStats(MemBuf * output, int clientNumber) const STUB
int store_client::getType() const STUB_RETVAL(0)
--
2.39.3

From af18cb04f07555f49daef982c8c21459bfbe388c Mon Sep 17 00:00:00 2001
From: Alex Rousskov <rousskov@measurement-factory.com>
Date: Thu, 23 Nov 2023 18:27:24 +0000
Subject: [PATCH 3/7] Bug 5309: frequent "lowestOffset () <= target_offset"
assertion (#1561)

Recent commit 122a6e3 left store_client::readOffset() unchanged but
should have adjusted it to match the changed copyInto.offset semantics:
starting with that commit, storeClientCopy() callers supply an HTTP
response _body_ offset rather than an HTTP response offset.

This bug decreased readOffset() values (by the size of stored HTTP
response headers), effectively telling Store that we are not yet done
with some of the MemObject/mem_hdr bytes. This bug could cause slightly
higher transaction memory usage because the same response bytes are
trimmed later. This bug should not have caused any assertions.

However, the old mem_hdr::freeDataUpto() code that uses readOffset() is
also broken -- the assertion in that method only "works" when
readOffset() returns values matching a memory node boundary. The smaller
values returned by the buggy readOffset() triggered buggy assertions.

This minimal fix removes the recent store_client::readOffset() bug
described above. We will address the old mem_hdr problems separately.
Modified-by: Alex Burmashev <alexander.burmashev@oracle.com>
Signed-off-by: Alex Burmashev <alexander.burmashev@oracle.com>
---
src/MemObject.cc | 2 +-
src/StoreClient.h | 19 ++++++++++---------
src/store_client.cc | 13 +++++++++++++
3 files changed, 24 insertions(+), 10 deletions(-)
diff --git a/src/MemObject.cc b/src/MemObject.cc
index d7aaf5e..650d3fd 100644
--- a/src/MemObject.cc
+++ b/src/MemObject.cc
@@ -197,7 +197,7 @@ struct LowestMemReader : public unary_function<store_client, void> {
void operator() (store_client const &x) {
if (x.getType() == STORE_MEM_CLIENT)
- current = std::min(current, x.readOffset());
+ current = std::min(current, x.discardableHttpEnd());
}
int64_t current;
diff --git a/src/StoreClient.h b/src/StoreClient.h
index 1d90e5a..0524776 100644
--- a/src/StoreClient.h
+++ b/src/StoreClient.h
@@ -54,15 +54,8 @@ public:
store_client(StoreEntry *);
~store_client();
- /// An offset into the stored response bytes, including the HTTP response
- /// headers (if any). Note that this offset does not include Store entry
- /// metadata, because it is not a part of the stored response.
- /// \retval 0 means the client wants to read HTTP response headers.
- /// \retval +N the response byte that the client wants to read next.
- /// \retval -N should not occur.
- // TODO: Callers do not expect negative offset. Verify that the return
- // value cannot be negative and convert to unsigned in this case.
- int64_t readOffset() const { return copyInto.offset; }
+ /// the client will not use HTTP response bytes with lower offsets (if any)
+ auto discardableHttpEnd() const { return discardableHttpEnd_; }
int getType() const;
@@ -156,8 +149,16 @@ private:
/// Storage and metadata associated with the current copy() request. Ought
/// to be ignored when not answering a copy() request.
+ /// * copyInto.offset is the requested HTTP response body offset;
+ /// * copyInto.data is the client-owned, client-provided result buffer;
+ /// * copyInto.length is the size of the .data result buffer;
+ /// * copyInto.flags are unused by this class.
StoreIOBuffer copyInto;
+ // TODO: Convert to uint64_t after fixing mem_hdr::endOffset() and friends.
+ /// \copydoc discardableHttpEnd()
+ int64_t discardableHttpEnd_ = 0;
+
/// the total number of finishCallback() calls
uint64_t answers;
diff --git a/src/store_client.cc b/src/store_client.cc
index 1731c4c..383aac8 100644
--- a/src/store_client.cc
+++ b/src/store_client.cc
@@ -122,6 +122,16 @@ store_client::finishCallback()
result = parsingBuffer->packBack();
result.flags.error = object_ok ? 0 : 1;
+ // TODO: Move object_ok handling above into this `if` statement.
+ if (object_ok) {
+ // works for zero hdr_sz cases as well; see also: nextHttpReadOffset()
+ discardableHttpEnd_ = NaturalSum<int64_t>(entry->mem().baseReply().hdr_sz, result.offset, result.length).value();
+ } else {
+ // object_ok is sticky, so we will not be able to use any response bytes
+ discardableHttpEnd_ = entry->mem().endOffset();
+ }
+ debugs(90, 7, "with " << result << "; discardableHttpEnd_=" << discardableHttpEnd_);
+
// no HTTP headers and no body bytes (but not because there was no space)
atEof_ = !sendingHttpHeaders() && !result.length && copyInto.length;
@@ -220,6 +230,9 @@ store_client::copy(StoreEntry * anEntry,
parsingBuffer.emplace(copyInto);
+ discardableHttpEnd_ = nextHttpReadOffset();
+ debugs(90, 7, "discardableHttpEnd_=" << discardableHttpEnd_);
+
static bool copying (false);
assert (!copying);
copying = true;
--
2.39.3

From 422272d78399d5fb2fc340281611961fc7c528e7 Mon Sep 17 00:00:00 2001
From: Alex Rousskov <rousskov@measurement-factory.com>
Date: Thu, 23 Nov 2023 18:27:45 +0000
Subject: [PATCH 4/7] Remove mem_hdr::freeDataUpto() assertion (#1562)

stmem.cc:98: "lowestOffset () <= target_offset"

The assertion is conceptually wrong: The given target_offset parameter
may have any value; that value does not have to correlate with mem_hdr
state in any way. It is freeDataUpto()'s job to preserve nodes at or
above the given offset and (arguably optionally) remove nodes below it,
but the assertion does not actually validate that freeDataUpto() did
that.

The old mem_hdr::freeDataUpto() assertion incorrectly assumed that,
after zero or more unneeded memory nodes were freed, the remaining
memory area never starts after the given target_offset parameter. That
assumption fails in at least two use cases, both using target_offset
values that do not belong to any existing or future mem_hdr node:

1. target_offset points to the left of the first node. freeDataUpto()
correctly keeps all memory nodes in such calls, but then asserts. For
example, calling freeDataUpto(0) when mem_hdr has bytes [100,199)
triggers this incorrect assertion.

2. target_offset is in the gap between two nodes. For example, calling
freeDataUpto(2000) when mem_hdr contains two nodes: [0,1000) and
[3000,3003) will trigger this assertion (as happened in Bug 5309).
Such gaps are very common for HTTP 206 responses with a Content-Range
header because such responses often specify a range that does not
start with zero and create a gap after the node(s) with HTTP headers.

Bugs notwithstanding, it is unlikely that relevant calls exist today,
but they certainly could be added, especially when freeDataUpto() stops
preserving the last unused node. The current "avoid change to [some
unidentified] part of code" hoarding excuse should not last forever.

Prior to commit 122a6e3, Squid did not (frequently) assert in gap cases:
Callers first give target_offset 0 (which results in freeDataUpto()
doing nothing, keeping the header node(s)) and then they give
target_offset matching the beginning of the first body node (which
results in freeDataUpto() freeing the header node(s) and increasing
lowerOffset() from zero to target_offset). A bug in commit 122a6e3
lowered target_offset a bit, placing target_offset in the gap and
triggering frequent (and incorrect) assertions (Bug 5309).
Modified-by: Alex Burmashev <alexander.burmashev@oracle.com>
Signed-off-by: Alex Burmashev <alexander.burmashev@oracle.com>
---
src/stmem.cc | 2 --
1 file changed, 2 deletions(-)
diff --git a/src/stmem.cc b/src/stmem.cc
index d117c15..b627005 100644
--- a/src/stmem.cc
+++ b/src/stmem.cc
@@ -95,8 +95,6 @@ mem_hdr::freeDataUpto(int64_t target_offset)
break;
}
- assert (lowestOffset () <= target_offset);
-
return lowestOffset ();
}
--
2.39.3

From 5df95b5923de244eaf2ddccf980d5f28d7114b1f Mon Sep 17 00:00:00 2001
From: Alex Burmashev <alexander.burmashev@oracle.com>
Date: Thu, 7 Dec 2023 18:01:47 +0000
Subject: [PATCH 5/7] Backport Add Assure() as a replacement for problematic
Must()

This is a partial backport of
b9a1bbfbc531359a87647271a282edff9ccdd206
b8ae064d94784934b3402e5db015246d1b1ca658

Needed for the CVE-2023-5824 fix.
Signed-off-by: Alex Burmashev <alexander.burmashev@oracle.com>
---
src/HttpReply.cc | 1 +
src/acl/Asn.cc | 1 +
src/base/Assure.cc | 23 ++++++++++++++++++
src/base/Assure.h | 51 ++++++++++++++++++++++++++++++++++++++++
src/base/Makefile.am | 2 ++
src/base/Makefile.in | 8 +++++--
src/client_side_reply.cc | 1 +
7 files changed, 85 insertions(+), 2 deletions(-)
create mode 100644 src/base/Assure.cc
create mode 100644 src/base/Assure.h
diff --git a/src/HttpReply.cc b/src/HttpReply.cc
index af2bd4d..df5bcef 100644
--- a/src/HttpReply.cc
+++ b/src/HttpReply.cc
@@ -10,6 +10,7 @@
#include "squid.h"
#include "acl/AclSizeLimit.h"
+#include "base/Assure.h"
#include "acl/FilledChecklist.h"
#include "base/EnumIterator.h"
#include "globals.h"
diff --git a/src/acl/Asn.cc b/src/acl/Asn.cc
index ad450c0..bcedc82 100644
--- a/src/acl/Asn.cc
+++ b/src/acl/Asn.cc
@@ -17,6 +17,7 @@
#include "acl/SourceAsn.h"
#include "acl/Strategised.h"
#include "base/CharacterSet.h"
+#include "base/Assure.h"
#include "FwdState.h"
#include "HttpReply.h"
#include "HttpRequest.h"
diff --git a/src/base/Assure.cc b/src/base/Assure.cc
new file mode 100644
index 0000000..b09b848
--- /dev/null
+++ b/src/base/Assure.cc
@@ -0,0 +1,23 @@
+/*
+ * Copyright (C) 1996-2023 The Squid Software Foundation and contributors
+ *
+ * Squid software is distributed under GPLv2+ license and includes
+ * contributions from numerous individuals and organizations.
+ * Please see the COPYING and CONTRIBUTORS files for details.
+ */
+
+#include "squid.h"
+#include "base/Assure.h"
+#include "base/TextException.h"
+#include "sbuf/Stream.h"
+
+[[ noreturn ]] void
+ReportAndThrow_(const int debugLevel, const char *description, const SourceLocation &location)
+{
+ const TextException ex(description, location);
+ const auto label = debugLevel <= DBG_IMPORTANT ? "ERROR: Squid BUG: " : "";
+ // TODO: Consider also printing the number of BUGs reported so far. It would
+ // require GC, but we could even print the number of same-location reports.
+ debugs(0, debugLevel, label << ex);
+ throw ex;
+}
diff --git a/src/base/Assure.h b/src/base/Assure.h
new file mode 100644
index 0000000..650c204
--- /dev/null
+++ b/src/base/Assure.h
@@ -0,0 +1,51 @@
+/*
+ * Copyright (C) 1996-2023 The Squid Software Foundation and contributors
+ *
+ * Squid software is distributed under GPLv2+ license and includes
+ * contributions from numerous individuals and organizations.
+ * Please see the COPYING and CONTRIBUTORS files for details.
+ */
+
+#ifndef SQUID_SRC_BASE_ASSURE_H
+#define SQUID_SRC_BASE_ASSURE_H
+
+#include "base/Here.h"
+
+/// Reports the description (at the given debugging level) and throws
+/// the corresponding exception. Reduces compiled code size of Assure() and
+/// Must() callers. Do not call directly; use Assure() instead.
+/// \param description explains the condition (i.e. what MUST happen)
+[[ noreturn ]] void ReportAndThrow_(int debugLevel, const char *description, const SourceLocation &);
+
+/// Calls ReportAndThrow() if needed. Reduces caller code duplication.
+/// Do not call directly; use Assure() instead.
+/// \param description c-string explaining the condition (i.e. what MUST happen)
+#define Assure_(debugLevel, condition, description, location) \
+ while (!(condition)) \
+ ReportAndThrow_((debugLevel), (description), (location))
+
+#if !defined(NDEBUG)
+
+/// Like assert() but throws an exception instead of aborting the process. Use
+/// this macro to detect code logic mistakes (i.e. bugs) where aborting the
+/// current AsyncJob or a similar task is unlikely to jeopardize Squid service
+/// integrity. For example, this macro is _not_ appropriate for detecting bugs
+/// that indicate a dangerous global state corruption which may go unnoticed by
+/// other jobs after the current job or task is aborted.
+#define Assure(condition) \
+ Assure2((condition), #condition)
+
+/// Like Assure() but allows the caller to customize the exception message.
+/// \param description string literal describing the condition (i.e. what MUST happen)
+#define Assure2(condition, description) \
+ Assure_(0, (condition), ("assurance failed: " description), Here())
+
+#else
+
+/* do-nothing implementations for NDEBUG builds */
+#define Assure(condition) ((void)0)
+#define Assure2(condition, description) ((void)0)
+
+#endif /* NDEBUG */
+
+#endif /* SQUID_SRC_BASE_ASSURE_H */
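The Assure() machinery above pairs a do-nothing-on-success macro with an out-of-line, `[[noreturn]]` reporting function. A minimal stand-alone sketch of that pattern (illustrative names only, not the Squid API; the real code reports via debugs() and throws a Squid TextException):

```cpp
#include <stdexcept>
#include <string>

// Out-of-line thrower keeps the per-call-site code small, as in ReportAndThrow_().
[[noreturn]] static void ReportAndThrowSketch(const char *description, const char *file, int line)
{
    throw std::runtime_error(std::string(file) + ":" + std::to_string(line) +
                             ": assurance failed: " + description);
}

// Like assert(), but throws instead of aborting the whole process.
#define ASSURE_SKETCH(condition) \
    while (!(condition)) \
        ReportAndThrowSketch(#condition, __FILE__, __LINE__)

// A failed assurance aborts only the current task; callers can catch and recover.
static bool survivesBadInput(int n)
{
    try {
        ASSURE_SKETCH(n >= 0); // detect a logic bug without killing the process
        return true;
    } catch (const std::runtime_error &) {
        return false; // this task failed, the service lives on
    }
}
```

The `while (!(condition)) throw...` shape (rather than `if`) lets the macro expand safely in unbraced `if`/`else` contexts.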
diff --git a/src/base/Makefile.am b/src/base/Makefile.am
index 9b0f4cf..c22dd0e 100644
--- a/src/base/Makefile.am
+++ b/src/base/Makefile.am
@@ -19,6 +19,8 @@ libbase_la_SOURCES = \
AsyncJob.cc \
AsyncJob.h \
AsyncJobCalls.h \
+ Assure.cc \
+ Assure.h \
ByteCounter.h \
CbcPointer.h \
CbDataList.h \
diff --git a/src/base/Makefile.in b/src/base/Makefile.in
index 90a4f5b..f43e098 100644
--- a/src/base/Makefile.in
+++ b/src/base/Makefile.in
@@ -163,7 +163,7 @@ CONFIG_CLEAN_FILES =
CONFIG_CLEAN_VPATH_FILES =
LTLIBRARIES = $(noinst_LTLIBRARIES)
libbase_la_LIBADD =
-am_libbase_la_OBJECTS = AsyncCall.lo AsyncCallQueue.lo AsyncJob.lo \
+am_libbase_la_OBJECTS = AsyncCall.lo AsyncCallQueue.lo AsyncJob.lo Assure.lo \
CharacterSet.lo File.lo Here.lo RegexPattern.lo \
RunnersRegistry.lo TextException.lo
libbase_la_OBJECTS = $(am_libbase_la_OBJECTS)
@@ -187,7 +187,7 @@ DEFAULT_INCLUDES =
depcomp = $(SHELL) $(top_srcdir)/cfgaux/depcomp
am__maybe_remake_depfiles = depfiles
am__depfiles_remade = ./$(DEPDIR)/AsyncCall.Plo \
- ./$(DEPDIR)/AsyncCallQueue.Plo ./$(DEPDIR)/AsyncJob.Plo \
+ ./$(DEPDIR)/AsyncCallQueue.Plo ./$(DEPDIR)/AsyncJob.Plo ./$(DEPDIR)/Assure.Plo \
./$(DEPDIR)/CharacterSet.Plo ./$(DEPDIR)/File.Plo \
./$(DEPDIR)/Here.Plo ./$(DEPDIR)/RegexPattern.Plo \
./$(DEPDIR)/RunnersRegistry.Plo ./$(DEPDIR)/TextException.Plo
@@ -737,6 +737,8 @@ libbase_la_SOURCES = \
AsyncJob.cc \
AsyncJob.h \
AsyncJobCalls.h \
+ Assure.cc \
+ Assure.h \
ByteCounter.h \
CbcPointer.h \
CbDataList.h \
@@ -830,6 +832,7 @@ distclean-compile:
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/AsyncCall.Plo@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/AsyncCallQueue.Plo@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/AsyncJob.Plo@am__quote@ # am--include-marker
+@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/Assure.Plo@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/CharacterSet.Plo@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/File.Plo@am__quote@ # am--include-marker
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/Here.Plo@am__quote@ # am--include-marker
@@ -1224,6 +1227,7 @@ maintainer-clean: maintainer-clean-am
-rm -f ./$(DEPDIR)/AsyncCall.Plo
-rm -f ./$(DEPDIR)/AsyncCallQueue.Plo
-rm -f ./$(DEPDIR)/AsyncJob.Plo
+ -rm -f ./$(DEPDIR)/Assure.Plo
-rm -f ./$(DEPDIR)/CharacterSet.Plo
-rm -f ./$(DEPDIR)/File.Plo
-rm -f ./$(DEPDIR)/Here.Plo
diff --git a/src/client_side_reply.cc b/src/client_side_reply.cc
index 861f4b4..470f4bc 100644
--- a/src/client_side_reply.cc
+++ b/src/client_side_reply.cc
@@ -12,6 +12,7 @@
#include "acl/FilledChecklist.h"
#include "acl/Gadgets.h"
#include "anyp/PortCfg.h"
+#include "base/Assure.h"
#include "client_side_reply.h"
#include "errorpage.h"
#include "ETag.h"
--
2.39.3

From c24b9507e35fa43ddb40211a50fae9d58a0381bc Mon Sep 17 00:00:00 2001
From: Alex Burmashev <alexander.burmashev@oracle.com>
Date: Mon, 27 Nov 2023 11:47:40 +0000
Subject: [PATCH 6/7] Backport additional functions for SquidMath
This includes some cherry-picks from
b308d7e2ad02ae6622f380d94d2303446f5831a9 and later commits.
This is needed for the CVE-2023-5824 fix.
Signed-off-by: Alex Burmashev <alexander.burmashev@oracle.com>
---
src/SquidMath.h | 164 ++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 164 insertions(+)
diff --git a/src/SquidMath.h b/src/SquidMath.h
index c70acd1..e5b6e58 100644
--- a/src/SquidMath.h
+++ b/src/SquidMath.h
@@ -8,7 +8,11 @@
#ifndef _SQUID_SRC_SQUIDMATH_H
#define _SQUID_SRC_SQUIDMATH_H
+#include "base/forward.h"
+#include "base/TypeTraits.h"
+#include <limits>
+#include <optional>
/* Math functions we define locally for Squid */
namespace Math
{
@@ -21,5 +25,165 @@ double doubleAverage(const double, const double, int, const int);
} // namespace Math
+// If Sum() performance becomes important, consider using GCC and clang
+// built-ins like __builtin_add_overflow() instead of manual overflow checks.
+
+/// detects a pair of unsigned types
+/// reduces code duplication in declarations further below
+template <typename T, typename U>
+using AllUnsigned = typename std::conditional<
+ std::is_unsigned<T>::value && std::is_unsigned<U>::value,
+ std::true_type,
+ std::false_type
+ >::type;
+
+// TODO: Replace with std::cmp_less() after migrating to C++20.
+/// whether integer a is less than integer b, with correct overflow handling
+template <typename A, typename B>
+constexpr bool
+Less(const A a, const B b) {
+ // The casts below make standard C++ integer conversions explicit. They
+ // quell compiler warnings about signed/unsigned comparison. The first two
+ // lines exclude different-sign a and b, making the casts/comparison safe.
+ using AB = typename std::common_type<A, B>::type;
+ return
+ (a >= 0 && b < 0) ? false :
+ (a < 0 && b >= 0) ? true :
+ /* (a >= 0) == (b >= 0) */ static_cast<AB>(a) < static_cast<AB>(b);
+}
+
+/// ensure that T is supported by NaturalSum() and friends
+template<typename T>
+constexpr void
+AssertNaturalType()
+{
+ static_assert(std::numeric_limits<T>::is_bounded, "std::numeric_limits<T>::max() is meaningful");
+ static_assert(std::numeric_limits<T>::is_exact, "no silent loss of precision");
+ static_assert(!std::is_enum<T>::value, "no silent creation of non-enumerated values");
+}
+
+// TODO: Investigate whether this optimization can be expanded to [signed] types
+// A and B when std::numeric_limits<decltype(A(0)+B(0))>::is_modulo is true.
+/// This IncreaseSumInternal() overload is optimized for speed.
+/// \returns a non-overflowing sum of the two unsigned arguments (or nothing)
+/// \prec both argument types are unsigned
+template <typename S, typename A, typename B, std::enable_if_t<AllUnsigned<A,B>::value, int> = 0>
+std::optional<S>
+IncreaseSumInternal(const A a, const B b) {
+ // paranoid: AllUnsigned<A,B> precondition established that already
+ static_assert(std::is_unsigned<A>::value, "AllUnsigned dispatch worked for A");
+ static_assert(std::is_unsigned<B>::value, "AllUnsigned dispatch worked for B");
+
+ AssertNaturalType<S>();
+ AssertNaturalType<A>();
+ AssertNaturalType<B>();
+
+ // we should only be called by IncreaseSum(); it forces integer promotion
+ static_assert(std::is_same<A, decltype(+a)>::value, "a will not be promoted");
+ static_assert(std::is_same<B, decltype(+b)>::value, "b will not be promoted");
+ // and without integer promotions, a sum of unsigned integers is unsigned
+ static_assert(std::is_unsigned<decltype(a+b)>::value, "a+b is unsigned");
+
+ // with integer promotions ruled out, a or b can only undergo integer
+ // conversion to the higher rank type (A or B, we do not know which)
+ using AB = typename std::common_type<A, B>::type;
+ static_assert(std::is_same<AB, A>::value || std::is_same<AB, B>::value, "no unexpected conversions");
+ static_assert(std::is_same<AB, decltype(a+b)>::value, "lossless assignment");
+ const AB sum = a + b;
+
+ static_assert(std::numeric_limits<AB>::is_modulo, "we can detect overflows");
+ // 1. modulo math: overflowed sum is smaller than any of its operands
+ // 2. the sum may overflow S (i.e. the return base type)
+ // We do not need Less() here because we compare promoted unsigned types.
+ return (sum >= a && sum <= std::numeric_limits<S>::max()) ?
+ std::optional<S>(sum) : std::optional<S>();
+}
+
+/// This IncreaseSumInternal() overload supports a larger variety of types.
+/// \returns a non-overflowing sum of the two arguments (or nothing)
+/// \returns nothing if at least one of the arguments is negative
+/// \prec at least one of the argument types is signed
+template <typename S, typename A, typename B, std::enable_if_t<!AllUnsigned<A,B>::value, int> = 0>
+std::optional<S> constexpr
+IncreaseSumInternal(const A a, const B b) {
+ AssertNaturalType<S>();
+ AssertNaturalType<A>();
+ AssertNaturalType<B>();
+
+ // we should only be called by IncreaseSum() that does integer promotion
+ static_assert(std::is_same<A, decltype(+a)>::value, "a will not be promoted");
+ static_assert(std::is_same<B, decltype(+b)>::value, "b will not be promoted");
+
+ return
+ // We could support a non-under/overflowing sum of negative numbers, but
+ // our callers use negative values specially (e.g., for do-not-use or
+ // do-not-limit settings) and are not supposed to do math with them.
+ (a < 0 || b < 0) ? std::optional<S>() :
+ // To avoid undefined behavior of signed overflow, we must not compute
+ // the raw a+b sum if it may overflow. When A is not B, a or b undergoes
+ // (safe for non-negatives) integer conversion in these expressions, so
+ // we do not know the resulting a+b type AB and its maximum. We must
+ // also detect subsequent casting-to-S overflows.
+ // Overflow condition: (a + b > maxAB) or (a + b > maxS).
+ // A is an integer promotion of S, so maxS <= maxA <= maxAB.
+ // Since maxS <= maxAB, it is sufficient to just check: a + b > maxS,
+ // which is the same as the overflow-safe condition here: maxS - a < b.
+ // Finally, (maxS - a) cannot overflow because a is not negative and
+ // cannot underflow because a is a promotion of s: 0 <= a <= maxS.
+ Less(std::numeric_limits<S>::max() - a, b) ? std::optional<S>() :
+ std::optional<S>(a + b);
+}
+
+/// argument pack expansion termination for IncreaseSum<S, T, Args...>()
+template <typename S, typename T>
+std::optional<S>
+IncreaseSum(const S s, const T t)
+{
+ // Force (always safe) integer promotions now, to give std::enable_if_t<>
+ // promoted types instead of entering IncreaseSumInternal<AllUnsigned>(s,t)
+ // but getting a _signed_ promoted value of s or t in s + t.
+ return IncreaseSumInternal<S>(+s, +t);
+}
+
+/// \returns a non-overflowing sum of the arguments (or nothing)
+template <typename S, typename T, typename... Args>
+std::optional<S>
+IncreaseSum(const S sum, const T t, const Args... args) {
+ if (const auto head = IncreaseSum(sum, t)) {
+ return IncreaseSum(head.value(), args...);
+ } else {
+ // std::optional<S>() triggers bogus -Wmaybe-uninitialized warnings in GCC v10.3
+ return std::nullopt;
+ }
+}
+
+/// \returns an exact, non-overflowing sum of the arguments (or nothing)
+template <typename SummationType, typename... Args>
+std::optional<SummationType>
+NaturalSum(const Args... args) {
+ return IncreaseSum<SummationType>(0, args...);
+}
+
+/// Safely resets the given variable to NaturalSum() of the given arguments.
+/// If the sum overflows, resets to variable's maximum possible value.
+/// \returns the new variable value (like an assignment operator would)
+template <typename S, typename... Args>
+S
+SetToNaturalSumOrMax(S &var, const Args... args)
+{
+ var = NaturalSum<S>(args...).value_or(std::numeric_limits<S>::max());
+ return var;
+}
+
+/// converts a given non-negative integer into an integer of a given type
+/// without loss of information or undefined behavior
+template <typename Result, typename Source>
+Result
+NaturalCast(const Source s)
+{
+ return NaturalSum<Result>(s).value();
+}
+
+
#endif /* _SQUID_SRC_SQUIDMATH_H */
--
2.39.3
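The key idea in the signed IncreaseSumInternal() overload above is to test for overflow without ever computing the raw `a + b` (which would be undefined behavior on signed overflow): reject negatives, then check `maxS - a < b`. A minimal sketch of just that check, with illustrative names rather than the Squid templates:

```cpp
#include <cstdint>
#include <limits>
#include <utility>

// Returns {sum, true} on success, {0, false} on refusal/overflow.
// Mirrors the patch's rule: negative inputs are sentinel values, not math inputs.
std::pair<int64_t, bool> checkedSum(int64_t a, int64_t b)
{
    if (a < 0 || b < 0)
        return std::make_pair(int64_t(0), false); // negatives are special-cased, not summed

    // maxS - a cannot underflow because 0 <= a <= maxS, so this comparison is
    // always defined; it is equivalent to the unsafe "a + b > maxS".
    if (std::numeric_limits<int64_t>::max() - a < b)
        return std::make_pair(int64_t(0), false); // a + b would overflow

    return std::make_pair(a + b, true);
}
```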

From 37de4ce82f7f8906606d0625774d856ffd3a9453 Mon Sep 17 00:00:00 2001
From: Alex Burmashev <alexander.burmashev@oracle.com>
Date: Thu, 7 Dec 2023 20:51:39 +0000
Subject: [PATCH] Adapt to older gcc, cleanup
Fix code that is not applicable to the older Squid v4 codebase.
On top of that, do some work to adapt the code to older gcc;
most of it is std::optional to std::pair conversion.
Signed-off-by: Alex Burmashev <alexander.burmashev@oracle.com>
---
src/HttpReply.cc | 4 +-
src/MemObject.h | 3 ++
src/MemStore.cc | 6 +--
src/SquidMath.h | 27 ++++++------
src/Store.h | 3 ++
src/StoreClient.h | 2 +-
src/acl/Asn.cc | 14 +------
src/base/Assure.cc | 8 ++++
src/client_side_reply.cc | 64 ++++++++++++-----------------
src/peer_digest.cc | 1 +
src/store/ParsingBuffer.cc | 47 ++++++++++-----------
src/store/ParsingBuffer.h | 2 +-
src/store_client.cc | 84 +++++++++++++++++---------------------
src/urn.cc | 2 +-
14 files changed, 123 insertions(+), 144 deletions(-)
diff --git a/src/HttpReply.cc b/src/HttpReply.cc
index df5bcef..21c62c2 100644
--- a/src/HttpReply.cc
+++ b/src/HttpReply.cc
@@ -534,13 +534,13 @@ HttpReply::parseTerminatedPrefix(const char * const terminatedBuf, const size_t
const bool eof = false; // TODO: Remove after removing atEnd from HttpHeader::parse()
if (parse(terminatedBuf, bufSize, eof, &error)) {
debugs(58, 7, "success after accumulating " << bufSize << " bytes and parsing " << hdr_sz);
- Assure(pstate == Http::Message::psParsed);
+ Assure(pstate == psParsed);
Assure(hdr_sz > 0);
Assure(!Less(bufSize, hdr_sz)); // cannot parse more bytes than we have
return hdr_sz; // success
}
- Assure(pstate != Http::Message::psParsed);
+ Assure(pstate != psParsed);
hdr_sz = 0;
if (error) {
diff --git a/src/MemObject.h b/src/MemObject.h
index ba6646f..5a7590a 100644
--- a/src/MemObject.h
+++ b/src/MemObject.h
@@ -56,6 +56,9 @@ public:
void write(const StoreIOBuffer &buf);
void unlinkRequest();
+
+ HttpReply &baseReply() const { return *_reply; }
+
HttpReply const *getReply() const;
void replaceHttpReply(HttpReply *newrep);
void stat (MemBuf * mb) const;
diff --git a/src/MemStore.cc b/src/MemStore.cc
index fe7af2f..6762c4f 100644
--- a/src/MemStore.cc
+++ b/src/MemStore.cc
@@ -511,8 +511,8 @@ MemStore::copyFromShm(StoreEntry &e, const sfileno index, const Ipc::StoreMapAnc
" from " << extra.page << '+' << prefixSize);
// parse headers if needed; they might span multiple slices!
- auto &reply = e.mem().adjustableBaseReply();
- if (reply.pstate != Http::Message::psParsed) {
+ auto &reply = e.mem().baseReply();
+ if (reply.pstate != psParsed) {
httpHeaderParsingBuffer.append(sliceBuf.data, sliceBuf.length);
if (reply.parseTerminatedPrefix(httpHeaderParsingBuffer.c_str(), httpHeaderParsingBuffer.length()))
httpHeaderParsingBuffer = SBuf(); // we do not need these bytes anymore
@@ -542,7 +542,7 @@ MemStore::copyFromShm(StoreEntry &e, const sfileno index, const Ipc::StoreMapAnc
debugs(20, 5, "mem-loaded all " << e.mem_obj->endOffset() << '/' <<
anchor.basics.swap_file_sz << " bytes of " << e);
- if (e.mem().adjustableBaseReply().pstate != Http::Message::psParsed)
+ if (e.mem().baseReply().pstate != psParsed)
throw TextException(ToSBuf("truncated mem-cached headers; accumulated: ", httpHeaderParsingBuffer.length()), Here());
// from StoreEntry::complete()
diff --git a/src/SquidMath.h b/src/SquidMath.h
index e5b6e58..538833b 100644
--- a/src/SquidMath.h
+++ b/src/SquidMath.h
@@ -8,8 +8,6 @@
#ifndef _SQUID_SRC_SQUIDMATH_H
#define _SQUID_SRC_SQUIDMATH_H
-#include "base/forward.h"
-#include "base/TypeTraits.h"
#include <limits>
#include <optional>
@@ -68,7 +66,7 @@ AssertNaturalType()
/// \returns a non-overflowing sum of the two unsigned arguments (or nothing)
/// \prec both argument types are unsigned
template <typename S, typename A, typename B, std::enable_if_t<AllUnsigned<A,B>::value, int> = 0>
-std::optional<S>
+std::pair<S, bool>
IncreaseSumInternal(const A a, const B b) {
// paranoid: AllUnsigned<A,B> precondition established that already
static_assert(std::is_unsigned<A>::value, "AllUnsigned dispatch worked for A");
@@ -96,7 +94,7 @@ IncreaseSumInternal(const A a, const B b) {
// 2. the sum may overflow S (i.e. the return base type)
// We do not need Less() here because we compare promoted unsigned types.
return (sum >= a && sum <= std::numeric_limits<S>::max()) ?
- std::optional<S>(sum) : std::optional<S>();
+ std::make_pair(sum, true) : std::make_pair(S(), false);
}
/// This IncreaseSumInternal() overload supports a larger variety of types.
@@ -104,7 +102,7 @@ IncreaseSumInternal(const A a, const B b) {
/// \returns nothing if at least one of the arguments is negative
/// \prec at least one of the argument types is signed
template <typename S, typename A, typename B, std::enable_if_t<!AllUnsigned<A,B>::value, int> = 0>
-std::optional<S> constexpr
+std::pair<S, bool>
IncreaseSumInternal(const A a, const B b) {
AssertNaturalType<S>();
AssertNaturalType<A>();
@@ -118,7 +116,7 @@ IncreaseSumInternal(const A a, const B b) {
// We could support a non-under/overflowing sum of negative numbers, but
// our callers use negative values specially (e.g., for do-not-use or
// do-not-limit settings) and are not supposed to do math with them.
- (a < 0 || b < 0) ? std::optional<S>() :
+ (a < 0 || b < 0) ? std::make_pair(S(), false) :
// To avoid undefined behavior of signed overflow, we must not compute
// the raw a+b sum if it may overflow. When A is not B, a or b undergoes
// (safe for non-negatives) integer conversion in these expressions, so
@@ -130,13 +128,13 @@ IncreaseSumInternal(const A a, const B b) {
// which is the same as the overflow-safe condition here: maxS - a < b.
// Finally, (maxS - a) cannot overflow because a is not negative and
// cannot underflow because a is a promotion of s: 0 <= a <= maxS.
- Less(std::numeric_limits<S>::max() - a, b) ? std::optional<S>() :
- std::optional<S>(a + b);
+ Less(std::numeric_limits<S>::max() - a, b) ? std::make_pair(S(), false) :
+ std::make_pair(S(a + b), true);
}
/// argument pack expansion termination for IncreaseSum<S, T, Args...>()
template <typename S, typename T>
-std::optional<S>
+std::pair<S, bool>
IncreaseSum(const S s, const T t)
{
// Force (always safe) integer promotions now, to give std::enable_if_t<>
@@ -147,19 +145,20 @@ IncreaseSum(const S s, const T t)
/// \returns a non-overflowing sum of the arguments (or nothing)
template <typename S, typename T, typename... Args>
-std::optional<S>
+std::pair<S, bool>
IncreaseSum(const S sum, const T t, const Args... args) {
- if (const auto head = IncreaseSum(sum, t)) {
- return IncreaseSum(head.value(), args...);
+ const auto head = IncreaseSum(sum, t);
+ if (head.second) {
+ return IncreaseSum(head.first, args...);
} else {
// std::optional<S>() triggers bogus -Wmaybe-uninitialized warnings in GCC v10.3
- return std::nullopt;
+ return std::make_pair(S(), false);
}
}
/// \returns an exact, non-overflowing sum of the arguments (or nothing)
template <typename SummationType, typename... Args>
-std::optional<SummationType>
+std::pair<SummationType, bool>
NaturalSum(const Args... args) {
return IncreaseSum<SummationType>(0, args...);
}
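Because the target toolchain predates C++17, the hunks above replace `std::optional<S>` with `std::pair<S, bool>`: `.second` plays the role of `has_value()` and a `false` at any step short-circuits the variadic chain. A compact sketch of that calling convention (illustrative stand-ins, not the Squid templates):

```cpp
#include <limits>
#include <utility>

// One step: {value, ok}. Refuses negatives and overflow, like IncreaseSum().
std::pair<long, bool> addOne(long sum, long t)
{
    if (t < 0 || std::numeric_limits<long>::max() - sum < t)
        return std::make_pair(0L, false);
    return std::make_pair(sum + t, true);
}

// Pack-expansion termination: nothing left to add.
std::pair<long, bool> addMany(long sum)
{
    return std::make_pair(sum, true);
}

// Chain steps; the first failed .second aborts the whole sum.
template <typename... Args>
std::pair<long, bool> addMany(long sum, long t, Args... args)
{
    const auto head = addOne(sum, t);
    return head.second ? addMany(head.first, args...) : head;
}
```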
diff --git a/src/Store.h b/src/Store.h
index 3eb6b84..2475fe0 100644
--- a/src/Store.h
+++ b/src/Store.h
@@ -49,6 +49,9 @@ public:
StoreEntry();
virtual ~StoreEntry();
+ MemObject &mem() { assert(mem_obj); return *mem_obj; }
+ const MemObject &mem() const { assert(mem_obj); return *mem_obj; }
+
virtual HttpReply const *getReply() const;
virtual void write (StoreIOBuffer);
diff --git a/src/StoreClient.h b/src/StoreClient.h
index 0524776..ba5e669 100644
--- a/src/StoreClient.h
+++ b/src/StoreClient.h
@@ -166,7 +166,7 @@ private:
/// request. Buffer contents depends on the source and parsing stage; it may
/// hold (parts of) swap metadata, HTTP response headers, and/or HTTP
/// response body bytes.
- std::optional<Store::ParsingBuffer> parsingBuffer;
+ std::pair<Store::ParsingBuffer, bool> parsingBuffer = std::make_pair(Store::ParsingBuffer(), false);
StoreIOBuffer lastDiskRead; ///< buffer used for the last storeRead() call
diff --git a/src/acl/Asn.cc b/src/acl/Asn.cc
index bcedc82..67e453f 100644
--- a/src/acl/Asn.cc
+++ b/src/acl/Asn.cc
@@ -73,7 +73,7 @@ class ASState
CBDATA_CLASS(ASState);
public:
- ASState();
+ ASState() = default;
~ASState();
StoreEntry *entry;
@@ -87,18 +87,6 @@ public:
CBDATA_CLASS_INIT(ASState);
-ASState::ASState() :
- entry(NULL),
- sc(NULL),
- request(NULL),
- as_number(0),
- offset(0),
- reqofs(0),
- dataRead(false)
-{
- memset(reqbuf, 0, AS_REQBUF_SZ);
-}
-
ASState::~ASState()
{
debugs(53, 3, entry->url());
diff --git a/src/base/Assure.cc b/src/base/Assure.cc
index b09b848..b4cf3e5 100644
--- a/src/base/Assure.cc
+++ b/src/base/Assure.cc
@@ -11,6 +11,14 @@
#include "base/TextException.h"
#include "sbuf/Stream.h"
+std::ostream &
+operator <<(std::ostream &os, const TextException &ex)
+{
+ ex.print(os);
+ return os;
+}
+
+
[[ noreturn ]] void
ReportAndThrow_(const int debugLevel, const char *description, const SourceLocation &location)
{
diff --git a/src/client_side_reply.cc b/src/client_side_reply.cc
index 470f4bc..64fd489 100644
--- a/src/client_side_reply.cc
+++ b/src/client_side_reply.cc
@@ -1142,8 +1142,8 @@ clientReplyContext::storeNotOKTransferDone() const
MemObject *mem = http->storeEntry()->mem_obj;
assert(mem != NULL);
assert(http->request != NULL);
-
- if (mem->baseReply().pstate != Http::Message::psParsed)
+ const auto expectedBodySize = mem->baseReply().content_length;
+ if (mem->baseReply().pstate != psParsed)
return 0;
/*
@@ -1808,32 +1808,6 @@ clientReplyContext::SendMoreData(void *data, StoreIOBuffer result)
context->sendMoreData (result);
}
-/// Whether the given body area describes the start of our Client Stream buffer.
-/// An empty area does.
-bool
-clientReplyContext::matchesStreamBodyBuffer(const StoreIOBuffer &their) const
-{
- // the answer is undefined for errors; they are not really "body buffers"
- Assure(!their.flags.error);
-
- if (!their.length)
- return true; // an empty body area always matches our body area
-
- if (their.data != next()->readBuffer.data) {
- debugs(88, 7, "no: " << their << " vs. " << next()->readBuffer);
- return false;
- }
-
- return true;
-}
-
-void
-clientReplyContext::noteStreamBufferredBytes(const StoreIOBuffer &result)
-{
- Assure(matchesStreamBodyBuffer(result));
- lastStreamBufferedBytes = result; // may be unchanged and/or zero-length
-}
-
void
clientReplyContext::makeThisHead()
{
@@ -2180,21 +2154,33 @@ clientReplyContext::sendMoreData (StoreIOBuffer result)
sc->setDelayId(DelayId::DelayClient(http,reply));
#endif
- /* handle headers */
+ holdingBuffer = result;
+ processReplyAccess();
+ return;
+}
- if (Config.onoff.log_mime_hdrs) {
- size_t k;
+/// Whether the given body area describes the start of our Client Stream buffer.
+/// An empty area does.
+bool
+clientReplyContext::matchesStreamBodyBuffer(const StoreIOBuffer &their) const
+{
+ // the answer is undefined for errors; they are not really "body buffers"
+ Assure(!their.flags.error);
+ if (!their.length)
+ return true; // an empty body area always matches our body area
+ if (their.data != next()->readBuffer.data) {
+ debugs(88, 7, "no: " << their << " vs. " << next()->readBuffer);
+ return false;
- if ((k = headersEnd(buf, reqofs))) {
- safe_free(http->al->headers.reply);
- http->al->headers.reply = (char *)xcalloc(k + 1, 1);
- xstrncpy(http->al->headers.reply, buf, k);
- }
}
+ return true;
+}
- holdingBuffer = result;
- processReplyAccess();
- return;
+void
+clientReplyContext::noteStreamBufferredBytes(const StoreIOBuffer &result)
+{
+ Assure(matchesStreamBodyBuffer(result));
+ lastStreamBufferedBytes = result; // may be unchanged and/or zero-length
}
/* Using this breaks the client layering just a little!
diff --git a/src/peer_digest.cc b/src/peer_digest.cc
index abfea4a..89ea73e 100644
--- a/src/peer_digest.cc
+++ b/src/peer_digest.cc
@@ -588,6 +588,7 @@ peerDigestFetchReply(void *data, char *buf, ssize_t size)
return 0; // we consumed/used no buffered bytes
}
+}
int
peerDigestSwapInCBlock(void *data, char *buf, ssize_t size)
diff --git a/src/store/ParsingBuffer.cc b/src/store/ParsingBuffer.cc
index e948fe2..affbe9e 100644
--- a/src/store/ParsingBuffer.cc
+++ b/src/store/ParsingBuffer.cc
@@ -28,19 +28,19 @@ Store::ParsingBuffer::ParsingBuffer(StoreIOBuffer &initialSpace):
const char *
Store::ParsingBuffer::memory() const
{
- return extraMemory_ ? extraMemory_->rawContent() : readerSuppliedMemory_.data;
+ return extraMemory_.second ? extraMemory_.first.rawContent() : readerSuppliedMemory_.data;
}
size_t
Store::ParsingBuffer::capacity() const
{
- return extraMemory_ ? (extraMemory_->length() + extraMemory_->spaceSize()) : readerSuppliedMemory_.length;
+ return extraMemory_.second ? (extraMemory_.first.length() + extraMemory_.first.spaceSize()) : readerSuppliedMemory_.length;
}
size_t
Store::ParsingBuffer::contentSize() const
{
- return extraMemory_ ? extraMemory_->length() : readerSuppliedMemoryContentSize_;
+ return extraMemory_.second ? extraMemory_.first.length() : readerSuppliedMemoryContentSize_;
}
void
@@ -56,10 +56,10 @@ Store::ParsingBuffer::appended(const char * const newBytes, const size_t newByte
assert(memory() + contentSize() == newBytes); // the new bytes start in our space
// and now we know that newBytes is not nil either
- if (extraMemory_)
- extraMemory_->rawAppendFinish(newBytes, newByteCount);
+ if (extraMemory_.second)
+ extraMemory_.first.rawAppendFinish(newBytes, newByteCount);
else
- readerSuppliedMemoryContentSize_ = *IncreaseSum(readerSuppliedMemoryContentSize_, newByteCount);
+ readerSuppliedMemoryContentSize_ = IncreaseSum(readerSuppliedMemoryContentSize_, newByteCount).first;
assert(contentSize() <= capacity()); // paranoid
}
@@ -68,8 +68,8 @@ void
Store::ParsingBuffer::consume(const size_t parsedBytes)
{
Assure(contentSize() >= parsedBytes); // more conservative than extraMemory_->consume()
- if (extraMemory_) {
- extraMemory_->consume(parsedBytes);
+ if (extraMemory_.second) {
+ extraMemory_.first.consume(parsedBytes);
} else {
readerSuppliedMemoryContentSize_ -= parsedBytes;
if (parsedBytes && readerSuppliedMemoryContentSize_)
@@ -81,8 +81,8 @@ StoreIOBuffer
Store::ParsingBuffer::space()
{
const auto size = spaceSize();
- const auto start = extraMemory_ ?
- extraMemory_->rawAppendStart(size) :
+ const auto start = extraMemory_.second ?
+ extraMemory_.first.rawAppendStart(size) :
(readerSuppliedMemory_.data + readerSuppliedMemoryContentSize_);
return StoreIOBuffer(spaceSize(), 0, start);
}
@@ -110,22 +110,23 @@ void
Store::ParsingBuffer::growSpace(const size_t minimumSpaceSize)
{
const auto capacityIncreaseAttempt = IncreaseSum(contentSize(), minimumSpaceSize);
- if (!capacityIncreaseAttempt)
+ if (!capacityIncreaseAttempt.second)
throw TextException(ToSBuf("no support for a single memory block of ", contentSize(), '+', minimumSpaceSize, " bytes"), Here());
- const auto newCapacity = *capacityIncreaseAttempt;
+ const auto newCapacity = capacityIncreaseAttempt.first;
if (newCapacity <= capacity())
return; // already have enough space; no reallocation is needed
debugs(90, 7, "growing to provide " << minimumSpaceSize << " in " << *this);
- if (extraMemory_) {
- extraMemory_->reserveCapacity(newCapacity);
+ if (extraMemory_.second) {
+ extraMemory_.first.reserveCapacity(newCapacity);
} else {
SBuf newStorage;
newStorage.reserveCapacity(newCapacity);
newStorage.append(readerSuppliedMemory_.data, readerSuppliedMemoryContentSize_);
- extraMemory_ = std::move(newStorage);
+ extraMemory_.first = std::move(newStorage);
+ extraMemory_.second = true;
}
Assure(spaceSize() >= minimumSpaceSize);
}
@@ -133,14 +134,14 @@ Store::ParsingBuffer::growSpace(const size_t minimumSpaceSize)
SBuf
Store::ParsingBuffer::toSBuf() const
{
- return extraMemory_ ? *extraMemory_ : SBuf(content().data, content().length);
+ return extraMemory_.second ? extraMemory_.first : SBuf(content().data, content().length);
}
size_t
Store::ParsingBuffer::spaceSize() const
{
- if (extraMemory_)
- return extraMemory_->spaceSize();
+ if (extraMemory_.second)
+ return extraMemory_.first.spaceSize();
assert(readerSuppliedMemoryContentSize_ <= readerSuppliedMemory_.length);
return readerSuppliedMemory_.length - readerSuppliedMemoryContentSize_;
@@ -169,12 +170,12 @@ Store::ParsingBuffer::packBack()
result.length = bytesToPack;
Assure(result.data);
- if (!extraMemory_) {
+ if (!extraMemory_.second) {
// no accumulated bytes copying because they are in readerSuppliedMemory_
debugs(90, 7, "quickly exporting " << result.length << " bytes via " << readerSuppliedMemory_);
} else {
- debugs(90, 7, "slowly exporting " << result.length << " bytes from " << extraMemory_->id << " back into " << readerSuppliedMemory_);
- memmove(result.data, extraMemory_->rawContent(), result.length);
+ debugs(90, 7, "slowly exporting " << result.length << " bytes from " << extraMemory_.first.id << " back into " << readerSuppliedMemory_);
+ memmove(result.data, extraMemory_.first.rawContent(), result.length);
}
return result;
@@ -185,9 +186,9 @@ Store::ParsingBuffer::print(std::ostream &os) const
{
os << "size=" << contentSize();
- if (extraMemory_) {
+ if (extraMemory_.second) {
os << " capacity=" << capacity();
- os << " extra=" << extraMemory_->id;
+ os << " extra=" << extraMemory_.first.id;
}
// report readerSuppliedMemory_ (if any) even if we are no longer using it
diff --git a/src/store/ParsingBuffer.h b/src/store/ParsingBuffer.h
index b8aa957..b473ac6 100644
--- a/src/store/ParsingBuffer.h
+++ b/src/store/ParsingBuffer.h
@@ -112,7 +112,7 @@ private:
/// our internal buffer that takes over readerSuppliedMemory_ when the
/// latter becomes full and more memory is needed
- std::optional<SBuf> extraMemory_;
+ std::pair<SBuf, bool> extraMemory_ = std::make_pair(SBuf(), false);
};
inline std::ostream &
diff --git a/src/store_client.cc b/src/store_client.cc
index 383aac8..0236274 100644
--- a/src/store_client.cc
+++ b/src/store_client.cc
@@ -10,6 +10,7 @@
#include "squid.h"
#include "base/AsyncCbdataCalls.h"
+#include "base/Assure.h"
#include "event.h"
#include "globals.h"
#include "HttpReply.h"
@@ -118,24 +119,14 @@ store_client::finishCallback()
// pointers. Some other legacy code expects "correct" result.offset even
// when there is no body to return. Accommodate all those expectations.
auto result = StoreIOBuffer(0, copyInto.offset, nullptr);
- if (object_ok && parsingBuffer && parsingBuffer->contentSize())
- result = parsingBuffer->packBack();
+ if (object_ok && parsingBuffer.second && parsingBuffer.first.contentSize())
+ result = parsingBuffer.first.packBack();
result.flags.error = object_ok ? 0 : 1;
- // TODO: Move object_ok handling above into this `if` statement.
- if (object_ok) {
- // works for zero hdr_sz cases as well; see also: nextHttpReadOffset()
- discardableHttpEnd_ = NaturalSum<int64_t>(entry->mem().baseReply().hdr_sz, result.offset, result.length).value();
- } else {
- // object_ok is sticky, so we will not be able to use any response bytes
- discardableHttpEnd_ = entry->mem().endOffset();
- }
- debugs(90, 7, "with " << result << "; discardableHttpEnd_=" << discardableHttpEnd_);
-
// no HTTP headers and no body bytes (but not because there was no space)
atEof_ = !sendingHttpHeaders() && !result.length && copyInto.length;
- parsingBuffer.reset();
+ parsingBuffer.second = false;
++answers;
STCB *temphandler = _callback.callback_handler;
@@ -228,7 +219,9 @@ store_client::copy(StoreEntry * anEntry,
// when we already can respond with HTTP headers.
Assure(!copyInto.offset || answeredOnce());
- parsingBuffer.emplace(copyInto);
+ parsingBuffer.first = Store::ParsingBuffer(copyInto);
+ parsingBuffer.second = true;
+
discardableHttpEnd_ = nextHttpReadOffset();
debugs(90, 7, "discardableHttpEnd_=" << discardableHttpEnd_);
@@ -454,14 +447,14 @@ store_client::canReadFromMemory() const
const auto &mem = entry->mem();
const auto memReadOffset = nextHttpReadOffset();
return mem.inmem_lo <= memReadOffset && memReadOffset < mem.endOffset() &&
- parsingBuffer->spaceSize();
+ parsingBuffer.first.spaceSize();
}
/// The offset of the next stored HTTP response byte wanted by the client.
int64_t
store_client::nextHttpReadOffset() const
{
- Assure(parsingBuffer);
+ Assure(parsingBuffer.second);
const auto &mem = entry->mem();
const auto hdr_sz = mem.baseReply().hdr_sz;
// Certain SMP cache manager transactions do not store HTTP headers in
@@ -469,7 +462,7 @@ store_client::nextHttpReadOffset() const
// In such cases, hdr_sz ought to be zero. In all other (known) cases,
// mem_hdr contains HTTP response headers (positive hdr_sz if parsed)
// followed by HTTP response body. This code math accommodates all cases.
- return NaturalSum<int64_t>(hdr_sz, copyInto.offset, parsingBuffer->contentSize()).value();
+ return NaturalSum<int64_t>(hdr_sz, copyInto.offset, parsingBuffer.first.contentSize()).first;
}
/// Copies at least some of the requested body bytes from MemObject memory,
@@ -478,13 +471,13 @@ store_client::nextHttpReadOffset() const
void
store_client::readFromMemory()
{
- Assure(parsingBuffer);
- const auto readInto = parsingBuffer->space().positionAt(nextHttpReadOffset());
+ Assure(parsingBuffer.second);
+ const auto readInto = parsingBuffer.first.space().positionAt(nextHttpReadOffset());
debugs(90, 3, "copying HTTP body bytes from memory into " << readInto);
const auto sz = entry->mem_obj->data_hdr.copy(readInto);
Assure(sz > 0); // our canReadFromMemory() precondition guarantees that
- parsingBuffer->appended(readInto.data, sz);
+ parsingBuffer.first.appended(readInto.data, sz);
}
void
@@ -497,7 +490,7 @@ store_client::fileRead()
flags.disk_io_pending = true;
// mem->swap_hdr_sz is zero here during initial read(s)
- const auto nextStoreReadOffset = NaturalSum<int64_t>(mem->swap_hdr_sz, nextHttpReadOffset()).value();
+ const auto nextStoreReadOffset = NaturalSum<int64_t>(mem->swap_hdr_sz, nextHttpReadOffset()).first;
// XXX: If fileRead() is called when we do not yet know mem->swap_hdr_sz,
// then we must start reading from disk offset zero to learn it: we cannot
@@ -522,10 +515,10 @@ store_client::fileRead()
// * performance effects of larger disk reads may be negative somewhere.
const decltype(StoreIOBuffer::length) maxReadSize = SM_PAGE_SIZE;
- Assure(parsingBuffer);
+ Assure(parsingBuffer.second);
// also, do not read more than we can return (via a copyInto.length buffer)
const auto readSize = std::min(copyInto.length, maxReadSize);
- lastDiskRead = parsingBuffer->makeSpace(readSize).positionAt(nextStoreReadOffset);
+ lastDiskRead = parsingBuffer.first.makeSpace(readSize).positionAt(nextStoreReadOffset);
debugs(90, 5, "into " << lastDiskRead);
storeRead(swapin_sio,
@@ -540,13 +533,12 @@ store_client::fileRead()
void
store_client::readBody(const char * const buf, const ssize_t lastIoResult)
{
- int parsed_header = 0;
Assure(flags.disk_io_pending);
flags.disk_io_pending = false;
assert(_callback.pending());
- Assure(parsingBuffer);
- debugs(90, 3, "got " << lastIoResult << " using " << *parsingBuffer);
+ Assure(parsingBuffer.second);
+ debugs(90, 3, "got " << lastIoResult << " using " << parsingBuffer.first);
if (lastIoResult < 0)
return fail();
@@ -560,7 +552,7 @@ store_client::readBody(const char * const buf, const ssize_t lastIoResult)
assert(lastDiskRead.data == buf);
lastDiskRead.length = lastIoResult;
- parsingBuffer->appended(buf, lastIoResult);
+ parsingBuffer.first.appended(buf, lastIoResult);
// we know swap_hdr_sz by now and were reading beyond swap metadata because
// readHead() would have been called otherwise (to read swap metadata)
@@ -589,13 +581,12 @@ store_client::handleBodyFromDisk()
if (!answeredOnce()) {
// All on-disk responses have HTTP headers. First disk body read(s)
// include HTTP headers that we must parse (if needed) and skip.
- const auto haveHttpHeaders = entry->mem_obj->baseReply().pstate == Http::Message::psParsed;
+ const auto haveHttpHeaders = entry->mem_obj->baseReply().pstate == psParsed;
if (!haveHttpHeaders && !parseHttpHeadersFromDisk())
return;
skipHttpHeadersFromDisk();
}
- const HttpReply *rep = entry->getReply();
noteNews();
}
@@ -626,8 +617,6 @@ store_client::maybeWriteFromDiskToMemory(const StoreIOBuffer &httpResponsePart)
}
}
-}
-
void
store_client::fail()
{
@@ -735,20 +724,20 @@ store_client::readHeader(char const *buf, ssize_t len)
if (!object_ok)
return;
- Assure(parsingBuffer);
- debugs(90, 3, "got " << len << " using " << *parsingBuffer);
+ Assure(parsingBuffer.second);
+ debugs(90, 3, "got " << len << " using " << parsingBuffer.first);
if (len < 0)
return fail();
- Assure(!parsingBuffer->contentSize());
- parsingBuffer->appended(buf, len);
+ Assure(!parsingBuffer.first.contentSize());
+ parsingBuffer.first.appended(buf, len);
if (!unpackHeader(buf, len)) {
fail();
return;
}
- parsingBuffer->consume(mem->swap_hdr_sz);
- maybeWriteFromDiskToMemory(parsingBuffer->content());
+ parsingBuffer.first.consume(mem->swap_hdr_sz);
+ maybeWriteFromDiskToMemory(parsingBuffer.first.content());
handleBodyFromDisk();
}
@@ -1020,8 +1009,9 @@ store_client::parseHttpHeadersFromDisk()
// cache a header that we cannot parse and get here. Same for MemStore.
debugs(90, DBG_CRITICAL, "ERROR: Cannot parse on-disk HTTP headers" <<
Debug::Extra << "exception: " << CurrentException <<
- Debug::Extra << "raw input size: " << parsingBuffer->contentSize() << " bytes" <<
- Debug::Extra << "current buffer capacity: " << parsingBuffer->capacity() << " bytes");
+ Debug::Extra << "raw input size: " << parsingBuffer.first.contentSize() << " bytes" <<
+ Debug::Extra << "current buffer capacity: " << parsingBuffer.first.capacity() << " bytes");
+
fail();
return false;
}
@@ -1032,10 +1022,10 @@ store_client::parseHttpHeadersFromDisk()
bool
store_client::tryParsingHttpHeaders()
{
- Assure(parsingBuffer);
+ Assure(parsingBuffer.second);
Assure(!copyInto.offset); // otherwise, parsingBuffer cannot have HTTP response headers
- auto &adjustableReply = entry->mem().adjustableBaseReply();
- if (adjustableReply.parseTerminatedPrefix(parsingBuffer->c_str(), parsingBuffer->contentSize()))
+ auto &adjustableReply = entry->mem().baseReply();
+ if (adjustableReply.parseTerminatedPrefix(parsingBuffer.first.c_str(), parsingBuffer.first.contentSize()))
return true;
// TODO: Optimize by checking memory as well. For simplicity sake, we
@@ -1052,12 +1042,12 @@ store_client::skipHttpHeadersFromDisk()
{
const auto hdr_sz = entry->mem_obj->baseReply().hdr_sz;
Assure(hdr_sz > 0); // all on-disk responses have HTTP headers
- if (Less(parsingBuffer->contentSize(), hdr_sz)) {
- debugs(90, 5, "discovered " << hdr_sz << "-byte HTTP headers in memory after reading some of them from disk: " << *parsingBuffer);
- parsingBuffer->consume(parsingBuffer->contentSize()); // skip loaded HTTP header prefix
+ if (Less(parsingBuffer.first.contentSize(), hdr_sz)) {
+ debugs(90, 5, "discovered " << hdr_sz << "-byte HTTP headers in memory after reading some of them from disk: " << parsingBuffer.first);
+ parsingBuffer.first.consume(parsingBuffer.first.contentSize()); // skip loaded HTTP header prefix
} else {
- parsingBuffer->consume(hdr_sz); // skip loaded HTTP headers
- const auto httpBodyBytesAfterHeader = parsingBuffer->contentSize(); // may be zero
+ parsingBuffer.first.consume(hdr_sz); // skip loaded HTTP headers
+ const auto httpBodyBytesAfterHeader = parsingBuffer.first.contentSize(); // may be zero
Assure(httpBodyBytesAfterHeader <= copyInto.length);
debugs(90, 5, "read HTTP body prefix: " << httpBodyBytesAfterHeader);
}
diff --git a/src/urn.cc b/src/urn.cc
index 9f5e89d..ad42b74 100644
--- a/src/urn.cc
+++ b/src/urn.cc
@@ -238,7 +238,7 @@ urnHandleReply(void *data, StoreIOBuffer result)
return;
}
-+ urnState->parsingBuffer.appended(result.data, result.length);
+ urnState->parsingBuffer.appended(result.data, result.length);
/* If we haven't received the entire object (urn), copy more */
if (!urnState->sc->atEof()) {
--
2.39.3


@@ -0,0 +1,46 @@
From b6c01a2031944125b8cc6974f598c2cd66f0cee4 Mon Sep 17 00:00:00 2001
From: Alex Rousskov <rousskov@measurement-factory.com>
Date: Mon, 20 Nov 2023 23:05:00 +0000
Subject: [PATCH] Bug 5318: peer_digest.cc:399: "fetch->pd &&
receivedData.data" (#1584)
Recent commit 122a6e3 removed HTTP response headers from store_client
responses. That removal created the possibility of an empty
StoreIOBuffer at the beginning of the feeding sequence. Pending Bug 5317
fix will make such buffers even more frequent. Existing store_client
recipients have varying requirements with regard to empty response
buffers, as documented in store_client::finishCallback(). We missed this
requirement conflict in Cache Digest code. This fix adjusts Cache
Digests code to be compatible with empty StoreIOBuffer representation in
current store_client code.
---
src/peer_digest.cc | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/src/peer_digest.cc b/src/peer_digest.cc
index e29614afd2c..7d290cc9013 100644
--- a/src/peer_digest.cc
+++ b/src/peer_digest.cc
@@ -349,11 +349,11 @@ peerDigestHandleReply(void *data, StoreIOBuffer receivedData)
return;
}
- assert(fetch->pd && receivedData.data);
+ assert(fetch->pd);
/* The existing code assumes that the received pointer is
* where we asked the data to be put
*/
- assert(fetch->buf + fetch->bufofs == receivedData.data);
+ assert(!receivedData.data || fetch->buf + fetch->bufofs == receivedData.data);
/* Update the buffer size */
fetch->bufofs += receivedData.length;
@@ -635,7 +635,7 @@ peerDigestFetchedEnough(DigestFetchState * fetch, char *buf, ssize_t size, const
}
/* continue checking (maybe-successful eof case) */
- if (!reason && !size) {
+ if (!reason && !size && fetch->state != DIGEST_READ_REPLY) {
if (!pd->cd)
reason = "null digest?!";
else if (fetch->mask_offset != pd->cd->mask_size)

SOURCES/perl-requires-squid.sh Executable file → Normal file


@@ -1,5 +1,17 @@
From 792ef23e6e1c05780fe17f733859eef6eb8c8be3 Mon Sep 17 00:00:00 2001
From: Andreas Weigel <andreas.weigel@securepoint.de>
Date: Wed, 18 Oct 2023 04:14:31 +0000
Subject: [PATCH] Fix validation of certificates with CN=* (#1523)
The bug was discovered and detailed by Joshua Rogers at
https://megamansec.github.io/Squid-Security-Audit/
where it was filed as "Buffer UnderRead in SSL CN Parsing".
---
src/anyp/Uri.cc | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/src/anyp/Uri.cc b/src/anyp/Uri.cc
index 20b9bf1..81ebb18 100644
index 77b6f0c92..a6a5d5d9e 100644
--- a/src/anyp/Uri.cc
+++ b/src/anyp/Uri.cc
@@ -173,6 +173,10 @@ urlInitialize(void)
@@ -22,3 +34,5 @@ index 20b9bf1..81ebb18 100644
/*
* Start at the ends of the two strings and work towards the
--
2.25.1

File diff suppressed because it is too large


@@ -1,16 +1,22 @@
commit 77b3fb4df0f126784d5fd4967c28ed40eb8d521b
commit deee944f9a12c9fd399ce52f3e2526bb573a9470
Author: Alex Rousskov <rousskov@measurement-factory.com>
Date: Wed Oct 25 19:41:45 2023 +0000
RFC 1123: Fix date parsing (#1538)
The bug was discovered and detailed by Joshua Rogers at
https://megamansec.github.io/Squid-Security-Audit/datetime-overflow.html
where it was filed as "1-Byte Buffer OverRead in RFC 1123 date/time
Handling".
Back port upstream patch
Signed-Off-By: tianyue.lan@oracle.com
---
lib/rfc1123.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/lib/rfc1123.c b/lib/rfc1123.c
index e5bf9a4d7..cb484cc00 100644
index 2d889cc..add63f0 100644
--- a/lib/rfc1123.c
+++ b/lib/rfc1123.c
@@ -50,7 +50,13 @@ make_month(const char *s)
@@ -27,4 +33,6 @@ index e5bf9a4d7..cb484cc00 100644
month[2] = xtolower(*(s + 2));
for (i = 0; i < 12; i++)
--
2.39.3


@@ -1,5 +1,28 @@
commit 6014c6648a2a54a4ecb7f952ea1163e0798f9264
Author: Alex Rousskov <rousskov@measurement-factory.com>
Date: Fri Oct 27 21:27:20 2023 +0000
Exit without asserting when helper process startup fails (#1543)
... to dup() after fork() and before execvp().
Assertions are for handling program logic errors. Helper initialization
code already handled system call errors correctly (i.e. by exiting the
newly created helper process with an error), except for a couple of
assert()s that could be triggered by dup(2) failures.
This bug was discovered and detailed by Joshua Rogers at
https://megamansec.github.io/Squid-Security-Audit/ipc-assert.html
where it was filed as 'Assertion in Squid "Helper" Process Creator'.
Back port upstream patch
Signed-Off-By: tianyue.lan@oracle.com
---
src/ipc.cc | 32 ++++++++++++++++++++++++++------
1 file changed, 26 insertions(+), 6 deletions(-)
diff --git a/src/ipc.cc b/src/ipc.cc
index 42e11e6..a68e623 100644
index e92a27f..3ddae70 100644
--- a/src/ipc.cc
+++ b/src/ipc.cc
@@ -19,6 +19,11 @@
@@ -60,3 +83,6 @@ index 42e11e6..a68e623 100644
assert(t1 > 2 && t2 > 2 && t3 > 2);
--
2.39.3


@@ -1,50 +0,0 @@
diff --git a/src/ClientRequestContext.h b/src/ClientRequestContext.h
index fe2edf6..47aa935 100644
--- a/src/ClientRequestContext.h
+++ b/src/ClientRequestContext.h
@@ -81,6 +81,10 @@ public:
#endif
ErrorState *error; ///< saved error page for centralized/delayed processing
bool readNextRequest; ///< whether Squid should read after error handling
+
+#if FOLLOW_X_FORWARDED_FOR
+ size_t currentXffHopNumber = 0; ///< number of X-Forwarded-For header values processed so far
+#endif
};
#endif /* SQUID_CLIENTREQUESTCONTEXT_H */
diff --git a/src/client_side_request.cc b/src/client_side_request.cc
index 1c6ff62..b758f6f 100644
--- a/src/client_side_request.cc
+++ b/src/client_side_request.cc
@@ -78,6 +78,11 @@
static const char *const crlf = "\r\n";
#if FOLLOW_X_FORWARDED_FOR
+
+#if !defined(SQUID_X_FORWARDED_FOR_HOP_MAX)
+#define SQUID_X_FORWARDED_FOR_HOP_MAX 64
+#endif
+
static void clientFollowXForwardedForCheck(allow_t answer, void *data);
#endif /* FOLLOW_X_FORWARDED_FOR */
@@ -485,8 +490,16 @@ clientFollowXForwardedForCheck(allow_t answer, void *data)
/* override the default src_addr tested if we have to go deeper than one level into XFF */
Filled(calloutContext->acl_checklist)->src_addr = request->indirect_client_addr;
}
- calloutContext->acl_checklist->nonBlockingCheck(clientFollowXForwardedForCheck, data);
- return;
+ if (++calloutContext->currentXffHopNumber < SQUID_X_FORWARDED_FOR_HOP_MAX) {
+ calloutContext->acl_checklist->nonBlockingCheck(clientFollowXForwardedForCheck, data);
+ return;
+ }
+ const auto headerName = Http::HeaderLookupTable.lookup(Http::HdrType::X_FORWARDED_FOR).name;
+ debugs(28, DBG_CRITICAL, "ERROR: Ignoring trailing " << headerName << " addresses" <<
+ Debug::Extra << "addresses allowed by follow_x_forwarded_for: " << calloutContext->currentXffHopNumber <<
+ Debug::Extra << "last/accepted address: " << request->indirect_client_addr <<
+ Debug::Extra << "ignored trailing addresses: " << request->x_forwarded_for_iterator);
+ // fall through to resume clientAccessCheck() processing
}
}


@@ -1,31 +0,0 @@
commit 8fcff9c09824b18628f010d26a04247f6a6cbcb8
Author: Alex Rousskov <rousskov@measurement-factory.com>
Date: Sun Nov 12 09:33:20 2023 +0000
Do not update StoreEntry expiration after errorAppendEntry() (#1580)
errorAppendEntry() is responsible for setting entry expiration times,
which it does by calling StoreEntry::storeErrorResponse() that calls
StoreEntry::negativeCache().
This change was triggered by a vulnerability report by Joshua Rogers at
https://megamansec.github.io/Squid-Security-Audit/cache-uaf.html where
it was filed as "Use-After-Free in Cache Manager Errors". The reported
"use after free" vulnerability was unknowingly addressed by 2022 commit
1fa761a that removed excessively long "reentrant" store_client calls
responsible for the disappearance of the properly locked StoreEntry in
this (and probably other) contexts.
diff --git a/src/cache_manager.cc b/src/cache_manager.cc
index 8055ece..fdcc9cf 100644
--- a/src/cache_manager.cc
+++ b/src/cache_manager.cc
@@ -323,7 +323,6 @@ CacheManager::Start(const Comm::ConnectionPointer &client, HttpRequest * request
const auto err = new ErrorState(ERR_INVALID_URL, Http::scNotFound, request);
err->url = xstrdup(entry->url());
errorAppendEntry(entry, err);
- entry->expires = squid_curtime;
return;
}


@@ -1,193 +0,0 @@
diff --git a/src/http.cc b/src/http.cc
index b006300..023e411 100644
--- a/src/http.cc
+++ b/src/http.cc
@@ -52,6 +52,7 @@
#include "rfc1738.h"
#include "SquidConfig.h"
#include "SquidTime.h"
+#include "SquidMath.h"
#include "StatCounters.h"
#include "Store.h"
#include "StrList.h"
@@ -1150,18 +1151,26 @@ HttpStateData::readReply(const CommIoCbParams &io)
* Plus, it breaks our lame *HalfClosed() detection
*/
- Must(maybeMakeSpaceAvailable(true));
- CommIoCbParams rd(this); // will be expanded with ReadNow results
- rd.conn = io.conn;
- rd.size = entry->bytesWanted(Range<size_t>(0, inBuf.spaceSize()));
+ size_t moreDataPermission = 0;
+ if ((!canBufferMoreReplyBytes(&moreDataPermission) || !moreDataPermission)) {
+ abortTransaction("ready to read required data, but the read buffer is full and cannot be drained");
+ return;
+ }
+
+ const auto readSizeMax = maybeMakeSpaceAvailable(moreDataPermission);
+ // TODO: Move this logic inside maybeMakeSpaceAvailable():
+ const auto readSizeWanted = readSizeMax ? entry->bytesWanted(Range<size_t>(0, readSizeMax)) : 0;
- if (rd.size <= 0) {
+ if (readSizeWanted <= 0) {
assert(entry->mem_obj);
AsyncCall::Pointer nilCall;
entry->mem_obj->delayRead(DeferredRead(readDelayed, this, CommRead(io.conn, NULL, 0, nilCall)));
return;
}
+ CommIoCbParams rd(this); // will be expanded with ReadNow results
+ rd.conn = io.conn;
+ rd.size = readSizeWanted;
switch (Comm::ReadNow(rd, inBuf)) {
case Comm::INPROGRESS:
if (inBuf.isEmpty())
@@ -1520,8 +1529,11 @@ HttpStateData::maybeReadVirginBody()
if (!Comm::IsConnOpen(serverConnection) || fd_table[serverConnection->fd].closing())
return;
- if (!maybeMakeSpaceAvailable(false))
+ size_t moreDataPermission = 0;
+ if ((!canBufferMoreReplyBytes(&moreDataPermission)) || !moreDataPermission) {
+ abortTransaction("more response bytes required, but the read buffer is full and cannot be drained");
return;
+ }
// XXX: get rid of the do_next_read flag
// check for the proper reasons preventing read(2)
@@ -1539,40 +1551,79 @@ HttpStateData::maybeReadVirginBody()
Comm::Read(serverConnection, call);
}
+/// Desired inBuf capacity based on various capacity preferences/limits:
+/// * a smaller buffer may not hold enough for look-ahead header/body parsers;
+/// * a smaller buffer may result in inefficient tiny network reads;
+/// * a bigger buffer may waste memory;
+/// * a bigger buffer may exceed SBuf storage capabilities (SBuf::maxSize);
+size_t
+HttpStateData::calcReadBufferCapacityLimit() const
+{
+ if (!flags.headers_parsed)
+ return Config.maxReplyHeaderSize;
+
+ // XXX: Our inBuf is not used to maintain the read-ahead gap, and using
+ // Config.readAheadGap like this creates huge read buffers for large
+ // read_ahead_gap values. TODO: Switch to using tcp_recv_bufsize as the
+ // primary read buffer capacity factor.
+ //
+ // TODO: Cannot reuse throwing NaturalCast() here. Consider removing
+ // .value() dereference in NaturalCast() or add/use NaturalCastOrMax().
+ const auto configurationPreferences = NaturalSum<size_t>(Config.readAheadGap).second ? NaturalSum<size_t>(Config.readAheadGap).first : SBuf::maxSize;
+
+ // TODO: Honor TeChunkedParser look-ahead and trailer parsing requirements
+ // (when explicit configurationPreferences are set too low).
+
+ return std::min<size_t>(configurationPreferences, SBuf::maxSize);
+}
+
+/// The maximum number of virgin reply bytes we may buffer before we violate
+/// the currently configured response buffering limits.
+/// \retval std::nullopt means that no more virgin response bytes can be read
+/// \retval 0 means that more virgin response bytes may be read later
+/// \retval >0 is the number of bytes that can be read now (subject to other constraints)
bool
-HttpStateData::maybeMakeSpaceAvailable(bool doGrow)
+HttpStateData::canBufferMoreReplyBytes(size_t *maxReadSize) const
{
- // how much we are allowed to buffer
- const int limitBuffer = (flags.headers_parsed ? Config.readAheadGap : Config.maxReplyHeaderSize);
-
- if (limitBuffer < 0 || inBuf.length() >= (SBuf::size_type)limitBuffer) {
- // when buffer is at or over limit already
- debugs(11, 7, "will not read up to " << limitBuffer << ". buffer has (" << inBuf.length() << "/" << inBuf.spaceSize() << ") from " << serverConnection);
- debugs(11, DBG_DATA, "buffer has {" << inBuf << "}");
- // Process next response from buffer
- processReply();
- return false;
+#if USE_ADAPTATION
+ // If we do not check this now, we may say the final "no" prematurely below
+ // because inBuf.length() will decrease as adaptation drains buffered bytes.
+ if (responseBodyBuffer) {
+ debugs(11, 3, "yes, but waiting for adaptation to drain read buffer");
+ *maxReadSize = 0; // yes, we may be able to buffer more (but later)
+ return true;
+ }
+#endif
+
+ const auto maxCapacity = calcReadBufferCapacityLimit();
+ if (inBuf.length() >= maxCapacity) {
+ debugs(11, 3, "no, due to a full buffer: " << inBuf.length() << '/' << inBuf.spaceSize() << "; limit: " << maxCapacity);
+ return false; // no, configuration prohibits buffering more
}
+ *maxReadSize = (maxCapacity - inBuf.length()); // positive
+ debugs(11, 7, "yes, may read up to " << *maxReadSize << " into " << inBuf.length() << '/' << inBuf.spaceSize());
+ return true; // yes, can read up to this many bytes (subject to other constraints)
+}
+
+/// prepare read buffer for reading
+/// \return the maximum number of bytes the caller should attempt to read
+/// \retval 0 means that the caller should delay reading
+size_t
+HttpStateData::maybeMakeSpaceAvailable(const size_t maxReadSize)
+{
// how much we want to read
- const size_t read_size = calcBufferSpaceToReserve(inBuf.spaceSize(), (limitBuffer - inBuf.length()));
+ const size_t read_size = calcBufferSpaceToReserve(inBuf.spaceSize(), maxReadSize);
- if (!read_size) {
+ if (read_size < 2) {
debugs(11, 7, "will not read up to " << read_size << " into buffer (" << inBuf.length() << "/" << inBuf.spaceSize() << ") from " << serverConnection);
- return false;
+ return 0;
}
- // just report whether we could grow or not, do not actually do it
- if (doGrow)
- return (read_size >= 2);
-
// we may need to grow the buffer
inBuf.reserveSpace(read_size);
- debugs(11, 8, (!flags.do_next_read ? "will not" : "may") <<
- " read up to " << read_size << " bytes info buf(" << inBuf.length() << "/" << inBuf.spaceSize() <<
- ") from " << serverConnection);
-
- return (inBuf.spaceSize() >= 2); // only read if there is 1+ bytes of space available
+ debugs(11, 7, "may read up to " << read_size << " bytes info buffer (" << inBuf.length() << "/" << inBuf.spaceSize() << ") from " << serverConnection);
+ return read_size;
}
/// called after writing the very last request byte (body, last-chunk, etc)
diff --git a/src/http.h b/src/http.h
index 8965b77..007d2e6 100644
--- a/src/http.h
+++ b/src/http.h
@@ -15,6 +15,8 @@
#include "http/StateFlags.h"
#include "sbuf/SBuf.h"
+#include <optional>
+
class FwdState;
class HttpHeader;
@@ -107,16 +109,9 @@ private:
void abortTransaction(const char *reason) { abortAll(reason); } // abnormal termination
- /**
- * determine if read buffer can have space made available
- * for a read.
- *
- * \param grow whether to actually expand the buffer
- *
- * \return whether the buffer can be grown to provide space
- * regardless of whether the grow actually happened.
- */
- bool maybeMakeSpaceAvailable(bool grow);
+ size_t calcReadBufferCapacityLimit() const;
+ bool canBufferMoreReplyBytes(size_t *maxReadSize) const;
+ size_t maybeMakeSpaceAvailable(size_t maxReadSize);
// consuming request body
virtual void handleMoreRequestBodyAvailable();


@@ -1,105 +0,0 @@
diff --git a/src/SquidString.h b/src/SquidString.h
index a791885..b9aef38 100644
--- a/src/SquidString.h
+++ b/src/SquidString.h
@@ -114,7 +114,16 @@ private:
size_type len_; /* current length */
- static const size_type SizeMax_ = 65535; ///< 64K limit protects some fixed-size buffers
+ /// An earlier 64KB limit was meant to protect some fixed-size buffers, but
+ /// (a) we do not know where those buffers are (or whether they still exist)
+ /// (b) too many String users unknowingly exceeded that limit and asserted.
+ /// We are now using a larger limit to reduce the number of (b) cases,
+ /// especially cases where "compact" lists of items grow 50% in size when we
+ /// convert them to canonical form. The new limit is selected to withstand
+ /// concatenation and ~50% expansion of two HTTP headers limited by default
+ /// request_header_max_size and reply_header_max_size settings.
+ static const size_type SizeMax_ = 3*64*1024 - 1;
+
/// returns true after increasing the first argument by extra if the sum does not exceed SizeMax_
static bool SafeAdd(size_type &base, size_type extra) { if (extra <= SizeMax_ && base <= SizeMax_ - extra) { base += extra; return true; } return false; }
diff --git a/src/cache_cf.cc b/src/cache_cf.cc
index a9c1b7e..46f07bb 100644
--- a/src/cache_cf.cc
+++ b/src/cache_cf.cc
@@ -935,6 +935,18 @@ configDoConfigure(void)
(uint32_t)Config.maxRequestBufferSize, (uint32_t)Config.maxRequestHeaderSize);
}
+ // Warn about the dangers of exceeding String limits when manipulating HTTP
+ // headers. Technically, we do not concatenate _requests_, so we could relax
+ // their check, but we keep the two checks the same for simplicity sake.
+ const auto safeRawHeaderValueSizeMax = (String::SizeMaxXXX()+1)/3;
+ // TODO: static_assert(safeRawHeaderValueSizeMax >= 64*1024); // no WARNINGs for default settings
+ if (Config.maxRequestHeaderSize > safeRawHeaderValueSizeMax)
+ debugs(3, DBG_CRITICAL, "WARNING: Increasing request_header_max_size beyond " << safeRawHeaderValueSizeMax <<
+ " bytes makes Squid more vulnerable to denial-of-service attacks; configured value: " << Config.maxRequestHeaderSize << " bytes");
+ if (Config.maxReplyHeaderSize > safeRawHeaderValueSizeMax)
+ debugs(3, DBG_CRITICAL, "WARNING: Increasing reply_header_max_size beyond " << safeRawHeaderValueSizeMax <<
+ " bytes makes Squid more vulnerable to denial-of-service attacks; configured value: " << Config.maxReplyHeaderSize << " bytes");
+
/*
* Disable client side request pipelining if client_persistent_connections OFF.
* Waste of resources queueing any pipelined requests when the first will close the connection.
diff --git a/src/cf.data.pre b/src/cf.data.pre
index bc2ddcd..d55b870 100644
--- a/src/cf.data.pre
+++ b/src/cf.data.pre
@@ -6196,11 +6196,14 @@ TYPE: b_size_t
DEFAULT: 64 KB
LOC: Config.maxRequestHeaderSize
DOC_START
- This specifies the maximum size for HTTP headers in a request.
- Request headers are usually relatively small (about 512 bytes).
- Placing a limit on the request header size will catch certain
- bugs (for example with persistent connections) and possibly
- buffer-overflow or denial-of-service attacks.
+ This directive limits the header size of a received HTTP request
+ (including request-line). Increasing this limit beyond its 64 KB default
+ exposes certain old Squid code to various denial-of-service attacks. This
+ limit also applies to received FTP commands.
+
+ This limit has no direct effect on Squid memory consumption.
+
+ Squid does not check this limit when sending requests.
DOC_END
NAME: reply_header_max_size
@@ -6209,11 +6212,14 @@ TYPE: b_size_t
DEFAULT: 64 KB
LOC: Config.maxReplyHeaderSize
DOC_START
- This specifies the maximum size for HTTP headers in a reply.
- Reply headers are usually relatively small (about 512 bytes).
- Placing a limit on the reply header size will catch certain
- bugs (for example with persistent connections) and possibly
- buffer-overflow or denial-of-service attacks.
+ This directive limits the header size of a received HTTP response
+ (including status-line). Increasing this limit beyond its 64 KB default
+ exposes certain old Squid code to various denial-of-service attacks. This
+ limit also applies to FTP command responses.
+
+ Squid also checks this limit when loading hit responses from disk cache.
+
+ Squid does not check this limit when sending responses.
DOC_END
NAME: request_body_max_size
diff --git a/src/http.cc b/src/http.cc
index 877172d..b006300 100644
--- a/src/http.cc
+++ b/src/http.cc
@@ -1820,8 +1820,9 @@ HttpStateData::httpBuildRequestHeader(HttpRequest * request,
String strFwd = hdr_in->getList(Http::HdrType::X_FORWARDED_FOR);
- // if we cannot double strFwd size, then it grew past 50% of the limit
- if (!strFwd.canGrowBy(strFwd.size())) {
+ // Detect unreasonably long header values. And paranoidly check String
+ // limits: a String ought to accommodate two reasonable-length values.
+ if (strFwd.size() > 32*1024 || !strFwd.canGrowBy(strFwd.size())) {
// There is probably a forwarding loop with Via detection disabled.
// If we do nothing, String will assert on overflow soon.
// TODO: Terminate all transactions with huge XFF?


@@ -1,367 +0,0 @@
From 8d0ee420a4d91ac7fd97316338f1e28b4b060cbf Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Lubo=C5=A1=20Uhliarik?= <luhliari@redhat.com>
Date: Thu, 10 Oct 2024 19:26:27 +0200
Subject: [PATCH 1/6] Ignore whitespace chars after chunk-size
Previously (before the #1498 change), squid accepted TE-chunked replies
with whitespace after the chunk-size and missing chunk-ext data; after
that change, such replies were rejected.
It turned out that replies with such whitespace chars are pretty
common, and other webservers that can act as forward proxies (e.g.
nginx, httpd...) accept them.
This change allows proxying chunked responses from an origin server
that puts whitespace between the chunk-size and CRLF.
---
src/http/one/TeChunkedParser.cc | 1 +
1 file changed, 1 insertion(+)
diff --git a/src/http/one/TeChunkedParser.cc b/src/http/one/TeChunkedParser.cc
index 9cce10fdc91..04753395e16 100644
--- a/src/http/one/TeChunkedParser.cc
+++ b/src/http/one/TeChunkedParser.cc
@@ -125,6 +125,7 @@ Http::One::TeChunkedParser::parseChunkMetadataSuffix(Tokenizer &tok)
// Code becomes much simpler when incremental parsing functions throw on
// bad or insufficient input, like in the code below. TODO: Expand up.
try {
+ tok.skipAll(CharacterSet::WSP); // Some servers send SP/TAB after chunk-size
parseChunkExtensions(tok); // a possibly empty chunk-ext list
tok.skipRequired("CRLF after [chunk-ext]", Http1::CrLf());
buf_ = tok.remaining();
From 9c8d35f899035fa06021ab3fe6919f892c2f0c6b Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Lubo=C5=A1=20Uhliarik?= <luhliari@redhat.com>
Date: Fri, 11 Oct 2024 02:06:31 +0200
Subject: [PATCH 2/6] Added new argument to Http::One::ParseBws()
Depending on new wsp_only argument in ParseBws() it will be decided
which set of whitespaces characters will be parsed. If wsp_only is set
to true, only SP and HTAB chars will be parsed.
Also optimized number of ParseBws calls.
---
src/http/one/Parser.cc | 4 ++--
src/http/one/Parser.h | 3 ++-
src/http/one/TeChunkedParser.cc | 13 +++++++++----
src/http/one/TeChunkedParser.h | 2 +-
4 files changed, 14 insertions(+), 8 deletions(-)
diff --git a/src/http/one/Parser.cc b/src/http/one/Parser.cc
index b1908316a0b..01d7e3bc0e8 100644
--- a/src/http/one/Parser.cc
+++ b/src/http/one/Parser.cc
@@ -273,9 +273,9 @@ Http::One::ErrorLevel()
// BWS = *( SP / HTAB ) ; WhitespaceCharacters() may relax this RFC 7230 rule
void
-Http::One::ParseBws(Parser::Tokenizer &tok)
+Http::One::ParseBws(Parser::Tokenizer &tok, const bool wsp_only)
{
- const auto count = tok.skipAll(Parser::WhitespaceCharacters());
+ const auto count = tok.skipAll(wsp_only ? CharacterSet::WSP : Parser::WhitespaceCharacters());
if (tok.atEnd())
throw InsufficientInput(); // even if count is positive
diff --git a/src/http/one/Parser.h b/src/http/one/Parser.h
index d9a0ac8c273..08200371cd6 100644
--- a/src/http/one/Parser.h
+++ b/src/http/one/Parser.h
@@ -163,8 +163,9 @@ class Parser : public RefCountable
};
/// skips and, if needed, warns about RFC 7230 BWS ("bad" whitespace)
+/// \param wsp_only force skipping of whitespaces only, don't consider skipping relaxed delimiter chars
/// \throws InsufficientInput when the end of BWS cannot be confirmed
-void ParseBws(Parser::Tokenizer &);
+void ParseBws(Parser::Tokenizer &, const bool wsp_only = false);
/// the right debugs() level for logging HTTP violation messages
int ErrorLevel();
diff --git a/src/http/one/TeChunkedParser.cc b/src/http/one/TeChunkedParser.cc
index 04753395e16..41e1e5ddaea 100644
--- a/src/http/one/TeChunkedParser.cc
+++ b/src/http/one/TeChunkedParser.cc
@@ -125,8 +125,11 @@ Http::One::TeChunkedParser::parseChunkMetadataSuffix(Tokenizer &tok)
// Code becomes much simpler when incremental parsing functions throw on
// bad or insufficient input, like in the code below. TODO: Expand up.
try {
- tok.skipAll(CharacterSet::WSP); // Some servers send SP/TAB after chunk-size
- parseChunkExtensions(tok); // a possibly empty chunk-ext list
+ // A possibly empty chunk-ext list. If no chunk-ext has been found,
+ // try to skip trailing BWS, because some servers send "chunk-size BWS CRLF".
+ if (!parseChunkExtensions(tok))
+ ParseBws(tok, true);
+
tok.skipRequired("CRLF after [chunk-ext]", Http1::CrLf());
buf_ = tok.remaining();
parsingStage_ = theChunkSize ? Http1::HTTP_PARSE_CHUNK : Http1::HTTP_PARSE_MIME;
@@ -140,20 +143,22 @@ Http::One::TeChunkedParser::parseChunkMetadataSuffix(Tokenizer &tok)
/// Parses the chunk-ext list (RFC 9112 section 7.1.1:
/// chunk-ext = *( BWS ";" BWS chunk-ext-name [ BWS "=" BWS chunk-ext-val ] )
-void
+bool
Http::One::TeChunkedParser::parseChunkExtensions(Tokenizer &callerTok)
{
+ bool foundChunkExt = false;
do {
auto tok = callerTok;
ParseBws(tok); // Bug 4492: IBM_HTTP_Server sends SP after chunk-size
if (!tok.skip(';'))
- return; // reached the end of extensions (if any)
+ return foundChunkExt; // reached the end of extensions (if any)
parseOneChunkExtension(tok);
buf_ = tok.remaining(); // got one extension
callerTok = tok;
+ foundChunkExt = true;
} while (true);
}
diff --git a/src/http/one/TeChunkedParser.h b/src/http/one/TeChunkedParser.h
index 02eacd1bb89..8c5d4bb4cba 100644
--- a/src/http/one/TeChunkedParser.h
+++ b/src/http/one/TeChunkedParser.h
@@ -71,7 +71,7 @@ class TeChunkedParser : public Http1::Parser
private:
bool parseChunkSize(Tokenizer &tok);
bool parseChunkMetadataSuffix(Tokenizer &);
- void parseChunkExtensions(Tokenizer &);
+ bool parseChunkExtensions(Tokenizer &);
void parseOneChunkExtension(Tokenizer &);
bool parseChunkBody(Tokenizer &tok);
bool parseChunkEnd(Tokenizer &tok);
From 81e67f97f9c386bdd0bb4a5e182395c46adb70ad Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Lubo=C5=A1=20Uhliarik?= <luhliari@redhat.com>
Date: Fri, 11 Oct 2024 02:44:33 +0200
Subject: [PATCH 3/6] Fix typo in Parser.h
---
src/http/one/Parser.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/http/one/Parser.h b/src/http/one/Parser.h
index 08200371cd6..3ef4c5f7752 100644
--- a/src/http/one/Parser.h
+++ b/src/http/one/Parser.h
@@ -163,7 +163,7 @@ class Parser : public RefCountable
};
/// skips and, if needed, warns about RFC 7230 BWS ("bad" whitespace)
-/// \param wsp_only force skipping of whitespaces only, don't consider skipping relaxed delimeter chars
+/// \param wsp_only force skipping of whitespaces only, don't consider skipping relaxed delimiter chars
/// \throws InsufficientInput when the end of BWS cannot be confirmed
void ParseBws(Parser::Tokenizer &, const bool wsp_only = false);
From a0d4fe1794e605f8299a5c118c758a807453f016 Mon Sep 17 00:00:00 2001
From: Alex Rousskov <rousskov@measurement-factory.com>
Date: Thu, 10 Oct 2024 22:39:42 -0400
Subject: [PATCH 4/6] Bug 5449 is a regression of Bug 4492!
Both bugs deal with "chunk-size SP+ CRLF" use cases. Bug 4492 had _two_
spaces after chunk-size, which answers one of the PR review questions:
Should we skip just one space? No, we should not.
The lines moved around in many commits, but I believe this regression
was introduced in commit 951013d0 because that commit stopped consuming
partially parsed chunk-ext sequences. That consumption was wrong, but it
had a positive side effect -- fixing Bug 4492...
---
src/http/one/TeChunkedParser.cc | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/src/http/one/TeChunkedParser.cc b/src/http/one/TeChunkedParser.cc
index 41e1e5ddaea..aa4a840fdcf 100644
--- a/src/http/one/TeChunkedParser.cc
+++ b/src/http/one/TeChunkedParser.cc
@@ -125,10 +125,10 @@ Http::One::TeChunkedParser::parseChunkMetadataSuffix(Tokenizer &tok)
// Code becomes much simpler when incremental parsing functions throw on
// bad or insufficient input, like in the code below. TODO: Expand up.
try {
- // A possibly empty chunk-ext list. If no chunk-ext has been found,
- // try to skip trailing BWS, because some servers send "chunk-size BWS CRLF".
- if (!parseChunkExtensions(tok))
- ParseBws(tok, true);
+ // Bug 4492: IBM_HTTP_Server sends SP after chunk-size
+ ParseBws(tok, true);
+
+ parseChunkExtensions(tok);
tok.skipRequired("CRLF after [chunk-ext]", Http1::CrLf());
buf_ = tok.remaining();
@@ -150,7 +150,7 @@ Http::One::TeChunkedParser::parseChunkExtensions(Tokenizer &callerTok)
do {
auto tok = callerTok;
- ParseBws(tok); // Bug 4492: IBM_HTTP_Server sends SP after chunk-size
+ ParseBws(tok);
if (!tok.skip(';'))
return foundChunkExt; // reached the end of extensions (if any)
From f837f5ff61301a17008f16ce1fb793c2abf19786 Mon Sep 17 00:00:00 2001
From: Alex Rousskov <rousskov@measurement-factory.com>
Date: Thu, 10 Oct 2024 23:06:42 -0400
Subject: [PATCH 5/6] fixup: Fewer conditionals/ifs and more explicit spelling
... to draw code reader attention when something unusual is going on.
---
src/http/one/Parser.cc | 22 ++++++++++++++++++----
src/http/one/Parser.h | 10 ++++++++--
src/http/one/TeChunkedParser.cc | 14 ++++++--------
src/http/one/TeChunkedParser.h | 2 +-
4 files changed, 33 insertions(+), 15 deletions(-)
diff --git a/src/http/one/Parser.cc b/src/http/one/Parser.cc
index 01d7e3bc0e8..d3937e5e96b 100644
--- a/src/http/one/Parser.cc
+++ b/src/http/one/Parser.cc
@@ -271,11 +271,12 @@ Http::One::ErrorLevel()
return Config.onoff.relaxed_header_parser < 0 ? DBG_IMPORTANT : 5;
}
-// BWS = *( SP / HTAB ) ; WhitespaceCharacters() may relax this RFC 7230 rule
-void
-Http::One::ParseBws(Parser::Tokenizer &tok, const bool wsp_only)
+/// common part of ParseBws() and ParseStrctBws()
+namespace Http::One {
+static void
+ParseBws_(Parser::Tokenizer &tok, const CharacterSet &bwsChars)
{
- const auto count = tok.skipAll(wsp_only ? CharacterSet::WSP : Parser::WhitespaceCharacters());
+ const auto count = tok.skipAll(bwsChars);
if (tok.atEnd())
throw InsufficientInput(); // even if count is positive
@@ -290,4 +291,17 @@ Http::One::ParseBws(Parser::Tokenizer &tok, const bool wsp_only)
// success: no more BWS characters expected
}
+} // namespace Http::One
+
+void
+Http::One::ParseBws(Parser::Tokenizer &tok)
+{
+ ParseBws_(tok, CharacterSet::WSP);
+}
+
+void
+Http::One::ParseStrictBws(Parser::Tokenizer &tok)
+{
+ ParseBws_(tok, Parser::WhitespaceCharacters());
+}
diff --git a/src/http/one/Parser.h b/src/http/one/Parser.h
index 3ef4c5f7752..49e399de546 100644
--- a/src/http/one/Parser.h
+++ b/src/http/one/Parser.h
@@ -163,9 +163,15 @@ class Parser : public RefCountable
};
/// skips and, if needed, warns about RFC 7230 BWS ("bad" whitespace)
-/// \param wsp_only force skipping of whitespaces only, don't consider skipping relaxed delimiter chars
/// \throws InsufficientInput when the end of BWS cannot be confirmed
-void ParseBws(Parser::Tokenizer &, const bool wsp_only = false);
+/// \sa WhitespaceCharacters() for the definition of BWS characters
+/// \sa ParseStrictBws() that avoids WhitespaceCharacters() uncertainties
+void ParseBws(Parser::Tokenizer &);
+
+/// Like ParseBws() but only skips CharacterSet::WSP characters. This variation
+/// must be used if the next element may start with CR or any other character
+/// from RelaxedDelimiterCharacters().
+void ParseStrictBws(Parser::Tokenizer &);
/// the right debugs() level for logging HTTP violation messages
int ErrorLevel();
diff --git a/src/http/one/TeChunkedParser.cc b/src/http/one/TeChunkedParser.cc
index aa4a840fdcf..859471b8c77 100644
--- a/src/http/one/TeChunkedParser.cc
+++ b/src/http/one/TeChunkedParser.cc
@@ -125,11 +125,11 @@ Http::One::TeChunkedParser::parseChunkMetadataSuffix(Tokenizer &tok)
// Code becomes much simpler when incremental parsing functions throw on
// bad or insufficient input, like in the code below. TODO: Expand up.
try {
- // Bug 4492: IBM_HTTP_Server sends SP after chunk-size
- ParseBws(tok, true);
-
- parseChunkExtensions(tok);
+ // Bug 4492: IBM_HTTP_Server sends SP after chunk-size.
+ // No ParseBws() here because it may consume CR required further below.
+ ParseStrictBws(tok);
+ parseChunkExtensions(tok); // a possibly empty chunk-ext list
tok.skipRequired("CRLF after [chunk-ext]", Http1::CrLf());
buf_ = tok.remaining();
parsingStage_ = theChunkSize ? Http1::HTTP_PARSE_CHUNK : Http1::HTTP_PARSE_MIME;
@@ -143,22 +143,20 @@ Http::One::TeChunkedParser::parseChunkMetadataSuffix(Tokenizer &tok)
/// Parses the chunk-ext list (RFC 9112 section 7.1.1:
/// chunk-ext = *( BWS ";" BWS chunk-ext-name [ BWS "=" BWS chunk-ext-val ] )
-bool
+void
Http::One::TeChunkedParser::parseChunkExtensions(Tokenizer &callerTok)
{
- bool foundChunkExt = false;
do {
auto tok = callerTok;
ParseBws(tok);
if (!tok.skip(';'))
- return foundChunkExt; // reached the end of extensions (if any)
+ return; // reached the end of extensions (if any)
parseOneChunkExtension(tok);
buf_ = tok.remaining(); // got one extension
callerTok = tok;
- foundChunkExt = true;
} while (true);
}
diff --git a/src/http/one/TeChunkedParser.h b/src/http/one/TeChunkedParser.h
index 8c5d4bb4cba..02eacd1bb89 100644
--- a/src/http/one/TeChunkedParser.h
+++ b/src/http/one/TeChunkedParser.h
@@ -71,7 +71,7 @@ class TeChunkedParser : public Http1::Parser
private:
bool parseChunkSize(Tokenizer &tok);
bool parseChunkMetadataSuffix(Tokenizer &);
- bool parseChunkExtensions(Tokenizer &);
+ void parseChunkExtensions(Tokenizer &);
void parseOneChunkExtension(Tokenizer &);
bool parseChunkBody(Tokenizer &tok);
bool parseChunkEnd(Tokenizer &tok);
From f79936a234e722adb2dd08f31cf6019d81ee712c Mon Sep 17 00:00:00 2001
From: Alex Rousskov <rousskov@measurement-factory.com>
Date: Thu, 10 Oct 2024 23:31:08 -0400
Subject: [PATCH 6/6] fixup: Deadly typo
---
src/http/one/Parser.cc | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/http/one/Parser.cc b/src/http/one/Parser.cc
index d3937e5e96b..7403a9163a2 100644
--- a/src/http/one/Parser.cc
+++ b/src/http/one/Parser.cc
@@ -296,12 +296,12 @@ ParseBws_(Parser::Tokenizer &tok, const CharacterSet &bwsChars)
void
Http::One::ParseBws(Parser::Tokenizer &tok)
{
- ParseBws_(tok, CharacterSet::WSP);
+ ParseBws_(tok, Parser::WhitespaceCharacters());
}
void
Http::One::ParseStrictBws(Parser::Tokenizer &tok)
{
- ParseBws_(tok, Parser::WhitespaceCharacters());
+ ParseBws_(tok, CharacterSet::WSP);
}

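The last two fixups above turn on the difference between strict WSP (SP/HTAB only, CharacterSet::WSP) and the relaxed whitespace set, which under relaxed_header_parser may include CR. Below is a minimal Python sketch (illustrative only, not Squid code; the character sets and helper names are assumptions for the demo) of why skipping relaxed whitespace after chunk-size can consume the CR that the mandatory CRLF check then fails to find:

```python
# Illustrative sketch of "chunk-size BWS CRLF" parsing (RFC 9112 s7.1).
# Assumption: the relaxed whitespace set includes CR, mirroring why the
# fixups introduce ParseStrictBws() for the position after chunk-size.

WSP = " \t"        # strict BWS: SP and HTAB only (CharacterSet::WSP analogue)
RELAXED = " \t\r"  # relaxed set with CR (assumed for illustration)

HEXDIG = "0123456789abcdefABCDEF"

def skip_all(buf, pos, allowed):
    """Advance pos past every leading character from `allowed`."""
    while pos < len(buf) and buf[pos] in allowed:
        pos += 1
    return pos

def parse_chunk_size_line(line, bws_chars):
    """Parse 'chunk-size BWS CRLF'; raise ValueError on malformed input."""
    pos = 0
    while pos < len(line) and line[pos] in HEXDIG:
        pos += 1
    size = int(line[:pos], 16)
    pos = skip_all(line, pos, bws_chars)   # skip BWS after chunk-size
    if line[pos:pos + 2] != "\r\n":
        raise ValueError("missing CRLF after chunk-size")
    return size

line = "5  \r\n"  # Bug 4492 shape: two SPs between chunk-size and CRLF

# Strict skip stops at CR, so the CRLF terminator is still intact:
print(parse_chunk_size_line(line, WSP))    # → 5

# Relaxed skip eats the CR as "whitespace", leaving only LF behind:
try:
    parse_chunk_size_line(line, RELAXED)
except ValueError as e:
    print(e)                               # → missing CRLF after chunk-size
```

This is the same failure mode the "Deadly typo" fixup corrects: the variant that runs right before `skipRequired("CRLF after [chunk-ext]", ...)` must use the strict set, while the relaxed set remains acceptable inside the chunk-ext loop where a `;` is expected next.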
@@ -1,25 +0,0 @@
File: squid-4.15.tar.xz
Date: Mon 10 May 2021 10:50:22 UTC
Size: 2454176
MD5 : a593de9dc888dfeca4f1f7db2cd7d3b9
SHA1: 60bda34ba39657e2d870c8c1d2acece8a69c3075
Key : CD6DBF8EF3B17D3E <squid3@treenet.co.nz>
B068 84ED B779 C89B 044E 64E3 CD6D BF8E F3B1 7D3E
keyring = http://www.squid-cache.org/pgp.asc
keyserver = pool.sks-keyservers.net
-----BEGIN PGP SIGNATURE-----
iQIzBAABCgAdFiEEsGiE7bd5yJsETmTjzW2/jvOxfT4FAmCZD/UACgkQzW2/jvOx
fT6zZg/+N8JMIYpmVJ7jm4lF0Ub2kEHGTOrc+tnlA3LGnlMQuTm61+BYk58g0SKW
96NbJ0cycW215Q34L+Y0tWuxEbIU01vIc3AA7rQd0LKy+fQU0OtBuhk5Vf4bKilW
uHEVIQZs9HmY6bqC+kgtCf49tVZvR8FZYNuilg/68+i/pQdwaDDmVb+j2oF7w+y2
dgkTFWtM5NTL6bqUVC0E7lLFPjzMefKfxkkpWFdV/VrAhU25jN24kpnjcfotQhdW
LDFy5okduz3ljso9pBYJfLeMXM1FZPpceC91zj32x3tcUyrD3yIoXob58rEKvfe4
RDXN4SuClsNe4UQ4oNoGIES9XtaYlOzPR1PlbqPUrdp1cDnhgLJ+1fkAixlMqCml
wuI1VIKSEY+nvRzQzFHnXJK9otV8QwMF76AHaytO9y+X6JuZmu/CcV1pq61qY9qv
t1/8z99wWSxpu17zthZgq64J225GF/hkBedaFlYoS5k5YUMDLPlRSCC0yPmb8JBF
Cns5i/aq2PmOx2ZhQ2RQIF416J3HK8Galw8ytFOjnEcn4ux9yzKNjL38p4+PJJA0
7GCMAqYYNjok3LSkGbiR7cPgbHnkqRfYbPFLMj4FtruoFlZ9L5MIU3oFvqA3ZR6l
Az6LaKLsAYPUmukAOPUSIrqpKXZHc7hdBWkT+7RYA4qaoU+9oIo=
=1Re1
-----END PGP SIGNATURE-----

SOURCES/squid.nm Executable file → Normal file
@@ -2,7 +2,7 @@
Name: squid
Version: 4.15
Release: 10%{?dist}.3
Release: 7%{?dist}.5.alma.1
Summary: The Squid proxy caching server
Epoch: 7
# See CREDITS for breakdown of non GPLv2+ code
@@ -53,27 +53,22 @@ Patch302: squid-4.15-CVE-2022-41318.patch
Patch303: squid-4.15-CVE-2023-46846.patch
# https://bugzilla.redhat.com/show_bug.cgi?id=2245916
Patch304: squid-4.15-CVE-2023-46847.patch
# https://issues.redhat.com/browse/RHEL-14792
Patch305: squid-4.15-CVE-2023-5824.patch
# https://bugzilla.redhat.com/show_bug.cgi?id=2248521
Patch306: squid-4.15-CVE-2023-46728.patch
# https://bugzilla.redhat.com/show_bug.cgi?id=2247567
Patch307: squid-4.15-CVE-2023-46724.patch
# https://bugzilla.redhat.com/show_bug.cgi?id=2252926
Patch308: squid-4.15-CVE-2023-49285.patch
# https://bugzilla.redhat.com/show_bug.cgi?id=2252923
Patch309: squid-4.15-CVE-2023-49286.patch
# https://bugzilla.redhat.com/show_bug.cgi?id=2254663
Patch310: squid-4.15-CVE-2023-50269.patch
# https://bugzilla.redhat.com/show_bug.cgi?id=2264309
Patch311: squid-4.15-CVE-2024-25617.patch
# https://bugzilla.redhat.com/show_bug.cgi?id=2268366
Patch312: squid-4.15-CVE-2024-25111.patch
# Regression caused by squid-4.15-CVE-2023-46846.patch
# Upstream PR: https://github.com/squid-cache/squid/pull/1914
Patch313: squid-4.15-ignore-wsp-after-chunk-size.patch
# https://bugzilla.redhat.com/show_bug.cgi?id=2260051
Patch314: squid-4.15-CVE-2024-23638.patch
#Oracle patches
Patch1001: 0001-Break-long-store_client-call-chains-with-async-calls.patch
Patch1002: 0002-Remove-serialized-HTTP-headers-from-storeClientCopy.patch
Patch1003: 0003-Bug-5309-frequent-lowestOffset-target_offset-asserti.patch
Patch1004: 0004-Remove-mem_hdr-freeDataUpto-assertion-1562.patch
Patch1005: 0005-Backport-Add-Assure-as-a-replacement-for-problematic.patch
Patch1006: 0006-Backport-additional-functions-for-SquidMath.patch
Patch1007: 0007-Adapt-to-older-gcc-cleanup.patch
Patch1008: squid-4.15-CVE-2023-46724.patch
Patch1009: squid-4.15-CVE-2023-46728.patch
Patch1010: squid-4.15-CVE-2023-49285.patch
Patch1011: squid-4.15-CVE-2023-49286.patch
# https://github.com/squid-cache/squid/commit/b6c01a2031944125b8cc6974f598c2cd66f0cee4
Patch1012: Bug-5318-fetch-pdreceivedData-data.patch
Requires: bash >= 2.0
Requires(pre): shadow-utils
@@ -90,6 +85,8 @@ BuildRequires: openssl-devel
BuildRequires: krb5-devel
# time_quota requires DB
BuildRequires: libdb-devel
# ESI support requires Expat & libxml2
BuildRequires: expat-devel libxml2-devel
# TPROXY requires libcap, and also increases security somewhat
BuildRequires: libcap-devel
# eCAP support
@@ -141,16 +138,21 @@ lookup program (dnsserver), a program for retrieving FTP data
%patch302 -p1 -b .CVE-2022-41318
%patch303 -p1 -b .CVE-2023-46846
%patch304 -p1 -b .CVE-2023-46847
%patch305 -p1 -b .CVE-2023-5824
%patch306 -p1 -b .CVE-2023-46728
%patch307 -p1 -b .CVE-2023-46724
%patch308 -p1 -b .CVE-2023-49285
%patch309 -p1 -b .CVE-2023-49286
%patch310 -p1 -b .CVE-2023-50269
%patch311 -p1 -b .CVE-2024-25617
%patch312 -p1 -b .CVE-2024-25111
%patch313 -p1 -b .ignore-wsp-chunk-sz
%patch314 -p1 -b .CVE-2024-23638
# Oracle patches
%patch1001 -p1
%patch1002 -p1
%patch1003 -p1
%patch1004 -p1
%patch1005 -p1
%patch1006 -p1
%patch1007 -p1
%patch1008 -p1
%patch1009 -p1
%patch1010 -p1
%patch1011 -p1
%patch1012 -p1
# https://bugzilla.redhat.com/show_bug.cgi?id=1679526
# Patch in the vendor documentation and used different location for documentation
@@ -195,7 +197,7 @@ autoconf
--enable-storeio="aufs,diskd,ufs,rock" \
--enable-diskio \
--enable-wccpv2 \
--disable-esi \
--enable-esi \
--enable-ecap \
--with-aio \
--with-default-user="squid" \
@@ -367,45 +369,24 @@ fi
%changelog
* Wed Nov 13 2024 Luboš Uhliarik <luhliari@redhat.com> - 7:4.15-10.3
- Resolves: RHEL-22593 - CVE-2024-23638 squid:4/squid: vulnerable to
a Denial of Service attack against Cache Manager error responses
* Wed Mar 06 2024 Eduard Abdullin <eabdullin@almalinux.org> - 7:4.15-7.5.alma.1
- Fix Bug 5318: peer_digest.cc:399: "fetch->pd &&
receivedData.data" (#1584)
* Thu Nov 07 2024 Luboš Uhliarik <luhliari@redhat.com> - 7:4.15-10.2
- Disable ESI support
- Resolves: RHEL-65075 - CVE-2024-45802 squid:4/squid: Denial of Service
processing ESI response content
* Wed Jan 03 2024 Tianyue Lan <tianyue.lan@oracle.com> - 7:4.15-7.5
- Fix squid: Denial of Service in SSL Certificate validation (CVE-2023-46724)
- Fix squid: NULL pointer dereference in the gopher protocol code (CVE-2023-46728)
- Fix squid: Buffer over-read in the HTTP Message processing feature (CVE-2023-49285)
- Fix squid: Incorrect Check of Function Return Value In Helper Process management(CVE-2023-49286)
* Mon Oct 14 2024 Luboš Uhliarik <luhliari@redhat.com> - 7:4.15-10.1
- Resolves: RHEL-56024 - (Regression) Transfer-encoding:chunked data is not sent
to the client in its complementary
* Sun Dec 09 2023 Alex Burmashev <alexander.burmashev@oracle.com> - 7:4.15-7.3
- Fix squid: DoS against HTTP and HTTPS (CVE-2023-5824)
* Tue Mar 19 2024 Luboš Uhliarik <luhliari@redhat.com> - 7:4.15-10
- Resolves: RHEL-28529 - squid:4/squid: Denial of Service in HTTP Chunked
Decoding (CVE-2024-25111)
- Resolves: RHEL-26088 - squid:4/squid: denial of service in HTTP header
parser (CVE-2024-25617)
* Fri Feb 02 2024 Luboš Uhliarik <luhliari@redhat.com> - 7:4.15-9
- Resolves: RHEL-19552 - squid:4/squid: denial of service in HTTP request
parsing (CVE-2023-50269)
* Fri Feb 02 2024 Luboš Uhliarik <luhliari@redhat.com> - 7:4.15-8
- Resolves: RHEL-18351 - squid:4/squid: Buffer over-read in the HTTP Message
processing feature (CVE-2023-49285)
- Resolves: RHEL-18342 - squid:4/squid: Incorrect Check of Function Return
Value In Helper Process management (CVE-2023-49286)
- Resolves: RHEL-18230 - squid:4/squid: Denial of Service in SSL Certificate
validation (CVE-2023-46724)
- Resolves: RHEL-15911 - squid:4/squid: NULL pointer dereference in the gopher
protocol code (CVE-2023-46728)
- Resolves: RHEL-18251 - squid crashes in assertion when a parent peer exists
- Resolves: RHEL-14794 - squid: squid multiple issues in HTTP response caching
(CVE-2023-5824)
- Resolves: RHEL-14803 - squid: squid: Denial of Service in HTTP Digest
Authentication (CVE-2023-46847)
- Resolves: RHEL-14777 - squid: squid: Request/Response smuggling in HTTP/1.1
and ICAP (CVE-2023-46846)
* Mon Oct 30 2023 Luboš Uhliarik <luhliari@redhat.com> - 7:4.15-7.1
- Resolves: RHEL-14801 - squid: squid: Denial of Service in HTTP Digest
Authentication
- Resolves: RHEL-14776 - squid: squid: Request/Response smuggling in HTTP/1.1
and ICAP
* Wed Aug 16 2023 Luboš Uhliarik <luhliari@redhat.com> - 7:4.15-7
- Resolves: #2076717 - Crash with half_closed_client on