6404. [security] Remove SIG(0) support from named as a countermeasure
for CVE-2024-1975. [GL #4480]
Upstream commit 225f2861920b8f8d42a0ea6c34dd1faa93aa8726
Resolves: RHEL-49919
Non-threaded builds are used by us for the dhcp package. Make them
work again.
Related to [GL #4424] and [GL #4459].
; Resolves: CVE-2023-50387 CVE-2023-50868
Resolves: RHEL-25681 RHEL-25649
dns__cacherbt_expireheader can unlink and free header_prev underneath
us. Instead, use ISC_LIST_TAIL after each call to
dns__cacherbt_expireheader to get the next header to be processed.
(cherry picked from commit 7ce2e86024f022decb2678963538515ca39ab4ab)
(cherry picked from commit f88f21b7d890eb80097f4bd434fedb29c2f9ff63)
This is related to the CVE-2023-2828 fix and addresses a small part of it.
; Related: CVE-2023-4408
Related: RHEL-25691
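A minimal sketch of the re-fetch pattern described in the
dns__cacherbt_expireheader note above. This is not the actual cache code:
the toy list, entry_t and expire_one() are illustrative stand-ins; only
the idea of re-reading the tail (ISC_LIST_TAIL in the real code) after
every potentially freeing call is the point.

    /*
     * expire_one() stands in for dns__cacherbt_expireheader, which may
     * unlink and free entries other than the one passed in, so the caller
     * must not hold a saved "prev" pointer across the call.
     */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct entry {
            int id;
            struct entry *prev, *next;
    } entry_t;

    static entry_t *tail = NULL;

    static void expire_one(entry_t *e) {
            if (e->prev != NULL)
                    e->prev->next = e->next;
            if (e->next != NULL)
                    e->next->prev = e->prev;
            if (e == tail)
                    tail = e->prev;
            free(e);
    }

    int main(void) {
            for (int i = 0; i < 3; i++) {   /* build a small list */
                    entry_t *e = calloc(1, sizeof(*e));
                    e->id = i;
                    e->prev = tail;
                    if (tail != NULL)
                            tail->next = e;
                    tail = e;
            }
            while (tail != NULL) {
                    entry_t *victim = tail; /* re-fetched every iteration */
                    printf("expiring %d\n", victim->id);
                    expire_one(victim);
            }
            return 0;
    }
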
Be more strict when encountering DNSSEC validation failures - fail on
the first failure. This will break domains that have DNSSEC signing
keys with duplicate key ids, but this is something that's much easier
to fix on the authoritative side, so we are just going to be strict
on the resolver side where it is causing performance problems.
(cherry picked from commit 8b7ecba9885e163c07c2dd3e1ceab79b2ba89e34)
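Roughly the shape of the stricter behaviour, as a hedged sketch:
dnskey_t, verify_with_key() and the calling convention are made up for
illustration; only the early return reflects the change described above.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
            uint16_t id;
            uint8_t alg;
    } dnskey_t;

    /* Stand-in for the real signature check. */
    static bool verify_with_key(const dnskey_t *key) {
            (void)key;
            return false;
    }

    static bool
    validate_strict(const dnskey_t *keys, size_t nkeys,
                    uint16_t sig_id, uint8_t sig_alg) {
            for (size_t i = 0; i < nkeys; i++) {
                    if (keys[i].id != sig_id || keys[i].alg != sig_alg) {
                            continue;       /* not a candidate key */
                    }
                    /* Previously another key with a colliding id could be
                     * tried; now the first failure fails the validation. */
                    return verify_with_key(&keys[i]);
            }
            return false;                   /* no candidate key found */
    }
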
Add normal and slow task queues
Split the task manager queues into normal and slow task queues, so we
can move the tasks that block processing for a long time (like DNSSEC
validation) into the slow queue, which doesn't block fast
operations (like responding from the cache). This mitigates the whole
class of KeyTrap-like issues.
(cherry picked from commit db083a21726300916fa0b9fd8a433a796fedf636)
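A rough illustration of the split, not the real isc_task/isc_taskmgr
code; the queue type and the two post_*() helpers are invented for the
sketch.

    /* Two independent work queues, so long-running jobs (DNSSEC
     * validation) never sit in front of cheap ones (cache responses). */
    #include <stddef.h>

    typedef enum { QUEUE_NORMAL, QUEUE_SLOW } queue_id_t;

    typedef struct task {
            void (*run)(void *arg);
            void *arg;
            struct task *next;
    } task_t;

    typedef struct {
            task_t *head, *tail;
    } queue_t;

    static queue_t queues[2];

    static void enqueue(queue_id_t q, task_t *t) {
            t->next = NULL;
            if (queues[q].tail != NULL) {
                    queues[q].tail->next = t;
            } else {
                    queues[q].head = t;
            }
            queues[q].tail = t;
    }

    /* A burst of validations cannot starve cache answers because the two
     * kinds of work never share a queue. */
    void post_cache_response(task_t *t) { enqueue(QUEUE_NORMAL, t); }
    void post_validation(task_t *t)     { enqueue(QUEUE_SLOW, t); }
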
Don't iterate from start every time we select new signing key
Improve the selection of the new signing key by remembering where
we stopped the iteration and continuing from that place instead
of iterating from the start over and over again each time.
(cherry picked from commit 75faeefcab47e4f1e12b358525190b4be90f97de)
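A hedged sketch of the cursor idea with made-up types (keynode_t,
keylist_t); the actual data structures in named differ.

    #include <stddef.h>

    typedef struct keynode {
            struct keynode *next;
    } keynode_t;

    typedef struct {
            keynode_t *head;
            keynode_t *cursor;      /* where the previous search stopped */
    } keylist_t;

    keynode_t *
    next_signing_key(keylist_t *list) {
            keynode_t *node =
                    (list->cursor != NULL) ? list->cursor : list->head;
            if (node == NULL) {
                    return NULL;            /* nothing (left) to hand out */
            }
            list->cursor = node->next;      /* remember where we stopped */
            return node;                    /* the next call resumes here */
    }
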
Optimize selecting the signing key
Don't parse the crypto data before parsing and matching the id and the
algorithm.
(cherry picked from commit b38552cca7200a72658e482f8407f57516efc5db)
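Illustrative only: the struct and helper names below are invented. The
point is the ordering, doing the cheap id/algorithm comparison before
the expensive parse of the key material.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
            uint16_t id;
            uint8_t algorithm;
            const unsigned char *keydata;
            size_t keylen;
    } rawkey_t;

    /* Stand-in for the expensive cryptographic parse. */
    static bool
    parse_key_material(const rawkey_t *raw) {
            (void)raw;
            return true;
    }

    bool
    try_key(const rawkey_t *raw, uint16_t sig_id, uint8_t sig_alg) {
            if (raw->id != sig_id || raw->algorithm != sig_alg) {
                    return false;   /* rejected without touching the crypto */
            }
            return parse_key_material(raw); /* only now pay the parsing cost */
    }
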
6322. [security] Specific DNS answers could cause a denial-of-service
condition due to DNS validation taking a long time.
(CVE-2023-50387) [GL #4424]
The same code change also addresses another problem:
preparing NSEC3 closest encloser proofs could exhaust
available CPU resources. (CVE-2023-50868) [GL #4459]
; Resolves: CVE-2023-50387 CVE-2023-50868
Resolves: RHEL-25681 RHEL-25649
When parsing messages use a hashtable instead of a linear search to reduce
the amount of work done in findname when there's more than one name in
the section.
There are two hashtables:
1) hashtable for owner names - that's constructed for each section when we
hit the second name in the section and destroyed right after parsing
that section;
2) per-name hashtable - for each name in the section, we construct a new
hashtable for that name if there is more than one rdataset for that
particular name.
(cherry picked from commit b8a96317544c7b310b4f74360825a87b6402ddc2)
(cherry picked from commit 0ceed03ebea395da1a39ad1cb39205ce75a3f768)
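A toy version of the owner-name lookup to illustrate the idea; the real
code uses the isc_ht API with dns_name_t keys, so everything below
(bucket count, hash, types) is a stand-in.

    #include <stdint.h>
    #include <strings.h>

    #define NBUCKETS 256

    typedef struct nament {
            char name[256];
            struct nament *next;            /* chain for colliding buckets */
    } nament_t;

    static nament_t *buckets[NBUCKETS];

    static uint32_t
    hash_name(const char *name) {
            uint32_t h = 2166136261u;       /* FNV-1a over a case-folded key */
            for (; *name != '\0'; name++) {
                    h ^= (uint32_t)((unsigned char)*name | 0x20);
                    h *= 16777619u;
            }
            return h;
    }

    static void
    add_name(nament_t *e) {
            uint32_t b = hash_name(e->name) % NBUCKETS;
            e->next = buckets[b];
            buckets[b] = e;
    }

    static nament_t *
    find_name(const char *name) {
            /* Average O(1) instead of scanning every name seen so far. */
            for (nament_t *e = buckets[hash_name(name) % NBUCKETS];
                 e != NULL; e = e->next) {
                    if (strcasecmp(e->name, name) == 0) {
                            return e;       /* already in this section */
                    }
            }
            return NULL;
    }
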
Backport isc_ht API changes from BIND 9.18
To prevent allocating a large hashtable in dns_message, we need to
backport the isc_ht API improvements from BIND 9.18+, which include
support for case-insensitive keys and incremental rehashing of the
hashtables.
(cherry picked from commit a4baf324159ec3764195c949cb56c861d9f173ff)
(cherry picked from commit 2fc28056b33018f7f78b625409eb44c32d5c9b11)
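Roughly how the backported API is meant to be used. This is a sketch
from memory of the BIND 9.18 interface; the exact signatures, the
ISC_HT_CASE_INSENSITIVE flag and the error handling are assumptions and
may differ from what was actually backported.

    #include <isc/ht.h>
    #include <isc/mem.h>
    #include <isc/result.h>

    static isc_result_t
    example(isc_mem_t *mctx) {
            isc_ht_t *ht = NULL;
            unsigned char key[] = "Example.COM.";
            int value = 42;
            void *found = NULL;
            isc_result_t result;

            /* Start tiny (2^1 buckets); incremental rehashing grows the
             * table on demand, so dns_message no longer pre-allocates a
             * large one.  Signature assumed from BIND 9.18. */
            isc_ht_init(&ht, mctx, 1, ISC_HT_CASE_INSENSITIVE);

            result = isc_ht_add(ht, key, sizeof(key) - 1, &value);
            if (result == ISC_R_SUCCESS) {
                    /* Case-insensitive keys: "example.com." finds it. */
                    result = isc_ht_find(ht,
                                         (const unsigned char *)"example.com.",
                                         sizeof(key) - 1, &found);
            }

            isc_ht_destroy(&ht);
            return result;
    }
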
Fix a message parsing regression
The fix for CVE-2023-4408 introduced a regression in the message
parser, which could cause a crash if duplicate rdatasets were found
in the question section. This commit ensures that rdatasets are
correctly disassociated and freed when this occurs.
(cherry picked from commit 4c19d35614f8cd80d8748156a5bad361e19abc28)
(cherry picked from commit 98ab8c81cc7739dc220aa3f50efa3061774de8ba)
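The cleanup boils down to a pattern like the following hedged sketch;
dns_rdataset_isassociated()/dns_rdataset_disassociate() are real API
calls, while discard_duplicate() and free_rdataset() are stand-ins for
whatever the parser actually does with the structure.

    #include <dns/rdataset.h>

    /* Stand-in for returning the structure to the message's pool. */
    void free_rdataset(dns_rdataset_t *rdataset);

    static void
    discard_duplicate(dns_rdataset_t *rdataset) {
            /*
             * A duplicate found in the question section must be
             * disassociated before it is thrown away; otherwise message
             * teardown later trips over a still-associated rdataset.
             */
            if (dns_rdataset_isassociated(rdataset)) {
                    dns_rdataset_disassociate(rdataset);
            }
            free_rdataset(rdataset);
    }
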
Fix another message parsing regression
The fix for CVE-2023-4408 introduced a regression in the message
parser, which could cause a crash if an rdata type that can only
occur in the question was found in another section.
(cherry picked from commit 510f1de8a6add516b842a55750366944701d3d9a)
(cherry picked from commit bbbcaf8b2ec17d2cad28841ea86078168072ae2f)
Apply various tweaks specific to BIND 9.11
(cherry picked from commit c6026cbbaa9d297910af350fa6cc45788cc9f397)
Fix case insensitive matching in isc_ht hash table implementation
The case insensitive matching in isc_ht was effectively broken: only
the hash value computation was case insensitive, while the key
comparison remained case sensitive.
(cherry picked from commit c462d65b2fd0db362947db4a18a87df78f8d8e5d)
(cherry picked from commit 418b3793598a1e1c7e391bb317866d405cd03501)
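In other words, both steps must fold case the same way. A toy comparison
routine consistent with a case-folding hash might look like this
(illustrative only, not the actual code in lib/isc/ht.c):

    #include <ctype.h>
    #include <stdbool.h>
    #include <stddef.h>

    static bool
    key_equal_ci(const unsigned char *a, const unsigned char *b, size_t len) {
            for (size_t i = 0; i < len; i++) {
                    if (tolower(a[i]) != tolower(b[i])) {
                            return false;
                    }
            }
            /* Must agree with the case folding used to compute the hash,
             * otherwise equal keys land in the same bucket but never match. */
            return true;
    }
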
Add a system test for mixed-case data for the same owner
We were missing a test where a single owner name would have multiple
types with a different case. The generated RRSIGs and NSEC records will
then have a different case than the signed records, and the message parser
has to cope with that and treat everything as the same owner.
(cherry picked from commit c8b623d87f0fb8f9cba8dea5c6a4b600953895e7)
(cherry picked from commit 1f9bbe1fe34b7a2c9765431e8a86b460afc9b323)
6315. [security] Speed up parsing of DNS messages with many different
names. (CVE-2023-4408) [GL #4234]
; Resolves: CVE-2023-4408
Resolves: RHEL-25691
By default, set max-stale-ttl to 0 unless stale-answer-enable is yes. This
was enabled by mistake when backporting the fix for CVE-2023-2828 and
causes increased cache usage on servers that do not want to serve stale
records. Fix that by setting smart defaults based on whether stale answers
are enabled, with manual tuning still possible.
Resolves: RHEL-11785
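Illustrative named.conf fragments for the two resulting behaviours (two
alternative configurations; the values shown are examples, not
recommendations):

    // With stale answers disabled (the new default behaviour):
    options {
            stale-answer-enable no;
            // max-stale-ttl now defaults to 0, nothing stale is kept
    };

    // With stale answers enabled, manual tuning remains possible:
    options {
            stale-answer-enable yes;
            max-stale-ttl 3600;     // example value, tune as needed
    };
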
6190. [security] Improve the overmem cleaning process to prevent the
cache going over the configured limit. (CVE-2023-2828)
[GL #4055]
Resolves: CVE-2023-2828
Verify that updates are refused when the client is disallowed by
allow-query, and that update forwarding is refused when the client is
disallowed by update-forwarding.
Verify that "too many DNS UPDATEs" appears in the log file when too
many simultaneous updates are being processed.
Related: CVE-2022-3094
6064. [security] An UPDATE message flood could cause named to exhaust all
available memory. This flaw was addressed by adding a
new "update-quota" statement that controls the number of
simultaneous UPDATE messages that can be processed or
forwarded. The default is 100. A stats counter has been
added to record events when the update quota is
exceeded, and the XML and JSON statistics version
numbers have been updated. (CVE-2022-3094) [GL #3523]
Resolves: CVE-2022-3094
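For reference, an illustrative named.conf fragment for the new statement
(100 is the documented default, shown here only as an example):

    options {
            // Limit on simultaneous UPDATE messages processed or forwarded.
            update-quota 100;
    };
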