Update to libarchive 3.4.0

* New upstream release. Adds RAR5 and ZIPX support (readonly),
  64-bit ar, improved extraction and file-attribute support.
* Switch to HTTPS URLs
* Drop upstreamed security patches
* Add patch (submitted upstream) to fix zstd test when built with
  different libzstd releases
* Remove BuildRequires: on lzo-devel, as the release notes for 3.3.x
  included a notice regarding licensing issues when linking with liblzo.
  (See https://github.com/libarchive/libarchive/releases/tag/v3.3.0)
FeRD (Frank Dana) 2019-06-15 06:30:15 -04:00
parent 2f97437d22
commit 2edf2a9b34
9 changed files with 130 additions and 548 deletions

@@ -1,58 +0,0 @@
From 65a23f5dbee4497064e9bb467f81138a62b0dae1 Mon Sep 17 00:00:00 2001
From: Daniel Axtens <dja@axtens.net>
Date: Tue, 1 Jan 2019 16:01:40 +1100
Subject: [PATCH 2/2] 7zip: fix crash when parsing certain archives
Fuzzing with CRCs disabled revealed that a call to get_uncompressed_data()
would sometimes fail to return at least 'minimum' bytes. This can cause
the crc32() invocation in header_bytes to read off into invalid memory.
A specially crafted archive can use this to cause a crash.
An ASAN trace is below, but ASAN is not required - an uninstrumented
binary will also crash.
==7719==ERROR: AddressSanitizer: SEGV on unknown address 0x631000040000 (pc 0x7fbdb3b3ec1d bp 0x7ffe77a51310 sp 0x7ffe77a51150 T0)
==7719==The signal is caused by a READ memory access.
#0 0x7fbdb3b3ec1c in crc32_z (/lib/x86_64-linux-gnu/libz.so.1+0x2c1c)
#1 0x84f5eb in header_bytes (/tmp/libarchive/bsdtar+0x84f5eb)
#2 0x856156 in read_Header (/tmp/libarchive/bsdtar+0x856156)
#3 0x84e134 in slurp_central_directory (/tmp/libarchive/bsdtar+0x84e134)
#4 0x849690 in archive_read_format_7zip_read_header (/tmp/libarchive/bsdtar+0x849690)
#5 0x5713b7 in _archive_read_next_header2 (/tmp/libarchive/bsdtar+0x5713b7)
#6 0x570e63 in _archive_read_next_header (/tmp/libarchive/bsdtar+0x570e63)
#7 0x6f08bd in archive_read_next_header (/tmp/libarchive/bsdtar+0x6f08bd)
#8 0x52373f in read_archive (/tmp/libarchive/bsdtar+0x52373f)
#9 0x5257be in tar_mode_x (/tmp/libarchive/bsdtar+0x5257be)
#10 0x51daeb in main (/tmp/libarchive/bsdtar+0x51daeb)
#11 0x7fbdb27cab96 in __libc_start_main /build/glibc-OTsEL5/glibc-2.27/csu/../csu/libc-start.c:310
#12 0x41dd09 in _start (/tmp/libarchive/bsdtar+0x41dd09)
This was primarily done with afl and FairFuzz. Some early corpus entries
may have been generated by qsym.
---
libarchive/archive_read_support_format_7zip.c | 8 +-------
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/libarchive/archive_read_support_format_7zip.c b/libarchive/archive_read_support_format_7zip.c
index bccbf896..b6d1505d 100644
--- a/libarchive/archive_read_support_format_7zip.c
+++ b/libarchive/archive_read_support_format_7zip.c
@@ -2964,13 +2964,7 @@ get_uncompressed_data(struct archive_read *a, const void **buff, size_t size,
if (zip->codec == _7Z_COPY && zip->codec2 == (unsigned long)-1) {
/* Copy mode. */
- /*
- * Note: '1' here is a performance optimization.
- * Recall that the decompression layer returns a count of
- * available bytes; asking for more than that forces the
- * decompressor to combine reads by copying data.
- */
- *buff = __archive_read_ahead(a, 1, &bytes_avail);
+ *buff = __archive_read_ahead(a, minimum, &bytes_avail);
if (bytes_avail <= 0) {
archive_set_error(&a->archive,
ARCHIVE_ERRNO_FILE_FORMAT,
--
2.20.1
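
A minimal standalone C sketch of the invariant this patch restores (not libarchive code; fill() and MINIMUM are invented stand-ins for the decompression layer and the 'minimum' argument): if the layer that fills a buffer may return fewer bytes than the caller needs, a later checksum over a fixed byte count walks past the valid data, so the reader must demand the minimum up front or fail.

/* Hedged sketch, not libarchive code. */
#include <stdio.h>
#include <string.h>

#define MINIMUM 32

/* Stand-in for a decompression layer that may return a short read. */
static size_t fill(unsigned char *dst, size_t cap)
{
    const char truncated[] = "only a few bytes";   /* simulated short input */
    size_t n = sizeof(truncated) - 1;

    if (n > cap)
        n = cap;
    memcpy(dst, truncated, n);
    return n;
}

int main(void)
{
    unsigned char buf[MINIMUM];
    size_t avail = fill(buf, sizeof(buf));

    if (avail < MINIMUM) {
        /* Equivalent in spirit to passing 'minimum' to __archive_read_ahead():
         * refuse to checksum bytes that were never read. */
        fprintf(stderr, "truncated archive data (%zu < %d bytes)\n",
            avail, MINIMUM);
        return 1;
    }
    /* ... only here is it safe to checksum buf[0..MINIMUM) ... */
    return 0;
}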

@@ -1,59 +0,0 @@
From 8312eaa576014cd9b965012af51bc1f967b12423 Mon Sep 17 00:00:00 2001
From: Daniel Axtens <dja@axtens.net>
Date: Tue, 1 Jan 2019 17:10:49 +1100
Subject: [PATCH 1/2] iso9660: Fail when expected Rockridge extension is
missing
A corrupted or malicious ISO9660 image can cause read_CE() to loop
forever.
read_CE() calls parse_rockridge(), expecting a Rockridge extension
to be read. However, parse_rockridge() is structured as a while
loop starting with a sanity check, and if the sanity check fails
before the loop has run, the function returns ARCHIVE_OK without
advancing the position in the file. This causes read_CE() to retry
indefinitely.
Make parse_rockridge() return ARCHIVE_WARN if it didn't read an
extension. As someone with no real knowledge of the format, this
seems more apt than ARCHIVE_FATAL, but both the call-sites escalate
it to a fatal error immediately anyway.
Found with a combination of AFL, afl-rb (FairFuzz) and qsym.
---
libarchive/archive_read_support_format_iso9660.c | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/libarchive/archive_read_support_format_iso9660.c b/libarchive/archive_read_support_format_iso9660.c
index 28acfefb..bad8f1df 100644
--- a/libarchive/archive_read_support_format_iso9660.c
+++ b/libarchive/archive_read_support_format_iso9660.c
@@ -2102,6 +2102,7 @@ parse_rockridge(struct archive_read *a, struct file_info *file,
const unsigned char *p, const unsigned char *end)
{
struct iso9660 *iso9660;
+ int entry_seen = 0;
iso9660 = (struct iso9660 *)(a->format->data);
@@ -2257,8 +2258,16 @@ parse_rockridge(struct archive_read *a, struct file_info *file,
}
p += p[2];
+ entry_seen = 1;
+ }
+
+ if (entry_seen)
+ return (ARCHIVE_OK);
+ else {
+ archive_set_error(&a->archive, ARCHIVE_ERRNO_FILE_FORMAT,
+ "Tried to parse Rockridge extensions, but none found");
+ return (ARCHIVE_WARN);
}
- return (ARCHIVE_OK);
}
static int
--
2.20.1
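
A rough C sketch of the progress guarantee this patch adds (not libarchive code; parse_block() is a made-up stand-in for parse_rockridge()): a parser called in a loop must either consume input or report an error, otherwise malformed input lets the caller spin forever.

/* Hedged sketch, not libarchive code. */
#include <stdio.h>

enum { OK = 0, WARN = -1 };

/* Parses entries starting at *pos; returns WARN if it could not parse any. */
static int parse_block(const unsigned char *buf, size_t len, size_t *pos)
{
    int entry_seen = 0;

    while (*pos + 2 <= len && buf[*pos] != 0) {   /* sanity check first */
        *pos += buf[*pos];                        /* consume one entry */
        entry_seen = 1;
    }
    return entry_seen ? OK : WARN;                /* never "OK but stuck" */
}

int main(void)
{
    /* Malformed block whose first length byte is 0: nothing to parse. */
    const unsigned char block[] = { 0, 0, 0, 0 };
    size_t pos = 0;

    while (pos < sizeof(block)) {
        if (parse_block(block, sizeof(block), &pos) != OK) {
            fprintf(stderr, "malformed block, bailing out\n");
            break;                                /* instead of looping forever */
        }
    }
    return 0;
}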

@@ -1,34 +0,0 @@
From c7746e62d09b94ddcf98b36fa3ddcfdb20c4b40b Mon Sep 17 00:00:00 2001
From: Daniel Axtens <dja@axtens.net>
Date: Tue, 20 Nov 2018 17:56:29 +1100
Subject: [PATCH] Avoid a double-free when a window size of 0 is specified
new_size can be 0 with a malicious or corrupted RAR archive.
realloc(area, 0) is equivalent to free(area), so the region would
be free()d here and the free()d again in the cleanup function.
Found with a setup running AFL, afl-rb, and qsym.
---
libarchive/archive_read_support_format_rar.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/libarchive/archive_read_support_format_rar.c b/libarchive/archive_read_support_format_rar.c
index 23452222..6f419c27 100644
--- a/libarchive/archive_read_support_format_rar.c
+++ b/libarchive/archive_read_support_format_rar.c
@@ -2300,6 +2300,11 @@ parse_codes(struct archive_read *a)
new_size = DICTIONARY_MAX_SIZE;
else
new_size = rar_fls((unsigned int)rar->unp_size) << 1;
+ if (new_size == 0) {
+ archive_set_error(&a->archive, ARCHIVE_ERRNO_FILE_FORMAT,
+ "Zero window size is invalid.");
+ return (ARCHIVE_FATAL);
+ }
new_window = realloc(rar->lzss.window, new_size);
if (new_window == NULL) {
archive_set_error(&a->archive, ENOMEM,
--
2.17.1
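
A small illustrative C program (not from the patch; struct lzss and resize_window() are invented names) showing the double-free the guard prevents: with glibc, realloc(ptr, 0) frees ptr and returns NULL, so treating a zero window size as an ordinary resize leaves a dangling pointer for a later cleanup to free again.

/* Hedged sketch, not libarchive code. */
#include <stdio.h>
#include <stdlib.h>

struct lzss { unsigned char *window; };

static int resize_window(struct lzss *l, size_t new_size)
{
    unsigned char *new_window;

    if (new_size == 0)          /* the patch rejects this case up front */
        return -1;

    new_window = realloc(l->window, new_size);
    if (new_window == NULL)
        return -1;              /* old window still owned by l->window */
    l->window = new_window;
    return 0;
}

int main(void)
{
    struct lzss l;

    l.window = malloc(64);
    if (l.window == NULL)
        return 1;

    /* With the guard, a zero size is rejected and l.window stays valid.
     * Without it, realloc(l.window, 0) could free the block while l.window
     * kept pointing at it, making the free() below a double free. */
    if (resize_window(&l, 0) != 0)
        fprintf(stderr, "rejected zero window size\n");

    free(l.window);             /* single, well-defined free */
    return 0;
}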

@@ -1,75 +0,0 @@
From 22700942fec895b2d3e5ed6741756deb8666eaae Mon Sep 17 00:00:00 2001
From: Daniel Axtens <dja@axtens.net>
Date: Tue, 4 Dec 2018 00:55:22 +1100
Subject: [PATCH] rar: file split across multi-part archives must match
Fuzzing uncovered some UAF and memory overrun bugs where a file in a
single-file archive reported that it was split across multiple
volumes. This was caused by ppmd7 operations calling
rar_br_fillup. This would invoke rar_read_ahead, which would in some
situations invoke archive_read_format_rar_read_header. That would
check the new file name against the old file name, and if they didn't
match up it would free the ppmd7 buffer and allocate a new
one. However, because the ppmd7 decoder wasn't actually done with the
buffer, it would continue to use the freed buffer. Both reads and
writes to the freed region can be observed.
This is quite tricky to solve: once the buffer has been freed it is
too late, as the ppmd7 decoder functions almost universally assume
success - there's no way for ppmd_read to signal error, nor are there
good ways for functions like Range_Normalise to propagate them. So we
can't detect after the fact that we're in an invalid state - e.g. by
checking rar->cursor, we have to prevent ourselves from ever ending up
there. So, when we are in the dangerous part of rar_read_ahead that
assumes a valid split, we set a flag forcing read_header to either go
down the path for split files or bail. This means that the ppmd7
decoder keeps a valid buffer and just runs out of data.
Found with a combination of AFL, afl-rb and qsym.
---
libarchive/archive_read_support_format_rar.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/libarchive/archive_read_support_format_rar.c b/libarchive/archive_read_support_format_rar.c
index 6f419c27..a8cc5c94 100644
--- a/libarchive/archive_read_support_format_rar.c
+++ b/libarchive/archive_read_support_format_rar.c
@@ -258,6 +258,7 @@ struct rar
struct data_block_offsets *dbo;
unsigned int cursor;
unsigned int nodes;
+ char filename_must_match;
/* LZSS members */
struct huffman_code maincode;
@@ -1560,6 +1561,12 @@ read_header(struct archive_read *a, struct archive_entry *entry,
}
return ret;
}
+ else if (rar->filename_must_match)
+ {
+ archive_set_error(&a->archive, ARCHIVE_ERRNO_FILE_FORMAT,
+ "Mismatch of file parts split across multi-volume archive");
+ return (ARCHIVE_FATAL);
+ }
rar->filename_save = (char*)realloc(rar->filename_save,
filename_size + 1);
@@ -2933,12 +2940,14 @@ rar_read_ahead(struct archive_read *a, size_t min, ssize_t *avail)
else if (*avail == 0 && rar->main_flags & MHD_VOLUME &&
rar->file_flags & FHD_SPLIT_AFTER)
{
+ rar->filename_must_match = 1;
ret = archive_read_format_rar_read_header(a, a->entry);
if (ret == (ARCHIVE_EOF))
{
rar->has_endarc_header = 1;
ret = archive_read_format_rar_read_header(a, a->entry);
}
+ rar->filename_must_match = 0;
if (ret != (ARCHIVE_OK))
return NULL;
return rar_read_ahead(a, min, avail);
--
2.17.1
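
A hedged sketch of the guard-flag idea behind filename_must_match (invented names, not libarchive code): while a lower layer still holds a buffer, a nested re-entry into the header reader must not take the path that frees and reallocates it, so a flag set around the nested call makes that path bail instead.

/* Hedged sketch, not libarchive code. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct reader {
    char *buf;              /* buffer the decoder is still reading from */
    int   must_match;       /* guard: set while the buffer must survive */
};

/* Nested header read: normally replaces the buffer on a name mismatch. */
static int read_header(struct reader *r, int names_match)
{
    if (!names_match) {
        if (r->must_match)
            return -1;              /* bail: decoder still owns r->buf */
        free(r->buf);               /* safe only outside the guarded region */
        r->buf = malloc(64);
        return r->buf ? 0 : -1;
    }
    return 0;
}

/* Decoder refill path: needs r->buf to stay valid across the nested call. */
static int refill(struct reader *r)
{
    int ret;

    r->must_match = 1;              /* like rar->filename_must_match = 1 */
    ret = read_header(r, 0);        /* malformed input: names don't match */
    r->must_match = 0;

    if (ret != 0)
        return -1;                  /* error out; r->buf was never freed */
    memset(r->buf, 0, 64);          /* still safe to touch the buffer */
    return 0;
}

int main(void)
{
    struct reader r = { malloc(64), 0 };

    printf("refill: %d\n", refill(&r));   /* prints -1, no use-after-free */
    free(r.buf);
    return 0;
}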

@@ -1,46 +0,0 @@
From 3800cdbaf04b775b091b4b88a40933a2aa800a90 Mon Sep 17 00:00:00 2001
From: Daniel Axtens <dja@axtens.net>
Date: Tue, 4 Dec 2018 14:29:42 +1100
Subject: [PATCH] Skip 0-length ACL fields
Currently, it is possible to create an archive that crashes bsdtar
with a malformed ACL:
Program received signal SIGSEGV, Segmentation fault.
archive_acl_from_text_l (acl=<optimised out>, text=0x7e2e92 "", want_type=<optimised out>, sc=<optimised out>) at libarchive/archive_acl.c:1726
1726 switch (*s) {
(gdb) p n
$1 = 1
(gdb) p field[n]
$2 = {start = 0x0, end = 0x0}
Stop this by checking that the length is not zero before beginning
the switch statement.
I am pretty sure this is the bug mentioned in the qsym paper [1],
and I was able to replicate it with a qsym + AFL + afl-rb setup.
[1] https://www.usenix.org/conference/usenixsecurity18/presentation/yun
---
libarchive/archive_acl.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/libarchive/archive_acl.c b/libarchive/archive_acl.c
index fe42b9b8..cb23ad88 100644
--- a/libarchive/archive_acl.c
+++ b/libarchive/archive_acl.c
@@ -1711,6 +1711,11 @@ archive_acl_from_text_l(struct archive_acl *acl, const char *text,
st = field[n].start + 1;
len = field[n].end - field[n].start;
+ if (len == 0) {
+ ret = ARCHIVE_WARN;
+ continue;
+ }
+
switch (*s) {
case 'u':
if (len == 1 || (len == 4
--
2.17.1
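
An illustrative C fragment (not libarchive code; struct field and classify() are made-up names) of why the length check has to precede the switch: an empty field leaves nothing valid to dereference, so it must be skipped with a warning before dispatching on its first character. In the crash above the field pointers were even NULL; the sketch uses a zero-length field pointing at the terminator to stay well-defined.

/* Hedged sketch, not libarchive code. */
#include <stdio.h>
#include <string.h>

struct field { const char *start; const char *end; };

static int classify(const struct field *f)
{
    size_t len = (size_t)(f->end - f->start);

    if (len == 0)
        return -1;                 /* empty field: warn and skip, don't deref */

    switch (f->start[0]) {         /* safe: at least one character exists */
    case 'u': return 1;            /* e.g. a "user" entry */
    case 'g': return 2;            /* e.g. a "group" entry */
    default:  return 0;
    }
}

int main(void)
{
    const char *text = "user";
    struct field ok    = { text, text + strlen(text) };
    struct field empty = { text + strlen(text), text + strlen(text) };

    printf("%d %d\n", classify(&ok), classify(&empty));   /* "1 -1" */
    return 0;
}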

@@ -1,40 +0,0 @@
From 1df1642b5f9fa94a8443457bd1b7112362082f6b Mon Sep 17 00:00:00 2001
From: Daniel Axtens <dja@axtens.net>
Date: Tue, 4 Dec 2018 16:33:42 +1100
Subject: [PATCH] warc: consume data once read
The warc decoder only used read-ahead; it wouldn't actually consume
data that had previously been printed. This means that if you specify
an invalid content length, it will just reprint the same data over
and over and over again until it hits the desired length.
This means that a WARC resource with e.g.
Content-Length: 666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666665
but only a few hundred bytes of data, causes a quasi-infinite loop.
Consume data in subsequent calls to _warc_read.
Found with an AFL + afl-rb + qsym setup.
---
libarchive/archive_read_support_format_warc.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/libarchive/archive_read_support_format_warc.c b/libarchive/archive_read_support_format_warc.c
index e8753853..e8fc8428 100644
--- a/libarchive/archive_read_support_format_warc.c
+++ b/libarchive/archive_read_support_format_warc.c
@@ -386,6 +386,11 @@ _warc_read(struct archive_read *a, const void **buf, size_t *bsz, int64_t *off)
return (ARCHIVE_EOF);
}
+ if (w->unconsumed) {
+ __archive_read_consume(a, w->unconsumed);
+ w->unconsumed = 0U;
+ }
+
rab = __archive_read_ahead(a, 1U, &nrd);
if (nrd < 0) {
*bsz = 0U;
--
2.17.1
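
A standalone C sketch of the peek-then-consume discipline the patch enforces (stream, read_ahead and consume are invented stand-ins, not libarchive's internals): a reader that only peeks is handed the same window of bytes on every call, so a bogus declared length re-emits the same data indefinitely unless consumed bytes are actually advanced past.

/* Hedged sketch, not libarchive code. */
#include <stdio.h>

struct stream {
    const char *data;
    size_t      len;
    size_t      pos;        /* bytes already consumed */
};

/* Peek at up to 'want' bytes without consuming them. */
static const char *read_ahead(struct stream *s, size_t want, size_t *avail)
{
    *avail = s->len - s->pos;
    if (*avail > want)
        *avail = want;
    return s->data + s->pos;
}

static void consume(struct stream *s, size_t n)
{
    s->pos += n;
}

int main(void)
{
    struct stream s = { "payload", 7, 0 };
    size_t remaining = 1000;          /* bogus declared content length */
    size_t avail;

    while (remaining > 0) {
        const char *p = read_ahead(&s, remaining, &avail);
        if (avail == 0)
            break;                    /* real EOF: stop, don't loop forever */
        fwrite(p, 1, avail, stdout);
        consume(&s, avail);           /* without this, 'p' never advances */
        remaining -= avail;
    }
    putchar('\n');
    return 0;
}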

@@ -1,224 +0,0 @@
Backport three upstream patches related to covscan.
From d71b157c2f048f6c88bf9474743faabdc56f6015 Mon Sep 17 00:00:00 2001
From: Pavel Raiskup <praiskup@redhat.com>
Date: Fri, 23 Nov 2018 14:08:48 +0100
Subject: [PATCH] Fix use-after-free in delayed link processing (newc format)
During archiving, if some of the "delayed" hard link entries
happened to disappear from the filesystem (or become unreadable) for
some reason (most probably a race), the old code free()d the 'entry'
and continued with the loop; the next iteration, though, dereferenced
'entry' and crashed the archiver.
Per report from Coverity.
---
tar/write.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/tar/write.c b/tar/write.c
index e15cc06c..c6e9fccc 100644
--- a/tar/write.c
+++ b/tar/write.c
@@ -540,8 +540,7 @@ write_archive(struct archive *a, struct bsdtar *bsdtar)
lafe_warnc(archive_errno(disk),
"%s", archive_error_string(disk));
bsdtar->return_value = 1;
- archive_entry_free(entry);
- continue;
+ goto next_entry;
}
/*
@@ -559,13 +558,13 @@ write_archive(struct archive *a, struct bsdtar *bsdtar)
bsdtar->return_value = 1;
else
archive_read_close(disk);
- archive_entry_free(entry);
- continue;
+ goto next_entry;
}
write_file(bsdtar, a, entry);
- archive_entry_free(entry);
archive_read_close(disk);
+next_entry:
+ archive_entry_free(entry);
entry = NULL;
archive_entry_linkify(bsdtar->resolver, &entry, &sparse_entry);
}
--
2.19.1
From ecfd245fbd1b0000540c75da56ad25201d5393b4 Mon Sep 17 00:00:00 2001
From: Pavel Raiskup <praiskup@redhat.com>
Date: Fri, 23 Nov 2018 13:48:34 +0100
Subject: [PATCH] Fix a few obvious resource leaks and strcpy() misuses
Per Coverity report.
---
cpio/cpio.c | 4 +++-
libarchive/archive_acl.c | 8 ++++++--
libarchive/archive_write_set_format_iso9660.c | 4 ++--
libarchive/archive_write_set_format_mtree.c | 4 ++--
libarchive/archive_write_set_format_pax.c | 6 ++++--
libarchive/archive_write_set_format_xar.c | 8 +++++---
6 files changed, 22 insertions(+), 12 deletions(-)
diff --git a/cpio/cpio.c b/cpio/cpio.c
index 9dddf417..4fd394de 100644
--- a/cpio/cpio.c
+++ b/cpio/cpio.c
@@ -755,8 +755,10 @@ file_to_archive(struct cpio *cpio, const char *srcpath)
}
if (cpio->option_rename)
destpath = cpio_rename(destpath);
- if (destpath == NULL)
+ if (destpath == NULL) {
+ archive_entry_free(entry);
return (0);
+ }
archive_entry_copy_pathname(entry, destpath);
/*
diff --git a/libarchive/archive_acl.c b/libarchive/archive_acl.c
index 9941d2f6..6ce7ab66 100644
--- a/libarchive/archive_acl.c
+++ b/libarchive/archive_acl.c
@@ -753,8 +753,10 @@ archive_acl_to_text_w(struct archive_acl *acl, ssize_t *text_len, int flags,
append_entry_w(&wp, prefix, ap->type, ap->tag, flags,
wname, ap->permset, id);
count++;
- } else if (r < 0 && errno == ENOMEM)
+ } else if (r < 0 && errno == ENOMEM) {
+ free(ws);
return (NULL);
+ }
}
/* Add terminating character */
@@ -975,8 +977,10 @@ archive_acl_to_text_l(struct archive_acl *acl, ssize_t *text_len, int flags,
prefix = NULL;
r = archive_mstring_get_mbs_l(
&ap->name, &name, &len, sc);
- if (r != 0)
+ if (r != 0) {
+ free(s);
return (NULL);
+ }
if (count > 0)
*p++ = separator;
if (name == NULL ||
diff --git a/libarchive/archive_write_set_format_iso9660.c b/libarchive/archive_write_set_format_iso9660.c
index c0ca435d..badc88ba 100644
--- a/libarchive/archive_write_set_format_iso9660.c
+++ b/libarchive/archive_write_set_format_iso9660.c
@@ -4899,10 +4899,10 @@ isofile_gen_utility_names(struct archive_write *a, struct isofile *file)
if (p[0] == '/') {
if (p[1] == '/')
/* Convert '//' --> '/' */
- strcpy(p, p+1);
+ memmove(p, p+1, strlen(p+1) + 1);
else if (p[1] == '.' && p[2] == '/')
/* Convert '/./' --> '/' */
- strcpy(p, p+2);
+ memmove(p, p+2, strlen(p+2) + 1);
else if (p[1] == '.' && p[2] == '.' && p[3] == '/') {
/* Convert 'dir/dir1/../dir2/'
* --> 'dir/dir2/'
diff --git a/libarchive/archive_write_set_format_mtree.c b/libarchive/archive_write_set_format_mtree.c
index 493d4735..0f2431e6 100644
--- a/libarchive/archive_write_set_format_mtree.c
+++ b/libarchive/archive_write_set_format_mtree.c
@@ -1810,10 +1810,10 @@ mtree_entry_setup_filenames(struct archive_write *a, struct mtree_entry *file,
if (p[0] == '/') {
if (p[1] == '/')
/* Convert '//' --> '/' */
- strcpy(p, p+1);
+ memmove(p, p+1, strlen(p+1) + 1);
else if (p[1] == '.' && p[2] == '/')
/* Convert '/./' --> '/' */
- strcpy(p, p+2);
+ memmove(p, p+2, strlen(p+2) + 1);
else if (p[1] == '.' && p[2] == '.' && p[3] == '/') {
/* Convert 'dir/dir1/../dir2/'
* --> 'dir/dir2/'
diff --git a/libarchive/archive_write_set_format_pax.c b/libarchive/archive_write_set_format_pax.c
index 6f78c48b..5a4c45a1 100644
--- a/libarchive/archive_write_set_format_pax.c
+++ b/libarchive/archive_write_set_format_pax.c
@@ -522,11 +522,13 @@ add_pax_acl(struct archive_write *a,
ARCHIVE_ERRNO_FILE_FORMAT, "%s %s %s",
"Can't translate ", attr, " to UTF-8");
return(ARCHIVE_WARN);
- } else if (*p != '\0') {
+ }
+
+ if (*p != '\0') {
add_pax_attr(&(pax->pax_header),
attr, p);
- free(p);
}
+ free(p);
return(ARCHIVE_OK);
}
diff --git a/libarchive/archive_write_set_format_xar.c b/libarchive/archive_write_set_format_xar.c
index 495f0d44..36d4a615 100644
--- a/libarchive/archive_write_set_format_xar.c
+++ b/libarchive/archive_write_set_format_xar.c
@@ -2120,10 +2120,10 @@ file_gen_utility_names(struct archive_write *a, struct file *file)
if (p[0] == '/') {
if (p[1] == '/')
/* Convert '//' --> '/' */
- strcpy(p, p+1);
+ memmove(p, p+1, strlen(p+1) + 1);
else if (p[1] == '.' && p[2] == '/')
/* Convert '/./' --> '/' */
- strcpy(p, p+2);
+ memmove(p, p+2, strlen(p+2) + 1);
else if (p[1] == '.' && p[2] == '.' && p[3] == '/') {
/* Convert 'dir/dir1/../dir2/'
* --> 'dir/dir2/'
@@ -3169,8 +3169,10 @@ save_xattrs(struct archive_write *a, struct file *file)
checksum_update(&(xar->a_sumwrk),
xar->wbuff, size);
if (write_to_temp(a, xar->wbuff, size)
- != ARCHIVE_OK)
+ != ARCHIVE_OK) {
+ free(heap);
return (ARCHIVE_FATAL);
+ }
if (r == ARCHIVE_OK) {
xar->stream.next_out = xar->wbuff;
xar->stream.avail_out = sizeof(xar->wbuff);
--
2.19.1
From ae1d1eeee425238b71ad1331309133eb9d3b88ee Mon Sep 17 00:00:00 2001
From: Martin Matuska <martin@matuska.org>
Date: Sat, 24 Nov 2018 01:31:40 +0100
Subject: [PATCH] tar/write.c: call missing archive_read_close() in
write_archive()
---
tar/write.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/tar/write.c b/tar/write.c
index c6e9fccc..09c44a3e 100644
--- a/tar/write.c
+++ b/tar/write.c
@@ -556,8 +556,7 @@ write_archive(struct archive *a, struct bsdtar *bsdtar)
"%s", archive_error_string(disk));
if (r == ARCHIVE_FATAL)
bsdtar->return_value = 1;
- else
- archive_read_close(disk);
+ archive_read_close(disk);
goto next_entry;
}
--
2.19.1
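
For the strcpy() misuses fixed above, a tiny standalone example (not from the patch) of why the in-place '//' collapse switches to memmove(): strcpy() has undefined behavior when source and destination overlap, while memmove() is specified to handle overlapping regions.

/* Hedged sketch, not libarchive code. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char path[] = "dir//file";
    char *p = strchr(path, '/');      /* first '/' of the "//" run */

    /* Undefined behavior: the regions at p and p+1 overlap.
     * strcpy(p, p + 1);
     */

    /* Well-defined: shift the tail left by one byte, including the NUL. */
    memmove(p, p + 1, strlen(p + 1) + 1);

    printf("%s\n", path);             /* prints "dir/file" */
    return 0;
}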

@@ -0,0 +1,114 @@
From aaacc8762fd8ced8823350edd8ce2e46b565582b Mon Sep 17 00:00:00 2001
From: "FeRD (Frank Dana)" <ferdnyc@gmail.com>
Date: Sun, 1 Sep 2019 02:46:55 -0400
Subject: [PATCH] test_write_filter_zstd: size @ lvl=20 < default < lvl=1
Raise compression on the second test to level=20, and perform a
third at level=1. Expect the output archive sizes to line up
based on compression level. Reduces test susceptibility to small
output size variations from different libzstd releases.
---
libarchive/test/test_write_filter_zstd.c | 66 +++++++++++++++++++++---
1 file changed, 60 insertions(+), 6 deletions(-)
diff --git a/libarchive/test/test_write_filter_zstd.c b/libarchive/test/test_write_filter_zstd.c
index 9fb01906..13de1344 100644
--- a/libarchive/test/test_write_filter_zstd.c
+++ b/libarchive/test/test_write_filter_zstd.c
@@ -34,7 +34,7 @@ DEFINE_TEST(test_write_filter_zstd)
char *buff, *data;
size_t buffsize, datasize;
char path[16];
- size_t used1, used2;
+ size_t used1, used2, used3;
int i, r;
buffsize = 2000000;
@@ -125,7 +125,7 @@ DEFINE_TEST(test_write_filter_zstd)
assertEqualIntA(a, ARCHIVE_OK,
archive_write_set_filter_option(a, NULL, "compression-level", "9"));
assertEqualIntA(a, ARCHIVE_OK,
- archive_write_set_filter_option(a, NULL, "compression-level", "6"));
+ archive_write_set_filter_option(a, NULL, "compression-level", "20"));
assertEqualIntA(a, ARCHIVE_OK, archive_write_open_memory(a, buff, buffsize, &used2));
for (i = 0; i < 100; i++) {
sprintf(path, "file%03d", i);
@@ -140,10 +140,6 @@ DEFINE_TEST(test_write_filter_zstd)
assertEqualIntA(a, ARCHIVE_OK, archive_write_close(a));
assertEqualInt(ARCHIVE_OK, archive_write_free(a));
- failure("compression-level=6 wrote %d bytes, default wrote %d bytes",
- (int)used2, (int)used1);
- assert(used2 < used1);
-
assert((a = archive_read_new()) != NULL);
assertEqualIntA(a, ARCHIVE_OK, archive_read_support_format_all(a));
r = archive_read_support_filter_zstd(a);
@@ -167,6 +163,64 @@ DEFINE_TEST(test_write_filter_zstd)
}
assertEqualInt(ARCHIVE_OK, archive_read_free(a));
+ /*
+ * One more time at level 1
+ */
+ assert((a = archive_write_new()) != NULL);
+ assertEqualIntA(a, ARCHIVE_OK, archive_write_set_format_ustar(a));
+ assertEqualIntA(a, ARCHIVE_OK,
+ archive_write_set_bytes_per_block(a, 10));
+ assertEqualIntA(a, ARCHIVE_OK, archive_write_add_filter_zstd(a));
+ assertEqualIntA(a, ARCHIVE_OK,
+ archive_write_set_filter_option(a, NULL, "compression-level", "1"));
+ assertEqualIntA(a, ARCHIVE_OK, archive_write_open_memory(a, buff, buffsize, &used3));
+ assert((ae = archive_entry_new()) != NULL);
+ archive_entry_set_filetype(ae, AE_IFREG);
+ archive_entry_set_size(ae, datasize);
+ for (i = 0; i < 100; i++) {
+ sprintf(path, "file%03d", i);
+ archive_entry_copy_pathname(ae, path);
+ assertEqualIntA(a, ARCHIVE_OK, archive_write_header(a, ae));
+ assertA(datasize == (size_t)archive_write_data(a, data, datasize));
+ }
+ archive_entry_free(ae);
+ assertEqualIntA(a, ARCHIVE_OK, archive_write_close(a));
+ assertEqualInt(ARCHIVE_OK, archive_write_free(a));
+
+ assert((a = archive_read_new()) != NULL);
+ assertEqualIntA(a, ARCHIVE_OK, archive_read_support_format_all(a));
+ r = archive_read_support_filter_zstd(a);
+ if (r == ARCHIVE_WARN) {
+ skipping("zstd reading not fully supported on this platform");
+ } else {
+ assertEqualIntA(a, ARCHIVE_OK,
+ archive_read_support_filter_all(a));
+ assertEqualIntA(a, ARCHIVE_OK,
+ archive_read_open_memory(a, buff, used3));
+ for (i = 0; i < 100; i++) {
+ sprintf(path, "file%03d", i);
+ failure("Trying to read %s", path);
+ if (!assertEqualIntA(a, ARCHIVE_OK,
+ archive_read_next_header(a, &ae)))
+ break;
+ assertEqualString(path, archive_entry_pathname(ae));
+ assertEqualInt((int)datasize, archive_entry_size(ae));
+ }
+ assertEqualIntA(a, ARCHIVE_OK, archive_read_close(a));
+ }
+ assertEqualInt(ARCHIVE_OK, archive_read_free(a));
+
+ /*
+ * Check output sizes for various compression levels, expectation
+ * is that archive size for level=20 < default < level=1
+ */
+ failure("compression-level=20 wrote %d bytes, default wrote %d bytes",
+ (int)used2, (int)used1);
+ assert(used2 < used1);
+ failure("compression-level=1 wrote %d bytes, default wrote %d bytes",
+ (int)used3, (int)used1);
+ assert(used1 < used3);
+
/*
* Test various premature shutdown scenarios to make sure we
* don't crash or leak memory.
--
2.21.0
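
A rough standalone sketch of the rationale behind the reworked assertions (assumes libzstd headers and linking with -lzstd; compress_at() is a made-up helper): exact compressed sizes vary between libzstd releases, but on compressible data a higher compression level normally yields smaller output, so ordering checks like level 20 < default < level 1 are more robust than exact-size comparisons.

/* Hedged sketch; should build with: cc sketch.c -lzstd */
#include <stdio.h>
#include <stdlib.h>
#include <zstd.h>

static size_t compress_at(const void *src, size_t src_size, int level)
{
    size_t bound = ZSTD_compressBound(src_size);
    void *dst = malloc(bound);
    size_t n;

    if (dst == NULL)
        return 0;
    n = ZSTD_compress(dst, bound, src, src_size, level);
    free(dst);
    return ZSTD_isError(n) ? 0 : n;
}

int main(void)
{
    /* Mildly compressible input, similar in spirit to the test's data. */
    size_t size = 1000000;
    char *data = malloc(size);
    size_t i;

    if (data == NULL)
        return 1;
    for (i = 0; i < size; i++)
        data[i] = (char)('a' + (i % 16));

    printf("level 1 : %zu bytes\n", compress_at(data, size, 1));
    printf("default : %zu bytes\n", compress_at(data, size, 3)); /* 3 = zstd default */
    printf("level 19: %zu bytes\n", compress_at(data, size, 19));

    free(data);
    return 0;
}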

@@ -1,21 +1,17 @@
 %bcond_without check
 Name: libarchive
-Version: 3.3.3
-Release: 8%{?dist}
+Version: 3.4.0
+Release: 1%{?dist}
 Summary: A library for handling streaming archive formats
 License: BSD
-URL: http://www.libarchive.org/
-Source0: http://www.libarchive.org/downloads/%{name}-%{version}.tar.gz
-Patch0: libarchive-3.3.3-covscan-2018.patch
-Patch1: libarchive-3.1.2-CVE-2019-1000019.patch
-Patch2: libarchive-3.1.2-CVE-2019-1000020.patch
-Patch3: libarchive-3.3.3-CVE-2018-1000877.patch
-Patch4: libarchive-3.3.3-CVE-2018-1000878.patch
-Patch5: libarchive-3.3.3-CVE-2018-1000879.patch
-Patch6: libarchive-3.3.3-CVE-2018-1000880.patch
+URL: https://www.libarchive.org/
+Source0: https://libarchive.org/downloads/%{name}-%{version}.tar.gz
+# Fix zstd test to be less susceptible to small variations in output size
+# Submitted upstream: https://github.com/libarchive/libarchive/pull/1240
+Patch0: libarchive-fix-zstd-test.patch
 BuildRequires: automake
 BuildRequires: bison
@@ -27,7 +23,10 @@ BuildRequires: libattr-devel
 BuildRequires: libxml2-devel
 BuildRequires: libzstd-devel
 BuildRequires: lz4-devel
-BuildRequires: lzo-devel
+# According to libarchive maintainer, linking against liblzo violates
+# LZO license.
+# See https://github.com/libarchive/libarchive/releases/tag/v3.3.0
+#BuildRequires: lzo-devel
 BuildRequires: openssl-devel
 BuildRequires: sharutils
 BuildRequires: xz-devel
@@ -215,6 +214,11 @@ run_testsuite
 %changelog
+* Fri Aug 30 2019 FeRD (Frank Dana) <ferdnyc@gmail.com> - 3.4.0-1
+- New upstream release, adds RAR5 and ZIPX support (readonly)
+- Drop upstreamed patches
+- Add upstreamed patch to fix test failure with libzstd-1.4.2
+
 * Thu Jul 25 2019 Fedora Release Engineering <releng@fedoraproject.org> - 3.3.3-8
 - Rebuilt for https://fedoraproject.org/wiki/Fedora_31_Mass_Rebuild