nbdkit/0003-readahead-Fix-test.patch
Richard W.M. Jones, commit ad784282b6:
- Rebase to new stable branch version 1.30.5
  resolves: rhbz#2059289
- Suppress excess messages from nbdkit-nbd-plugin
  resolves: rhbz#2083498
- Suppress incorrect VDDK error when converting guests from vCenter
  resolves: rhbz#2083617
- Backport new LUKS filter from 1.32.
- Add new Python binding for nbdkit_parse_size from 1.32.

Cherry-picked from Fedora:

Add new luks filter.
(Fedora commit 9588e5cbc7, 2022-05-12)

From 5d679d01417a81a3a981520d2a0332e2370a2536 Mon Sep 17 00:00:00 2001
From: "Richard W.M. Jones" <rjones@redhat.com>
Date: Thu, 21 Apr 2022 16:14:46 +0100
Subject: [PATCH] readahead: Fix test

The previous test turned out to be pretty bad at testing the new
filter. A specific problem is that the filter starts a background
thread which issues .cache requests, while on the main connection
.pread requests are being passed through. The test used
--filter=readahead --filter=cache with the cache filter only caching
on .cache requests (since cache-on-read defaults to false), so only
caching requests made by the background thread.

                main thread
  client ---- .pread ----- delay-filter -------> plugin
         \
          \  background thread
           .cache --- cache-filter
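
For reference, cache-on-read is an existing parameter of nbdkit's
cache filter; a sketch (not part of this patch) of how the parameter
mentioned above is spelled on the command line, following the old
test's filter ordering:

  nbdkit -fv -U - --filter=readahead --filter=cache \
         null size=1M cache-on-read=true \
         --filter=delay rdelay=5 \
         --run 'nbdsh --uri "$uri" -c "h.pread(512, 0)"'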

Under very high load, the background thread could be starved. This
means no requests were being cached at all, and all requests were
passing through the delay filter. It would appear that readahead was
failing (which it was, in a way).

It's not very easy to fix this since readahead is best-effort, but we
can go back to using a simpler plugin that logs reads and cache
requests, and check that they look valid.
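
The sh plugin makes this kind of logging easy: nbdkit runs the script
once per request with the method name in $1, and exit status 2 means
"method not implemented". A minimal standalone sketch (the log path
here is only for illustration):

  #!/bin/sh
  # $1 = method name; for pread, $2 = handle, $3 = count, $4 = offset.
  case "$1" in
      get_size) echo 1M ;;
      pread)    echo "$@" >> /tmp/requests.log
                dd if=/dev/zero count=$3 iflag=count_bytes ;;
      *)        exit 2 ;;
  esac

Such a script would be served with "nbdkit sh ./script".
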
Update: commit 2ff548d66ad3eae87868402ec5b3319edd12090f
(cherry picked from commit db1e3311727c6ecab3264a1811d33db1aa45a4d0)
---
tests/test-readahead.sh | 61 +++++++++++++++++++++++------------------
1 file changed, 35 insertions(+), 26 deletions(-)
diff --git a/tests/test-readahead.sh b/tests/test-readahead.sh
index 17126e5a..37f4a06f 100755
--- a/tests/test-readahead.sh
+++ b/tests/test-readahead.sh
@@ -30,43 +30,52 @@
# OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
# SUCH DAMAGE.
-# Is the readahead filter faster? Copy a blank disk with a custom
-# plugin that sleeps on every request. Because the readahead filter
-# should result in fewer requests it should run faster.
-
source ./functions.sh
set -e
set -x
-requires_filter delay
+requires_plugin sh
requires nbdsh --version
requires dd iflag=count_bytes </dev/null
-files="readahead.img"
+files="readahead.out"
rm -f $files
cleanup_fn rm -f $files
-test ()
-{
- start_t=$SECONDS
- nbdkit -fv -U - "$@" null size=1M --filter=delay rdelay=5 \
- --run 'nbdsh --uri "$uri" -c "
+nbdkit -fv -U - "$@" sh - \
+ --filter=readahead \
+ --run 'nbdsh --uri "$uri" -c "
for i in range(0, 512*10, 512):
h.pread(512, i)
-"'
+"' <<'EOF'
+case "$1" in
+ thread_model)
+ echo parallel
+ ;;
+ can_cache)
+ echo native
+ ;;
+ get_size)
+ echo 1M
+ ;;
+ cache)
+ echo "$@" >> readahead.out
+ ;;
+ pread)
+ echo "$@" >> readahead.out
+ dd if=/dev/zero count=$3 iflag=count_bytes
+ ;;
+ *)
+ exit 2
+ ;;
+esac
+EOF
- end_t=$SECONDS
- echo $((end_t - start_t))
-}
+cat readahead.out
-t1=$(test --filter=readahead --filter=cache)
-t2=$(test)
-
-# In the t1 case we should make only 1 request into the plugin,
-# resulting in around 1 sleep period (5 seconds). In the t2 case we
-# make 10 requests so sleep for around 50 seconds. t1 should be < t2
-# is every reasonable scenario.
-if [ $t1 -ge $t2 ]; then
- echo "$0: readahead filter took longer, should be shorter"
- exit 1
-fi
+# We should see the pread requests, and additional cache requests for
+# the 32K region following each pread request.
+for i in `seq 0 512 $((512*10 - 512))` ; do
+ grep "pread 512 $i" readahead.out
+ grep "cache 32768 $((i+512))" readahead.out
+done
--
2.31.1
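
After applying the patch to a built nbdkit source tree, the reworked
test can be run on its own; a sketch assuming the standard automake
test harness used by nbdkit's tests/ directory:

  make -C tests check TESTS=test-readahead.sh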