libvirt/SOURCES/libvirt-numa_conf-Properly-check-for-caches-in-virDomainNumaDefValidate.patch

From 8521a431d3da3cc360eb8102eda1c0d649f1ecc3 Mon Sep 17 00:00:00 2001
Message-Id: <8521a431d3da3cc360eb8102eda1c0d649f1ecc3@dist-git>
From: Michal Privoznik <mprivozn@redhat.com>
Date: Wed, 7 Oct 2020 18:45:45 +0200
Subject: [PATCH] numa_conf: Properly check for caches in
 virDomainNumaDefValidate()
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

When adding support for HMAT, in f0611fe8830 I've introduced a
check which aims to validate /domain/cpu/numa/interconnects. As a
part of that, there is a loop which checks whether all <latency/>
with @cache attribute refer to an existing cache level. For
instance:

  <cpu mode='host-model' check='partial'>
    <numa>
      <cell id='0' cpus='0-5' memory='512000' unit='KiB' discard='yes'>
        <cache level='1' associativity='direct' policy='writeback'>
          <size value='8' unit='KiB'/>
          <line value='5' unit='B'/>
        </cache>
      </cell>
      <interconnects>
        <latency initiator='0' target='0' cache='1' type='access' value='5'/>
        <bandwidth initiator='0' target='0' type='access' value='204800' unit='KiB'/>
      </interconnects>
    </numa>
  </cpu>

This XML defines that accessing L1 cache of node #0 from node #0
has latency of 5ns.

However, the loop was not written properly: the check in it was
always comparing against the first cache of the target node and
never the rest. Therefore, the following example errors out:

  <cpu mode='host-model' check='partial'>
    <numa>
      <cell id='0' cpus='0-5' memory='512000' unit='KiB' discard='yes'>
        <cache level='3' associativity='direct' policy='writeback'>
          <size value='10' unit='KiB'/>
          <line value='8' unit='B'/>
        </cache>
        <cache level='1' associativity='direct' policy='writeback'>
          <size value='8' unit='KiB'/>
          <line value='5' unit='B'/>
        </cache>
      </cell>
      <interconnects>
        <latency initiator='0' target='0' cache='1' type='access' value='5'/>
        <bandwidth initiator='0' target='0' type='access' value='204800' unit='KiB'/>
      </interconnects>
    </numa>
  </cpu>

This errors out even though it is a valid configuration. The L1
cache under node #0 is still present.
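
To illustrate the difference, here is a minimal standalone sketch of
the lookup (the cacheLevelExists() helper and the reduced struct are
hypothetical, for illustration only; libvirt performs this check
inline in virDomainNumaDefValidate()):

  #include <stdbool.h>
  #include <stddef.h>

  /* Reduced stand-in for the real virDomainNumaCache struct. */
  typedef struct { unsigned int level; } demoNumaCache;

  static bool
  cacheLevelExists(const demoNumaCache *caches, size_t ncaches,
                   unsigned int wanted)
  {
      size_t j;

      for (j = 0; j < ncaches; j++) {
          /* Broken variant: 'caches' never advances, so only
           * caches[0].level is ever compared. */
          /* const demoNumaCache *cache = caches; */

          /* Fixed variant: index with the loop counter. */
          const demoNumaCache *cache = &caches[j];

          if (cache->level == wanted)
              return true;
      }

      return false;
  }

With the caches from the second example (levels 3 and 1, in that
order), the broken variant only ever sees level 3 and therefore
rejects the cache='1' latency, while the fixed variant finds the L1
cache on the second iteration.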

Fixes: f0611fe8830
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Laine Stump <laine@redhat.com>
(cherry picked from commit e41ac71fca309b50e2c8e6ec142d8fe1280ca2ad)
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1749518
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Message-Id: <4bb47f9e97ca097cee1259449da4739b55753751.1602087923.git.mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
---
 src/conf/numa_conf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/conf/numa_conf.c b/src/conf/numa_conf.c
index 5a92eb35cc..a20398714e 100644
--- a/src/conf/numa_conf.c
+++ b/src/conf/numa_conf.c
@@ -1423,7 +1423,7 @@ virDomainNumaDefValidate(const virDomainNuma *def)
 
         if (l->cache > 0) {
             for (j = 0; j < def->mem_nodes[l->target].ncaches; j++) {
-                const virDomainNumaCache *cache = def->mem_nodes[l->target].caches;
+                const virDomainNumaCache *cache = &def->mem_nodes[l->target].caches[j];
 
                 if (l->cache == cache->level)
                     break;
--
2.29.2