Add a whole intermediate template format ('FIF') and tools

@lruzicka and I (and I think @jskladan and @jsedlak and
@michelmno and everyone else who's ever touched it...) are being
gradually driven nuts by manually editing the test templates.
The bigger the files get the more awkward it is to keep them
straight and be sure we're doing it right. Upstream doesn't do
things the same way we do (they mostly edit in the web UI and
dump to file for the record), but we do still think making
changes in the repo and posting to the web UI is the right way
around to do it, we just wish the format was saner.

Upstream has actually recently introduced a YAML-based approach
to storing job templates which tries to condense things a bit,
and you can dump to that format with dump-templates --json, but
@lruzicka and I agree that that format is barely better for
hand editing in a text editor than the older one our templates
currently use.

So, this commit introduces...Fedora Intermediate Format (FIF) -
an alternative format for representing job templates - and some
tools for working with it. It also contains our existing
templates in this new format, and removes the old template files.
The format is documented in the docstrings of the tools, but
briefly, it keeps Machines, Products and TestSuites but improves
their format a bit (by turning dicts-of-lists into dicts-of-
dicts), and adds Profiles, which are combinations of Machines and
Products. TestSuites can indicate which Profiles they should be
run on.
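
As a rough illustration (the names and values here are hypothetical,
not taken from the real templates), a FIF file looks something like:

```json
{
    "Machines": {
        "64bit": {"backend": "qemu", "settings": {"QEMUCPU": "Nehalem"}}
    },
    "Products": {
        "fedora-server-x86_64-*": {
            "distri": "fedora", "flavor": "server", "arch": "x86_64",
            "version": "*", "settings": {}
        }
    },
    "Profiles": {
        "fedora-server-x86_64-*-64bit": {
            "machine": "64bit", "product": "fedora-server-x86_64-*"
        }
    },
    "TestSuites": {
        "some_test": {
            "settings": {"BOOTFROM": "c"},
            "profiles": {"fedora-server-x86_64-*-64bit": 40}
        }
    }
}
```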

The intermediate format converter (`fifconverter`) converts
existing template data (in JSON format; use tojson.pm to convert
our perl templates to JSON) to the intermediate format and
writes it out. As this was really intended only for one-time use
(the idea is that after one-time conversion, we will edit the
templates in the intermediate format from now on), its operation
is hardcoded and relies on specific filenames.

The intermediate format loader (`fifloader`) generates
JobTemplates from the TestSuites and Profiles, reverses the
quality-of-life improvements of the intermediate format, and
produces template data compatible with the upstream loader, then
can write it to disk and/or call the upstream loader directly.
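
A minimal sketch of that expansion (suite and profile names are
illustrative; the real loader also fills in group_name and the full
product dict):

```python
# Sketch: expand TestSuites x Profiles into upstream-style JobTemplates.
profiles = {
    "fedora-server-x86_64-*-64bit": {"machine": "64bit", "product": "fedora-server-x86_64-*"},
}
testsuites = {
    "base_selinux": {"profiles": {"fedora-server-x86_64-*-64bit": 40}},
}
jobtemplates = [
    # one job template per (suite, profile) pair, priority from the profiles dict
    {"test_suite": {"name": name}, "prio": prio,
     "machine": {"name": profiles[prof]["machine"]}}
    for (name, suite) in testsuites.items()
    for (prof, prio) in suite["profiles"].items()
]
print(jobtemplates)
```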

The check script (`fifcheck`) runs existing template data through
both the converter and the loader, then checks that the result is
equivalent to the input. Again this was mostly written for one-
time use so is fairly rough and hard-coded, but I'm including it
in the commit so others can check the work and so on.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
Adam Williamson 2020-01-23 15:20:10 +01:00
parent 12b0fb04dc
commit 2c197d520c
8 changed files with 2849 additions and 6707 deletions


@@ -77,7 +77,7 @@ and `test_flags()` method, inheriting from one of the classes mentioned above.
3. Link your newly created Test suite to medium type in [WebUI -> Job groups](https://localhost:8080/admin/groups).
4. Run test (see [openqa_fedora_tools repository](https://bitbucket.org/rajcze/openqa_fedora_tools)).
5. Create needles (images) by using interactive mode and needles editor in WebUI.
6. Add new test suite and profiles into `templates.fif.json` file (and/or `templates-updates.fif.json`, if the test is applicable to the update testing workflow)
7. Add new Test suite and Test case into [`conf_test_suites.py`](https://pagure.io/fedora-qa/fedora_openqa/blob/master/f/fedora_openqa/conf_test_suites.py) file in fedora_openqa repository.
8. Open pull request for the os-autoinst-distri-fedora changes in [Pagure](https://pagure.io/fedora-qa/os-autoinst-distri-fedora). Pagure uses a Github-style workflow (summary: fork the project via the web interface, push your changes to a branch on your fork, then use the web interface to submit a pull request). See the [Pagure documentation](https://docs.pagure.org/pagure/usage/index.html) for more details.
9. Open a pull request in [fedora_openqa Pagure](https://pagure.io/fedora-qa/fedora_openqa) for any necessary fedora_openqa changes.

43
fifcheck Executable file

@@ -0,0 +1,43 @@
#!/bin/python3
"""This is a sanity check for the Fedora Intermediate Format (fif) converter and loader. It reads
in templates.old.json and templates-updates.old.json - which are expected to be our original-format
templates in JSON format - runs them through the converter to the intermediate format, then runs
them through the loader *from* the intermediate format, and (via DeepDiff, thanks jskladan!) checks
that the results are equivalent to the input, modulo a couple of expected differences.
"""
import json
import subprocess

from deepdiff import DeepDiff

with open('templates.old.json', 'r') as tempfh:
    origtemp = json.load(tempfh)
with open('templates-updates.old.json', 'r') as updfh:
    origupd = json.load(updfh)

# run the converter
subprocess.run(['./fifconverter.py'])
# run the loader on the converted files
subprocess.run(['./fifloader.py', '--write', 'templates.fif.json', 'templates-updates.fif.json'])
with open('generated.json', 'r') as generatedfh:
    generated = json.load(generatedfh)

# merge origs
origtemp['Products'].extend(origupd['Products'])
origtemp['TestSuites'].extend(origupd['TestSuites'])
origtemp['JobTemplates'].extend(origupd['JobTemplates'])

for item in generated['Products']:
    # we generate the product names in the converter, our original
    # templates don't have them
    item['name'] = ""

for item in generated['JobTemplates']:
    if item['group_name'] == 'fedora':
        # we don't explicitly specify this in our original templates,
        # but the converter adds it (rather than relying on openQA
        # to guess when loading)
        del item['group_name']

ddiff = DeepDiff(origtemp, generated, ignore_order=True, report_repetition=True)
# if this is just {}, we're good
print(ddiff)

104
fifconverter Executable file

@@ -0,0 +1,104 @@
#!/bin/python3
"""
This script takes JSON-formatted openQA template data (in the older format with a JobTemplates
dict, not the newer YAML-ish format organized by job group) and converts it to an intermediate
format (Fedora Intermediate Format - 'fif') intended to be easier for human editing. It extracts
all the unique 'environment profiles' - a combination of machine and product - from the
JobTemplates and stores them in a 'Profiles' dict; it then adds a 'profiles' key to each test
suite, indicating which profiles that suite is run on. It is fairly easy to reverse this process
to reproduce the openQA loader-compatible data, but the intermediate format is more friendly to a
human editor. Adding a new test suite to run on existing 'profiles' only requires adding the suite
and an appropriate 'profiles' dict. Adding a new profile involves adding the machine and/or
product, manually adding the profile to the Profiles dict, and then adding the profile to all the
test suites which should be run on it. See also fifloader.py, which handles converting FIF input
to upstream format, and optionally can pass it through to the upstream loader.
"""
import json

with open('templates.old.json', 'r') as tempfh:
    tempdata = json.load(tempfh)
with open('templates-updates.old.json', 'r') as updfh:
    updata = json.load(updfh)


def _synthesize_product_name(product):
    """Synthesize a product name from a product dict. We do this when
    reading the templates file and also when constructing the profiles
    so use a function to make sure they both do it the same way.
    """
    return "-".join((product['distri'], product['flavor'], product['arch'], product['version']))


def read_templates(templates):
    newtemps = {}
    if 'Machines' in templates:
        newtemps['Machines'] = {}
        for machine in templates['Machines']:
            # condense the stupid settings format
            machine['settings'] = {settdict['key']: settdict['value'] for settdict in machine['settings']}
            # just use a dict, not a list of dicts with 'name' keys...
            name = machine.pop('name')
            newtemps['Machines'][name] = machine
    if 'Products' in templates:
        newtemps['Products'] = {}
        for product in templates['Products']:
            # condense the stupid settings format
            product['settings'] = {settdict['key']: settdict['value'] for settdict in product['settings']}
            # synthesize a name, as we don't have any in our templates
            # and we can use them in the scenarios. however, note that
            # openQA itself doesn't let you use the product name as a
            # key when loading templates, unlike the machine name, so
            # our loader has to reverse this and provide the full
            # product dict to the upstream loader
            name = _synthesize_product_name(product)
            # this is always an empty string in our templates
            del product['name']
            newtemps['Products'][name] = product
    if 'TestSuites' in templates:
        newtemps['TestSuites'] = {}
        for testsuite in templates['TestSuites']:
            # condense the stupid settings format
            testsuite['settings'] = {settdict['key']: settdict['value'] for settdict in testsuite['settings']}
            # just use a dict, not a list of dicts with 'name' keys...
            name = testsuite.pop('name')
            newtemps['TestSuites'][name] = testsuite
    profiles = {}
    for jobtemp in templates['JobTemplates']:
        # figure out the profile for each job template and add it to
        # the dict. For Fedora, the group name is predictable based on
        # the arch and whether it's an update test; the intermediate
        # loader figures that out
        profile = {
            'machine': jobtemp['machine']['name'],
            'product': _synthesize_product_name(jobtemp['product']),
        }
        profname = '-'.join([profile['product'], profile['machine']])
        # keep track of all the profiles we've hit
        profiles[profname] = profile
        test = jobtemp['test_suite']['name']
        prio = jobtemp['prio']
        try:
            suite = newtemps['TestSuites'][test]
        except KeyError:
            # this is a templates-updates JobTemplate which refers to a
            # TestSuite defined in templates. What we do here is define
            # a partial TestSuite which contains only the name and the
            # profiles; the loader for this format knows how to combine
            # dicts (including incomplete ones) from multiple source
            # files into one big final-format lump
            suite = {}
            newtemps['TestSuites'][test] = suite
        if 'profiles' in suite:
            suite['profiles'][profname] = prio
        else:
            suite['profiles'] = {profname: prio}
    newtemps['Profiles'] = profiles
    return newtemps


with open('templates.fif.json', 'w') as newtempfh:
    json.dump(read_templates(tempdata), newtempfh, sort_keys=True, indent=4)
with open('templates-updates.fif.json', 'w') as newtempfh:
    json.dump(read_templates(updata), newtempfh, sort_keys=True, indent=4)
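
The core 'settings' transformation here (condensing upstream's
list-of-dicts into a plain dict, which fifloader later reverses) can be
illustrated standalone:

```python
# Upstream 'settings' format: a list of {"key": ..., "value": ...} dicts.
upstream = [{"key": "DESKTOP", "value": "gnome"}, {"key": "LIVE", "value": "1"}]

# Condense (the fifconverter direction): one plain dict.
fif = {item["key"]: item["value"] for item in upstream}
assert fif == {"DESKTOP": "gnome", "LIVE": "1"}

# Expand (the fifloader direction): back to the upstream list-of-dicts.
roundtrip = [{"key": k, "value": v} for (k, v) in fif.items()]
assert sorted(roundtrip, key=lambda d: d["key"]) == sorted(upstream, key=lambda d: d["key"])
```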

265
fifloader Executable file

@@ -0,0 +1,265 @@
#!/bin/python3
"""This is an openQA template loader/converter for FIF, the Fedora Intermediate Format. It reads
from one or more files expected to contain FIF JSON-formatted template data; read on for details
on this format as it compares to the upstream format. It produces data in the upstream format; it
can write this data to a JSON file and/or call the upstream loader on it directly, depending on
the command-line arguments specified.

The input data must contain definitions of Machines, Products, TestSuites, and Profiles. The input
data *may* contain JobTemplates, but does not have to and is expected to contain none or only a few
oddballs.

The format for Machines, Products and TestSuites is based on the upstream format but with various
quality-of-life improvements. Upstream, each of these is a list-of-dicts, each dict containing a
'name' key. This loader expects each to be a dict-of-dicts, with the names as keys (this is both
easier to read and easier to access). In the upstream format, each Machine, Product and TestSuite
dict can contain an entry with the key 'settings' which defines variables. The value (for some
reason...) is a list of dicts, each dict of the format {"key": keyname, "value": value}. This
loader expects a more obvious and simple format where the value of the 'settings' key is simply a
dict of keys and values.

The expected format of the Profiles dict is a dict-of-dicts. For each entry, the key is a unique
name, and the value is a dict with keys 'machine' and 'product', each value being a valid name from
the Machines or Products dict respectively. The name of each profile can be anything as long as
it's unique.

For TestSuites, this loader then expects an additional 'profiles' key in each dict, whose value is
a dict indicating the Profiles from which we should generate one or more job templates for that
test suite. For each entry in the dict, the key is a profile name from the Profiles dict, and the
value is the priority to give the generated job template.

This loader will generate JobTemplates from the combination of TestSuites and Profiles. It means
that, for instance, if you want to add a new test suite and run it on the same set of images and
arches as several other tests are already run, you do not need to do a large amount of copying and
pasting to create a bunch of JobTemplates that look a lot like other existing JobTemplates but with
a different test_suite value; you can just specify an appropriate profiles dict, which is much
shorter and easier and less error-prone. Thus specifying JobTemplates directly is not usually
needed and is expected to be used only for some oddball case which the generation system does not
handle.

The loader will automatically set the group_name for each job template based on Fedora-specific
logic which we previously followed manually when creating job templates (e.g. it is set to 'Fedora
PowerPC' for compose tests run on the PowerPC arch); thus this loader is not really generic but
specific to Fedora conventions. This could possibly be changed (e.g. by allowing the logic for
deciding group names to be configurable) if anyone else wants to use it.

Multiple input files will be combined. Mostly this involves simply updating dicts, but there is
special handling for TestSuites to allow multiple input files to each include entries for 'the
same' test suite, but with different profile dicts. So for instance one input file may contain a
complete TestSuite definition, with the value of its `profiles` key as `{'foo': 10}`. Another input
file may contain a TestSuite entry with the same key (name) as the complete definition in the other
file, and the value as a dict with only a `profiles` key (with the value `{'bar': 20}`). This
loader will combine those into a single complete TestSuite entry with the `profiles` value
`{'foo': 10, 'bar': 20}`.
"""

import argparse
import json
import subprocess
import sys


def merge_inputs(inputs):
    """Merge multiple input files. Expects JSON file names. Returns
    a 5-tuple of machines, products, profiles, testsuites and
    jobtemplates (the first four as dicts, the fifth as a list).
    """
    machines = {}
    products = {}
    profiles = {}
    testsuites = {}
    jobtemplates = []
    for input in inputs:
        try:
            with open(input, 'r') as inputfh:
                data = json.load(inputfh)
        except (OSError, json.JSONDecodeError) as err:
            print("Reading input file {} failed!".format(input))
            sys.exit(str(err))
        # simple merges for all these
        for (datatype, tgt) in (
                ('Machines', machines),
                ('Products', products),
                ('Profiles', profiles),
                ('JobTemplates', jobtemplates),
        ):
            if datatype in data:
                if datatype == 'JobTemplates':
                    tgt.extend(data[datatype])
                else:
                    tgt.update(data[datatype])
        # special testsuite merging as described in the docstring
        if 'TestSuites' in data:
            for (name, newsuite) in data['TestSuites'].items():
                try:
                    existing = testsuites[name]
                    # combine and stash the profiles
                    existing['profiles'].update(newsuite['profiles'])
                    combinedprofiles = existing['profiles']
                    # now update the existing suite with the new one, this
                    # will overwrite the profiles
                    existing.update(newsuite)
                    # now restore the combined profiles
                    existing['profiles'] = combinedprofiles
                except KeyError:
                    testsuites[name] = newsuite
    return (machines, products, profiles, testsuites, jobtemplates)


def generate_job_templates(machines, products, profiles, testsuites):
    """Given machines, products, profiles and testsuites (after
    merging, but still in intermediate format), generates job
    templates and returns them as a list.
    """
    jobtemplates = []
    for (name, suite) in testsuites.items():
        if 'profiles' not in suite:
            print("Warning: no profiles for test suite {}".format(name))
            continue
        for (profile, prio) in suite['profiles'].items():
            jobtemplate = {'test_suite': {'name': name}, 'prio': prio}
            # x86_64 compose
            jobtemplate['group_name'] = 'fedora'
            jobtemplate['machine'] = {'name': profiles[profile]['machine']}
            product = products[profiles[profile]['product']]
            jobtemplate['product'] = {
                'arch': product['arch'],
                'flavor': product['flavor'],
                'distri': product['distri'],
                'version': product['version']
            }
            if jobtemplate['machine']['name'] == 'ppc64le':
                if 'updates' in product['flavor']:
                    jobtemplate['group_name'] = "Fedora PowerPC Updates"
                else:
                    jobtemplate['group_name'] = "Fedora PowerPC"
            elif jobtemplate['machine']['name'] == 'aarch64':
                if 'updates' in product['flavor']:
                    jobtemplate['group_name'] = "Fedora AArch64 Updates"
                else:
                    jobtemplate['group_name'] = "Fedora AArch64"
            elif 'updates' in product['flavor']:
                # x86_64 updates
                jobtemplate['group_name'] = "Fedora Updates"
            jobtemplates.append(jobtemplate)
    return jobtemplates


def reverse_qol(machines, products, testsuites):
    """Reverse all our quality-of-life improvements in Machines,
    Products and TestSuites. We don't do profiles as only this loader
    uses them, upstream loader does not. We don't do jobtemplates as
    we don't do any QOL stuff for that. Returns the same tuple it's
    passed.
    """
    # first, some nested convenience functions
    def to_list_of_dicts(datadict):
        """Convert our nice dicts to upstream's stupid list-of-dicts-with
        -name-keys.
        """
        converted = []
        for (name, item) in datadict.items():
            item['name'] = name
            converted.append(item)
        return converted

    def dumb_settings(settdict):
        """Convert our sensible settings dicts to upstream's weird-ass
        list-of-dicts format.
        """
        converted = []
        for (key, value) in settdict.items():
            converted.append({'key': key, 'value': value})
        return converted

    machines = to_list_of_dicts(machines)
    products = to_list_of_dicts(products)
    testsuites = to_list_of_dicts(testsuites)
    for datatype in (machines, products, testsuites):
        for item in datatype:
            item['settings'] = dumb_settings(item['settings'])
            if 'profiles' in item:
                # this is only part of the intermediate format, should
                # not be in the final output
                del item['profiles']
    return (machines, products, testsuites)


def parse_args():
    """Parse arguments with argparse."""
    parser = argparse.ArgumentParser(description=(
        "Alternative openQA template loader/generator, using a more "
        "convenient input format. See docstring for details."))
    parser.add_argument(
        '-l', '--load', help="Load the generated templates into openQA.",
        action='store_true')
    parser.add_argument(
        '--loader', help="Loader to use with --load",
        default="/usr/share/openqa/script/load_templates")
    parser.add_argument(
        '-w', '--write', help="Write the generated templates in JSON "
        "format.", action='store_true')
    parser.add_argument(
        '--filename', help="Filename to write with --write",
        default="generated.json")
    parser.add_argument(
        '--host', help="If specified with --load, gives a host "
        "to load the templates to. Is passed unmodified to upstream "
        "loader.")
    parser.add_argument(
        '-c', '--clean', help="If specified with --load, passed to "
        "upstream loader and behaves as documented there.",
        action='store_true')
    parser.add_argument(
        '-u', '--update', help="If specified with --load, passed to "
        "upstream loader and behaves as documented there.",
        action='store_true')
    parser.add_argument(
        'files', help="Input JSON files", nargs='+')
    return parser.parse_args()


def run():
    """Read in arguments and run the appropriate steps."""
    args = parse_args()
    if not args.write and not args.load:
        sys.exit("Neither --write nor --load specified! Doing nothing.")
    (machines, products, profiles, testsuites, jobtemplates) = merge_inputs(args.files)
    jobtemplates.extend(generate_job_templates(machines, products, profiles, testsuites))
    (machines, products, testsuites) = reverse_qol(machines, products, testsuites)
    # now produce the output in upstream-compatible format
    out = {
        'JobTemplates': jobtemplates,
        'Machines': machines,
        'Products': products,
        'TestSuites': testsuites
    }
    if args.write:
        # write generated output to given filename
        with open(args.filename, 'w') as outfh:
            json.dump(out, outfh, indent=4)
    if args.load:
        # load generated output with given loader (defaults to
        # /usr/share/openqa/script/load_templates)
        loadargs = [args.loader]
        if args.host:
            loadargs.extend(['--host', args.host])
        if args.clean:
            loadargs.append('--clean')
        if args.update:
            loadargs.append('--update')
        loadargs.append('-')
        subprocess.run(loadargs, input=json.dumps(out), text=True)


def main():
    """Main loop."""
    try:
        run()
    except KeyboardInterrupt:
        sys.stderr.write("Interrupted, exiting...\n")
        sys.exit(1)


if __name__ == '__main__':
    main()

# vim: set textwidth=100 ts=8 et sw=4:
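
The cross-file TestSuite merge described in the docstring (its foo/bar
example) can be sketched standalone, using the same dict operations as
merge_inputs above:

```python
# One input file has a complete TestSuite definition, another has a
# profiles-only stub for the same suite name; merging combines profiles.
complete = {"settings": {"BOOTFROM": "c"}, "profiles": {"foo": 10}}
stub = {"profiles": {"bar": 20}}

merged = dict(complete)
# combine and stash the profiles before the general update
combined = dict(merged["profiles"])
combined.update(stub["profiles"])
# the general update would overwrite profiles, so restore them after
merged.update(stub)
merged["profiles"] = combined
assert merged == {"settings": {"BOOTFROM": "c"}, "profiles": {"foo": 10, "bar": 20}}
```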

5667
templates

File diff suppressed because it is too large

File diff suppressed because it is too large

372
templates-updates.fif.json Normal file

@@ -0,0 +1,372 @@
{
    "Products": {
        "fedora-updates-everything-boot-iso-x86_64-*": {
            "arch": "x86_64",
            "distri": "fedora",
            "flavor": "updates-everything-boot-iso",
            "settings": {},
            "version": "*"
        },
        "fedora-updates-kde-x86_64-*": {
            "arch": "x86_64",
            "distri": "fedora",
            "flavor": "updates-kde",
            "settings": {
                "DESKTOP": "kde"
            },
            "version": "*"
        },
        "fedora-updates-server-aarch64-*": {
            "arch": "aarch64",
            "distri": "fedora",
            "flavor": "updates-server",
            "settings": {},
            "version": "*"
        },
        "fedora-updates-server-ppc64le-*": {
            "arch": "ppc64le",
            "distri": "fedora",
            "flavor": "updates-server",
            "settings": {},
            "version": "*"
        },
        "fedora-updates-server-upgrade-aarch64-*": {
            "arch": "aarch64",
            "distri": "fedora",
            "flavor": "updates-server-upgrade",
            "settings": {},
            "version": "*"
        },
        "fedora-updates-server-upgrade-ppc64le-*": {
            "arch": "ppc64le",
            "distri": "fedora",
            "flavor": "updates-server-upgrade",
            "settings": {},
            "version": "*"
        },
        "fedora-updates-server-upgrade-x86_64-*": {
            "arch": "x86_64",
            "distri": "fedora",
            "flavor": "updates-server-upgrade",
            "settings": {},
            "version": "*"
        },
        "fedora-updates-server-x86_64-*": {
            "arch": "x86_64",
            "distri": "fedora",
            "flavor": "updates-server",
            "settings": {},
            "version": "*"
        },
        "fedora-updates-workstation-live-iso-x86_64-*": {
            "arch": "x86_64",
            "distri": "fedora",
            "flavor": "updates-workstation-live-iso",
            "settings": {
                "DESKTOP": "gnome",
                "LIVE": "1",
                "PACKAGE_SET": "default"
            },
            "version": "*"
        },
        "fedora-updates-workstation-upgrade-x86_64-*": {
            "arch": "x86_64",
            "distri": "fedora",
            "flavor": "updates-workstation-upgrade",
            "settings": {
                "DESKTOP": "gnome"
            },
            "version": "*"
        },
        "fedora-updates-workstation-x86_64-*": {
            "arch": "x86_64",
            "distri": "fedora",
            "flavor": "updates-workstation",
            "settings": {
                "DESKTOP": "gnome"
            },
            "version": "*"
        }
    },
    "Profiles": {
        "fedora-updates-everything-boot-iso-x86_64-*-64bit": {
            "machine": "64bit",
            "product": "fedora-updates-everything-boot-iso-x86_64-*"
        },
        "fedora-updates-everything-boot-iso-x86_64-*-uefi": {
            "machine": "uefi",
            "product": "fedora-updates-everything-boot-iso-x86_64-*"
        },
        "fedora-updates-kde-x86_64-*-64bit": {
            "machine": "64bit",
            "product": "fedora-updates-kde-x86_64-*"
        },
        "fedora-updates-server-aarch64-*-aarch64": {
            "machine": "aarch64",
            "product": "fedora-updates-server-aarch64-*"
        },
        "fedora-updates-server-ppc64le-*-ppc64le": {
            "machine": "ppc64le",
            "product": "fedora-updates-server-ppc64le-*"
        },
        "fedora-updates-server-upgrade-ppc64le-*-ppc64le": {
            "machine": "ppc64le",
            "product": "fedora-updates-server-upgrade-ppc64le-*"
        },
        "fedora-updates-server-upgrade-x86_64-*-64bit": {
            "machine": "64bit",
            "product": "fedora-updates-server-upgrade-x86_64-*"
        },
        "fedora-updates-server-x86_64-*-64bit": {
            "machine": "64bit",
            "product": "fedora-updates-server-x86_64-*"
        },
        "fedora-updates-workstation-live-iso-x86_64-*-64bit": {
            "machine": "64bit",
            "product": "fedora-updates-workstation-live-iso-x86_64-*"
        },
        "fedora-updates-workstation-live-iso-x86_64-*-uefi": {
            "machine": "uefi",
            "product": "fedora-updates-workstation-live-iso-x86_64-*"
        },
        "fedora-updates-workstation-upgrade-x86_64-*-64bit": {
            "machine": "64bit",
            "product": "fedora-updates-workstation-upgrade-x86_64-*"
        },
        "fedora-updates-workstation-x86_64-*-64bit": {
            "machine": "64bit",
            "product": "fedora-updates-workstation-x86_64-*"
        }
    },
    "TestSuites": {
        "advisory_boot": {
            "profiles": {
                "fedora-updates-server-aarch64-*-aarch64": 40,
                "fedora-updates-server-ppc64le-*-ppc64le": 40,
                "fedora-updates-server-x86_64-*-64bit": 40
            },
            "settings": {
                "ADVISORY_BOOT_TEST": "1",
                "BOOTFROM": "c",
                "ROOT_PASSWORD": "weakpassword",
                "USER_LOGIN": "false"
            }
        },
        "base_selinux": {
            "profiles": {
                "fedora-updates-kde-x86_64-*-64bit": 42,
                "fedora-updates-server-aarch64-*-aarch64": 40,
                "fedora-updates-server-ppc64le-*-ppc64le": 40,
                "fedora-updates-server-x86_64-*-64bit": 40,
                "fedora-updates-workstation-x86_64-*-64bit": 40
            }
        },
        "base_service_manipulation": {
            "profiles": {
                "fedora-updates-kde-x86_64-*-64bit": 42,
                "fedora-updates-server-aarch64-*-aarch64": 40,
                "fedora-updates-server-ppc64le-*-ppc64le": 40,
                "fedora-updates-server-x86_64-*-64bit": 40,
                "fedora-updates-workstation-x86_64-*-64bit": 40
            }
        },
        "base_services_start": {
            "profiles": {
                "fedora-updates-kde-x86_64-*-64bit": 42,
                "fedora-updates-server-aarch64-*-aarch64": 40,
                "fedora-updates-server-ppc64le-*-ppc64le": 40,
                "fedora-updates-server-x86_64-*-64bit": 40,
                "fedora-updates-workstation-x86_64-*-64bit": 40
            }
        },
        "base_update_cli": {
            "profiles": {
                "fedora-updates-kde-x86_64-*-64bit": 42,
                "fedora-updates-server-aarch64-*-aarch64": 40,
                "fedora-updates-server-ppc64le-*-ppc64le": 40,
                "fedora-updates-server-x86_64-*-64bit": 40,
                "fedora-updates-workstation-x86_64-*-64bit": 40
            }
        },
        "desktop_background": {
            "profiles": {
                "fedora-updates-kde-x86_64-*-64bit": 32,
                "fedora-updates-workstation-x86_64-*-64bit": 30
            }
        },
        "desktop_browser": {
            "profiles": {
                "fedora-updates-kde-x86_64-*-64bit": 32,
                "fedora-updates-workstation-x86_64-*-64bit": 30
            }
        },
        "desktop_printing": {
            "profiles": {
                "fedora-updates-kde-x86_64-*-64bit": 32,
                "fedora-updates-workstation-x86_64-*-64bit": 30
            }
        },
        "desktop_terminal": {
            "profiles": {
                "fedora-updates-kde-x86_64-*-64bit": 32,
                "fedora-updates-workstation-x86_64-*-64bit": 30
            }
        },
        "desktop_update_graphical": {
            "profiles": {
                "fedora-updates-kde-x86_64-*-64bit": 32,
                "fedora-updates-workstation-x86_64-*-64bit": 30
            }
        },
        "install_default_update_live": {
            "profiles": {
                "fedora-updates-workstation-live-iso-x86_64-*-64bit": 40,
                "fedora-updates-workstation-live-iso-x86_64-*-uefi": 41
            },
            "settings": {
                "+START_AFTER_TEST": "live_build@%ARCH_BASE_MACHINE%",
                "INSTALL": "1",
                "ISO": "Fedora-%SUBVARIANT%-Live-%ARCH%-%ADVISORY_OR_TASK%.iso"
            }
        },
        "install_default_update_netinst": {
            "profiles": {
                "fedora-updates-everything-boot-iso-x86_64-*-64bit": 40,
                "fedora-updates-everything-boot-iso-x86_64-*-uefi": 41
            },
            "settings": {
                "+START_AFTER_TEST": "installer_build@%ARCH_BASE_MACHINE%",
                "ADD_REPOSITORY_VARIATION": "nfs://10.0.2.110:/opt/update_repo",
                "INSTALL": "1",
                "INSTALL_UNLOCK": "support_ready",
                "ISO": "%ADVISORY_OR_TASK%-netinst-%ARCH%.iso",
                "NICTYPE": "tap",
                "PACKAGE_SET": "default",
                "PARALLEL_WITH": "support_server@%ARCH_BASE_MACHINE%",
                "WORKER_CLASS": "tap"
            }
        },
        "installer_build": {
            "profiles": {
                "fedora-updates-everything-boot-iso-x86_64-*-64bit": 40
            },
            "settings": {
                "BOOTFROM": "c",
                "HDD_1": "disk_f%VERSION%_minimal_3_%ARCH%.img",
                "NUMDISKS": "2",
                "POSTINSTALL": "_installer_build",
                "ROOT_PASSWORD": "weakpassword",
                "USER_LOGIN": "false"
            }
        },
        "live_build": {
            "profiles": {
                "fedora-updates-workstation-live-iso-x86_64-*-64bit": 40
            },
            "settings": {
                "+DESKTOP": "",
                "+LIVE": "",
                "BOOTFROM": "c",
                "GRUB_POSTINSTALL": "selinux=0",
                "HDD_1": "disk_f%VERSION%_minimal_3_%ARCH%.img",
                "NUMDISKS": "2",
                "POSTINSTALL": "_live_build",
                "ROOT_PASSWORD": "weakpassword",
                "USER_LOGIN": "false"
            }
        },
        "realmd_join_cockpit": {
            "profiles": {
                "fedora-updates-server-aarch64-*-aarch64": 40,
                "fedora-updates-server-ppc64le-*-ppc64le": 40,
                "fedora-updates-server-x86_64-*-64bit": 40
            }
        },
        "realmd_join_sssd": {
            "profiles": {
                "fedora-updates-server-aarch64-*-aarch64": 30,
                "fedora-updates-server-ppc64le-*-ppc64le": 30,
                "fedora-updates-server-x86_64-*-64bit": 30
            }
        },
        "server_cockpit_basic": {
            "profiles": {
                "fedora-updates-server-aarch64-*-aarch64": 40,
                "fedora-updates-server-ppc64le-*-ppc64le": 40,
                "fedora-updates-server-x86_64-*-64bit": 40
            }
        },
        "server_cockpit_default": {
            "profiles": {
                "fedora-updates-server-aarch64-*-aarch64": 40,
                "fedora-updates-server-ppc64le-*-ppc64le": 40,
                "fedora-updates-server-x86_64-*-64bit": 40
            }
        },
        "server_database_client": {
            "profiles": {
                "fedora-updates-server-aarch64-*-aarch64": 40,
                "fedora-updates-server-ppc64le-*-ppc64le": 40,
                "fedora-updates-server-x86_64-*-64bit": 40
            }
        },
        "server_firewall_default": {
            "profiles": {
                "fedora-updates-server-aarch64-*-aarch64": 40,
                "fedora-updates-server-ppc64le-*-ppc64le": 40,
                "fedora-updates-server-x86_64-*-64bit": 40
            }
        },
        "server_freeipa_replication_client": {
            "profiles": {
                "fedora-updates-server-x86_64-*-64bit": 40
            }
        },
        "server_freeipa_replication_master": {
            "profiles": {
                "fedora-updates-server-x86_64-*-64bit": 40
            }
        },
        "server_freeipa_replication_replica": {
            "profiles": {
                "fedora-updates-server-x86_64-*-64bit": 40
            }
        },
        "server_role_deploy_database_server": {
            "profiles": {
                "fedora-updates-server-aarch64-*-aarch64": 40,
                "fedora-updates-server-ppc64le-*-ppc64le": 40,
                "fedora-updates-server-x86_64-*-64bit": 40
            }
        },
        "server_role_deploy_domain_controller": {
            "profiles": {
                "fedora-updates-server-aarch64-*-aarch64": 40,
                "fedora-updates-server-ppc64le-*-ppc64le": 40,
                "fedora-updates-server-x86_64-*-64bit": 40
            }
        },
        "support_server": {
            "profiles": {
                "fedora-updates-everything-boot-iso-x86_64-*-64bit": 40
            }
        },
        "upgrade_desktop_encrypted_64bit": {
            "profiles": {
                "fedora-updates-workstation-upgrade-x86_64-*-64bit": 40
            }
        },
        "upgrade_realmd_client": {
            "profiles": {
                "fedora-updates-server-upgrade-ppc64le-*-ppc64le": 30,
                "fedora-updates-server-upgrade-x86_64-*-64bit": 30
            }
        },
        "upgrade_server_domain_controller": {
            "profiles": {
                "fedora-updates-server-upgrade-ppc64le-*-ppc64le": 40,
                "fedora-updates-server-upgrade-x86_64-*-64bit": 40
            }
        }
    }
}

2064
templates.fif.json Normal file

File diff suppressed because it is too large