https://pagure.io/fedora-qa/os-autoinst-distri-fedora.git
commit 2c197d520c
@lruzicka and I (and I think @jskladan and @jsedlak and @michelmno and everyone else who's ever touched it...) are being gradually driven nuts by manually editing the test templates. The bigger the files get, the more awkward it is to keep them straight and be sure we're doing it right. Upstream doesn't do things the same way we do (they mostly edit in the web UI and dump to file for the record), but we do still think making changes in the repo and posting to the web UI is the right way around to do it; we just wish the format was saner. Upstream has actually recently introduced a YAML-based approach to storing job templates which tries to condense things a bit, and you can dump to that format with dump-templates --json, but @lruzicka and I agree that that format is barely better for hand editing in a text editor than the older one our templates currently use.

So, this commit introduces... Fedora Intermediate Format (FIF) - an alternative format for representing job templates - and some tools for working with it. It also contains our existing templates in this new format, and removes the old template files. The format is documented in the docstrings of the tools, but briefly: it keeps Machines, Products and TestSuites but improves their format a bit (by turning dicts-of-lists into dicts-of-dicts), and adds Profiles, which are combinations of Machines and Products. TestSuites can indicate which Profiles they should be run on.

The intermediate format converter (`fifconverter`) converts existing template data (in JSON format; use tojson.pm to convert our perl templates to JSON) to the intermediate format and writes it out. As this was really intended only for one-time use (the idea is that after one-time conversion, we will edit the templates in the intermediate format from now on), its operation is hardcoded and relies on specific filenames.

The intermediate format loader (`fifloader`) generates JobTemplates from the TestSuites and Profiles, reverses the quality-of-life improvements of the intermediate format, and produces template data compatible with the upstream loader; it can then write that data to disk and/or call the upstream loader directly.

The check script (`fifcheck`) runs existing template data through both the converter and the loader, then checks that the result is equivalent to the input. Again, this was mostly written for one-time use, so it is fairly rough and hard-coded, but I'm including it in the commit so others can check the work and so on.

Signed-off-by: Adam Williamson <awilliam@redhat.com>
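For orientation, here is a minimal sketch of roughly what a FIF JSON file might contain after conversion. The machine, product, test suite and setting names below are illustrative, not taken from the real templates; the point is the shape: settings become plain dicts, Profiles pair a machine with a product, and each TestSuite carries a 'profiles' dict mapping profile names to priorities.

    {
        "Machines": {
            "64bit": {"backend": "qemu", "settings": {"QEMUCPUS": "2"}}
        },
        "Products": {
            "fedora-Server-dvd-iso-x86_64-Rawhide": {
                "distri": "fedora", "flavor": "Server-dvd-iso",
                "arch": "x86_64", "version": "Rawhide", "settings": {}
            }
        },
        "Profiles": {
            "fedora-Server-dvd-iso-x86_64-Rawhide-64bit": {
                "machine": "64bit", "product": "fedora-Server-dvd-iso-x86_64-Rawhide"
            }
        },
        "TestSuites": {
            "install_default": {
                "settings": {"PACKAGE_SET": "default"},
                "profiles": {"fedora-Server-dvd-iso-x86_64-Rawhide-64bit": 10}
            }
        }
    }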
fifconverter (105 lines, 5.1 KiB, Python, executable file):
#!/bin/python3

"""
This script takes JSON-formatted openQA template data (in the older format with a JobTemplates
dict, not the newer YAML-ish format organized by job group) and converts it to an intermediate
format (Fedora Intermediate Format - 'fif') intended to be easier for human editing. It extracts
all the unique 'environment profiles' - a combination of machine and product - from the
JobTemplates and stores them in a 'Profiles' dict; it then adds a 'profiles' key to each test
suite, indicating which profiles that suite is run on. It is fairly easy to reverse this process
to reproduce the openQA loader-compatible data, but the intermediate format is more friendly to a
human editor. Adding a new test suite to run on existing 'profiles' only requires adding the suite
and an appropriate 'profiles' dict. Adding a new profile involves adding the machine and/or
product, manually adding the profile to the Profiles dict, and then adding the profile to all the
test suites which should be run on it. See also fifloader.py, which handles converting FIF input
to upstream format, and optionally can pass it through to the upstream loader.
"""

import json

with open('templates.old.json', 'r') as tempfh:
    tempdata = json.load(tempfh)
with open('templates-updates.old.json', 'r') as updfh:
    updata = json.load(updfh)
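
# Illustrative sketch (not part of the original script): the loaded data is expected to look
# roughly like the upstream openQA template dump format that this script consumes below, with
# settings as lists of key/value dicts and job templates referring to machines, products and
# test suites. The values shown here are made up for illustration.
# {
#     "Machines": [{"name": "64bit", "backend": "qemu",
#                   "settings": [{"key": "QEMUCPUS", "value": "2"}]}],
#     "Products": [{"name": "", "distri": "fedora", "flavor": "Server-dvd-iso",
#                   "arch": "x86_64", "version": "Rawhide", "settings": []}],
#     "TestSuites": [{"name": "install_default",
#                     "settings": [{"key": "PACKAGE_SET", "value": "default"}]}],
#     "JobTemplates": [{"machine": {"name": "64bit"},
#                       "product": {"distri": "fedora", "flavor": "Server-dvd-iso",
#                                   "arch": "x86_64", "version": "Rawhide"},
#                       "test_suite": {"name": "install_default"}, "prio": 10}]
# }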

def _synthesize_product_name(product):
    """Synthesize a product name from a product dict. We do this when
    reading the templates file and also when constructing the profiles,
    so use a function to make sure they both do it the same way.
    """
    return "-".join((product['distri'], product['flavor'], product['arch'], product['version']))

def read_templates(templates):
    newtemps = {}
    if 'Machines' in templates:
        newtemps['Machines'] = {}
        for machine in templates['Machines']:
            # condense the stupid settings format
            machine['settings'] = {settdict['key']: settdict['value'] for settdict in machine['settings']}
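            # e.g. (illustrative): [{"key": "QEMUCPUS", "value": "2"}] becomes {"QEMUCPUS": "2"};
            # the same condensation is applied to product and test suite settings below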
            # just use a dict, not a list of dicts with 'name' keys...
            name = machine.pop('name')
            newtemps['Machines'][name] = machine
    if 'Products' in templates:
        newtemps['Products'] = {}
        for product in templates['Products']:
            # condense the stupid settings format
            product['settings'] = {settdict['key']: settdict['value'] for settdict in product['settings']}
            # synthesize a name, as we don't have any in our templates
            # and we can use them in the scenarios. however, note that
            # openQA itself doesn't let you use the product name as a
            # key when loading templates, unlike the machine name, so
            # our loader has to reverse this and provide the full
            # product dict to the upstream loader
            name = _synthesize_product_name(product)
            # this is always an empty string in our templates
            del product['name']
            newtemps['Products'][name] = product
    if 'TestSuites' in templates:
        newtemps['TestSuites'] = {}
        for testsuite in templates['TestSuites']:
            # condense the stupid settings format
            testsuite['settings'] = {settdict['key']: settdict['value'] for settdict in testsuite['settings']}
            # just use a dict, not a list of dicts with 'name' keys...
            name = testsuite.pop('name')
            newtemps['TestSuites'][name] = testsuite
    profiles = {}
    for jobtemp in templates['JobTemplates']:
        # figure out the profile for each job template and add it to
        # the dict. For Fedora, the group name is predictable based on
        # the arch and whether it's an update test; the intermediate
        # loader figures that out
        profile = {
            'machine': jobtemp['machine']['name'],
            'product': _synthesize_product_name(jobtemp['product']),
        }
        profname = '-'.join([profile['product'], profile['machine']])
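        # e.g. (illustrative): product 'fedora-Server-dvd-iso-x86_64-Rawhide' on machine
        # '64bit' yields the profile name 'fedora-Server-dvd-iso-x86_64-Rawhide-64bit'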
        # keep track of all the profiles we've hit
        profiles[profname] = profile

        test = jobtemp['test_suite']['name']
        prio = jobtemp['prio']
        try:
            suite = newtemps['TestSuites'][test]
        except KeyError:
            # this is a templates-updates JobTemplate which refers to a
            # TestSuite defined in templates. What we do here is define
            # a partial TestSuite which contains only the name and the
            # profiles; the loader for this format knows how to combine
            # dicts (including incomplete ones) from multiple source
            # files into one big final-format lump
            suite = {}
            newtemps['TestSuites'][test] = suite
        if 'profiles' in suite:
            suite['profiles'][profname] = prio
        else:
            suite['profiles'] = {profname: prio}

    newtemps['Profiles'] = profiles
    return newtemps

with open('templates.fif.json', 'w') as newtempfh:
    json.dump(read_templates(tempdata), newtempfh, sort_keys=True, indent=4)

with open('templates-updates.fif.json', 'w') as newtempfh:
    json.dump(read_templates(updata), newtempfh, sort_keys=True, indent=4)
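
As the commit message notes, the converter is a one-shot tool with hard-coded filenames: it reads templates.old.json and templates-updates.old.json from the current directory (JSON dumps of the old perl templates, produced with tojson.pm) and writes templates.fif.json and templates-updates.fif.json alongside them, in the intermediate format that fifloader works from.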