# createhdds/openqa_trigger/conf_test_suites.py

def default_install_cb(flavor):
    """Figure out the correct test case name for a
    default_boot_and_install pass for a given flavor.
    """
    (payload, imagetype) = flavor.split('_')
    imagetype = imagetype.replace('boot', 'netinst')
    imagetype = imagetype.replace('dvd', 'offline')
    return "{0} {1}".format(payload, imagetype)
TESTCASES = {
"QA:Testcase_Boot_default_install": {
"name_cb": default_install_cb,
"section": 'Default boot and install',
2015-01-28 10:00:58 +00:00
"env": "$RUNARCH$",
"type": "Installation",
},
"QA:Testcase_install_to_VirtIO": {
"section": "Storage devices",
"env": "x86",
"type": "Installation",
},
"QA:Testcase_partitioning_guided_empty": {
"section": "Guided storage configuration",
Use python-wikitcms and fedfind The basic approach is that openqa_trigger gets a ValidationEvent from python-wikitcms - either the Wiki.current_event property for 'current', or the event specified, obtained via the newly-added Wiki.get_validation_event(), for 'event'. For 'event' it then just goes ahead and runs the jobs and prints the IDs. For 'current' it checks the last run compose version for each arch and runs if needed, as before. The ValidationEvent's 'sortname' property is the value written out to PERSISTENT to track the 'last run' - this property is intended to always sort compose events 'correctly', so we should always run when appropriate even when going from Rawhide to Branched, Branched to a TC, TC to RC, RC to (next milestone) TC. On both paths it gets a fedfind.Release object via the ValidationEvent - ValidationEvents have a ff_release property which is the fedfind.Release object that matches that event. It then queries fedfind for image locations using a query that tries to get just *one* generic-ish network install image for each arch. It passes the location to download_image(), which is just download_rawhide_iso() renamed and does the same job, only it can be simpler now. From there it works pretty much as before, except we use the ValidationEvent's 'version' property as the BUILD setting for OpenQA, and report_job_results get_relval_commands() is tweaked slightly to parse this properly to produce a correct report-auto command. Probably the most likely bits to break here are the sortname thing (see wikitcms helpers.py fedora_release_sort(), it's pretty stupid, I should re-write it) and the image query, which might wind up getting more than one image depending on how exactly the F22 Alpha composes look. I'll keep a close eye on that. We can always take the list from fedfind and further filter it so we have just one image per arch. Image objects have a .arch attribute so this will be easy to do if necessary. I *could* give the fedfind query code a 'I'm feeling lucky'- ish mode to only return one image per (whatever), but not sure if that would be too specialized, I'll think about it.
2015-02-16 17:01:58 +00:00
"env": "x86 BIOS",
2015-01-28 10:00:58 +00:00
"type": "Installation",
},
"QA:Testcase_Anaconda_User_Interface_Graphical": {
"section": "User interface",
"env": "Result",
"type": "Installation",
},
"QA:Testcase_Anaconda_user_creation": {
"section": "Miscellaneous",
"env": "x86",
"type": "Installation",
},
"QA:Testcase_install_to_PATA": {
"section": "Storage devices",
"env": "x86",
"type": "Installation",
},
"QA:Testcase_partitioning_guided_delete_all": {
"section": "Guided storage configuration",
Use python-wikitcms and fedfind The basic approach is that openqa_trigger gets a ValidationEvent from python-wikitcms - either the Wiki.current_event property for 'current', or the event specified, obtained via the newly-added Wiki.get_validation_event(), for 'event'. For 'event' it then just goes ahead and runs the jobs and prints the IDs. For 'current' it checks the last run compose version for each arch and runs if needed, as before. The ValidationEvent's 'sortname' property is the value written out to PERSISTENT to track the 'last run' - this property is intended to always sort compose events 'correctly', so we should always run when appropriate even when going from Rawhide to Branched, Branched to a TC, TC to RC, RC to (next milestone) TC. On both paths it gets a fedfind.Release object via the ValidationEvent - ValidationEvents have a ff_release property which is the fedfind.Release object that matches that event. It then queries fedfind for image locations using a query that tries to get just *one* generic-ish network install image for each arch. It passes the location to download_image(), which is just download_rawhide_iso() renamed and does the same job, only it can be simpler now. From there it works pretty much as before, except we use the ValidationEvent's 'version' property as the BUILD setting for OpenQA, and report_job_results get_relval_commands() is tweaked slightly to parse this properly to produce a correct report-auto command. Probably the most likely bits to break here are the sortname thing (see wikitcms helpers.py fedora_release_sort(), it's pretty stupid, I should re-write it) and the image query, which might wind up getting more than one image depending on how exactly the F22 Alpha composes look. I'll keep a close eye on that. We can always take the list from fedfind and further filter it so we have just one image per arch. Image objects have a .arch attribute so this will be easy to do if necessary. I *could* give the fedfind query code a 'I'm feeling lucky'- ish mode to only return one image per (whatever), but not sure if that would be too specialized, I'll think about it.
2015-02-16 17:01:58 +00:00
"env": "x86 BIOS",
2015-01-28 10:00:58 +00:00
"type": "Installation",
},
"QA:Testcase_install_to_SATA": {
"section": "Storage devices",
"env": "x86",
"type": "Installation",
},
"QA:Testcase_partitioning_guided_multi_select": {
"section": "Guided storage configuration",
Use python-wikitcms and fedfind The basic approach is that openqa_trigger gets a ValidationEvent from python-wikitcms - either the Wiki.current_event property for 'current', or the event specified, obtained via the newly-added Wiki.get_validation_event(), for 'event'. For 'event' it then just goes ahead and runs the jobs and prints the IDs. For 'current' it checks the last run compose version for each arch and runs if needed, as before. The ValidationEvent's 'sortname' property is the value written out to PERSISTENT to track the 'last run' - this property is intended to always sort compose events 'correctly', so we should always run when appropriate even when going from Rawhide to Branched, Branched to a TC, TC to RC, RC to (next milestone) TC. On both paths it gets a fedfind.Release object via the ValidationEvent - ValidationEvents have a ff_release property which is the fedfind.Release object that matches that event. It then queries fedfind for image locations using a query that tries to get just *one* generic-ish network install image for each arch. It passes the location to download_image(), which is just download_rawhide_iso() renamed and does the same job, only it can be simpler now. From there it works pretty much as before, except we use the ValidationEvent's 'version' property as the BUILD setting for OpenQA, and report_job_results get_relval_commands() is tweaked slightly to parse this properly to produce a correct report-auto command. Probably the most likely bits to break here are the sortname thing (see wikitcms helpers.py fedora_release_sort(), it's pretty stupid, I should re-write it) and the image query, which might wind up getting more than one image depending on how exactly the F22 Alpha composes look. I'll keep a close eye on that. We can always take the list from fedfind and further filter it so we have just one image per arch. Image objects have a .arch attribute so this will be easy to do if necessary. I *could* give the fedfind query code a 'I'm feeling lucky'- ish mode to only return one image per (whatever), but not sure if that would be too specialized, I'll think about it.
2015-02-16 17:01:58 +00:00
"env": "x86 BIOS",
2015-01-28 10:00:58 +00:00
"type": "Installation",
},
"QA:Testcase_install_to_SCSI": {
"section": "Storage devices",
"env": "x86",
"type": "Installation",
},
"QA:Testcase_Anaconda_updates.img_via_URL": {
2015-02-11 12:25:10 +00:00
"section": "Miscellaneous",
2015-01-28 10:00:58 +00:00
"env": "Result",
"type": "Installation",
},
"QA:Testcase_kickstart_user_creation": {
"section": "Kickstart",
"env": "Result",
"type": "Installation",
},
"QA:Testcase_Kickstart_Http_Server_Ks_Cfg": {
"section": "Kickstart",
"env": "Result",
"type": "Installation",
},
    "QA:Testcase_install_repository_Mirrorlist_graphical": {
        "section": "Installation repositories",
        "env": "Result",
        "type": "Installation",
    },
    "QA:Testcase_install_repository_HTTP/FTP_graphical": {
        "section": "Installation repositories",
        "env": "Result",
        "type": "Installation",
    },
    "QA:Testcase_install_repository_HTTP/FTP_variation": {
        "section": "Installation repositories",
        "env": "Result",
        "type": "Installation",
    },
    "QA:Testcase_Package_Sets_Minimal_Package_Install": {
        "section": "Package sets",
        "env": "$RUNARCH$",
        "type": "Installation",
    },
    "QA:Testcase_partitioning_guided_encrypted": {
        "section": "Guided storage configuration",
        "env": "x86 BIOS",
        "type": "Installation",
    },
    "QA:Testcase_partitioning_guided_delete_partial": {
        "section": "Guided storage configuration",
        "env": "x86 BIOS",
        "type": "Installation",
    },
    "QA:Testcase_partitioning_guided_free_space": {
        "section": "Guided storage configuration",
        "env": "x86 BIOS",
        "type": "Installation",
    },
    "QA:Testcase_partitioning_guided_multi_empty_all": {
        "section": "Guided storage configuration",
        "env": "x86 BIOS",
        "type": "Installation",
    },
    "QA:Testcase_Partitioning_On_Software_RAID": {
        "section": "Custom storage configuration",
        "env": "x86 BIOS",
        "type": "Installation",
    },
    "QA:Testcase_Kickstart_Hd_Device_Path_Ks_Cfg": {
        "section": "Kickstart",
        "env": "Result",
        "type": "Installation",
    },
    # "": {
    #     "name_cb": callbackfunc,  # optional, called with 'flavor'
    #     "section": "",
    #     "env": "x86",
    #     "type": "Installation",
    # },
}
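
# A minimal sketch (not part of the original module) of how a consumer might
# resolve the name used when reporting results for a test case and image
# flavor, via the optional 'name_cb' entries above. The function name and
# signature are illustrative assumptions, not an existing API.
def _example_testcase_name(testcase, flavor):
    """Return the reporting name for testcase, honouring name_cb if set."""
    entry = TESTCASES[testcase]
    name_cb = entry.get("name_cb")
    # A callback maps the image flavor to a per-image 'test instance' name;
    # without one, the test case name itself is used.
    return name_cb(flavor) if name_cb else testcase
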
TESTSUITES = {
"default_install":[
"QA:Testcase_Boot_default_install",
2015-01-28 10:00:58 +00:00
"QA:Testcase_install_to_VirtIO",
2015-02-04 13:54:20 +00:00
"QA:Testcase_partitioning_guided_empty",
2015-01-28 10:00:58 +00:00
"QA:Testcase_Anaconda_User_Interface_Graphical",
"QA:Testcase_Anaconda_user_creation",
handling scheduling of jobs for multiple images This handles scheduling of jobs for more than one type of image; currently we'll run tests for Workstation live as well. It requires some cleverness to run some tests for *all* images (currently just default_boot_and_install) but run all the tests that can be run with any non-live installer image with the best image available for the compose. We introduce a special (openQA, not fedfind) 'flavor' called 'universal'; we run a couple of checks to find the best image in the compose for running the universal tests, and schedule tests for the 'universal' flavor with that image. The 'best' image is a server or 'generic' DVD if possible, and if not, a server or 'generic' boot.iso. ISO files have the compose's version identifier prepended to their names. Otherwise they retain their original names, which should usually be unique within a given compose, except for boot.iso files, which have their payload and arch added into their names to ensure they don't overwrite each other. This also adds a mechanism for TESTCASES (in conf_test_suites) to define a callback which will be called with the flavor of the image being tested; the result of the callback will be used as the 'test name' for relval result reporting purposes. This allows us to report results against the correct 'test instance' for the image being tested, for tests like Boot_default_install which have 'test instances' for each image. We can extend this general approach in future for other cases where we have multiple 'test instances' for a single test case.
2015-03-18 21:51:01 +00:00
],
"package_set_minimal":[
"QA:Testcase_partitioning_guided_empty",
"QA:Testcase_install_to_VirtIO",
"QA:Testcase_Anaconda_User_Interface_Graphical",
"QA:Testcase_Anaconda_user_creation",
2015-02-04 15:35:59 +00:00
"QA:Testcase_Package_Sets_Minimal_Package_Install",
2015-01-28 10:00:58 +00:00
],
"server_delete_pata":[
"QA:Testcase_install_to_PATA",
"QA:Testcase_partitioning_guided_delete_all",
"QA:Testcase_Anaconda_User_Interface_Graphical",
"QA:Testcase_Anaconda_user_creation",
2015-02-04 15:35:59 +00:00
"QA:Testcase_Package_Sets_Minimal_Package_Install",
2015-01-28 10:00:58 +00:00
],
"server_sata_multi":[
"QA:Testcase_install_to_SATA",
"QA:Testcase_partitioning_guided_multi_select",
"QA:Testcase_Anaconda_User_Interface_Graphical",
"QA:Testcase_Anaconda_user_creation",
2015-02-04 15:35:59 +00:00
"QA:Testcase_Package_Sets_Minimal_Package_Install",
2015-01-28 10:00:58 +00:00
],
"server_scsi_updates_img":[
2015-01-28 10:00:58 +00:00
"QA:Testcase_install_to_SCSI",
"QA:Testcase_partitioning_guided_empty",
"QA:Testcase_Anaconda_updates.img_via_URL",
"QA:Testcase_Anaconda_User_Interface_Graphical",
"QA:Testcase_Anaconda_user_creation",
2015-02-04 15:35:59 +00:00
"QA:Testcase_Package_Sets_Minimal_Package_Install",
2015-01-28 10:00:58 +00:00
],
"server_kickstart_user_creation":[
"QA:Testcase_install_to_VirtIO",
"QA:Testcase_Anaconda_user_creation",
"QA:Testcase_kickstart_user_creation",
"QA:Testcase_Kickstart_Http_Server_Ks_Cfg",
],
    "server_mirrorlist_graphical": [
        "QA:Testcase_install_to_VirtIO",
        "QA:Testcase_partitioning_guided_empty",
        "QA:Testcase_Anaconda_User_Interface_Graphical",
        "QA:Testcase_Anaconda_user_creation",
        "QA:Testcase_install_repository_Mirrorlist_graphical",
        "QA:Testcase_Package_Sets_Minimal_Package_Install",
    ],
    "server_repository_http_graphical": [
        "QA:Testcase_install_to_VirtIO",
        "QA:Testcase_partitioning_guided_empty",
        "QA:Testcase_Anaconda_User_Interface_Graphical",
        "QA:Testcase_Anaconda_user_creation",
        "QA:Testcase_install_repository_HTTP/FTP_graphical",
        "QA:Testcase_Package_Sets_Minimal_Package_Install",
    ],
    "server_repository_http_variation": [
        "QA:Testcase_install_to_VirtIO",
        "QA:Testcase_partitioning_guided_empty",
        "QA:Testcase_Anaconda_User_Interface_Graphical",
        "QA:Testcase_Anaconda_user_creation",
        "QA:Testcase_install_repository_HTTP/FTP_variation",
        "QA:Testcase_Package_Sets_Minimal_Package_Install",
    ],
    "server_mirrorlist_http_variation": [
        "QA:Testcase_install_to_VirtIO",
        "QA:Testcase_partitioning_guided_empty",
        "QA:Testcase_Anaconda_User_Interface_Graphical",
        "QA:Testcase_Anaconda_user_creation",
        "QA:Testcase_install_repository_HTTP/FTP_variation",
        "QA:Testcase_Package_Sets_Minimal_Package_Install",
    ],
    "server_simple_encrypted": [
        "QA:Testcase_install_to_VirtIO",
        "QA:Testcase_partitioning_guided_empty",
        "QA:Testcase_Anaconda_User_Interface_Graphical",
        "QA:Testcase_Anaconda_user_creation",
        "QA:Testcase_Package_Sets_Minimal_Package_Install",
        "QA:Testcase_partitioning_guided_encrypted",
    ],
    "server_delete_partial": [
        "QA:Testcase_install_to_VirtIO",
        "QA:Testcase_partitioning_guided_delete_partial",
        "QA:Testcase_Anaconda_User_Interface_Graphical",
        "QA:Testcase_Anaconda_user_creation",
        "QA:Testcase_Package_Sets_Minimal_Package_Install",
    ],
    "server_simple_free_space": [
        "QA:Testcase_install_to_VirtIO",
        "QA:Testcase_partitioning_guided_free_space",
        "QA:Testcase_Anaconda_User_Interface_Graphical",
        "QA:Testcase_Anaconda_user_creation",
        "QA:Testcase_Package_Sets_Minimal_Package_Install",
    ],
    "server_multi_empty": [
        "QA:Testcase_install_to_VirtIO",
        "QA:Testcase_partitioning_guided_multi_empty_all",
        "QA:Testcase_Anaconda_User_Interface_Graphical",
        "QA:Testcase_Anaconda_user_creation",
        "QA:Testcase_Package_Sets_Minimal_Package_Install",
    ],
    "server_software_raid": [
        "QA:Testcase_install_to_VirtIO",
        "QA:Testcase_Partitioning_On_Software_RAID",
        "QA:Testcase_Anaconda_User_Interface_Graphical",
        "QA:Testcase_Anaconda_user_creation",
        "QA:Testcase_Package_Sets_Minimal_Package_Install",
    ],
    "server_kickstart_hdd": [
        "QA:Testcase_install_to_VirtIO",
        "QA:Testcase_Anaconda_user_creation",
        "QA:Testcase_kickstart_user_creation",
        "QA:Testcase_Kickstart_Hd_Device_Path_Ks_Cfg",
    ],
}
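
# Hedged consistency check (illustrative only, not part of the original file):
# every test case referenced by a suite in TESTSUITES is expected to have an
# entry in TESTCASES; this helper returns any that do not.
def _example_check_consistency():
    """Return test cases referenced in TESTSUITES but missing from TESTCASES."""
    referenced = {case for suite in TESTSUITES.values() for case in suite}
    return referenced - set(TESTCASES)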